Overview
This epic tracks the foundational end-to-end data flow of the HashHive platform: the path by which agents acquire work, execute attacks, submit crack results, and propagate those results back through the system. Without this pipeline, the platform cannot track progress, distribute results, or provide any meaningful operational feedback.
This is a CipherSwarm parity epic derived from gap analysis. All P0 issues here are blockers for core platform functionality.
Scope
This epic covers:
Agent Prerequisites — Benchmarking and cracker binary management, which must complete before an agent is fully operational
Attack Prerequisites — Reusable attack templates required before campaigns can be configured efficiently
Hash Result Pipeline — The core crack submission, status reporting, and zap (deduplication) system
Hash Data Operations — Extensions of the result pipeline: ETA calculation, attack complexity, SuperHashlists, export/import
Child Issues
P0 — Core Parity Blockers
P1 — Important CipherSwarm Features
Dependency Graph
Key Technical Context
#93 — Agent Benchmarking
Agents must benchmark before accepting tasks. The benchmark data (per-agent, per-hash-type throughput) feeds directly into the keyspace distribution scheduler in Epic #118. Without benchmark data, proportional chunk sizing is impossible.
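A minimal sketch of the proportional chunk sizing this enables, assuming per-agent throughput in hashes/sec (function and field names are illustrative, not the actual scheduler API):

```python
# Hypothetical sketch: split a keyspace across agents in proportion to their
# benchmarked throughput for one hash type. Names are illustrative.

def proportional_chunks(total_keyspace: int, benchmarks: dict[str, float]) -> dict[str, int]:
    """Return a per-agent keyspace share proportional to benchmark speed."""
    total_speed = sum(benchmarks.values())
    chunks = {
        agent: int(total_keyspace * speed / total_speed)
        for agent, speed in benchmarks.items()
    }
    # Hand any rounding remainder to the fastest agent so the full keyspace is covered.
    remainder = total_keyspace - sum(chunks.values())
    fastest = max(benchmarks, key=benchmarks.get)
    chunks[fastest] += remainder
    return chunks
```

Without benchmark data there is no `benchmarks` input at all, which is why this issue gates the Epic #118 scheduler.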
CipherSwarm ref: HashcatBenchmark model (6.4 KB), POST /api/v1/client/agents/{id}/submit_benchmark
#94 — Cracker Binary Management
The server tracks hashcat binary versions and serves presigned download URLs to agents. Ensures all agents run compatible cracker versions without manual per-rig maintenance.
#95 — Attack Templates
Reusable attack configurations (mode + parameters + resource references) that pre-fill campaign attacks. Required before crackable uploads (#112) and attack playbooks (#115) can be built.
#98 — Hash Item Storage, Crack Result Tracking & Zap System ⭐ Core
This is the single most critical issue in this epic. Agents submit cracks (submit_crack), report status (submit_status), and fetch already-cracked hashes to skip (get_zaps). Without this, the platform cannot record any results.
#99 — Attack Complexity & State Machine
Each attack tracks its own state machine (pending → running → completed/exhausted/failed/paused) and has its total keyspace/duration estimated. Complexity feeds progress bars and ETA display.
CipherSwarm ref: Attack model (15.3 KB), CalculateMaskComplexityJob
#100 — Campaign ETA Calculator
Calculates estimated campaign completion from current throughput, remaining keyspace, and fleet size. Recalculates in real-time as WebSocket status updates arrive.
#101 — SuperHashlists
When multiple hash lists share the same hash type, SuperHashlists merge them for deduplication — crack once, propagate to all source lists.
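The merge-then-propagate behavior can be sketched like this, with illustrative data structures (plain sets and dicts standing in for the real models):

```python
# Hypothetical sketch: a SuperHashlist is the deduplicated union of same-type
# lists; a crack against the merged set is recorded on every source list that
# contains the hash.

def build_super(hashlists: dict[str, set[str]]) -> set[str]:
    """Deduplicated merged target set across all member lists."""
    return set().union(*hashlists.values())

def propagate_crack(hashlists: dict[str, set[str]], digest: str,
                    results: dict[str, dict[str, str]], plaintext: str) -> list[str]:
    """Record the crack on every source list containing the hash; return those lists."""
    touched = []
    for name, items in hashlists.items():
        if digest in items:
            results.setdefault(name, {})[digest] = plaintext
            touched.append(name)
    return touched
```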
#102 — Hash List Export, Pre-Cracked Import & Global Search
Export cracked pairs in CSV/potfile format, import pre-cracked hashes from external tools, and search for a specific hash across all lists in a project.
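A sketch of the potfile round-trip, assuming hashcat's `hash:plain` line convention (the export/import code here is illustrative, not the platform's implementation):

```python
# Hypothetical sketch: serialize cracked pairs in potfile format and parse
# pre-cracked results produced by external tools.

def export_potfile(cracked: dict[str, str]) -> str:
    return "\n".join(f"{digest}:{plain}" for digest, plain in cracked.items())

def import_potfile(text: str) -> dict[str, str]:
    out = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        # Split at the first colon only: plaintexts may themselves contain colons.
        digest, _, plain = line.partition(":")
        out[digest] = plain
    return out
```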
Acceptance Criteria
Related Epics
Source
All issues in this epic were identified via CipherSwarm/Ouroboros gap analysis and tagged with the gap-analysis label.