A Python-based regression orchestrator with parallel dispatch, automatic coverage merging, and HTML dashboards — reducing a team's daily simulation overhead from 4+ hours to under 20 minutes.
A six-person verification team was manually launching simulation regressions, waiting for results, merging coverage databases by hand, and hand-building status reports. This consumed more than four engineering hours per person per day; across the team, that is over 24 hours of engineering time lost every single day to process overhead, not design work.
The volume of testcases had grown organically across several projects. There was no centralized orchestration: engineers tracked run status in spreadsheets, coverage was merged by ad-hoc shell scripts, and dashboards were absent. Iteration speed — the core metric for a verification team — had collapsed.
We built a Python-based regression framework from the ground up, designed to replace every manual step — job dispatch, status monitoring, coverage merging, and reporting — with a single command invocation.
Regression manifest format. Defined a simple YAML-based manifest for describing test lists, seed groups, simulator arguments, and coverage collection settings. Engineers declare intent; the framework handles execution.
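The actual manifest schema is not reproduced in this write-up; as a sketch of the idea, a manifest with illustrative (hypothetical) field names might look like:

```yaml
# Illustrative manifest — field names are examples, not the team's actual schema
regression: nightly_smoke
simulator_args: ["-sv", "+define+NIGHTLY"]
coverage:
  enabled: true
  merge_on_completion: true
tests:
  - name: axi_burst_rw
    seeds: {count: 20, base: 1234}
  - name: axi_error_inject
    seeds: {count: 5, base: 99}
```

The point of the format is the division of labor: the manifest captures intent (which tests, how many seeds, what to collect), and everything below it is the framework's concern.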
Parallel job dispatch. The orchestrator distributes jobs across available compute using a configurable worker-pool model — local cores or farm slots. Peak throughput scales with infrastructure without changing any test configuration.
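A minimal sketch of the worker-pool dispatch model, using Python's standard `concurrent.futures`. The real orchestrator launches simulator processes (locally or on farm slots); here `run_job` is a stub, and all names are illustrative rather than the framework's actual API.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_job(job):
    # Stand-in for launching one simulation (test name + seed).
    # The real framework would spawn a simulator process here.
    name, seed = job
    return (name, seed, "pass")

def dispatch(jobs, max_workers=4):
    """Distribute jobs across a configurable pool of workers,
    collecting results as each job completes."""
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(run_job, j): j for j in jobs}
        for fut in as_completed(futures):
            results.append(fut.result())
    return results

jobs = [("axi_burst_rw", seed) for seed in range(8)]
results = dispatch(jobs, max_workers=3)
print(len(results))  # 8
```

Because the pool size is just a parameter, the same test configuration runs unchanged on a laptop's cores or a compute farm's slots.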
Live status monitoring. A real-time terminal dashboard shows run state (queued / running / pass / fail / timeout) per job. Engineers see progress without polling log files or tracking spreadsheets.
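The core of such a terminal dashboard is a live tally over per-job states. A minimal sketch (the function name and exact output line are illustrative, not the framework's real interface):

```python
from collections import Counter

STATES = ("queued", "running", "pass", "fail", "timeout")

def summarise(job_states):
    """Condense a {job_name: state} mapping into a one-line
    status summary suitable for a refreshing terminal display."""
    counts = Counter(job_states.values())
    return " ".join(f"{s}:{counts.get(s, 0)}" for s in STATES)

print(summarise({"t1": "pass", "t2": "running", "t3": "pass"}))
# queued:0 running:1 pass:2 fail:0 timeout:0
```

Redrawing this line on an interval replaces log-polling and spreadsheet tracking with a single glanceable readout.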
Automatic coverage merging. On completion, all per-seed databases are merged automatically using the team's EDA tool API. No manual merge scripts, no version conflicts, no forgotten seeds.
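The real merge goes through the vendor EDA tool's API, which is not shown here. As an illustration of the merge semantics only, combining per-seed coverage databases amounts to accumulating bin hits across every seed (all names below are hypothetical):

```python
def merge_coverage(per_seed_dbs):
    """Merge per-seed coverage: sum hit counts for each bin
    across all seeds, so no seed's contribution is lost."""
    merged = {}
    for db in per_seed_dbs:
        for bin_name, hits in db.items():
            merged[bin_name] = merged.get(bin_name, 0) + hits
    return merged

seed1 = {"cp_burst_len.b4": 3, "cp_resp.okay": 10}
seed2 = {"cp_burst_len.b8": 1, "cp_resp.okay": 7}
print(merge_coverage([seed1, seed2]))
# {'cp_burst_len.b4': 3, 'cp_resp.okay': 17, 'cp_burst_len.b8': 1}
```

Running this automatically on completion is what eliminates forgotten seeds: the merge input is the job list itself, not whatever an engineer remembered to include.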
HTML dashboard generation. A self-contained HTML report is generated for every regression run: pass/fail summary, per-test timing, coverage totals per group, trend deltas versus the prior run, and a full log viewer with filtering. Shareable by link, no tool access required.
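A self-contained report needs no server or tool license: it is a single HTML string built from the run results. A minimal sketch of the idea, with an illustrative function and result schema (the real dashboard also includes coverage groups, trend deltas, and log filtering):

```python
import html

def render_report(run_name, results):
    """Render a minimal self-contained HTML summary:
    pass/fail counts plus a per-test status and timing table."""
    passed = sum(1 for r in results if r["status"] == "pass")
    rows = "\n".join(
        f"<tr><td>{html.escape(r['test'])}</td>"
        f"<td>{r['status']}</td><td>{r['time_s']:.1f}s</td></tr>"
        for r in results
    )
    return (
        f"<html><body><h1>{html.escape(run_name)}</h1>"
        f"<p>{passed}/{len(results)} passed</p>"
        f"<table>{rows}</table></body></html>"
    )
```

Because the output is one static file, sharing results is just sharing a link, which is what makes the dashboard reviewable by anyone without simulator access.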
After deployment, the daily regression workflow for each engineer was reduced from 4+ hours to under 20 minutes — a single command to launch, and a dashboard link to review results when the run completes. Coverage is merged and the HTML report is ready with no further action.
The team's iteration cadence went from one regression cycle per day (gated by manual overhead) to multiple cycles per day. Coverage closure tracking, previously a weekly snapshot, became a per-run metric. The framework has since been adopted on two subsequent projects within the same organisation with zero modifications to the core orchestrator.
If your team is spending hours on process instead of engineering, we can build the automation infrastructure that gives that time back. Let's talk.