Scoring guidance
The goal is not theoretical precision. The goal is proportional discovery — enough exploration to produce a reliable spec, no more.
For each intake, work through each of the six dimensions and assign it a value from 1 to 3.
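The scoring step can be sketched as a small helper. This is a minimal sketch, assuming a DD6 total is simply the sum of six dimension scores (consistent with the 6–18 range used in the depth mapping below); the function name `dd6_total` is illustrative, not part of the framework.

```python
def dd6_total(scores):
    """Sum six DD6 dimension scores (each 1-3) into a 6-18 total."""
    if len(scores) != 6 or any(s not in (1, 2, 3) for s in scores):
        raise ValueError("DD6 expects exactly six scores, each 1, 2, or 3")
    return sum(scores)
```

Validating the inputs up front matters here: a dimension scored 0 or 4 would silently shift the total into the wrong depth band.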
Practical advice
DD6 classifies the problem space, not the implementation. A problem can be complex (DD6 high) but produce a simple implementation (CIRK low). They measure different things.
Under-discovery causes more damage than over-discovery. If you are unsure whether Domain is 2 or 3, score 3. The worst case is one extra discovery session. The alternative is a bad spec that wastes an agent's execution.
DD6 scoring improves with information. After the first session, re-score. If the problem turns out simpler than expected, reduce depth. If it turns out more complex, increase it.
Contrast
Consider two intakes:
- Update copy on marketing page
- Multi-tenant data isolation

Both may be described as "medium priority" in a backlog. DD6 reveals they need radically different discovery investment.
Depth mapping
DD6 scores map to a recommended level of exploration before spec generation.
| Score | Domain | Depth | Sessions | Human involvement |
|---|---|---|---|---|
| 6–8 | Clear | None | 0 | Optional spot-check |
| 9–11 | Complicated | Shallow | 1–2 | Expert review |
| 12–14 | Complex | Standard | 2–4 | Stakeholder + expert input |
| 15–17 | Deep | Deep | 4+ | Continuous collaboration |
| 18 | Chaotic | Emergency | — | Human-led stabilization |
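The table above is a straight threshold lookup, so it can be encoded directly. A minimal sketch, assuming a 6–18 total has already been computed; `dd6_band` is an illustrative name, not part of the framework.

```python
def dd6_band(total):
    """Map a DD6 total (6-18) to its (domain, recommended depth) per the table."""
    if not 6 <= total <= 18:
        raise ValueError("DD6 totals range from 6 to 18")
    if total == 18:
        return ("Chaotic", "Emergency")
    if total >= 15:
        return ("Deep", "Deep")
    if total >= 12:
        return ("Complex", "Standard")
    if total >= 9:
        return ("Complicated", "Shallow")
    return ("Clear", "None")
```

Note that 18 is checked first: the Chaotic band is a single point at the top of the Deep range, not a separate range.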
Methodology routing
DD6 acts as a methodology router — it determines which discovery approach fits the problem.
| Depth | Recommended approach |
|---|---|
| None | Template-based spec generation |
| Shallow | ISO 29148 checklist + BDD scenarios |
| Standard | Structured phases (intent → scope → risk) + hypothesis tracking |
| Deep | Opportunity Solution Tree + AI Bubbles + multi-session iteration |
| Emergency | Incident triage protocol — stabilize, root-cause, plan |
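The routing table maps each depth level to exactly one approach, so a plain dictionary is enough. A minimal sketch; the `route` helper and `APPROACH` table names are illustrative.

```python
# Depth level -> recommended discovery approach, per the routing table.
APPROACH = {
    "None": "Template-based spec generation",
    "Shallow": "ISO 29148 checklist + BDD scenarios",
    "Standard": "Structured phases (intent -> scope -> risk) + hypothesis tracking",
    "Deep": "Opportunity Solution Tree + AI Bubbles + multi-session iteration",
    "Emergency": "Incident triage protocol - stabilize, root-cause, plan",
}

def route(depth):
    """Return the recommended discovery approach for a DD6 depth level."""
    return APPROACH[depth]
```

Keeping the routing as data rather than branching logic makes it easy to re-score after a first session and re-route without code changes.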
DD6 depth determines discovery investment. Discovery sessions produce the context that makes CIRK scoring accurate.