Scoring guidance

Score pragmatically and consistently.

The goal is not theoretical precision. The goal is proportional discovery — enough exploration to produce a reliable spec, no more.

Scoring flow

For each intake, ask:

I: How clear is what the requester wants?
D: How much specialist knowledge is needed?
S: How many perspectives must converge?
T: Can an agent verify the outcome?
P: Have we done something like this before?
B: Are the problem edges defined?

Then assign each dimension a value from 1 to 3 and sum the six values. Totals range from 6 to 18.
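As a sketch, the six dimensions and the 1-to-3 constraint can be captured in a small record. Only Domain is named explicitly in this guide; the other field names are guesses expanded from the letter codes, and the class name is illustrative:

```python
from dataclasses import dataclass


@dataclass
class DD6Score:
    """One value per DD6 dimension, each scored 1-3.

    Field names other than `domain` are illustrative expansions
    of the letter codes I, D, S, T, P, B.
    """
    intent: int        # I: how clear is what the requester wants?
    domain: int        # D: how much specialist knowledge is needed?
    stakeholders: int  # S: how many perspectives must converge?
    testability: int   # T: can an agent verify the outcome?
    precedent: int     # P: have we done something like this before?
    boundaries: int    # B: are the problem edges defined?

    def total(self) -> int:
        values = (self.intent, self.domain, self.stakeholders,
                  self.testability, self.precedent, self.boundaries)
        if any(v not in (1, 2, 3) for v in values):
            raise ValueError("each dimension must score 1, 2, or 3")
        return sum(values)
```

For example, `DD6Score(1, 1, 1, 1, 1, 1).total()` gives the minimum total of 6.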

Practical advice

Three rules for better scoring.

Score the problem, not the solution

DD6 classifies the problem space, not the implementation. A problem can be complex (DD6 high) but produce a simple implementation (CIRK low). They measure different things.

When in doubt, score higher

Under-discovery causes more damage than over-discovery. If you are unsure whether Domain is a 2 or a 3, score it 3. The worst case of over-scoring is one extra discovery session; the worst case of under-scoring is a bad spec that wastes an entire agent execution.

Revisit after the first discovery session

DD6 scoring improves with information. After the first session, re-score. If the problem turns out simpler than expected, reduce depth. If it turns out more complex, increase it.

Contrast

Same backlog priority, different discovery.

Both may be described as "medium priority" in a backlog. DD6 reveals they need radically different discovery investment.

Update copy on marketing page
I1 D1 S1 T1 P1 B1 → Skip

Multi-tenant data isolation
I2 D3 S2 T2 P3 B2 → Deep

Depth mapping

From score to discovery depth.

DD6 scores map to a recommended level of exploration before spec generation.

Score   Domain       Depth      Sessions   Human involvement
6–8     Clear        None       0          Optional spot-check
9–11    Complicated  Shallow    1–2        Expert review
12–14   Complex      Standard   2–4        Stakeholder + expert input
15–17   Deep         Deep       4+         Continuous collaboration
18      Chaotic      Emergency  n/a        Human-led stabilization
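The bands above can be encoded as a simple lookup. The function name is illustrative, but the boundaries come directly from the table:

```python
def depth_for(total: int) -> tuple[str, str]:
    """Map a DD6 total (6-18) to (domain classification, discovery depth)."""
    if not 6 <= total <= 18:
        raise ValueError("DD6 totals range from 6 to 18")
    if total <= 8:
        return ("Clear", "None")
    if total <= 11:
        return ("Complicated", "Shallow")
    if total <= 14:
        return ("Complex", "Standard")
    if total <= 17:
        return ("Deep", "Deep")
    return ("Chaotic", "Emergency")
```

For example, `depth_for(6)` returns `("Clear", "None")` and `depth_for(18)` returns `("Chaotic", "Emergency")`.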

Methodology routing

Different depths, different approaches.

DD6 acts as a methodology router — it determines which discovery approach fits the problem.

Depth      Recommended approach
None       Template-based spec generation
Shallow    ISO 29148 checklist + BDD scenarios
Standard   Structured phases (intent → scope → risk) + hypothesis tracking
Deep       Opportunity Solution Tree + AI Bubbles + multi-session iteration
Emergency  Incident triage protocol — stabilize, root-cause, plan
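A minimal routing sketch, assuming the depth labels are passed around as plain strings; the dictionary and function names are illustrative, and the approach descriptions are quoted from the table above:

```python
# Depth label -> recommended discovery approach, per the routing table.
APPROACHES = {
    "None": "Template-based spec generation",
    "Shallow": "ISO 29148 checklist + BDD scenarios",
    "Standard": "Structured phases (intent -> scope -> risk) + hypothesis tracking",
    "Deep": "Opportunity Solution Tree + AI Bubbles + multi-session iteration",
    "Emergency": "Incident triage protocol: stabilize, root-cause, plan",
}


def route(depth: str) -> str:
    """Return the recommended approach for a discovery depth label."""
    try:
        return APPROACHES[depth]
    except KeyError:
        raise ValueError(f"unknown depth: {depth!r}") from None
```

For example, `route("Shallow")` returns the ISO 29148 + BDD approach, while an unrecognized label raises `ValueError` rather than silently defaulting.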

DD6 depth determines discovery investment. Discovery sessions produce the context that makes CIRK scoring accurate.