Independent judgement for decision safety.
Services are bounded assessment formats, each tied to a distinct decision risk.
This is an independent assessment of whether your product metrics and behavioural signals can safely support the decisions being made. Each service exists to answer a single question.
This is for teams who already have analytics, but no longer trust what the numbers mean under scrutiny.
The deliverable is an assessment artefact: judgement, assumptions, interpretive dependencies, and failure modes — written to be defensible under scrutiny.
Services are defined by the kind of decision risk being assessed — not by tools, artefacts produced, or hours of work.
If you already know the shape of your problem, pick the closest service. If you do not, the Product Metrics Assessment is the canonical starting point.
Product Metrics Assessment
What it examines:
- Metric definitions and stability over time
- Alignment between stated intent and measured behaviour
- Interpretive disagreement across teams
- Dependence on undocumented assumptions
- Exposure under internal or external scrutiny

When it fits:
- Metrics exist but feel unreliable or contradictory
- Teams hesitate to make decisions based on dashboards
- Teams debate definitions instead of making decisions
- Confidence has eroded without a clear failure point
Signal Integrity Assessment
What it examines:
- Consent boundaries and the behaviour they hide or distort
- Session definitions versus actual user intent
- Identity assumptions across surfaces or time
- Silent data loss or fragmentation
- Aggregation assumptions that no longer hold

When it fits:
- Metrics appear technically correct but feel misleading
- Numbers change without a clear product explanation
- Teams suspect “we’re not seeing the full picture”
- Decisions rely on behaviour that may not be fully observable
Decision Readiness Assessment
What it examines:
- Defensibility of metrics under questioning
- Clarity of assumptions at leadership or board level
- Interpretive risk during audits, reviews, or transitions
- Areas where confidence relies on informal knowledge

When it fits:
- Organisational scale or visibility is increasing
- Leadership or ownership is changing
- Decisions will be reviewed externally
- Metrics must be justified, not just consumed
Metric Transition Assessment
What it examines:
- Continuity of metric meaning across systems
- Implicit redefinition during migrations
- Comparability and historical interpretation risk
- Assumptions introduced by “like-for-like” mappings

When it fits:
- Migrating analytics platforms
- Changing telemetry schemas
- Consolidating reporting sources
- Re-baselining metrics after change
Each service is time- and scope-bounded. The engagement completes when the assessment artefact is delivered.
This page describes the service shapes; the scope of a given engagement is agreed during scoping and captured in the assessment’s scope statement.
Follow-up validation is a separate assessment with its own scope and artefact.
- Deliverable is a written assessment artefact (judgement, assumptions, interpretive dependencies, failure modes).
- Inputs are minimal and determined during scoping (definitions, dashboards as evidence, telemetry descriptions, decision context).
- Outputs are descriptive rather than prescriptive, and are written to be defensible under scrutiny.
- Ownership of execution does not transfer.
None of these services will:
- Configure tools, instrumentation, or platforms
- Redesign metrics or select KPIs
- Build or improve dashboards/reports
- Execute migrations, reconciliation, or backfills
- Provide templates, frameworks, playbooks, or training
- Provide ongoing analytics support by default
Stopping is treated as a quality safeguard, not a failure mode. An assessment may be paused or concluded if it can no longer be conducted responsibly, for example when:
- Required access cannot be provided.
- Scope expands beyond assessment or implies execution ownership.
- Independence cannot be maintained.
- Political or organisational constraints prevent honest examination.
An assessment cannot be performed responsibly in every context; the constraint is usually decision ownership, observability, or access. It is not a fit when:
- No meaningful telemetry or metrics exist yet.
- The organisation is seeking implementation, optimisation, or ongoing analytics support.
- There is no identifiable decision owner for the judgement being sought.
- The goal is validation rather than examination.