
The AVS Rubric: Methodology

How we measure whether your trust infrastructure is strong enough for AI product growth to compound.

Why Trust Infrastructure Determines AI Product Growth

AI-native products face a unique market constraint: buyers can't predict how your product will behave before purchasing.

Unlike traditional SaaS where workflows are deterministic ("click here, get this result"), AI outputs vary based on context, model versions, and user inputs. This unpredictability creates a trust gap that breaks traditional growth loops before they can compound.

The Adaptive Value System (AVS) Rubric measures whether your trust infrastructure is strong enough for growth to accelerate.

The Trust Stack

The rubric assesses eight trust dimensions organized in a hierarchical stack. Gaps in lower layers cascade upward, making upper layers unstable.

Layer 4: Enterprise Readiness
Can economic buyers easily find what they need to approve the deal?
Layer 3: Operational Controls
Can customers control spend & avoid surprises?
Layer 2: Pricing Architecture
Can customers predict what they'll pay?
Layer 1: Product-ICP Clarity
Do you know who you serve & what success looks like?

The Eight Dimensions

The eight dimensions are grouped under the four layers of the trust stack:

Layer 1: Product-ICP Clarity

Layer 2: Pricing Architecture

Layer 3: Operational Controls

Layer 4: Enterprise Readiness

How It Works

Input: Publicly Observable Signals

You enter your company URL. The AVS Rubric Agent crawls up to 25 pages across your public digital presence, prioritizing high-intent surfaces using a weighted scoring engine:

  • Homepage: primary positioning, outcome claims, ICP signals
  • Pricing page: tier structure, unit definitions, overage behavior, billing options
  • Documentation & API reference: cost calculators, usage examples, metering details, quickstarts
  • Blog & changelog: product updates, case studies with quantified outcomes
  • Trust center: security controls, compliance certifications, audit surfaces
  • Use cases & case studies: workflow specificity, customer outcomes, proof artifacts
  • Terms of service: overage policies, limit behaviors, renewal/cancellation terms
  • Community & investor content: public demos, architecture posts, community evidence
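
To make the prioritization concrete, here is a minimal sketch of how a weighted scoring engine might rank candidate pages against a 25-page crawl budget. The weights, the `classify` helper, and the surface names are illustrative assumptions, not the agent's actual implementation:

```python
# Hypothetical sketch of the crawl-prioritization step. The real surface
# weights and URL classifier are internal to the AVS Rubric Agent; every
# value and name below is illustrative.

SURFACE_WEIGHTS = {
    "home": 1.0,          # primary positioning and ICP signals
    "pricing": 1.0,       # tier structure and unit definitions
    "docs": 0.9,          # metering details, quickstarts
    "trust": 0.8,         # security and compliance surfaces
    "case-studies": 0.7,  # quantified customer outcomes
    "changelog": 0.6,
    "blog": 0.5,
    "terms": 0.4,
}

def classify(url: str) -> str:
    """Map a URL to a surface type by path keyword (simplified)."""
    path = url.lower()
    for surface in SURFACE_WEIGHTS:
        if surface != "home" and surface in path:
            return surface
    return "home"

def prioritize(urls: list[str], budget: int = 25) -> list[str]:
    """Keep the `budget` highest-weight pages, high-intent surfaces first."""
    return sorted(urls, key=lambda u: SURFACE_WEIGHTS[classify(u)], reverse=True)[:budget]
```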

Evidence Quality Rules

Not all public content counts as evidence. The rubric enforces strict quality filters:

  • Rejected as evidence: copyright footers, cookie banners, navigation menus, social media links, generic legal boilerplate, partner logos without context, job postings, and auto-generated content.
  • Marketing slogans are rejected unless accompanied by specific, concrete details (metrics, features, workflows, pricing numbers).
  • Every citation must be specific: page + concrete fact (e.g., "Pricing page lists 3 tiers: Free, Pro ($49/mo), Enterprise (custom)").
  • Duplicate evidence is counted once — the same fact on multiple pages does not inflate confidence.
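
As a rough illustration of these filters, the sketch below rejects common boilerplate and counts each duplicate fact once. The regex patterns and the digit-based specificity check are assumptions for demonstration, not the rubric's actual rules:

```python
# Illustrative sketch of the evidence filters above. The patterns and the
# specificity check are assumptions, not the rubric's actual implementation.

import re

BOILERPLATE_PATTERNS = [
    r"©\s*\d{4}",             # copyright footers
    r"accept (all )?cookies",  # cookie banners
    r"follow us on",           # social links
]

def is_admissible(snippet: str) -> bool:
    """Reject boilerplate; require at least one concrete detail (a number)."""
    if any(re.search(p, snippet, re.IGNORECASE) for p in BOILERPLATE_PATTERNS):
        return False
    return bool(re.search(r"\d", snippet))  # crude proxy for specificity

def dedupe(facts: list[str]) -> list[str]:
    """Count each fact once, even when it appears on multiple pages."""
    seen: set[str] = set()
    unique = []
    for fact in facts:
        key = fact.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(fact)
    return unique
```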

The report reflects what your prospects can actually see, not what you believe you're communicating. The distance between the two is the trust infrastructure gap.

Output

You receive a trust infrastructure report built around a total score (0–16, mapped to a maturity band). It includes:

  • Overall AVS Score: sum of 8 dimension scores (each 0–2), categorized as Nascent, Emerging, Established, or Advanced
  • Dimension breakdown: individual 0–2 scores with subtest-level detail for each of the 8 dimensions
  • Strengths & weaknesses: what's working, what's missing, and what it enables or blocks
  • Trust breakpoints: specific gaps that are actively blocking trust and growth
  • Confidence labels: how certain each assessment is, based on evidence quality
  • 90-day focus: prioritized actions with measurable outcomes to close trust gaps
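
For concreteness, here is a minimal sketch of what a report of this shape might look like as data, assuming field names that mirror the bullets above (the actual report schema is not published):

```python
# A minimal sketch of the report's shape; field names mirror the bullets
# above but are assumptions, not the published schema.

from dataclasses import dataclass, field

@dataclass
class DimensionResult:
    name: str
    score: int               # 0-2
    subtests_passed: int     # out of 6 (5 for Buyer & Budget Alignment)
    confidence: float        # 0.0-1.0, drives the confidence label
    evidence: list[str] = field(default_factory=list)

@dataclass
class AVSReport:
    dimensions: list[DimensionResult]   # the 8 dimension breakdowns
    strengths: list[str]
    breakpoints: list[str]              # gaps actively blocking trust
    ninety_day_focus: list[str]         # prioritized, measurable actions

    @property
    def total_score(self) -> int:
        return sum(d.score for d in self.dimensions)  # max 16
```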

The 0–2 Scoring System

Each dimension is scored on a 0–2 scale using a deterministic, subtest-based methodology. Scores are not subjective ratings — they are computed from evidence.

Score Definitions

Score 0: Not Present

The trust signal is absent or too vague to be actionable. Fewer than 3 of 6 subtests pass.

This is a critical gap — prospects cannot evaluate this aspect of your product.

Score 1: Emerging

Partial evidence exists but key elements are missing. 3–4 of 6 subtests pass, or a hard gate caps the score.

Foundation exists but isn't complete enough to build trust at scale.

Score 2: Strong

Clear, specific, and verifiable evidence across the dimension. 5–6 of 6 subtests pass with no gate failures.

This dimension is actively building trust and enabling growth.

How Scores Are Computed

Each dimension uses 6 subtests (5 for Buyer & Budget Alignment). Each subtest is binary: pass (1) or fail (0). The subtests evaluate specific, observable criteria — not subjective impressions.

Points → Score Mapping

  • 0–2 pts → Score = 0
  • 3–4 pts → Score = 1
  • 5–6 pts → Score = 2

Hard Gates

Certain subtests act as hard gates — if they fail, the dimension score is capped regardless of how many other subtests pass. Gates enforce that critical capabilities (like auditability, measurability, or stated outcomes) cannot be bypassed.

Example: In the "Value Unit" dimension, if the unit definition lacks auditability (no dashboard breakdown or export logs), the score is capped at 1 — even if all other subtests pass. A billable unit that customers can't verify isn't production-grade.
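
Putting the points-to-score mapping and the gate rule together, a sketch of the scoring logic might look like this (the subtest inputs and the gate flag are illustrative; only the thresholds and the cap come from the text):

```python
# Hypothetical sketch of the dimension-scoring rule described above; subtest
# inputs and the gate flag are illustrative, not the rubric's internal code.

def dimension_score(subtest_results: list[bool], gate_failed: bool) -> int:
    """Map binary subtest passes to a 0-2 score, with hard-gate capping."""
    points = sum(subtest_results)   # each passing subtest is worth 1 point
    if points <= 2:
        score = 0
    elif points <= 4:
        score = 1
    else:                           # 5-6 points
        score = 2
    # A failed hard gate (e.g. no auditability for the Value Unit) caps at 1.
    return min(score, 1) if gate_failed else score

# The example from the text: all 6 subtests pass, but the auditability gate fails.
assert dimension_score([True] * 6, gate_failed=True) == 1
```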

Overall Score & Maturity Bands

The total AVS score is the sum of all 8 dimension scores (max 16). This maps to a maturity band:

  • Nascent: 0–4
  • Emerging: 5–8
  • Established: 9–12
  • Advanced: 13–16

Scores are computed from an evidence ledger — every fact is tracked with its source, reliability, and page reference. This makes scores reproducible and auditable.
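
As a sketch, an evidence-ledger entry and the band lookup could look like the following; the field names are assumptions, while the band thresholds come from the table above:

```python
# Sketch of an evidence-ledger entry and the maturity-band lookup; the field
# names are assumptions, the thresholds come from the table above.

from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceEntry:
    fact: str           # e.g. "Pricing page lists 3 tiers: Free, Pro ($49/mo), Enterprise"
    source_page: str    # page reference, e.g. "/pricing"
    reliability: float  # 0.0-1.0

def maturity_band(total_score: int) -> str:
    """Map the 0-16 total AVS score to its maturity band."""
    if total_score <= 4:
        return "Nascent"
    if total_score <= 8:
        return "Emerging"
    if total_score <= 12:
        return "Established"
    return "Advanced"
```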

Confidence Labels

Each dimension includes a confidence score indicating assessment certainty:

High Confidence (≥ 75%)

Clear, unambiguous evidence found. This is a confirmed finding.

Act on this immediately.

Medium Confidence (45–74%)

Partial evidence with some ambiguity. May indicate inconsistent messaging.

Investigate further — needs human validation.

Low Confidence (< 45%)

Minimal or conflicting evidence. Automated assessment may be missing context.

Don't act alone — signal is weak.
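
The thresholds above translate directly into a simple labeling function; this sketch reproduces the example interpretations that follow:

```python
# Mapping from a dimension's confidence score to its label, using the
# thresholds stated above; the function itself is an illustrative sketch.

def confidence_label(score: float) -> str:
    if score >= 0.75:
        return "High"
    if score >= 0.45:
        return "Medium"
    return "Low"

# Matches the example interpretations below:
assert confidence_label(0.82) == "High"    # Product North Star
assert confidence_label(0.58) == "Medium"  # Cost Driver Mapping
assert confidence_label(0.38) == "Low"     # Safety Rails
```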

Example interpretations

Product North Star (0.82) — "Your primary outcome metric is not stated on your pricing page, documentation, or product marketing. This is a definitive gap."
Cost Driver Mapping (0.58) — "Some cost information is visible, but the relationship between customer workflows and billing is unclear."
Safety Rails (0.38) — "Limited public information available on budget controls. May require reviewing the logged-in dashboard."

A High Confidence gap should trigger immediate action — it's blocking trust. A Low Confidence finding might just mean the AI couldn't access the right information.

What Traditional Analytics Miss

Funnels and product analytics tell you what users do. They don't tell you whether users can predict what will happen before they commit.

Can buyers forecast cost before purchasing?
Can they predict product behavior in their context?
Can they verify they're getting value for their spend?
Can they control risk (caps, limits, fallbacks)?

When the answer is "no," growth loops leak:

  • Sign up but won't invite teammates — uncertainty about cost allocation
  • Activate but won't share outputs — can't predict if it works for recipient
  • Renew once but won't expand — fear of surprise bills
  • First success but won't scale — no confidence in consistency

The AVS Rubric identifies these gaps before they show up in your retention curve — by measuring whether trust infrastructure exists in your public signals.

Why This Matters for AI Products

Traditional SaaS could rely on free trials and generous freemium tiers to build trust through experience. AI products break this model:

1. Free trials are expensive

LLM inference costs, GPU time, and API call expenses make generous free tiers economically unsustainable. Trust must exist before trial, not during it.

2. One good output doesn't guarantee the next

AI output quality varies by input, context, model version, and even time of day. A single successful trial doesn't give buyers confidence the product will work reliably at scale.

Trust must be built through signals — transparent pricing, clear constraints, explicit guardrails, documented failure modes — rather than just experience.

AVS measures whether those signals exist.

Questions about the methodology? Book a 30-min session to discuss your specific context.