THE 7-DIMENSION
AI VISIBILITY SCORE.

The AI Visibility Score is the single number on the front page of every audit I ship. It compresses 7 measurable dimensions into a 0-10 composite. Here is what each dimension measures, why it is weighted the way it is, and what a score of 4 versus a score of 8 means in practice.

Dimensions: 7
Score Scale: 0-10
AI Surfaces: 4
Audit Length: 18 pages

WHY A SINGLE COMPOSITE SCORE.

Audit deliverables that present 40 separate numbers are harder to act on than a deliverable that presents one. The single composite forces a verdict: is this site broken (0-3), patchy (4-5), solid (6-7), or dominant (8-10)? Once you have the verdict, you can dig into the dimensions to understand why.

The 0-10 scale is deliberate. Scoring on 0-7 (matching the dimension count) would imply that dimensions carry equal weight and are binary pass/fail. They are not. Some dimensions can drop a site to zero visibility on their own (bot accessibility); others contribute incrementally (linking). The 0-10 composite preserves that asymmetry while staying intuitive.

Dimension 01.
Structured data correctness

SCHEMA.

What it measures: presence and correctness of JSON-LD structured data on every public page. Specifically: Organization and Person at the site root with stable @id values, BreadcrumbList on every non-root page, Article on every blog or research post with headline / author / datePublished / dateModified, and a single @graph array per page (not multiple disjoint blocks).

Why it matters: schema is roughly 9% of AI fetcher traffic across the research network, but its share of factual citations is far higher. When an AI model needs an entity name, an author, a price, or a publication date, it pulls from schema first. Bad schema can actively contradict your visible content and corrupt citations.

Common failure modes: JS-injected schema invisible to non-JS crawlers, missing @id resolution causing entity duplication, schema-content contradictions (price mismatch is the classic), the "schema everything" trap (15+ low-engagement types).

0-3: No schema or broken schema.
4-5: Schema present but scattered, missing IDs, or JS-injected.
6-7: Clean JSON-LD with a single @graph, correct types, server-rendered.
8-10: All of the above plus consistent cross-page identity resolution and zero contradictions.

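These checks are mechanical enough to script. A minimal sketch, assuming `html_source` is the raw HTML string an AI fetcher receives (before any JavaScript runs); the function names are illustrative, not the audit's actual tooling:

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect <script type="application/ld+json"> bodies from raw HTML."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks, self.errors = [], []
    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True
    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False
    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            try:
                self.blocks.append(json.loads(data))
            except json.JSONDecodeError:
                self.errors.append("unparseable JSON-LD block")

def audit_jsonld(html_source):
    parser = JSONLDExtractor()
    parser.feed(html_source)
    findings = list(parser.errors)
    if not parser.blocks and not findings:
        findings.append("no JSON-LD in initial HTML")
    if len(parser.blocks) > 1:
        findings.append("multiple disjoint blocks; merge into one @graph")
    for block in parser.blocks:
        # A page should carry a single @graph; fall back to the bare block.
        for node in block.get("@graph", [block]):
            if "@id" not in node:
                findings.append(f"{node.get('@type', '?')} node lacks a stable @id")
            if node.get("@type") == "Article":
                for field in ("headline", "author", "datePublished", "dateModified"):
                    if field not in node:
                        findings.append(f"Article missing {field}")
    return findings
```

Run it against the server response, not the DOM after hydration - otherwise JS-injected schema will pass when it should fail.
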
Dimension 02.
The most-consumed endpoint

RSS.

What it measures: presence, validity, and richness of feed endpoints. Specifically: a valid RSS 2.0 or Atom feed at a discoverable URL, <link rel="alternate"> declared in HTML <head>, full content in <content:encoded> (not just titles or excerpts), accurate RFC-822 timestamps, and unrestricted access in robots.txt.

Why it matters: RSS is the single most-consumed endpoint by AI fetchers - roughly 40% of all logged requests across the 47-site network. Bots probe feeds first to understand the shape of your site, then decide what HTML to pull. A site with no feed gets crawled less. A site with a stripped feed (titles only) gets crawled even less.

Common failure modes: CMS defaults that strip feeds to titles + excerpt, malformed XML from plugin conflicts, dates in the feed that do not match dates in the page, feed paths blocked by overzealous robots.txt.

0-3: No feed or broken feed.
4-5: Feed exists but truncated content or broken dates.
6-7: Full-content feed, valid XML, accurate timestamps, discoverable from <head>.
8-10: All of the above plus per-section feeds, image enclosures, and sub-1-hour latency on new posts.

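A hedged sketch of the feed checks using only the Python standard library. The 500-character cutoff for "full content" is an assumed heuristic to catch titles-only feeds, not a number from the methodology:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

CONTENT_NS = "{http://purl.org/rss/1.0/modules/content/}encoded"

def audit_rss(feed_xml):
    findings = []
    try:
        root = ET.fromstring(feed_xml)
    except ET.ParseError as exc:
        return [f"malformed XML: {exc}"]
    items = root.findall("./channel/item")
    if not items:
        findings.append("feed has no items")
    for item in items:
        title = item.findtext("title", "?")
        body = item.findtext(CONTENT_NS) or ""
        if len(body) < 500:  # assumed heuristic: titles/excerpt-only feed
            findings.append(f"'{title}': no full content:encoded body")
        pub = item.findtext("pubDate")
        if pub is None:
            findings.append(f"'{title}': missing pubDate")
        else:
            try:
                parsedate_to_datetime(pub)  # RFC-822 date check
            except (TypeError, ValueError):
                findings.append(f"'{title}': pubDate is not RFC-822")
    return findings
```

Cross-check the pubDate values against the visible dates on the pages themselves; the feed passing in isolation is not enough.
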
Dimension 03.
What bots actually receive

HTML.

What it measures: the quality of the rendered HTML that AI fetchers actually receive. Specifically: semantic HTML (proper <article>, <section>, <header>, headings hierarchy), unique meta descriptions per page, OpenGraph completeness, content present in the initial HTML response (not lazy-rendered by JS), reasonable content density (not infinite scroll skeleton).

Why it matters: AI fetchers consume roughly 25% of their traffic as raw HTML. Most of them either do not execute JavaScript or treat post-render content as second-class. If your homepage is a Single Page App skeleton with content injected after hydration, AI bots see an empty page.

Common failure modes: SPA frameworks without server-side rendering, missing or duplicate meta descriptions across pages, content rendered into <div> soup instead of semantic tags, OpenGraph that contradicts the page title.

0-3: JS-rendered content; AI bots see empty pages.
4-5: Static HTML but missing meta, weak semantics, duplicate descriptions.
6-7: Clean semantic HTML, unique meta per page, complete OpenGraph.
8-10: All of the above plus consistent heading hierarchy, above-the-fold content, and zero render-blocking dependencies.

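The "what bots actually receive" test can be approximated by auditing the raw response body before any script runs. A sketch, with the 1,000-character density threshold as an assumed heuristic for skeleton pages:

```python
from html.parser import HTMLParser

class RawHTMLAudit(HTMLParser):
    SEMANTIC = {"article", "section", "header", "main", "nav"}
    def __init__(self):
        super().__init__()
        self.semantic_tags = set()
        self.meta_description = None
        self.og_tags = {}
        self.text_chars = 0
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in self.SEMANTIC:
            self.semantic_tags.add(tag)
        if tag == "meta":
            if attrs.get("name") == "description":
                self.meta_description = attrs.get("content")
            if attrs.get("property", "").startswith("og:"):
                self.og_tags[attrs["property"]] = attrs.get("content")
    def handle_data(self, data):
        self.text_chars += len(data.strip())

def audit_html(html_source):
    p = RawHTMLAudit()
    p.feed(html_source)
    findings = []
    if not p.semantic_tags:
        findings.append("no semantic container tags (div soup)")
    if not p.meta_description:
        findings.append("missing meta description")
    for required in ("og:title", "og:description"):
        if required not in p.og_tags:
            findings.append(f"missing {required}")
    if p.text_chars < 1000:  # assumed heuristic: JS-rendered skeleton
        findings.append("little text in initial HTML; content may be JS-rendered")
    return findings
```

To catch duplicate descriptions, run this across every page and compare the `meta_description` values site-wide.
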
Dimension 04.
The silent-failure dimension

BOT ACCESSIBILITY.

What it measures: can AI bots actually reach your content. Specifically: robots.txt with explicit Allow: rules for major AI user-agents (GPTBot, ClaudeBot, PerplexityBot, OAI-SearchBot, ChatGPT-User, Google-Extended, etc.), no Cloudflare bot fight mode silently dropping AI traffic, no WAF rules blocking by user-agent string, no rate-limiting that returns 429s on normal AI crawl patterns.

Why it matters: roughly 30% of sites in the broader research sample were silently blocking at least one major AI bot without the owner realising. Bot accessibility is the dimension where a single misconfigured rule can drop you from a 7 to a 0. If the bots cannot reach you, every other dimension is theoretical.

Common failure modes: robots.txt copy-pasted from a 2020 template that predates AI bots, Cloudflare's "Block AI crawlers on all pages" toggle accidentally enabled, WAF rules that blanket-block any user-agent string containing "bot", and aggressive rate-limiters tuned for human traffic that fail catastrophically on bot crawl bursts.

0-3: At least one major AI bot is silently blocked.
4-5: Bots reach the site but rate-limits cause partial failures.
6-7: Explicit allow rules, no rate-limit failures, all major bots reach all public content.
8-10: All of the above plus per-bot logging, anomaly detection, and a documented bot-allowlist policy.

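The robots.txt half of this check scripts cleanly with the standard library. Note the limitation: WAF and Cloudflare blocks are invisible to robots.txt parsing - which is exactly why they fail silently - so this sketch only covers the declared rules:

```python
from urllib.robotparser import RobotFileParser

# The major AI user-agents named above.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "OAI-SearchBot",
           "ChatGPT-User", "Google-Extended"]

def blocked_ai_bots(robots_txt, url="https://example.com/"):
    """Return the AI bots that the given robots.txt body disallows for url."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not rp.can_fetch(bot, url)]
```

To catch the WAF-level blocks, you still have to fetch real pages with each bot's user-agent string and compare status codes against a normal browser fetch.
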
Dimension 05.
The compounding behavioural lever

FRESHNESS.

What it measures: publishing cadence and date accuracy. Specifically: how recently the site has shipped new content, accuracy of lastmod dates in sitemap.xml, accuracy of dateModified in Article schema, distribution of content age across the site, presence of an active blog or news section.

Why it matters: sites publishing weekly get AI bot traffic at roughly 3x the rate of sites publishing less often. Sites that go silent for 60+ days see their AI bot traffic drop to near-baseline within roughly two weeks. The freshness signal compounds over time and decays quickly. It is the single biggest behavioural lever for ongoing AI visibility.

Common failure modes: sitemap.xml with lastmod dates that lie (set to today on every regenerate), schema dates that do not match visible dates, blog sections that look active but have not shipped in 6 months, "evergreen" content with no dateModified updates ever.

0-3: No new content in 90+ days, broken dates.
4-5: Irregular publishing, partially accurate dates.
6-7: At least monthly publishing, accurate dates across sitemap and schema.
8-10: Weekly or better publishing, perfect date integrity, evergreen content actively maintained with real dateModified updates.

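The sitemap half of the freshness check reduces to "how old is the newest honest lastmod". A sketch, assuming `sitemap_xml` is the fetched sitemap body; date formats beyond plain ISO-8601 are not handled here:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

SM_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def days_since_last_publish(sitemap_xml, now=None):
    now = now or datetime.now(timezone.utc)
    root = ET.fromstring(sitemap_xml)
    lastmods = []
    for url in root.findall(f"{SM_NS}url"):
        stamp = url.findtext(f"{SM_NS}lastmod")
        if stamp:
            lastmods.append(datetime.fromisoformat(stamp.replace("Z", "+00:00")))
    if not lastmods:
        return None  # no usable dates at all: lands in the 0-3 band
    newest = max(lastmods)
    if newest.tzinfo is None:
        newest = newest.replace(tzinfo=timezone.utc)
    return (now - newest).days
```

One extra check worth adding: if every lastmod in the sitemap is identical, the dates almost certainly lie (regenerated on every build) and should be discounted entirely.
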
Dimension 06.
Structural health and discovery

LINKING.

What it measures: the structural health of internal links and discovery. Specifically: sitemap.xml completeness (every public page included), absence of orphaned pages (pages not linked from anywhere), internal link density per page, anchor text quality (descriptive, not "click here"), absence of broken internal links, breadcrumbs that match URL structure.

Why it matters: sitemap.xml is roughly 14% of AI fetcher traffic. Bots use it to triangulate site structure, freshness, and topical clustering. Internal links signal authority distribution and topic coherence. A site with a complete sitemap and clean internal linking is significantly easier for AI models to navigate than a site that relies on hub pages alone.

Common failure modes: sitemap.xml missing 30%+ of pages because of stale generation, internal links pointing to redirected or dead URLs, anchor text that is identical across hundreds of links ("read more"), orphaned pages that exist in the CMS but no page links to them.

0-3: No sitemap, or a sitemap missing most pages.
4-5: Sitemap present but partial; internal links exist but many are broken.
6-7: Complete sitemap, no broken internal links, descriptive anchor text.
8-10: All of the above plus topical clustering, breadcrumb consistency, and zero orphaned pages.

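Orphan detection and sitemap gaps both fall out of one set difference. An illustrative sketch, assuming the caller has already crawled the site into a `{page_url: [linked_urls, ...]}` mapping:

```python
def linking_gaps(sitemap_urls, internal_links):
    """Diff the sitemap against the internal link graph."""
    sitemap = set(sitemap_urls)
    linked = {target for targets in internal_links.values() for target in targets}
    crawled = set(internal_links)
    return {
        # In the sitemap, but no page links to them. Note: the root URL
        # shows up here unless something links back to it; real tooling
        # would special-case the homepage.
        "orphaned": sorted(sitemap - linked),
        # Reachable by crawl or by link, but missing from the sitemap.
        "missing_from_sitemap": sorted((crawled | linked) - sitemap),
    }
```

Either bucket being non-empty is a finding: the first costs discovery of the orphaned pages, the second undermines the sitemap as a structural signal.
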
Dimension 07.
The only output-side dimension

CITATION PRESENCE.

What it measures: whether AI models actually cite you. The other 6 dimensions measure the conditions under which AI models can ingest your site; this one measures whether they do. Specifically: appearances in ChatGPT, Claude, Perplexity, and Google AI for 30+ target queries selected with the client, captured with screenshots, scored on whether your site is cited, in what context, and how the citation is rendered.

Why it matters: a site can score 8 on every input dimension and still earn no citations if the queries it would naturally answer are not the queries its audience is asking. Citation presence forces the audit to confront the gap between "technically optimised" and "actually cited." It is the dimension that ties methodology back to outcome.

Common failure modes: sites cited reliably in ChatGPT but never in Perplexity (different bot behaviour), sites cited only in adjacent topics that do not convert, sites cited but with stripped attribution (the "ghost citation" pattern), and sites that earn citations with outdated facts because freshness scored low.

0-3: Zero citations across all 4 surfaces for the target queries.
4-5: Sporadic citations, one or two surfaces only.
6-7: Consistent citations across at least 3 surfaces, attribution intact.
8-10: Dominant citation share across all 4 surfaces for the majority of target queries.

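The band cut-offs above can be expressed as a rough tally. This is an illustrative sketch only - the thresholds are assumptions that loosely mirror the band descriptions, not the audit's actual scoring code:

```python
SURFACES = 4  # ChatGPT, Claude, Perplexity, Google AI

def citation_score(results):
    """results: {query: set of surfaces that cited the site for it}."""
    if not results:
        return 0
    cited = {q: s for q, s in results.items() if s}
    if not cited:
        return 1   # 0-3 band: zero citations anywhere
    surfaces_hit = set().union(*cited.values())
    if len(surfaces_hit) <= 2:
        return 4   # 4-5 band: sporadic, one or two surfaces only
    if len(cited) / len(results) <= 0.5 or len(surfaces_hit) < SURFACES:
        return 6   # 6-7 band: consistent, 3+ surfaces
    return 9       # 8-10 band: majority of queries, all surfaces
```

The inputs still come from manual query testing with screenshots; only the roll-up is mechanical.
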

HOW DIMENSIONS ROLL UP.

Each dimension is scored 0-10 and weighted into the composite. Approximate weights:

Bot Accessibility: ~18%
Freshness: ~17%
RSS: ~16%
Citation Presence: ~15%
HTML: ~12%
Schema: ~12%
Linking: ~10%

These weights are not pulled out of thin air. They are calibrated against the consumption hierarchy observed in the 47-site research network and the failure modes that correlate most strongly with low citation share.
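The roll-up itself is a weighted average. A sketch using the approximate weights above - note it does not model the asymmetry described earlier, where a failing bot-accessibility dimension can floor the composite on its own:

```python
# Approximate weights from the table above; they sum to 1.0.
WEIGHTS = {
    "bot_accessibility": 0.18,
    "freshness":         0.17,
    "rss":               0.16,
    "citation_presence": 0.15,
    "html":              0.12,
    "schema":            0.12,
    "linking":           0.10,
}

def composite_score(dimension_scores):
    """Weighted 0-10 composite from seven 0-10 dimension scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return round(sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS), 1)
```

For example, a site scoring 8 everywhere except a 2 on bot accessibility averages to roughly 6.9 - which is why the weighted average alone is not the whole methodology, and a hard floor for blocked bots sits on top of it.
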

WHAT THE SCORE ACTUALLY MEANS.

0-3: Broken. At least one foundational dimension is failing. Bot accessibility is usually the culprit. Functionally invisible to AI search.
4-5: Patchy. Technical baseline roughly in place but multiple dimensions weak. Citations sporadic. Meaningful upside from a focused 30-90 day fix list.
6-7: Solid. Most dimensions healthy, citations consistent, competitive in its niche. Improvements here are marginal optimisation, not basic visibility.
8-10: Dominant. All input dimensions strong, high citation share across all 4 AI surfaces, default reference in its category. Maintenance is the main job.

SEVEN DIMENSIONS.
ONE NUMBER. NO HAND-WAVING.

WHAT THIS SCORE IS NOT.

A few honest caveats so the score is not over-interpreted:

THE BOTTOM LINE.

Seven dimensions, scored on a 0-10 composite, weighted by what the 47-site research network has actually measured to matter. The score is meant to be auditable: every number can be traced back to a specific finding, a specific log entry, or a specific citation screenshot. If a number in your audit does not line up with what you can verify on your own site, that is a defect, not a feature - and the engagement is refunded.

If you want to see the score applied to your site, the audit is the productised way to get it. The methodology behind the score is what you have just read. The dimensions are not proprietary - the discipline of measuring all seven consistently, against your real logs and your real citation data, is what makes the audit worth the price.

Get The Score Applied To Your Site

FROM METHODOLOGY
TO YOUR DASHBOARD.

The audit takes the score from theory to your specific site. Your real logs, your real citation data, the same 7 dimensions, the same weights, ranked fix list at the end.