THE PROMISE THAT SHOULD WORRY YOU.
AI visibility consultants increasingly offer outcome guarantees: "we will get you cited in ChatGPT", "30% citation share in 90 days", "top 3 in AI Overviews for X queries." The promises are concrete, the contracts are signed, the invoices get paid.
The promises are also mechanically dishonest. AI engines are opaque. The vendor cannot reliably control what gets cited; nobody can. Anyone telling you otherwise is selling certainty they do not have.
OPACITY.
AI citation outcomes depend on five mechanisms (article N. 05) interacting across four surfaces. Each surface has its own ranking signals, training data, summarisation algorithms, and shifting model versions. The vendor optimising your site has zero visibility into how the engine actually decides what to cite.
What the vendor CAN do: improve the input dimensions (Bing index, server-rendered HTML, entity grounding, freshness signals, topical specificity). What the vendor CANNOT do: guarantee that improving the inputs will produce a specific citation outcome. The output is downstream of mechanisms the vendor does not control.
INPUT DIMENSIONS.
The seven dimensions of the AI Visibility Score (article N. 03 - schema, RSS, HTML, bot accessibility, freshness, linking, citation presence) are all measurable and most are directly controllable. A vendor can guarantee:
- Your robots.txt allows the right bots.
- Your schema is correctly implemented and server-rendered.
- Your RSS feed is enriched and validates.
- Your sitemap is accurate.
- Your content cadence meets the freshness threshold.
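The first item on that list is directly checkable. As a minimal sketch, Python's standard `urllib.robotparser` can verify which AI crawlers a given robots.txt admits. The bot list here is illustrative, not exhaustive, and the sample robots.txt is hypothetical:

```python
import urllib.robotparser

# Hypothetical robots.txt: admits most AI crawlers, blocks one.
ROBOTS_TXT = """\
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

# Illustrative AI crawler user agents; the real set changes over time.
AI_BOTS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot", "Bingbot"]

def check_bot_access(robots_txt: str, bots: list[str], path: str = "/") -> dict[str, bool]:
    """Return, per bot, whether this robots.txt allows fetching `path`."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, path) for bot in bots}

access = check_bot_access(ROBOTS_TXT, AI_BOTS)
# GPTBot is allowed, PerplexityBot is blocked, the rest fall through to "*".
```

This is exactly the kind of deliverable a vendor can guarantee: a binary, reproducible check the client can rerun on their own site.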
OUTPUT.
Whether ChatGPT cites you for a specific query depends on factors the vendor cannot move:
- Whether your site is in Bing's index AT THE TIME of the query (which depends on Bing's crawl cycle, not your vendor).
- Whether OpenAI's current model weights surface your domain for that query.
- Whether the user's session has any personalisation in play.
- Whether the engine version that runs the query is the same one that ran it last week.
- Whether competitors with stronger entity grounding (an established Wikipedia entry, for example) get the slot first.
THE HONEST FRAMING.
What I sell: a measurable improvement to the seven dimensions of the AI Visibility Score, validated against real bot logs and citation tracking. The expected outcome is increased citation likelihood across all four surfaces, but I do not commit to specific citations or specific share percentages.
What I do guarantee: the methodology will be applied correctly, the deliverables will be specific enough to hand to a developer, and if any number in the audit does not line up with what you can verify on your own site, the engagement is refunded.
This is what an honest engagement looks like. Outcome guarantees in opaque systems are signs of a vendor who is either confused about what they can deliver or willing to misrepresent their capabilities.
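The "real bot logs" validation above can itself be sketched in a few lines. This assumes standard web-server access logs and matches on user-agent substrings; the tokens and sample lines are illustrative, not a complete crawler inventory:

```python
from collections import Counter

# Illustrative AI crawler user-agent tokens; real lists shift over time.
AI_BOT_TOKENS = ["GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot", "Bingbot"]

def count_ai_bot_hits(log_lines: list[str]) -> Counter:
    """Count access-log lines per AI crawler, matched case-insensitively."""
    hits = Counter()
    for line in log_lines:
        lower = line.lower()
        for token in AI_BOT_TOKENS:
            if token.lower() in lower:
                hits[token] += 1
    return hits

# Hypothetical combined-log-format lines for demonstration.
SAMPLE_LOGS = [
    '203.0.113.7 - - [10/May/2025:10:00:01 +0000] "GET /blog/post HTTP/1.1" 200 5120 "-" "Mozilla/5.0; compatible; GPTBot/1.2; +https://openai.com/gptbot"',
    '198.51.100.4 - - [10/May/2025:10:00:09 +0000] "GET /about HTTP/1.1" 200 2048 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"',
    '203.0.113.9 - - [10/May/2025:10:01:15 +0000] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (Windows NT 10.0) Chrome/124.0"',
]

hits = count_ai_bot_hits(SAMPLE_LOGS)
```

A tally like this proves the bots are actually fetching your pages, which is an input fact, not a citation outcome; user-agent strings can also be spoofed, so serious verification adds reverse-DNS or published-IP-range checks.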
WHAT TO ASK VENDORS WHO GUARANTEE.
If you are evaluating a vendor offering outcome guarantees:
- Ask them to explain the mechanism by which they will deliver the guarantee.
- Ask what they consider the failure mode (refund? rework? walk-away clause?).
- Ask how they measure the outcome (their own tracking? a third-party tool? screenshots?).
- If the answers involve hand-waving ("our proprietary methodology"), the guarantee is theatre.
THE BOTTOM LINE.
AI citation outcomes are not guaranteeable. The mechanism does not exist. The honest engagement is to commit to input improvements, measure them, and let the citation outcomes emerge from the input quality. If you are working with a vendor who guarantees outcomes, you are paying for a promise the vendor structurally cannot keep. That is not a marketing problem; it is an integrity problem.