
I Audited My Own SEO Site for AI Visibility — Here’s What I Found (And What’s Broken)

Most “AI SEO” content right now is theory. Agencies are selling six-figure GEO retainers without ever publishing a single end-to-end audit of a real site — including their own. So I ran the full AI visibility audit on seorevive.in — the site you’re reading this on — and I’m publishing what I found, broken parts and all.

The honest preamble

A few things to get out of the way before the findings: this is a self-audit, so there’s an obvious incentive to grade generously, and I’ve tried to score against the same rubric I use on client sites. The domain is brand new, which colours the off-site findings heavily. And the live citation testing in Phase 4 hasn’t run yet; baseline testing happens this week, before any fixes ship.

The 4-phase framework I use

Every AI visibility audit I run covers four phases. The first three are on-site and can be done from the URL alone. The fourth requires browsers or APIs.

  1. Schema audit — what structured data exists, what’s missing, what’s broken
  2. Content quotability audit — which pages have liftable content that LLMs prefer
  3. Author authority audit — does the brand have a coherent entity graph
  4. Live LLM citation testing — query the AI engines, see who’s cited

I score each dimension on a 0–10 scale. Here’s where seorevive.in landed:

Dimension | Score | What it means
Schema markup depth | 8/10 | Strong, but missing HowTo, WebSite, SpeakableSpecification
Content quotability | 7/10 | HCU post is genuinely citable. Others thinner.
Author authority signals | 4/10 | The biggest gap. sameAs has only LinkedIn.
Entity authority (off-site) | 2/10 | Zero directory listings, zero brand mentions.
Citation hooks | 6/10 | Present on HCU post, sparse elsewhere.
Technical AI accessibility | 9/10 | AI bot allowlist, llms.txt, sitemap clean.
First-party data / research | 3/10 | Claims of “100+ campaigns” but no detailed case studies.

Overall AI visibility readiness: 6.5 / 10.

That’s a real, measured score. Not a sales pitch. The headline finding is uncomfortable: I built a technically excellent site, but it has almost no signal to AI engines that it exists as an entity worth citing. Let me explain.

Finding 1: The schema is strong (and most sites get this wrong)

When LLMs decide what to quote, they prefer extracting from pages with rich structured data. Not just “have a schema” — but the right schema for the content type.

What I had on seorevive.in was already reasonably deep; the 8/10 above reflects that, and the Person markup in particular is rich (more on that in Finding 3).

What was missing: HowTo on the recovery playbook, WebSite on the homepage, SpeakableSpecification on the FAQ, and ImageObject on the embedded charts.

These are 10–30 minute fixes per item. I’m shipping them this week.

The lesson for any site: rich schema is necessary but not sufficient. And the missing schema types matter more than you’d expect — HowTo specifically is a citation magnet for “how do I X” queries, which is a huge slice of what people ask LLMs.
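To make the HowTo point concrete, here’s a minimal sketch of what that markup could look like on the recovery playbook. The step names and text below are placeholders for illustration, not the actual steps from the post:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "HowTo",
    "name": "How to recover from Google's Helpful Content Update",
    "step": [
      {
        "@type": "HowToStep",
        "position": 1,
        "name": "Inventory the affected content",
        "text": "List every indexed URL and classify it as helpful, thin, or redundant."
      },
      {
        "@type": "HowToStep",
        "position": 2,
        "name": "Prune or consolidate the thin pages",
        "text": "Remove, merge, or rewrite anything that exists only to rank."
      }
    ]
  }
  </script>

That’s paste-and-adapt level effort, which is why it sits at 10 minutes in the fix queue further down.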

Finding 2: The flagship content is citation-ready

Of the 7 blog posts on the site, the HCU recovery playbook scores highest on quotability — 8/10. Here’s why it works for AI citation:

  1. It cites specific dates with sources. March 2024 (when HCU was folded into the core algorithm). May 2024 (the API leak). August 15, 2024 and December 12, 2024 (specific core updates). LLMs love dates because they reduce hallucination risk when re-quoting.
  2. It names specific Google attributes from the leak. siteAuthority, siteFocusScore, hostAge, contentEffort — these are unique enough strings that an LLM citing them needs a source.
  3. The “In plain English” callout box is structured for lifting. Four definitive bullets, each a self-contained statement. That’s exactly the pattern AI extractors prefer.
  4. The FAQ has six fully self-contained Q&As. Each answer stands alone. An LLM can quote one without needing surrounding context.
  5. The lead has a contrarian thesis. “Most recovery advice you’ll find online was written before that change” is a strong opinion claim. LLMs surface contrarian positions more than safe ones because they’re more informative.

The other six posts? They’re decent traditional SEO content. But for AI citation specifically, they need work — fewer definitive statements, fewer stat anchors, no “in plain English” structures, weaker FAQs. That’s the next 60 days of content upgrades for me.

If you’re auditing your own content for AI citation, here’s the test: pick one paragraph at random. Could an LLM quote it word-for-word and have it stand alone? If yes, that paragraph is citable. If no — if it depends on surrounding context or hedges every claim — rewrite it.

Finding 3: The author entity is the rate-limiting step

Here’s where it got uncomfortable.

My Person schema is rich. knowsAbout has 13 SEO sub-topics. knowsLanguage covers English, Hindi, Assamese. workLocation spans India, US, Canada. I look like a credible expert on paper.

But my sameAs array has exactly one entry: LinkedIn.

That’s the entity authority gap. AI engines build their understanding of “is this person a real expert” by following the sameAs graph — the cross-references that prove identity continuity across the web. With one entry, I look like a brand-new persona with no track record, regardless of whether I have 8 years of real experience.

What I should have (and don’t): a Twitter/X profile, a Medium byline, a Crunchbase entry, active Reddit and Quora profiles, guest bylines, podcast appearance pages. Independent surfaces that all point back to the same person.

This is the single biggest deficit on the site. Without external profiles confirming my identity, the rich Person schema is hanging in the air with nothing to anchor to.

If you have a personal brand inside your SEO or marketing site, audit your sameAs graph right now. Count the entries. If it’s under 5, you’re invisible to AI engines as a real person.
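For reference, here’s a rough sketch of the shape the expanded markup should take. Every handle and URL below is a placeholder, and the specific profiles (Twitter/X, Medium, Crunchbase) mirror fix #3 in the queue below; swap in whatever surfaces actually host your identity:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Your Name",
    "url": "https://example.com/about",
    "sameAs": [
      "https://www.linkedin.com/in/your-handle",
      "https://x.com/your-handle",
      "https://medium.com/@your-handle",
      "https://www.crunchbase.com/person/your-name",
      "https://www.quora.com/profile/Your-Name"
    ]
  }
  </script>

The markup is the easy part. Every URL in that array has to resolve to a real, maintained profile; an entry that 404s or sits empty confirms nothing.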

Finding 4: Off-site is the real bottleneck

This was the hardest finding to write honestly.

I scored 2/10 on off-site entity authority. The site itself is technically excellent, but it’s also brand new, and off-site it has essentially nothing: zero directory listings, zero third-party brand mentions, no Reddit or Quora footprint, no external bylines, no podcast appearances.

When ChatGPT decides what to cite for “best SEO agency in India for small business,” it pulls from sources it knows. My site, despite being technically sophisticated, doesn’t yet appear in those sources. The entity isn’t recognised yet.

This is the unsexy truth about AI visibility: 70% of the work is on-site, but 100% of the citation happens off-site. You can have perfect schema and never be cited if your brand isn’t distributed across the surfaces LLMs trust.

This isn’t unique to my site. It’s the default state of every new domain. The fix is the slow work: directory submissions, real participation on Reddit and Quora, guest posts on top-tier publications, podcast appearances. Months of grinding. There’s no schema-only shortcut.

What I’m shipping in the next 30 days

I made a prioritised list of 12 fixes. Here’s the queue, with effort estimates:

# | Fix | Effort | Expected impact
1 | Add HowTo schema to HCU recovery post | 10 min | High
2 | Add WebSite schema to homepage | 5 min | Medium
3 | Expand author sameAs graph (Twitter, Medium, Crunchbase) | 90 min | High
4 | Add quotable founder-bio paragraph to /about body | 15 min | Medium
5 | Add SpeakableSpecification to HCU FAQ | 10 min | Medium
6 | Add ImageObject metadata to embedded charts | 20 min | Medium
7 | Submit to Clutch + GoodFirms + Sortlist | 90 min | High
8 | Add 2–3 anonymised case studies with real numbers to /results | 4–6 hrs | High
9 | Add “TL;DR” + “Cite this article” structure to all posts | 1 hr | Medium
10 | Pitch one guest post on a tier-1 SEO publication | 8–15 hrs | Very high
11 | Start LinkedIn cadence (4–5 posts/week) | 2–3 hrs/wk | Compounding
12 | Build first small original research piece | 10–20 hrs | Very high

The first six are Tier 1 — all under 3 hours of total work and shipping this week.
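To show how small the Tier 1 items are, here’s roughly what fix #5 could look like. The URL slug and CSS selectors are stand-ins, not my real paths; point cssSelector at whatever elements actually wrap your FAQ questions and answers:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "HCU Recovery Playbook",
    "url": "https://seorevive.in/blog/your-post-slug",
    "speakable": {
      "@type": "SpeakableSpecification",
      "cssSelector": [".faq-question", ".faq-answer"]
    }
  }
  </script>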

What this means for your site

If you want to run this audit on your own site, here’s the framework:

Phase 1: Schema audit (45 minutes)

Open your homepage’s HTML source. Search for application/ld+json. List every schema type present. Check the schema.org documentation for what’s missing given your site type. Score 0–10.
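If you’re not sure what you’re looking for, each block you find will look something like this (a bare-bones WebSite schema, the same type I’m adding to my own homepage; the name and URL here are placeholders):

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "WebSite",
    "name": "Your Brand",
    "url": "https://www.example.com/"
  }
  </script>

Collect the @type value from every block you find; that list, checked against what schema.org recommends for your site type, is the whole Phase 1 inventory.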

Phase 2: Content quotability (1–2 hours)

Pick your top 5 pages by traffic. For each, count definitive statements, stat anchors, FAQ blocks, comparison tables. If they’re sparse, your content is decorative — pretty but not liftable.

Phase 3: Author authority (30 minutes)

If your site has named authors, find their Person schema. Count the sameAs entries. Under 5? You’re invisible as a person. Then check whether they have bylines on any cited publication.

Phase 4: Off-site reality check (15 minutes)

Open ChatGPT or Perplexity. Type “best [your category] in [your geo].” Are you mentioned? Type “[your brand] review.” What comes back? If you’re not in the results, your entity isn’t yet recognised.

That’s the whole methodology. Took me about 4 hours total on seorevive.in, including writeup.

What I’m not promising

A few honest disclaimers, since this is meant to be a practitioner audit and not a sales pitch: I’m not promising these fixes will produce citations on any particular timeline (schema and content changes take months to show up in AI outputs, and off-site work takes longer). I’m not promising the follow-up measurement will be clean; citation tracking is still noisy. And a 6.5/10 today doesn’t guarantee the number moves by the next audit.

What’s next

I’ll publish the follow-up — “60 Days After: Did the AI Citation Fixes Move Anything?” — once the changes have been live for two months. Baseline citation testing happens this week, before any fixes ship. That way the delta is measurable.

If you want a similar audit on your own site, you can reach out. I’m running a small number of free audits in May and June in exchange for permission to publish anonymised findings as a case study.

Frequently asked questions

What is an AI visibility audit?

An AI visibility audit assesses how well a website is positioned to be cited by AI search engines (ChatGPT, Claude, Perplexity, Google AI Overviews, Bing Copilot). It examines on-site signals (schema markup, content quotability, author authority) and off-site signals (directory listings, brand mentions, external bylines, social profiles). The output is a scorecard, prioritised fixes, and a tracking plan for monthly re-testing.

How is AI SEO different from traditional SEO?

Traditional SEO optimises for Google’s organic rankings. AI SEO optimises for being cited by AI engines when users ask questions. The underlying work overlaps about 70% — schema, content quality, author authority, technical health all matter for both. The other 30% is citation-specific: structured data that LLMs can extract verbatim, entity authority signals, brand distribution across cited surfaces (Reddit, Quora, directories), and content patterns LLMs prefer (definitive statements, stat anchors, comparison tables).

How long does an AI visibility audit take?

A complete audit covering all four phases takes 4–6 hours. The on-site portion (schema, content, author authority) can be done from the URL alone in 3 hours. Live LLM citation testing across five engines adds 1–2 hours of structured manual testing. For a paid client engagement, total turnaround including writeup and walkthrough is typically 21 days.

How long until AI citation fixes show results?

Schema and content changes typically begin reflecting in AI engine outputs within 60–180 days. Off-site changes (new directory listings, guest posts, brand mentions) take longer — usually 90–365 days for the entity authority to compound. AI engines update their training data and indices on slow cycles, so don’t expect overnight changes regardless of how aggressive the work is.

Should I track AI citations with a paid tool?

Not yet, as of May 2026. The tools that promise daily AI citation tracking (Profound, Otterly, and similar) are still too noisy to trust: caching, regional variation, and prompt sensitivity all skew daily readings. A manual monthly run of 20–30 queries across five engines, captured in a Google Sheet, gives more accurate data than any paid tracker right now. Revisit in 6–12 months as the tooling matures.

Is “AI SEO” a permanent category or a buzzword?

It’s a temporary positioning lever. Every shiny SEO category — Mobile SEO, Voice Search SEO, E-A-T optimisation — gets absorbed back into “SEO” within 24–36 months. AI SEO will follow the same pattern. The agencies that bet their entire identity on it will rebrand by 2027–28. The right play is to use AI SEO as a positioning differentiator now while building durable underlying SEO capability.

Want a similar audit on your site?

Free AI visibility audit in May–June 2026, in exchange for permission to publish anonymised findings. 30-minute call to scope it.

Book Free Audit →