Open Source: XanLens is now open source. Self-host with your own API keys. View on GitHub →

[ Methodology ]

How XanLens works

GEO is probabilistic, not deterministic. Ask AI the same question 10 times, get 10 different answers. We measure how AI engines see your brand — not how Google ranks your website. Different problem, different methodology.

What we measure

When someone asks Gemini, Grok, ChatGPT, or any AI engine for a recommendation in your space, does your brand come up? That's what we test.

Brand recognition

Does the AI know your brand exists? Can it describe what you do accurately, or does it hallucinate features you don't have?

Category visibility

When users ask for "the best tools" in your industry, does your brand make the list? Or do only your competitors show up?

Recommendation strength

Being mentioned is one thing. Being recommended with positive sentiment and accurate details is another. We measure both.

Competitive positioning

How does your brand stack up against competitors in AI responses? Who gets mentioned first? Who gets recommended more often?

How we score

The GEO score (0–100) reflects your real visibility across AI engines, computed by a multi-factor scoring algorithm that combines brand knowledge, discoverability, and citation analysis across multiple query types.

Multi-engine validation

Each audit queries multiple AI engines with ~132 prompts across different intent types. Our LLM judge (Gemini) validates every response for accuracy, relevance, and recommendation strength to build a complete visibility picture.
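The audit loop described above can be sketched as nested iteration over engines and prompts. The `query_engine` and `judge` callables below are placeholders for illustration, not XanLens's real interfaces:

```python
# Illustrative audit loop. `query_engine` and `judge` are placeholder
# callables standing in for real engine clients and the LLM judge.

ENGINES = ["gemini", "grok", "deepseek"]  # currently live engines

def run_audit(prompts, query_engine, judge):
    """Query every live engine with every prompt and judge each response."""
    results = []
    for engine in ENGINES:
        for prompt in prompts:
            response = query_engine(engine, prompt)
            # The judge evaluates accuracy, relevance, and recommendation strength.
            verdict = judge(response)
            results.append({
                "engine": engine,
                "prompt": prompt,
                "response": response,
                "verdict": verdict,
            })
    return results
```

In the real audit, `prompts` would hold the ~132 intent-typed prompts and the judge would return a structured verdict rather than a single value.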

Knowledge + discoverability + citations

The scoring algorithm combines how well AI engines know your brand, how often they recommend it in discovery scenarios, and the quality of sources they cite. All three components contribute to your overall score.
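As a rough sketch, the three components could combine as a weighted sum. The weights below are illustrative assumptions; only the three-component structure comes from the methodology described above:

```python
def geo_score(knowledge: float, discoverability: float, citations: float,
              weights: tuple = (0.4, 0.4, 0.2)) -> float:
    """Blend three 0-100 component scores into one 0-100 GEO score.

    The weights are hypothetical, not XanLens's actual coefficients.
    """
    components = (knowledge, discoverability, citations)
    score = sum(w * c for w, c in zip(weights, components))
    # Clamp to the 0-100 range before rounding.
    return round(min(max(score, 0.0), 100.0), 1)
```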

Accuracy validation

Our LLM judge detects hallucinations and factual errors in AI responses about your brand. Incorrect information is penalized in scoring, so the measurement reflects genuine, accurate visibility.
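One way to express such a penalty is to deduct points in proportion to the share of flagged responses. The `max_penalty` constant here is an illustrative assumption, not a XanLens value:

```python
def penalize_inaccuracies(raw_score: float, flagged: int, total: int,
                          max_penalty: float = 30.0) -> float:
    """Deduct points in proportion to the share of responses the judge
    flagged as hallucinated or factually wrong.

    `max_penalty` is a hypothetical cap on the deduction.
    """
    if total == 0:
        return raw_score
    penalty = max_penalty * (flagged / total)
    return max(raw_score - penalty, 0.0)
```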

Trend tracking

Consistent scoring methodology enables reliable tracking of visibility changes over time, so you can confidently monitor improvements after implementing GEO optimizations.

GEO score vs Web Presence

We separate AI visibility from web visibility because they measure different things. A brand can rank #1 on Google and still be invisible to ChatGPT.

GEO Score

How AI engines respond when asked about your brand or category. Direct measurement of AI visibility.

Web Presence

How discoverable your brand is on the traditional web — the source material AI engines learn from and cite.

Grade scale

Most brands score below 40. That is not a bug — it is the current state of AI visibility. Most businesses have not optimized for how AI engines discover and recommend products.

A (80–100): Strong AI visibility. Consistently mentioned and recommended across query types.

B (60–79): Good visibility with gaps. Known to AI but missing from some query categories.

C (40–59): Moderate. AI has some awareness but rarely recommends. Competitors dominate.

D (20–39): Weak. Limited or outdated knowledge. Mentioned only in direct queries, if at all.

F (0–19): Invisible. AI engines either don't know the brand or actively recommend competitors instead.
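The grade boundaries above map directly to a simple lookup; this sketch mirrors the published scale:

```python
def grade(score: float) -> str:
    """Map a 0-100 GEO score to the letter grade scale above."""
    if score >= 80:
        return "A"
    if score >= 60:
        return "B"
    if score >= 40:
        return "C"
    if score >= 20:
        return "D"
    return "F"
```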

AI engines

Each AI engine processes information differently. What Gemini knows, ChatGPT might not. Multi-engine coverage gives you the full picture.

Gemini: Live
Grok: Live
DeepSeek: Live
ChatGPT: Coming Soon
Claude: Coming Soon
Perplexity: Coming Soon
Meta AI: Coming Soon