Open Source: XanLens is now open source — self-host with your own API keys.

Audit Engine

The audit engine queries multiple AI providers with roughly 132 prompts spanning branded, discovery, and comparative queries.

Live Engines

  • Gemini (Google) — knowledge queries via Gemini API
  • Grok (xAI) — knowledge queries via Grok API
  • DeepSeek — knowledge queries via DeepSeek API
  • Gemini Grounded — discoverability analysis with Google Search grounding
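The same prompt is sent to every live engine so that scores are comparable across providers. A minimal sketch of that fan-out, using a thread pool and stub query functions in place of the real API clients (the registry names match the engines above; everything else is illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical engine registry: the lambdas are stubs standing in for
# real Gemini / Grok / DeepSeek API calls.
ENGINES = {
    "gemini": lambda prompt: f"[gemini] {prompt}",
    "grok": lambda prompt: f"[grok] {prompt}",
    "deepseek": lambda prompt: f"[deepseek] {prompt}",
    "gemini-grounded": lambda prompt: f"[gemini-grounded] {prompt}",
}

def fan_out(prompt: str) -> dict[str, str]:
    """Send one prompt to every live engine in parallel and collect replies."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in ENGINES.items()}
        return {name: fut.result() for name, fut in futures.items()}

results = fan_out("What is XanLens?")
```

Running each prompt against all engines in parallel keeps a ~132-prompt audit fast, since wall-clock time is bounded by the slowest provider rather than the sum of all calls.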

Coming Online

  • ChatGPT (OpenAI) — via OpenAI API
  • Claude (Anthropic) — via Claude API
  • Perplexity — via Perplexity API
  • Meta AI (Llama) — via NVIDIA NIM

All queries run at low temperature for reproducible results. Each engine receives multiple prompt types: branded, discovery, and comparative. An LLM judge (Gemini) validates every response, detecting hallucinations and verifying that the response is about the correct brand.
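The judge step can be reduced to two pieces: a prompt asking the judge model for a structured verdict, and a strict parser that treats malformed output as a failed validation. A sketch under assumed conventions (the prompt wording and JSON schema are illustrative, not XanLens's actual judge prompt):

```python
import json

def build_judge_prompt(brand: str, engine_response: str) -> str:
    """Ask the judge model (Gemini, per the audit design) to validate
    an engine response. Wording here is illustrative."""
    return (
        f"You are a strict auditor. Target brand: {brand}\n"
        f"Engine response:\n{engine_response}\n\n"
        'Reply with JSON only: {"correct_brand": bool, "hallucination": bool}'
    )

def parse_verdict(judge_reply: str) -> dict:
    """Parse the judge's JSON verdict; malformed output counts as a failure."""
    try:
        verdict = json.loads(judge_reply)
    except json.JSONDecodeError:
        return {"correct_brand": False, "hallucination": True}
    return {
        "correct_brand": bool(verdict.get("correct_brand", False)),
        "hallucination": bool(verdict.get("hallucination", True)),
    }
```

Failing closed (a response that can't be parsed is scored as a hallucination) keeps a flaky judge from silently inflating a brand's visibility score.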

Prompt Design

Branded prompts ask engines directly about the target brand. Discovery prompts ask for recommendations without mentioning the brand. Comparative prompts test head-to-head against competitors. The combination reveals:

  • Whether engines know the brand exists
  • Whether engines recommend the brand organically
  • How the brand is described (sentiment, accuracy)
  • Which competitors appear instead
  • What sources engines cite
  • How visibility varies across query types
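The three prompt types above can be generated from a small set of templates parameterized by brand, category, and competitor list. A minimal sketch (these templates are illustrative, not XanLens's actual wording):

```python
def build_prompts(brand: str, category: str, competitors: list[str]) -> dict[str, list[str]]:
    """Generate branded, discovery, and comparative prompts for one audit."""
    # Branded: mention the brand directly.
    branded = [f"What is {brand}?", f"What does {brand} do?"]
    # Discovery: ask for recommendations without naming the brand.
    discovery = [
        f"Recommend the best {category} tools.",
        f"Which {category} product should I use?",
    ]
    # Comparative: head-to-head against each competitor.
    comparative = [f"Compare {brand} vs {c} for {category}." for c in competitors]
    return {"branded": branded, "discovery": discovery, "comparative": comparative}

prompts = build_prompts("XanLens", "AI visibility audit", ["RivalA", "RivalB"])
```

The key invariant is that discovery prompts never contain the brand name, so any mention of the brand in those responses is an organic recommendation rather than an echo of the question.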

Additional Analysis

Beyond AI engine queries, each audit includes parallel technical checks:

  • SEO vs GEO comparison — Tavily web search as SEO proxy vs AI engine score
  • AI crawler access — robots.txt analysis for 13 AI crawlers + llms.txt check
  • Technical health — PageSpeed Insights (performance, SEO, accessibility)
  • Authority sources — Wikipedia, Crunchbase, GitHub, LinkedIn presence
  • Search demand — Google Autocomplete suggestions + People Also Ask
  • Content AI-friendliness — On-page analysis for headings, FAQ, schema, entities
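The AI crawler check boils down to parsing the site's robots.txt and asking, per crawler user agent, whether fetching is allowed. A sketch using Python's standard `urllib.robotparser` with an inline robots.txt (the audit checks 13 crawlers; this four-agent list is an illustrative subset):

```python
from urllib.robotparser import RobotFileParser

# Illustrative subset of AI crawler user agents.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "Google-Extended", "PerplexityBot"]

def crawler_access(robots_txt: str, url: str = "https://example.com/") -> dict[str, bool]:
    """Report which AI crawlers this robots.txt allows to fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, url) for agent in AI_CRAWLERS}

robots = """User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
access = crawler_access(robots)  # GPTBot blocked, the rest fall through to *
```

In a real audit the robots.txt would be fetched from the site (and a companion request made for `/llms.txt`), but the parsing and per-agent verdict work exactly as above.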