llmoptimisation.fr


How to optimise a site for AI engines

A six-step method, testable on any site, that covers the foundations shared by SEO, AEO and GEO, then adds the requirements specific to AI answer engines.

Updated: 14 April 2026 · 12 min read

Step 1 — Baseline: measure before optimising

Before any action, take an honest snapshot of your site's current visibility across AI surfaces. Without a baseline, no iteration is credible.

  1. Pick 10 to 20 strategic queries covering top, middle and bottom of funnel.
  2. Run each on ChatGPT, Perplexity, Gemini, Claude and Google (AI Overviews).
  3. Note for each query: sources cited, presence or absence of your brand, quality of the answer (accurate, biased, outdated).
  4. Log results in a dated table.

Complement with Google Search Console (AI Overviews report when available), and, if possible, a monitoring tool (Profound, Peec AI, Otterly, Scrunch, AthenaHQ).
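As a minimal sketch of step 4 of the baseline, the log can be a dated CSV with one row per (query, engine) pair; the queries, sources and field names below are placeholders to adapt:

```python
import csv
from datetime import date

# Hypothetical observations: one row per (query, engine) pair.
observations = [
    {"query": "best crm for smb", "engine": "ChatGPT",
     "brand_cited": False, "sources": "competitor.com; wikipedia.org",
     "answer_quality": "accurate"},
    {"query": "best crm for smb", "engine": "Perplexity",
     "brand_cited": True, "sources": "oursite.example; g2.com",
     "answer_quality": "outdated"},
]

# Date the file so successive snapshots can be compared over time.
filename = f"ai-baseline-{date.today().isoformat()}.csv"
with open(filename, "w", newline="") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=["query", "engine", "brand_cited", "sources", "answer_quality"],
    )
    writer.writeheader()
    writer.writerows(observations)

print(f"{len(observations)} observations logged to {filename}")
```

Re-running the same script at each iteration produces the dated series that makes later comparisons credible.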

Step 2 — Structure content for retrieval

AI engines split documents into chunks and index passages. Poorly structured content becomes invisible, even if good. Operational rules:

Minimal structure of a GEO-optimised page

  • A unique H1 carrying the primary query, clear and jargon-free.
  • H2s segmenting distinct intents, not verbose chapters.
  • Paragraphs of 3 to 6 lines, one idea per paragraph.
  • Standalone sentences: readable when extracted from context.
  • Numbered lists for steps, bulleted lists for enumerations.
  • Tables for comparisons — AI engines read clean HTML tables well.
  • A definition at the first occurrence of every technical term.
  • A summary up top, an actionable recap at the end.

See the dedicated page on content structure for LLMs.
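As a quick sanity check on the first rule (a unique H1), a short script using Python's standard-library `HTMLParser` can count headings in a page; the sample markup is illustrative only:

```python
from html.parser import HTMLParser

class StructureAudit(HTMLParser):
    """Counts H1s and H2s: a first sanity check for chunk-friendly structure."""
    def __init__(self):
        super().__init__()
        self.counts = {"h1": 0, "h2": 0}

    def handle_starttag(self, tag, attrs):
        if tag in self.counts:
            self.counts[tag] += 1

# Illustrative page fragment
page = """
<h1>How to optimise a site for AI engines</h1>
<h2>Measure before optimising</h2>
<h2>Structure content for retrieval</h2>
"""

audit = StructureAudit()
audit.feed(page)
assert audit.counts["h1"] == 1, "a GEO-optimised page should have exactly one H1"
print(audit.counts)  # {'h1': 1, 'h2': 2}
```

The same pattern extends to the other rules (paragraph length, table presence) without leaving the standard library.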

Step 3 — Consolidate semantic authority

LLMs reason in entities. To be cited on a topic, your site must be recognised as a legitimate source on that topic. Levers:

Step 4 — Open the site to AI bots (or arbitrate) and reinforce the technical layer

Crawlability is a necessary condition. Check:

  • Explicit robots.txt: GPTBot, PerplexityBot, ClaudeBot, Google-Extended, OAI-SearchBot, Applebot-Extended allowed (or not, by deliberate decision).
  • CDN / WAF / Cloudflare: verify AI bots aren’t blocked by default (the 2024 "block AI scrapers" rules are often on by mistake).
  • Rendering: prefer SSR / SSG; not all LLMs execute JS.
  • Performance: LCP < 2.5 s, CLS < 0.1, INP < 200 ms. AI bots cap wait times.
  • Clean XML sitemap, absolute canonical, validated schema.org.
  • HTTPS, HTTP/2 or HTTP/3, compression.
  • llms.txt: useful to surface key content, not a prerequisite.
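A robots.txt that deliberately opens the site to the bots listed above might look like the following sketch; the Disallow rule is a placeholder for whatever you intentionally keep private:

```
# Explicit opt-in for AI crawlers (adapt to your own policy)
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: Applebot-Extended
Allow: /

# Everyone else: normal rules
User-agent: *
Disallow: /admin/
```

Whatever the decision, the point is that it is explicit: an empty or missing rule leaves the outcome to each bot's defaults and to your CDN.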

Step 5 — Make content citable

A page can be well-indexed and never cited if it offers no extractable claim. LLMs prefer passages that stand alone and carry clear informational value.

  1. Sharp statements. Avoid "it may appear that", "some think that".
  2. Dated and sourced figures. A figure without a source is suspicious; a figure without a date ages badly.
  3. Primary-source citations. LLMs reward pages that cite other credible pages.
  4. Repeatable formats. Answer a question with a firm paragraph, followed by an example and a nuance.
  5. Identified author. A page with well-filled Organization and/or Person schema.org reinforces trust.
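Point 5 can be implemented as a JSON-LD block in the page head; this fragment combines Article, Person and Organization, and every name, date and URL in it is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to optimise a site for AI engines",
  "datePublished": "2026-04-14",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://www.example.com"
  }
}
```

Validate the result with a schema.org validator before shipping: a malformed block is worse than none.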

Step 6 — Measure, iterate, maintain

AI optimisation is a long-run discipline, not a sprint. Recommended cadence:

  • Weekly: review logs for AI user agents; fix technical errors.
  • Every two weeks: re-run the 20 strategic queries on the main AI engines.
  • Monthly: review tracked citations (third-party tool); prioritise pages to enrich.
  • Quarterly: dated refresh of each pillar page; full audit via the LOOP checklist.
  • Every six months: content strategy review; cluster adjustment.
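The weekly log review can start as simple substring matching on access logs; the log lines below are fabricated samples in common log format, and the bot list mirrors the one from step 4:

```python
import re
from collections import Counter

# AI crawler User-Agent substrings to watch for (extend as new bots appear)
AI_BOTS = ["GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended",
           "OAI-SearchBot", "Applebot-Extended"]

# Fabricated access-log lines in common log format
log_lines = [
    '203.0.113.5 - - [14/Apr/2026:10:00:01 +0000] "GET /guide HTTP/1.1" 200 5120 '
    '"-" "Mozilla/5.0; compatible; GPTBot/1.2; +https://openai.com/gptbot"',
    '198.51.100.7 - - [14/Apr/2026:10:00:09 +0000] "GET /pricing HTTP/1.1" 403 0 '
    '"-" "Mozilla/5.0 (compatible; PerplexityBot/1.0; +https://perplexity.ai/perplexitybot)"',
    '192.0.2.9 - - [14/Apr/2026:10:01:02 +0000] "GET / HTTP/1.1" 200 2048 '
    '"-" "Mozilla/5.0 (Windows NT 10.0)"',
]

hits = Counter()
errors = Counter()
status_re = re.compile(r'" (\d{3}) ')  # HTTP status after the quoted request
for line in log_lines:
    for bot in AI_BOTS:
        if bot in line:
            hits[bot] += 1
            m = status_re.search(line)
            if m and m.group(1).startswith(("4", "5")):
                errors[bot] += 1  # a blocked or failing AI crawl to investigate

print(dict(hits))    # which AI bots are crawling
print(dict(errors))  # which of them hit errors (e.g. WAF blocks)
```

A 4xx or 5xx against a known AI bot is exactly the "blocked by default" failure mode step 4 warns about, so errors here feed directly into the weekly fix list.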

Where to start, concretely

If you can only do one thing: audit a pillar page (your strongest page on a strategic topic) with the LOOP checklist, fix it in half a day, and re-test on AI engines 3 weeks later. That's the test with the highest learning per hour invested.
