Why a framework?
Isolated tips ("add an llms.txt", "write FAQs") don't produce sustainable results. A framework organises actions into dimensions, enables prioritisation and allows measurement. LOOP plays that role by separating content, structure, technical and steering concerns.
Overview
- L — Legibility: can the machine extract your content without friction?
- O — Ontology: does the machine understand who you are and where your authority sits?
- O — Operations: can the machine access and crawl your site efficiently?
- P — Performance: how do you measure and iterate?
L — Legibility
Can the machine extract your content without friction?
- Semantic HTML structure (hierarchical H1/H2/H3, never decorative).
- Paragraphs of 3 to 6 lines, one idea per paragraph.
- Standalone sentences: every passage readable out of context.
- Clean tables, well-bounded lists.
- WCAG AA accessibility (contrast, focus, alt, landmarks).
- Definition of every technical term at first occurrence.
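The items above can be sketched as a minimal page skeleton. The headings, copy and file names here are illustrative only:

```html
<!-- Hierarchical headings, landmarks, alt text, short self-contained paragraphs -->
<main>
  <article>
    <h1>Answer engine optimisation (AEO): a practical guide</h1>
    <p>Answer engine optimisation (AEO) is the practice of making content easy
       for AI answer engines to extract and cite. The term is defined here, at
       its first occurrence.</p>
    <h2>How machines read this page</h2>
    <p>One idea per paragraph. Each sentence should remain readable when quoted
       out of context.</p>
    <img src="loop-cycle.png" alt="Diagram of the LOOP audit cycle">
  </article>
</main>
```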
O — Ontology
Does the machine understand who you are and where your authority sits?
- Brand entity named, disambiguated, bound to its domain.
- Topic cluster: one pillar, 5 to 10 interlinked satellites.
- Co-occurrence with sector markers (competitors, tools, methods).
- Inbound links from topically aligned sites.
- schema.org Organization + sameAs to official profiles.
- Cross-channel editorial consistency (site, LinkedIn, press).
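The schema.org Organization + sameAs item is typically implemented as a JSON-LD block in the page head. A minimal sketch; the name, domain and profile URLs are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Agency",
  "url": "https://www.example.com",
  "sameAs": [
    "https://www.linkedin.com/company/example-agency",
    "https://twitter.com/example_agency"
  ]
}
```

Embed it in a `<script type="application/ld+json">` tag and run it through the Schema Markup Validator before shipping.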
O — Operations
Can the machine access and crawl your site efficiently?
- Explicit robots.txt, with major AI bots allowed (or a deliberate, documented decision to block them).
- CDN / WAF / Cloudflare: AI bots not blocked by default rules.
- SSR / SSG: critical content delivered in the HTML without JavaScript execution.
- XML sitemap, absolute canonical, validated schemas.
- Core Web Vitals in the green.
- llms.txt and llms-full.txt published.
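The robots.txt arbitration can be as simple as listing the AI user agents explicitly. A sketch using user-agent tokens documented by OpenAI, Anthropic, Perplexity and Google; verify the current names before shipping, and the domain is a placeholder:

```txt
# Explicit allow-list for major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```

Swap `Allow` for `Disallow` where blocking is the deliberate choice; the point is that the decision is explicit rather than left to CDN defaults.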
P — Performance
How do you measure and iterate?
- Baseline over 10 to 20 strategic queries, multi-engine.
- Bi-monthly citation monitoring (third-party tool or manual).
- GSC tracking + AI Overviews report.
- Quarterly review of pillar pages, explicit refresh dates.
- Internal log of behavioural changes observed across engines.
- Simple reporting: positions, impressions, share of citations.
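A baseline and a share-of-citations figure need nothing more than a flat log of query runs. A minimal sketch; the queries, engine names and results below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class QueryResult:
    query: str         # one of the 10-20 strategic queries
    engine: str        # e.g. "chatgpt", "perplexity", "google-aio"
    brand_cited: bool  # was the brand cited in the answer?

def citation_share(results, engine=None):
    """Share of runs in which the brand was cited, overall or per engine."""
    rows = [r for r in results if engine is None or r.engine == engine]
    if not rows:
        return 0.0
    return sum(r.brand_cited for r in rows) / len(rows)

# Hypothetical baseline run, multi-engine
baseline = [
    QueryResult("best crm for smes", "chatgpt", True),
    QueryResult("best crm for smes", "perplexity", False),
    QueryResult("crm pricing comparison", "chatgpt", False),
    QueryResult("crm pricing comparison", "perplexity", True),
]
print(citation_share(baseline))                # overall share
print(citation_share(baseline, "perplexity"))  # per-engine share
```

Re-run the same queries at each monitoring interval and log the deltas; that log is the internal record of behavioural changes across engines.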
The loop: why "LOOP"
AI optimisation isn't a one-off project; it's a cycle. LOOP is designed as an operational loop:
- Baseline (P) — measure where you are.
- Audit (L, O, O) — assess each dimension using the checklist.
- Action — fix what's broken, enrich what's thin.
- Measurement (P) — retest, compare, log.
- Iteration — re-prioritise, loop.
Action prioritisation
Not all actions are equal. A simplified matrix, to be adapted to your own context:
| Action | Impact | Effort | Priority |
|---|---|---|---|
| Unblock AI bots at the WAF level | High | Low | P0 immediate |
| Add H2/H3 and rework paragraphs on a pillar page | High | Medium | P0 |
| schema.org Organization + sameAs | Medium | Low | P0 |
| Publish an llms.txt | Low | Low | P1 |
| Rewrite for standalone passages and sourcing | High | High | P1 ongoing |
| Migrate SPA to SSR | High | Very high | P2 if relevant |
| Tooled monitoring of AI citations | Medium | Medium | P1 after baseline |
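One way to make the matrix sortable is a tiny score where effort pushes an action down and impact pulls it up. The weights here are arbitrary, a sketch only:

```python
# Arbitrary scoring: lower score = do sooner
IMPACT = {"low": 1, "medium": 2, "high": 3}
EFFORT = {"low": 1, "medium": 2, "high": 3, "very high": 4}

def score(impact, effort):
    """Effort minus impact: high-impact, low-effort actions rank first."""
    return EFFORT[effort] - IMPACT[impact]

actions = [
    ("Unblock AI bots at the WAF level", "high", "low"),
    ("Publish an llms.txt", "low", "low"),
    ("Migrate SPA to SSR", "high", "very high"),
]
for name, impact, effort in sorted(actions, key=lambda a: score(a[1], a[2])):
    print(name)
```

On these three rows the ranking reproduces the table's P0 / P1 / P2 order.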
What LOOP is not
- It isn't a product. No automated audit, no SaaS.
- It isn't a closed proprietary framework. It's published and free to use with attribution.
- It isn't a promise of results. It's a steering structure.
How to use it
Three adoption levels:
- Solo — open the checklist, tick, prioritise. Half a morning is enough.
- Team — assign an owner per dimension (L, O, O, P) and hold a monthly review.
- Partner — work with a consultant who uses it as an intervention frame. Either way, the framework is free.