AI SEO
Chapter 04 / 08
Claude optimization
The AI engine with the most conservative source weighting and the most thoughtful citation behavior — what Claude reaches for, what it skips, and how to be in the set when the user asks the careful question.

Claude is the most conservative of the major AI engines. It cites less often than Perplexity, less liberally than ChatGPT, and with stricter source-quality preferences than Gemini. The flip side: when Claude does cite a source, the user reads it as a more confident endorsement than a citation in a less conservative engine. Optimizing for Claude is optimizing to be the source that the careful answer reaches for — which compounds when users start trusting Claude's recommendations as a category-curating signal.
“Optimizing for Claude is optimizing for citation quality over citation quantity. The engine cites fewer sources but the citations carry more weight with users who chose Claude precisely for its conservatism. Being in the set is harder; being out of the set is more costly.”
The source-quality bias
Claude's source preferences cluster around five patterns:
- Primary sources over secondary. A government health agency over a health blog. The original research paper over the press summary. The product's own documentation over a third-party tutorial.
- Documented facts over opinions. A site that says "the field of X has 12 named subdisciplines, listed by Y in 2024" outperforms a site that says "X is an exciting and growing field."
- Recent over old when recency matters. Claude reads dates aggressively. A 2026 article on a fast-moving topic gets weighted over a 2021 article even when the older article ranks higher organically.
- Established institutions over new sites. Universities, government agencies, major nonprofits, established publications. New domains have to earn their way in via citations and link profile.
- Balanced treatment over advocacy. Pages that present multiple sides of a debate get cited for that depth; pages that argue one side aggressively get cited less, even when factually correct.
Optimization for the training layer
The same training-data dynamics that apply to ChatGPT apply to Claude — but Claude's training appears to weight a smaller, higher-quality source set. The implication is that the entry bar is higher:
- Wikipedia presence is necessary, not sufficient. Claude reads Wikipedia, but it also weighs the primary sources Wikipedia cites. A brand in Wikipedia with strong primary-source citations outperforms one with only its own promotional materials cited.
- Published research and original data. Original studies, white papers, primary research that other publications cite — these compound into Claude's training weights more than commentary content.
- Established-publication bylines. Author bylines in major publications carry more weight than anonymous content. Author + credentials + publication is the strongest combination.
- Documentation as a category. Official product documentation, technical references, and standards bodies get retrieved heavily for relevant queries. A SaaS company with thorough public documentation is more findable in Claude than one without.
Optimization for the live-retrieval layer
When Claude's web search fires, it favors pages that look like primary references. The tactical signals:
- Author bylines with credentials. Visible author names, linked to author bio pages with verifiable credentials and external profiles.
- Date stamps. Both publish date and last-updated date, in schema and in visible UI. Undated content is read as outdated.
- Source citations within the content. Articles that cite their sources externally (and link to them) get pulled more often than articles that don't. Claude treats citation density as a quality signal.
- Structured data confirming claims. JSON-LD that matches the visible content reinforces credibility. Schema/content mismatch erodes it.
- Domain authority of established type. Educational (.edu), governmental (.gov), and major publication domains carry weight independent of standard SEO authority.
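The byline and date-stamp signals above can be expressed together in one JSON-LD block. A minimal sketch, with placeholder names, dates, and URLs, keeping the structured data aligned with what's visible on the page:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline matching the visible H1",
  "datePublished": "2026-01-15",
  "dateModified": "2026-03-02",
  "author": {
    "@type": "Person",
    "name": "Jane Example",
    "url": "https://example.com/authors/jane-example"
  }
}
```

The key discipline is the match: the dates and author shown in the UI and the dates and author in the schema must be the same values, or the mismatch erodes rather than reinforces credibility.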
The "answer with citation" structure
Claude prefers content that makes citable claims with source backing. The structure that gets rewarded:
- Open with a definitive answer, then back it up. ("X is true. According to Y (2026 study), the rate is Z%.")
- Use named entities (people, organizations, dated events) as anchors. Generic statements don't hook the engine.
- Show your sources. External citations to primary references signal that the page is itself a synthesis, not a primary opinion.
- Quantify when possible. "12% of users" beats "many users." Numbers are retrievable as discrete facts.
Schema specifically for Claude
Claude reads JSON-LD; the high-leverage types overlap with the other engines but the weighting differs:
- Article + author (Person) — the author entity matters more for Claude than for less-conservative engines. Person schema with sameAs to LinkedIn, Google Scholar, official bios.
- ScholarlyArticle — for research content; signals primary-source character.
- citation — the schema.org property for explicit references to other works the article cites.
- ClaimReview — schema.org's fact-check type, for content that includes fact-checked claims.
- Organization with foundingDate / employee credentials — establishes the org as an entity with real human authorities behind it.
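Putting those types together, a sketch of a research-content markup block — all names, URLs, and dates are placeholders, and the sameAs targets should point at real, verifiable external profiles:

```json
{
  "@context": "https://schema.org",
  "@type": "ScholarlyArticle",
  "headline": "Example study title",
  "datePublished": "2026-02-10",
  "author": {
    "@type": "Person",
    "name": "Jane Example",
    "sameAs": [
      "https://www.linkedin.com/in/jane-example",
      "https://scholar.google.com/citations?user=EXAMPLE"
    ]
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Research Org",
    "foundingDate": "2005-06-01"
  },
  "citation": [
    "https://example.org/primary-study",
    "https://example.gov/source-dataset"
  ]
}
```

The citation array and the Person sameAs links are the two fields doing the Claude-specific work: they make the primary-source chain and the author's verifiability machine-readable.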
What doesn't work
- Aggressive sales copy. Pages written as marketing rather than information get skipped; Claude tends to retrieve neutral, informational pages over advocacy pages.
- Unsourced statistics. Numbers without source attribution are read as unreliable and weighted down.
- Anonymous or pen-name content. Without a verifiable author, the trust signal is weak.
- AI-generated boilerplate. Detected, deweighted. The engine looks for human-edited, source-backed content.
- Claims without dates. Undated claims are treated as potentially stale.
Measuring Claude presence
Claude is harder to measure than the other engines because there's no SERP-like inspector and the API rate limits prevent the kind of high-volume sampling possible with ChatGPT. The pragmatic measurement approach:
- Curated prompt set. 25–50 high-priority prompts run weekly via the Anthropic API.
- Response logging. Capture full responses including citations.
- Mention tracking. Brand-name appearances over time.
- Citation set comparison. Which sources are cited alongside yours; which competitors are co-cited. The chapter on AI search measurement covers tooling.
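The mention-tracking and co-citation steps can be sketched in a few lines of Python. This assumes responses have already been collected and logged (e.g. via the Anthropic messages API from the weekly prompt run); the brand names and response texts below are placeholders:

```python
import re
from collections import Counter

def count_mentions(response_text: str, brands: list[str]) -> Counter:
    """Count case-insensitive whole-word brand mentions in one logged response."""
    counts = Counter()
    for brand in brands:
        pattern = re.compile(r"\b" + re.escape(brand) + r"\b", re.IGNORECASE)
        counts[brand] = len(pattern.findall(response_text))
    return counts

def co_citation_sets(responses: list[str], brands: list[str]) -> list[set[str]]:
    """For each response, the set of brands mentioned at least once."""
    return [
        {b for b, n in count_mentions(text, brands).items() if n > 0}
        for text in responses
    ]

# Placeholder logged responses from a weekly prompt run
brands = ["AcmeCo", "WidgetWorks"]
responses = [
    "AcmeCo and WidgetWorks both offer this; AcmeCo documents it better.",
    "WidgetWorks is the most cited reference here.",
]
print(count_mentions(responses[0], brands))
print(co_citation_sets(responses, brands))
```

Run weekly against the same curated prompt set, the co-citation sets show which competitors Claude places you alongside, and the mention counts give a trend line even without a SERP-style inspector.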
Claude rewards depth, primary sourcing, and editorial discipline. The next chapter, Perplexity optimization, covers the engine that operates on the opposite end — always-cited, always-grounded, always pulling fresh content from the open web.
Common questions
Quick answers to what we get asked before every trial signup.
Does Claude browse the web for current information?
Yes, Claude has a web search and browse tool that's invoked when the query benefits from current information. The default mode is to answer from training knowledge with conservative confidence; when browsing fires, Claude pulls live pages and produces explicit citations. The browse behavior is more conservative than ChatGPT's — Claude tends to reach for fewer, higher-quality sources rather than synthesizing from many.