Technical SEO
Chapter 09 / 09
JavaScript SEO
Modern Google renders JavaScript — but how it renders, and with what limits, decides whether your SPA, your React shop, or your dynamic content gets indexed at all. The render-or-die rules for 2026.

JavaScript SEO is the discipline of making sure search engines and AI engines can see, render, and index pages that depend on JavaScript to display content. Modern Google can render most JavaScript — but how it renders, what it tolerates, and what AI engines do (which mostly don’t render at all) decides whether a JS-heavy site indexes well or silently bleeds traffic.
“Server-rendered HTML is the fastest path to indexing — by Google, by every AI engine, by every other crawler that exists. Client-side rendering works for Google, slowly. AI engine crawlers usually can’t render at all. The decision tree is: render server-side unless you have a specific reason not to.”
How Googlebot actually renders JavaScript
Google uses a two-pass indexing process for JS-rendered pages:
- Pass 1 — initial crawl. Googlebot fetches the raw HTML, extracts the URLs and any non-JS content. The page enters the indexing queue.
- Pass 2 — render. Hours to days later, Googlebot runs the page through a recent Chromium-based renderer (Web Rendering Service). The rendered DOM is what gets analysed for content, links, and signals.
Practical implications:
- Server-rendered pages get the full crawl-and-extract on pass 1. Faster to index.
- JS-rendered pages wait for pass 2 — typically hours, sometimes days. Slower to index.
- Pass 2 has resource limits — long render times, slow API calls, and third-party scripts can cause the render to fail or time out.
- Internal links discovered only via JS-rendered DOM aren’t found until pass 2; full crawl propagation takes longer.
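To make the pass-1 vs pass-2 difference concrete, here is a minimal React sketch (the component names and the /api/categories endpoint are hypothetical): the client-only version ships an empty nav in the raw HTML, so its links wait for pass 2; the server-rendered version puts the same links into the initial HTML, where they are discovered on pass 1.

```tsx
import { useEffect, useState } from "react";

type Category = { slug: string; name: string };

// Client-only: the raw HTML contains an empty <nav>; the links exist only in
// the rendered DOM, so Googlebot discovers them on pass 2 and non-rendering
// crawlers never see them.
export function CategoryNavClientOnly() {
  const [categories, setCategories] = useState<Category[]>([]);

  useEffect(() => {
    fetch("/api/categories") // hypothetical endpoint
      .then((res) => res.json())
      .then(setCategories);
  }, []);

  return (
    <nav>
      {categories.map((c) => (
        <a key={c.slug} href={`/category/${c.slug}`}>{c.name}</a>
      ))}
    </nav>
  );
}

// Server-rendered: the data is fetched on the server (getStaticProps,
// getServerSideProps, or a server component) and passed in as props, so the
// same links are already present in the initial HTML on pass 1.
export function CategoryNavServerRendered({ categories }: { categories: Category[] }) {
  return (
    <nav>
      {categories.map((c) => (
        <a key={c.slug} href={`/category/${c.slug}`}>{c.name}</a>
      ))}
    </nav>
  );
}
```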
The five rendering patterns — and what they cost
| Pattern | When HTML is built | SEO impact |
|---|---|---|
| SSG — Static Site Generation | At deploy time | Best — pre-built HTML, fastest TTFB, indexes immediately |
| ISR — Incremental Static Regeneration | Built at deploy + regenerated periodically | Excellent — fresh content with SSG-level SEO benefits |
| SSR — Server-Side Rendering | On each request, on the server | Excellent — full HTML on request, indexes on pass 1 |
| Edge SSR | On each request, at CDN edge | Excellent — SSR benefits + low TTFB globally |
| CSR — Client-Side Rendering | In the browser, after JS loads | Worst — empty initial HTML, depends on pass-2 render |
Modern frameworks (Next.js, Nuxt, Remix, Astro, SvelteKit) all support SSG, ISR, and SSR. The decision tree:
- SSG for content that doesn’t change between deploys — academy articles, marketing pages, long-form blog posts.
- ISR for content that updates frequently but doesn’t need real-time freshness — product catalogs, news sites with daily updates.
- SSR for personalised content, real-time data, anything that varies per request.
- Edge SSR for global sites where TTFB matters and the rendering can run in V8 isolates.
- CSR for authenticated app dashboards, internal tools, anything that doesn’t need to be indexed.
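As a rough illustration of how these choices are expressed in code, here is what the three server-rendered patterns look like as Next.js Pages Router data-fetching functions. Three separate page files are shown in one block; the route paths and the getArticle/getProduct/getOrdersForUser helpers are hypothetical.

```tsx
import type { GetStaticProps, GetServerSideProps } from "next";

// pages/academy/[slug].tsx
// SSG: HTML built once at deploy time; indexes on pass 1.
export const getStaticProps: GetStaticProps = async ({ params }) => {
  const article = await getArticle(String(params?.slug)); // hypothetical CMS helper
  return { props: { article } };
};

// pages/products/[slug].tsx
// ISR: static HTML, regenerated in the background at most once per hour.
export const getStaticProps: GetStaticProps = async ({ params }) => {
  const product = await getProduct(String(params?.slug)); // hypothetical catalog helper
  return { props: { product }, revalidate: 3600 };
};

// pages/account/orders.tsx
// SSR: HTML built on the server for every request (personalised content).
export const getServerSideProps: GetServerSideProps = async ({ req }) => {
  const orders = await getOrdersForUser(req); // hypothetical auth/data helper
  return { props: { orders } };
};
```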
The AI engine reality — most don’t render
New constraint in 2026 that didn’t exist in 2022: most AI engine crawlers don’t execute JavaScript.
| Crawler | JS rendering |
|---|---|
| Googlebot | Yes (Chromium-based, two-pass) |
| Bingbot | Yes (Chromium-based) |
| GPTBot (OpenAI) | Limited — fetches HTML, JS support evolving |
| ChatGPT-User (live browse) | Limited — uses Bing under the hood for some queries |
| ClaudeBot (Anthropic) | Limited — fetches HTML, JS support evolving |
| PerplexityBot | Limited — primarily HTML |
| Google-Extended (Gemini training) | Inherits Googlebot rendering |
| Common Crawl (CCBot) | No — HTML only |
The implication: a CSR site that ships empty HTML and builds content in the browser is invisible to most AI engines. Brand mentions, product comparisons, and informational content rendered only via JS won’t be cited in ChatGPT, Claude, or Perplexity answers. The fix is the same as for Google: server-render the content into the initial HTML.
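A quick way to test this is to fetch a page exactly the way a non-rendering crawler does (no JavaScript execution) and check whether the content you care about is in the raw HTML. A minimal sketch, runnable with Node 18+; the URL, phrase, and the simplified User-Agent token are placeholders:

```ts
// check-raw-html.ts: does the content exist without running any JavaScript?
const url = "https://www.example.com/page/";  // placeholder URL
const mustContain = "your key product claim"; // placeholder phrase

async function main() {
  const res = await fetch(url, {
    // Simplified UA token; real crawler User-Agent strings are longer.
    headers: { "User-Agent": "GPTBot" },
  });
  const html = await res.text();

  if (html.includes(mustContain)) {
    console.log("OK: phrase is in the raw HTML, visible without rendering.");
  } else {
    console.log("MISSING: phrase only appears after JS runs; invisible to non-rendering crawlers.");
  }
}

main();
```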
Common JavaScript-SEO failures
- Content rendered only after user interaction. Click-to-reveal sections, tabs, accordions that don’t hydrate without click — Googlebot may not interact, content is lost.
- Infinite scroll without history.pushState. Pages 2+ of an infinite-scroll feed have no unique URL — can’t be indexed individually (see the sketch after this list).
- API failures during render. If the page’s content comes from a backend API and that API is slow, down, or blocked, the render fails and Google indexes an empty page.
- JavaScript timeouts. Heavy hydration + slow third-party scripts can push render past Google’s budget. The rendered DOM is incomplete.
- Resources blocked by robots.txt. If JS bundles, CSS, or critical APIs are blocked from crawl, Google can’t fetch them and rendering fails.
- Content visible only in user-agent-conditional rendering. Cloaking-adjacent — bots see one thing, users another. Risky and detectable.
- Lazy-loaded content with no fallback. Below-the-fold content that requires scroll-triggered hydration may not load during render.
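For the infinite-scroll failure above, a minimal sketch of the history.pushState fix (the /api/posts endpoint and the appendPostsToFeed/renderFeedUpTo helpers are hypothetical; it also assumes /blog?page=N exists as a real, server-rendered URL serving the same items):

```ts
// Give every "page" of the infinite-scroll feed a real, indexable URL.
let currentPage = 1;

async function loadNextPage() {
  currentPage += 1;
  const res = await fetch(`/api/posts?page=${currentPage}`); // hypothetical API
  const posts = await res.json();
  appendPostsToFeed(posts); // hypothetical DOM helper

  // Update the address bar so this scroll depth has a crawlable, shareable URL.
  history.pushState({ page: currentPage }, "", `/blog?page=${currentPage}`);
}

// Keep the back button working: restore earlier pages on popstate.
window.addEventListener("popstate", (event) => {
  const page = (event.state as { page?: number } | null)?.page ?? 1;
  renderFeedUpTo(page); // hypothetical DOM helper
});
```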
Debugging JavaScript SEO
- 1. Search Console > URL Inspection > Test live URL. Compare “HTML” tab (rendered) vs “Page resources” tab (loaded resources). Missing content = render failure. Failed resources = crawl path issue.
- 2. View Source vs Inspect Element. View Source shows raw HTML; Inspect Element shows rendered DOM. The diff is your JS dependency surface — anything in Inspect but not View Source needs JS to render.
- 3. Curl with no JS execution. `curl -L https://www.example.com/page/` shows what non-rendering bots see. If your content isn’t there, crawlers that don’t execute JS can’t index it.
- 4. Chrome DevTools > Network > throttling = Slow 3G + CPU = 4× slowdown. Simulates Googlebot’s constrained render environment. Pages that fail here may fail in the real renderer.
- 5. Disable JavaScript in Chrome. See exactly what Googlebot would see on pass 1. Critical content should still be visible.
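The View Source vs Inspect Element diff can also be scripted. A rough sketch using Puppeteer to compare the raw HTML (what pass 1 and non-rendering bots see) with the Chromium-rendered DOM (closer to what pass 2 sees); it assumes Node 18+ and `npm install puppeteer`, with the URL and phrase as placeholders:

```ts
import puppeteer from "puppeteer";

const url = "https://www.example.com/page/"; // placeholder
const phrase = "your key content";           // placeholder

async function main() {
  // What non-rendering crawlers and pass 1 see: the raw HTML response.
  const raw = await (await fetch(url)).text();

  // What a Chromium-based renderer sees: the DOM after JS has executed.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const rendered = await page.content();
  await browser.close();

  console.log("In raw HTML:    ", raw.includes(phrase));
  console.log("In rendered DOM:", rendered.includes(phrase));
  // false / true means the content depends on JS: indexable by Google on
  // pass 2, but invisible to crawlers that never render.
}

main();
```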
Migration patterns
From CSR to SSR/SSG
For content sites currently on pure client-rendered SPAs, the migration to SSR or SSG is one of the highest-leverage SEO investments available. Modern frameworks make it incremental: page-by-page migration is possible without rewriting the entire app. Order of priority:
- Marketing pages (homepage, pricing, feature pages) — biggest commercial impact.
- Content pages (academy, blog, knowledge base) — biggest organic traffic surface.
- Category and listing pages — biggest crawl-budget impact.
- Product / detail pages — biggest scale, most duplication risk.
- Authenticated dashboards last — they don’t need to be indexed.
Hybrid rendering
Most sites end up with a hybrid: SSG for evergreen content, ISR for frequently-updated content, SSR for personalised pages, CSR for authenticated app surfaces. Modern frameworks handle this routing-level decision per page; the only thing to avoid is making the wrong default for content that needs to be indexed.
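In a framework like Next.js (App Router), that per-page decision is a one-line route segment config. A sketch, with illustrative route paths, of how a single app can mix the patterns:

```ts
// app/academy/[slug]/page.tsx
// Evergreen content: render statically at build time (SSG).
export const dynamic = "force-static";

// app/products/[slug]/page.tsx
// Catalog pages: static HTML, regenerated at most once per hour (ISR).
export const revalidate = 3600;

// app/search/page.tsx
// Personalised or real-time pages: render on every request (SSR).
export const dynamic = "force-dynamic";

// app/dashboard/*
// Authenticated app surface: client components behind a login; it does not
// need to be indexed, so CSR is fine here.
```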
The bottom line
Google can render JavaScript, but server-rendered HTML still indexes faster, more reliably, and is visible to all AI engines (most of which don’t render JS). The decision tree for content that needs SEO + AI citation traffic: SSG for evergreen, ISR for periodic updates, SSR for personalised content, CSR only for surfaces that don’t need indexing. Modern frameworks (Next.js, Nuxt, Remix, Astro, SvelteKit) make all three SEO-friendly patterns easy. The only pattern to actively avoid: pure CSR on content that needs to rank.
Common questions
Does Googlebot render JavaScript?
Yes, Googlebot renders JS using a recent Chromium engine. Practical caveats: rendering happens in a second pass after the initial crawl, so JS-rendered content takes longer to index than server-rendered HTML. The renderer has time and resource limits — pages that take too long, fail mid-render, or require user interaction to load content can have content silently missed. The honest read: Google can render JS, but server-rendered HTML still indexes faster, more reliably, and ranks better in competitive niches.