
Technical SEO

Chapter 01 / 09

Core Web Vitals

LCP, INP, and CLS — the three metrics Google actually uses to judge whether your page is fast, responsive, and stable. What each one measures, what fails them, and how to fix them.

11 min read · Published May 4, 2026

Core Web Vitals is what Google calls the trio of metrics it uses to grade real-world page experience. They’re a real ranking factor, a stronger conversion lever, and the most actionable diagnostic surface for technical SEO — which is why they’re the first article in the Technical cluster.

This article walks through what each metric measures, what the targets are, what typically causes a page to fail, and what fixes carry the most leverage.

CWV is a tiebreaker for ranking and a multiplier for everything else. The pages that ship a good experience ship faster, convert better, and earn the engagement that compounds into rankings.

The three metrics — at a glance

| Metric | What it measures | Good | Poor |
| --- | --- | --- | --- |
| LCP — Largest Contentful Paint | Time until the largest visible element finishes rendering | ≤ 2.5 s | > 4.0 s |
| INP — Interaction to Next Paint | Worst-case latency between user input and visible response | ≤ 200 ms | > 500 ms |
| CLS — Cumulative Layout Shift | Total amount of unexpected layout movement | ≤ 0.1 | > 0.25 |

Targets are measured at the 75th percentile of real Chrome users — meaning the metric needs to be “good” for at least 75% of page loads for the URL to pass the assessment, with mobile and desktop assessed separately.
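The thresholds translate mechanically into code. A minimal sketch for bucketing a single measured value (the `THRESHOLDS` map and `rate` helper are illustrative names, not any official API):

```javascript
// "Good" / "poor" boundaries from the table above.
// LCP and INP are in milliseconds; CLS is a unitless score.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200, poor: 500 },
  CLS: { good: 0.1, poor: 0.25 },
};

// Bucket one observation the way PageSpeed Insights labels it.
function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}

rate("LCP", 2100); // "good"
rate("INP", 350);  // "needs improvement"
rate("CLS", 0.3);  // "poor"
```

Keep in mind this buckets a single observation; the actual assessment applies the same boundaries to the 75th-percentile value across real page loads.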

LCP — Largest Contentful Paint

LCP measures the moment the page’s largest above-the-fold element finishes rendering. On most pages that’s the hero image; on text-heavy pages it can be the H1 or a large paragraph block. The user’s perception is “the page has loaded” when LCP fires.
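To see which element the browser actually picked, the standard PerformanceObserver API will log every LCP candidate. A browser-console sketch (this runs in DevTools, not outside a browser):

```javascript
// Paste into the DevTools console on a fresh page load.
// Each entry is an LCP candidate; the final one reported is the LCP element.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log("LCP candidate at", entry.startTime, "ms:", entry.element);
  }
}).observe({ type: "largest-contentful-paint", buffered: true });
```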

What typically fails LCP

  • Hero image too large or wrong format. A 2 MB JPEG that should be a 200 KB WebP. The single most common LCP culprit.
  • Hero image lazy-loaded. Lazy loading is great for below-the-fold content; on the LCP element it actively prevents a good score. The fix: fetchpriority="high" and loading="eager" on the hero.
  • Render-blocking JavaScript. Synchronous third-party scripts, large client-side frameworks, fonts loaded inline — anything that holds up the main thread before the LCP element can paint.
  • Slow origin server / no CDN. The HTML itself takes 1+ seconds to arrive, and nothing (the LCP element included) can render until it does.
  • Web fonts, when the LCP element is a text block: the visible paint waits for the font file to load and swap in.
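The lazy-loaded hero in particular is a two-attribute change. A sketch with an illustrative path:

```html
<!-- Anti-pattern: the LCP image sits behind lazy-load logic -->
<img src="/hero.webp" alt="Hero" loading="lazy">

<!-- Fix: fetch it eagerly, at high priority -->
<img src="/hero.webp" alt="Hero" loading="eager" fetchpriority="high">
```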

What fixes LCP fastest

  • Identify the LCP element with the Web Vitals extension (it highlights the element on the page).
  • Convert hero images to WebP or AVIF, serve responsive sizes via srcset, set explicit width and height.
  • Add <link rel="preload" as="image"> for the hero image, plus fetchpriority="high".
  • Defer or async non-critical JavaScript; eliminate render-blocking third-party tags.
  • Use font-display: swap with a system fallback to avoid blocking text rendering.
  • Preconnect to any required external origin (CDN, font host, analytics) so DNS + TLS doesn’t happen on the critical path.
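Put together, a hero tuned for LCP might look like the following sketch (the CDN origin, file names, and sizes are all illustrative):

```html
<head>
  <!-- Open the connection early so DNS + TLS is off the critical path -->
  <link rel="preconnect" href="https://cdn.example.com" crossorigin>
  <!-- Start fetching the hero before the parser discovers the <img> -->
  <link rel="preload" as="image"
        href="https://cdn.example.com/hero-800.webp"
        imagesrcset="https://cdn.example.com/hero-800.webp 800w,
                     https://cdn.example.com/hero-1600.webp 1600w"
        imagesizes="100vw"
        fetchpriority="high">
</head>
<body>
  <!-- Explicit dimensions reserve layout space (helps CLS too);
       eager + high priority because this is the LCP element -->
  <img src="https://cdn.example.com/hero-800.webp"
       srcset="https://cdn.example.com/hero-800.webp 800w,
               https://cdn.example.com/hero-1600.webp 1600w"
       sizes="100vw"
       width="1600" height="900"
       alt="Product hero"
       loading="eager" fetchpriority="high">
</body>
```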

INP — Interaction to Next Paint

INP replaced FID (First Input Delay) in March 2024. It measures the longest delay between a user interaction (click, tap, keypress) and the next visible paint — across the entire session, not just the first interaction. It’s a much stricter metric and many sites that passed FID fail INP.

What typically fails INP

  • Heavy JavaScript handlers on click / submit / scroll. The main thread is busy when the user interacts, so the response paints late.
  • Long tasks (over 50 ms) blocking the main thread — typically third-party scripts, large hydration runs, or unbatched state updates.
  • Synchronous network calls on interaction (clicks that wait for a fetch before painting feedback).
  • Large React/Vue/Angular apps with no code splitting — the framework is still hydrating when the user clicks.
  • Animation jank — animating layout properties (width, height, top) instead of compositor-only properties (transform, opacity).

What fixes INP fastest

  • Audit long tasks in Chrome DevTools → Performance tab. Anything over 50 ms is a candidate.
  • Break up long-running JavaScript with requestIdleCallback, setTimeout(0), or React’s startTransition.
  • Code-split routes and lazy-load components that aren’t needed on first paint.
  • Use requestAnimationFrame for visual updates; avoid layout-triggering animations.
  • Show optimistic UI on click (skeleton, loading state, button-pressed style) so the user gets visible feedback before the network round-trip.
  • Audit third-party scripts ruthlessly — analytics, chat widgets, A/B testing tools are common INP killers.
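"Break up long-running JavaScript" in practice means yielding to the event loop between chunks of work so input handlers can run in the gaps. A minimal sketch (`yieldToMain` and `processInChunks` are illustrative helpers; where `scheduler.yield()` exists it is the purpose-built replacement for the setTimeout trick):

```javascript
// Give the browser a chance to handle pending input between chunks.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Run `worker` over `items` in small batches instead of one long task.
async function processInChunks(items, worker, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(worker(item));
    }
    if (i + chunkSize < items.length) await yieldToMain(); // yield between batches
  }
  return results;
}
```

Each batch should stay comfortably under the 50 ms long-task threshold; tune `chunkSize` to the cost of `worker`.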

CLS — Cumulative Layout Shift

CLS measures how much the visible layout jumps around as the page loads or as the user interacts. It’s the metric users hate most — clicking a button, having an ad load above it, and accidentally clicking the ad — and historically the easiest of the three to fix.
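For intuition on the number: per the Layout Instability spec, each individual shift is scored as the impact fraction (share of the viewport touched by unstable elements) multiplied by the distance fraction (the greatest move distance relative to the viewport's larger dimension), and CLS reports the worst burst of such shifts. A sketch of the per-shift arithmetic (the function name is illustrative):

```javascript
// shift score = impact fraction x distance fraction, both in 0..1
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// A banner that touches half the viewport and pushes content down by
// 10% of the viewport's height: 0.5 * 0.1 = 0.05, half the 0.1 "good" budget.
layoutShiftScore(0.5, 0.1); // 0.05
```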

What typically fails CLS

  • Images without width/height attributes. The browser doesn’t know how much space to reserve, so when the image loads everything below it shifts down.
  • Ads / embeds without reserved containers. A pre-roll ad slot that grows from 0 to 250px after the page renders.
  • Web fonts with wide fallback metrics. Text reflows when the custom font swaps in.
  • Dynamic content injected above existing content — a banner that appears, an A/B test that swaps a hero, a cookie banner that pushes everything down.
  • Animations on layout-triggering properties (top/left/width) instead of transforms.

What fixes CLS fastest

  • Set width and height on every image and iframe. The browser reserves the right space before the asset loads.
  • Reserve fixed-height containers for ads, embeds, and any dynamically-sized component.
  • Use font-display: optional or carefully sized fallback fonts (size-adjust) to prevent reflow on font swap.
  • Inject dynamic banners / cookie consent below the existing layout, not above.
  • Use transform for animations, not top/left/width/height.
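Three of those fixes in CSS form, as a sketch (class names and the 105% size-adjust figure are illustrative; size-adjust needs tuning per font pair):

```css
/* Reserve the ad slot's final size so nothing shifts when it fills */
.ad-slot { min-height: 250px; }

/* Metric-match the fallback to the web font so the swap doesn't reflow text */
@font-face {
  font-family: "BodyFallback";
  src: local("Arial");
  size-adjust: 105%;
}
body { font-family: "BodyFont", "BodyFallback", sans-serif; }

/* Animate transform, which stays on the compositor, not top/left/width */
.panel { transition: transform 300ms ease; }
.panel.open { transform: translateX(240px); }
```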

How to actually measure CWV

Three measurement sources, in order of trust:

| Source | What it gives you | When to use it |
| --- | --- | --- |
| Search Console → Core Web Vitals report | Real-user data, segmented by mobile/desktop, grouped by URL pattern | The authoritative source — this is what Google uses for ranking |
| Chrome User Experience Report (CrUX) via PageSpeed Insights | Real-user data + synthetic Lighthouse score for a single URL | Spot-check individual URLs; cross-validate Search Console patterns |
| Chrome DevTools Performance + Web Vitals extension | Live measurement in your own browser, with full traces | Diagnosing why a specific page is failing — used for fix work |

Lighthouse (synthetic) and real-user data don’t always agree. Lighthouse runs in a controlled environment with a fast machine on a throttled connection — it’s reproducible but not how your users actually experience the page. CrUX/Search Console data is the truth, aggregated over a trailing 28-day window, so fixes (and regressions) take weeks to show up there.

Priority order when CWV is failing across the site

When you inherit a site with all three metrics in “poor”, work in this order:

  1. Fix the hero image / LCP element on the homepage and top 5 traffic pages. Highest user-perceived impact, easiest fixes, ripples through Search Console aggregates fastest.
  2. Audit and remove blocking third-party scripts. Marketing tags accumulate over years; deleting half of them often improves LCP and INP simultaneously.
  3. Set width/height on every image template-wide. Single highest-leverage CLS fix; usually a one-line template change.
  4. Add explicit reserved space for ads, embeds, and dynamic UI. Eliminates the second-biggest CLS class.
  5. Audit long tasks and JavaScript bundle size. The INP work — usually a 2–4 week engineering project, not a content fix.
  6. Set up Real User Monitoring (the web-vitals npm package) and watch for regressions. Without RUM, a regression you ship may not surface in Search Console for weeks.
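Minimal RUM wiring with the web-vitals package might look like this sketch; `sendToAnalytics` and the `/vitals` endpoint are placeholders for your own reporting pipeline:

```javascript
import { onLCP, onINP, onCLS } from "web-vitals";

function sendToAnalytics(metric) {
  // sendBeacon survives page unload, so late-firing metrics still report
  navigator.sendBeacon("/vitals", JSON.stringify({
    name: metric.name,     // "LCP" | "INP" | "CLS"
    value: metric.value,   // ms for LCP/INP, unitless score for CLS
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    id: metric.id,         // unique per page load, for deduplication
  }));
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
```

Note that INP and CLS finalize late (often only when the page is hidden), so the reporting endpoint should not assume metrics arrive before load.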

What CWV doesn’t do

CWV measures page experience, not page quality. A site can pass all three thresholds and still rank poorly because the content is thin, the topical authority is weak, or the search intent isn’t satisfied. The reverse is also true: a site can fail CWV and still rank for queries it’s the best answer to.

Treat CWV as the floor — the table-stakes performance baseline. Above the floor, ranking is determined by content quality, authority signals, and intent match. See how the algorithm works for how the layers stack.

The bottom line

Three metrics, three thresholds, three fix patterns. LCP is mostly about the hero image and render-blocking scripts. CLS is about reserving space for everything that loads asynchronously. INP is about keeping the main thread free when the user interacts. Hit “good” on all three at the 75th percentile and CWV stops being a ranking concern; below that, every other technical improvement is built on a soft foundation.

Common questions

Core Web Vitals are the three Google performance metrics that measure real user experience: Largest Contentful Paint (LCP) — how fast the main content loads; Interaction to Next Paint (INP) — how responsive the page feels when the user clicks, taps, or types; and Cumulative Layout Shift (CLS) — how much the layout jumps around as it loads. Each has a “good” / “needs improvement” / “poor” threshold. To pass the assessment a URL needs to be “good” on all three at the 75th percentile of real-user data.