Technical SEO
Chapter 01 / 09
Core Web Vitals
LCP, INP, and CLS — the three metrics Google actually uses to judge whether your page is fast, responsive, and stable. What each one measures, what fails them, and how to fix them.

Core Web Vitals is what Google calls the trio of metrics it uses to grade real-world page experience. They’re a real ranking factor, a stronger conversion lever, and the most actionable diagnostic surface for technical SEO — which is why they’re the first article in the Technical cluster.
This article walks through what each metric measures, what the targets are, what typically causes a page to fail, and what fixes carry the most leverage.
“CWV is a tiebreaker for ranking and a multiplier for everything else. The pages that ship a good experience ship faster, convert better, and earn the engagement that compounds into rankings.”
The three metrics — at a glance
| Metric | What it measures | Good | Poor |
|---|---|---|---|
| LCP — Largest Contentful Paint | Time until the largest visible element finishes rendering | ≤ 2.5 s | > 4.0 s |
| INP — Interaction to Next Paint | Worst-case latency between user input and visible response | ≤ 200 ms | > 500 ms |
| CLS — Cumulative Layout Shift | Total amount of unexpected layout movement | ≤ 0.1 | > 0.25 |
Targets are measured at the 75th percentile of real Chrome users — meaning the metric needs to be “good” for at least 75% of page loads, on both mobile and desktop, for the URL to pass the assessment.
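The 75th-percentile cut can be illustrated with a tiny sketch. The nearest-rank method here is a simplification (CrUX's real aggregation is more involved), and the sample values are made up:

```javascript
// Illustrative only: a nearest-rank 75th percentile over real-user
// LCP samples in seconds. A URL passes when this value is <= 2.5 s.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil(0.75 * sorted.length) - 1; // nearest-rank index
  return sorted[idx];
}

p75([1.8, 2.1, 2.3, 2.4, 2.6, 2.9, 3.4, 5.0]); // → 2.9 (fails: > 2.5 s)
```

Note that a quarter of users can have a slow experience and the URL still passes; conversely, one bad outlier doesn't fail you.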
LCP — Largest Contentful Paint
LCP measures the moment the page’s largest above-the-fold element finishes rendering. On most pages that’s the hero image; on text-heavy pages it can be the H1 or a large paragraph block. The user’s perception is “the page has loaded” when LCP fires.
What typically fails LCP
- Hero image too large or wrong format. A 2 MB JPEG that should be a 200 KB WebP. The single most common LCP culprit.
- Hero image lazy-loaded. Lazy loading is great for below-the-fold content; on the LCP element it actively prevents a good score. The fix: `fetchpriority="high"` and `loading="eager"` on the hero.
- Render-blocking JavaScript. Synchronous third-party scripts, large client-side frameworks, fonts loaded inline — anything that holds up the main thread before the LCP element can paint.
- Slow origin server / no CDN. The HTML itself takes 1+ seconds to arrive, so LCP can't start the clock until then.
- Web fonts swapping. If the LCP element is a text block, the visible paint waits for the font.
What fixes LCP fastest
- Identify the LCP element with the Web Vitals extension (it highlights the element on the page).
- Convert hero images to WebP or AVIF, serve responsive sizes via `srcset`, and set explicit width and height.
- Add `<link rel="preload" as="image">` for the hero image, plus `fetchpriority="high"`.
- Defer or async non-critical JavaScript; eliminate render-blocking third-party tags.
- Use `font-display: swap` with a system fallback to avoid blocking text rendering.
- Preconnect to any required external origin (CDN, font host, analytics) so DNS + TLS doesn't happen on the critical path.
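The image-related fixes above combine into markup like this. A minimal sketch — the file paths, sizes, and CDN host are placeholders:

```html
<!-- LCP-friendly hero: preload + fetchpriority mark the image as
     critical; explicit width/height reserve its space (helps CLS too). -->
<head>
  <link rel="preconnect" href="https://cdn.example.com">
  <link rel="preload" as="image" href="/hero-1600.webp"
        imagesrcset="/hero-800.webp 800w, /hero-1600.webp 1600w"
        fetchpriority="high">
</head>

<img src="/hero-1600.webp"
     srcset="/hero-800.webp 800w, /hero-1600.webp 1600w"
     sizes="100vw"
     width="1600" height="900"
     fetchpriority="high" loading="eager"
     alt="Hero">
```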
INP — Interaction to Next Paint
INP replaced FID (First Input Delay) in March 2024. It measures the longest delay between a user interaction (click, tap, keypress) and the next visible paint — across the entire session, not just the first interaction. It’s a much stricter metric and many sites that passed FID fail INP.
What typically fails INP
- Heavy JavaScript handlers on click / submit / scroll. The main thread is busy when the user interacts, so the response paints late.
- Long tasks (over 50 ms) blocking the main thread — typically third-party scripts, large hydration runs, or unbatched state updates.
- Synchronous network calls on interaction (clicks that wait for a fetch before painting feedback).
- Large React/Vue/Angular apps with no code splitting — the framework is still hydrating when the user clicks.
- Animation jank — animating layout properties (width, height, top) instead of compositor-only properties (transform, opacity).
What fixes INP fastest
- Audit long tasks in Chrome DevTools → Performance tab. Anything over 50 ms is a candidate.
- Break up long-running JavaScript with `requestIdleCallback`, `setTimeout(0)`, or React's `startTransition`.
- Code-split routes and lazy-load components that aren't needed on first paint.
- Use `requestAnimationFrame` for visual updates; avoid layout-triggering animations.
- Show optimistic UI on click (skeleton, loading state, button-pressed style) so the user gets visible feedback before the network round-trip.
- Audit third-party scripts ruthlessly — analytics, chat widgets, A/B testing tools are common INP killers.
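The "break up long tasks" advice above can be sketched as a chunked loop that yields back to the event loop between batches, so pending interactions get a chance to paint. The function names and the 50 ms budget are illustrative:

```javascript
// Yield to the event loop: scheduler.yield() where the browser supports
// it, setTimeout(0) as the fallback everywhere else.
function yieldToMain() {
  if (globalThis.scheduler?.yield) return scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list in ~50 ms slices instead of one long task,
// keeping the main thread responsive to input between slices.
async function processInChunks(items, processItem, chunkMs = 50) {
  let deadline = Date.now() + chunkMs;
  const results = [];
  for (const item of items) {
    results.push(processItem(item));
    if (Date.now() > deadline) {
      await yieldToMain();            // input handlers can run here
      deadline = Date.now() + chunkMs;
    }
  }
  return results;
}
```

The same idea underlies React's `startTransition`: the expensive work still happens, but it no longer blocks the paint that acknowledges the user's input.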
CLS — Cumulative Layout Shift
CLS measures how much the visible layout jumps around as the page loads or as the user interacts. It’s the metric users hate most — clicking a button, having an ad load above it, and accidentally clicking the ad — and historically the easiest of the three to fix.
What typically fails CLS
- Images without width/height attributes. The browser doesn’t know how much space to reserve, so when the image loads everything below it shifts down.
- Ads / embeds without reserved containers. A pre-roll ad slot that grows from 0 to 250px after the page renders.
- Web fonts with wide fallback metrics. Text reflows when the custom font swaps in.
- Dynamic content injected above existing content — a banner that appears, an A/B test that swaps a hero, a cookie banner that pushes everything down.
- Animations on layout-triggering properties (top/left/width) instead of transforms.
What fixes CLS fastest
- Set `width` and `height` on every image and `iframe`. The browser reserves the right space before the asset loads.
- Reserve fixed-height containers for ads, embeds, and any dynamically sized component.
- Use `font-display: optional` or carefully sized fallback fonts (`size-adjust`) to prevent reflow on font swap.
- Inject dynamic banners / cookie consent below the existing layout, not above.
- Use `transform` for animations, not `top`/`left`/`width`/`height`.
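The space-reservation fixes look like this in practice. A sketch only — the 250px slot, font names, and `size-adjust` value are placeholders you'd tune per site:

```html
<!-- Reserve space up front so nothing shifts when assets arrive. -->
<style>
  .ad-slot { min-height: 250px; }   /* slot exists before the ad loads */

  @font-face {
    font-family: "BrandFont";
    src: url("/brand.woff2") format("woff2");
    font-display: optional;         /* skip the swap if the font is late */
  }
  @font-face {
    font-family: "BrandFont Fallback";
    src: local("Arial");
    size-adjust: 105%;              /* match fallback metrics to BrandFont */
  }
</style>

<img src="/product.jpg" width="800" height="600" alt="Product">
<div class="ad-slot"><!-- ad injects here without pushing content --></div>
```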
How to actually measure CWV
Three measurement sources, in order of trust:
| Source | What it gives you | When to use it |
|---|---|---|
| Search Console > Core Web Vitals report | Real-user data, segmented by mobile/desktop, grouped by URL pattern | The authoritative source — this is what Google uses for ranking |
| Chrome User Experience Report (CrUX) via PageSpeed Insights | Real-user data + synthetic Lighthouse score for a single URL | Spot-check individual URLs; cross-validate Search Console patterns |
| Chrome DevTools Performance + Web Vitals extension | Live measurement on your own browser, with full traces | Diagnosing why a specific page is failing — used for fix work |
Lighthouse (synthetic) and real-user data don’t always agree. Lighthouse runs in a controlled environment with a fast machine on a throttled connection — it’s reproducible but not how your users actually experience the page. CrUX/Search Console data is the truth, lagged by 28 days.
Priority order when CWV is failing across the site
When you inherit a site with all three metrics in “poor”, work in this order:
- 1. Fix the hero image / LCP element on the homepage and top 5 traffic pages. Highest user-perceived impact, easiest fixes, ripples through Search Console aggregates fastest.
- 2. Audit and remove blocking third-party scripts. Marketing tags accumulate over years; deleting half of them often improves LCP and INP simultaneously.
- 3. Set width/height on every image template-wide. Single highest-leverage CLS fix; usually a one-line template change.
- 4. Add explicit reserved space for ads, embeds, and dynamic UI. Eliminates the second-biggest CLS class.
- 5. Audit long tasks and JavaScript bundle size. The INP work — usually a 2–4 week engineering project, not a content fix.
- 6. Set up Real User Monitoring (web-vitals npm package) and watch the regression closely. Without RUM, you can ship a regression that doesn’t appear in Search Console for 28 days.
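Step 6 can be sketched with the `web-vitals` package, whose callbacks receive metric objects with `name`, `value`, and `rating` fields. The `/vitals` endpoint and the `toPayload` helper are illustrative, not from the article:

```javascript
// Illustrative serializer for a web-vitals metric object.
function toPayload(metric, path) {
  return JSON.stringify({
    name: metric.name,      // "LCP" | "INP" | "CLS"
    value: metric.value,
    rating: metric.rating,  // "good" | "needs-improvement" | "poor"
    page: path,
  });
}

// Browser wiring (commented out so the sketch stays self-contained):
// import { onCLS, onINP, onLCP } from 'web-vitals';
// const send = (m) =>
//   navigator.sendBeacon('/vitals', toPayload(m, location.pathname));
// onCLS(send); onINP(send); onLCP(send);
```

`sendBeacon` is the usual transport here because it survives page unload, which is exactly when CLS and INP values are finalized.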
What CWV doesn’t do
CWV measures page experience, not page quality. A site can pass all three thresholds and still rank poorly because the content is thin, the topical authority is weak, or the search intent isn’t satisfied. The reverse is also true: a site can fail CWV and still rank for queries it’s the best answer to.
Treat CWV as the floor — the table-stakes performance baseline. Above the floor, ranking is determined by content quality, authority signals, and intent match. See how the algorithm works for how the layers stack.
The bottom line
Three metrics, three thresholds, three fix patterns. LCP is mostly about the hero image and render-blocking scripts. CLS is about reserving space for everything that loads asynchronously. INP is about keeping the main thread free when the user interacts. Hit “good” on all three at the 75th percentile and CWV stops being a ranking concern; below that, every other technical improvement is built on a soft foundation.
Common questions
Core Web Vitals are the three Google performance metrics that measure real user experience: Largest Contentful Paint (LCP) — how fast the main content loads; Interaction to Next Paint (INP) — how responsive the page feels when the user clicks, taps, or types; and Cumulative Layout Shift (CLS) — how much the layout jumps around as it loads. Each has a 'good' / 'needs improvement' / 'poor' threshold. To pass the assessment a URL needs to be 'good' on all three at the 75th percentile of real-user data.