Core Web Vitals in 2026: What Actually Moves the Needle
The metrics have shifted, the thresholds have tightened, and most teams are still optimising for yesterday's numbers. Here is what actually matters now, and where the biggest wins usually hide.
Core Web Vitals were supposed to simplify web performance. "Here are three numbers, hit these thresholds, done." In practice, teams chase the numbers without understanding them, spend weeks shaving milliseconds off the wrong metric, and end up with a site that is faster on paper and no better for users.
The metrics themselves have also evolved. Interaction to Next Paint replaced First Input Delay in 2024. The thresholds have been quietly tightened. The tooling has improved, and with it our picture of what "good" actually means.
Here is what we think is worth focusing on in 2026.
The Three Metrics, In Practice
Google's current definition gives you three metrics:
Largest Contentful Paint (LCP) — how long it takes the largest meaningful thing above the fold to render. Target: under 2.5 seconds for 75% of visits.
Interaction to Next Paint (INP) — how responsive the page feels when the user actually interacts with it. Target: under 200ms for 75% of visits.
Cumulative Layout Shift (CLS) — how much things jump around after the page loads. Target: under 0.1 for 75% of visits.
What is less well understood is that these are all 75th-percentile metrics. Your average user's experience does not matter. The experience of the slower quarter of your users does. This changes where you should be looking for improvements.
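To make the percentile point concrete, here is a minimal sketch (the function and sample values are illustrative, not from any library) of how a p75 can fail while the average looks healthy:

```typescript
// Illustrative helper: 75th percentile of LCP samples (milliseconds),
// using the nearest-rank method: the smallest value with at least
// p% of samples at or below it.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Hypothetical LCP samples: most visits are fast, the slow quarter is not.
const lcpSamples = [800, 900, 1000, 1100, 1200, 2800, 3200, 3800];

const mean = lcpSamples.reduce((a, b) => a + b, 0) / lcpSamples.length;
const p75 = percentile(lcpSamples, 75);

console.log(mean); // 1850 — the average comfortably "passes" 2.5s
console.log(p75);  // 2800 — the p75, which is what counts, fails
```

The average hides exactly the users the metric is designed to surface, which is why improvements aimed at the already-fast majority often do not move the reported number at all.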
LCP Is Usually About the Image
On most sites, the largest contentful element is a hero image, a banner, or the first big block of text in an article. If LCP is slow, it is usually because that element takes too long to arrive.
The typical causes, roughly in order of how often we see them:
The image is not preloaded. The browser does not know to fetch the hero image until it has parsed enough HTML and CSS to realise it needs it. A <link rel="preload"> or, better, the Next.js <Image priority> prop fixes this.
The image is too large. Shipping a 2MB JPEG that will be rendered at 1200px wide is still common. Modern formats (AVIF, WebP) and proper responsive sizes make this much easier than it used to be.
A font is blocking text rendering. If the LCP element is a text block, a slow font load will block it. font-display: swap or font-display: optional lets the text render with a fallback first.
A third-party script is blocking the render. The obvious ones are analytics, tag managers, consent banners. If you are still loading these synchronously in the <head>, fixing that is a bigger LCP win than most code optimisations.
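The first two fixes can be sketched together. This is an illustrative helper (not a real library API, and the `?w=…&format=avif` resizing convention is hypothetical — substitute whatever your image CDN actually uses) that builds the responsive srcset for a hero image along with a matching preload tag:

```typescript
// Illustrative sketch: build a responsive srcset for the hero image
// plus a <link rel="preload"> that lets the browser start fetching it
// before layout. imagesrcset/imagesizes make the preload pick the same
// candidate the <img> itself will choose.
function heroImageTags(
  src: string,
  widths: number[]
): { preload: string; srcset: string } {
  const toUrl = (w: number) => `${src}?w=${w}&format=avif`; // hypothetical CDN syntax
  const srcset = widths.map((w) => `${toUrl(w)} ${w}w`).join(", ");
  const preload =
    `<link rel="preload" as="image" imagesrcset="${srcset}" ` +
    `imagesizes="100vw">`;
  return { preload, srcset };
}

const { preload, srcset } = heroImageTags("/hero.jpg", [640, 1280, 1920]);
console.log(preload);
```

On Next.js the <Image priority> prop generates an equivalent preload for you; the sketch is for sites assembling their own markup.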
The web.dev LCP guide has the canonical walkthrough, and it is genuinely worth reading end to end if you have not.
INP Is Where the Interesting Work Is Now
INP replaced First Input Delay as a Core Web Vital in March 2024. The shift matters: FID measured only the delay before the first interaction started being processed, which almost every site passed. INP measures the delay of every interaction, across the whole session, which is much harder to pass.
What we see when we profile INP problems:
Long tasks on the main thread. A click handler that does synchronous work for 300ms blocks the entire UI. React's rendering model makes this easy to do accidentally, because a state update that triggers a large re-render will block just as surely as a synchronous loop.
Too much JavaScript to parse and execute on load. A bundle that ships a large vendor chunk will keep the main thread busy for long enough after navigation that early interactions miss the 200ms target.
Event handlers that trigger network requests synchronously from the UI's point of view. "User clicked, we showed a loading state 400ms later" is an INP hit, even if the network request itself was fast.
The fix is usually not "rewrite the whole app". It is to find the specific slow interactions — the Chrome DevTools performance panel will point them out — and make those specific interactions faster. Break up long tasks with setTimeout or the Scheduler API. Use useTransition or startTransition in React to defer non-urgent updates. Avoid doing work in mousedown that could be done in mouseup.
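The long-task fix in particular is mechanical enough to sketch. This processes a large list in chunks, yielding back to the event loop between chunks so queued clicks can be handled and painted in between. The names are illustrative; scheduler.yield() is the newer API where the browser supports it, with setTimeout(0) as the fallback:

```typescript
// Illustrative sketch: turn one long task into many short ones.
function yieldToMain(): Promise<void> {
  const s = (globalThis as any).scheduler;
  // Prefer scheduler.yield() where available; setTimeout(0) also
  // yields, letting pending input run before the next chunk.
  if (s && typeof s.yield === "function") return s.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

async function processInChunks<T>(
  items: T[],
  work: (item: T) => void,
  chunkSize = 50
): Promise<void> {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) work(item);
    // Give the browser a chance to paint and handle queued input.
    if (i + chunkSize < items.length) await yieldToMain();
  }
}
```

Calling this from a click handler instead of running a synchronous loop keeps each main-thread task short, which is exactly what the 200ms INP budget rewards.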
CLS Is Usually About Reserving Space
CLS is the metric that teams most often pass without thinking about, until they ship a change that regresses it.
The common causes are small and mechanical:
Images without width and height attributes. The browser does not know how much space to reserve, so content below the image jumps when it loads. Adding the attributes (or using <Image> with explicit dimensions) prevents this entirely.
Web fonts that change metrics on swap. When the fallback font is a different size than the web font, text reflows when the web font arrives. The size-adjust CSS font descriptor and careful fallback selection mitigate this.
Ads and embedded content with unknown dimensions. Same root cause as images, and the same fix: reserve the space.
Cookie banners and other late-injected UI that pushes the content down. This is the one users notice most, because it usually happens exactly when they try to click something.
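The image case is the easiest one to make mechanical. Here is an illustrative helper (names made up for this sketch) that refuses to emit an <img> without the intrinsic dimensions the browser needs in order to reserve space:

```typescript
// Illustrative sketch: always emit width/height so the browser can
// reserve the right box before the image bytes arrive. The inline CSS
// keeps the image fluid while preserving its aspect ratio.
interface ImgProps {
  src: string;
  alt: string;
  width: number;  // intrinsic pixel width
  height: number; // intrinsic pixel height
}

function imgTag({ src, alt, width, height }: ImgProps): string {
  if (!width || !height) {
    throw new Error(`Missing dimensions for ${src}: this will cause layout shift`);
  }
  return (
    `<img src="${src}" alt="${alt}" width="${width}" height="${height}" ` +
    `style="max-width: 100%; height: auto;">`
  );
}
```

Making the missing-dimensions case a hard error, rather than a silent default, is the kind of small discipline that keeps CLS from regressing one template at a time.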
A good CLS score is mostly a matter of discipline: reserve space for everything that will appear on the page, and avoid injecting content above existing content after render.
What Most Teams Get Wrong
The most common mistake is optimising the lab score rather than the field score. Lighthouse tells you what happens on one run, on a specific device, on a specific network. Real Core Web Vitals are the aggregate of actual users.
A site can have a Lighthouse score of 98 and real-world Core Web Vitals that are failing, because Lighthouse was run from an office with fast internet on a recent MacBook, and the actual users are on mid-range Android phones on patchy mobile data.
The fix is to look at real-user data. The Chrome User Experience Report (CrUX) is the public dataset Google uses for ranking. Your own analytics, if you are collecting Web Vitals, will tell you the same thing but with more detail.
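If you collect your own field data (the web-vitals npm package reports these metrics from real sessions), rating each sample against the published thresholds is straightforward. A sketch of the classification, using the documented good/poor boundaries for each metric:

```typescript
// The good/poor boundaries Google documents for each Core Web Vital.
// LCP and INP are in milliseconds; CLS is unitless.
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  INP: { good: 200, poor: 500 },
  CLS: { good: 0.1, poor: 0.25 },
} as const;

type Vital = keyof typeof THRESHOLDS;
type Rating = "good" | "needs-improvement" | "poor";

function rateVital(name: Vital, value: number): Rating {
  const { good, poor } = THRESHOLDS[name];
  if (value <= good) return "good";
  if (value <= poor) return "needs-improvement";
  return "poor";
}

console.log(rateVital("LCP", 2400)); // "good"
console.log(rateVital("INP", 350));  // "needs-improvement"
```

Applied at the 75th percentile of your own samples, this gives you the same pass/fail verdict CrUX reports, but sliced by page, device, or release — which is the detail CrUX cannot give you.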
The other common mistake is treating the 75th percentile as "pretty good". The 75th percentile is the bar you need to clear to pass, not to excel. A site with a p75 LCP of 2.4 seconds passes, but it is not fast. A site with a p75 LCP of 1.2 seconds feels genuinely quick to everyone.
Where the Biggest Wins Usually Hide
After reviewing a lot of sites, the single highest-leverage thing is almost always "remove or defer the third-party scripts that are not pulling their weight". Analytics, tag managers, session replay, A/B testing, chat widgets — each one is usually adding more load than it is worth, and most teams have more of them than they realise.
The second is images. Specifically, using a modern format, sizing them correctly, and preloading the hero element. This is where we find the biggest LCP wins on most sites we review.
The third is reducing JavaScript. This is more work and harder to measure, but on any site where the whole experience is rendered client-side, there is almost always a way to push more to the server or cut more of what is being shipped.
None of this is glamorous. All of it adds up.
If you are on Next.js specifically, our earlier post on why your Next.js app is slower than it should be covers the framework-specific mistakes that compound on top of these fundamentals.
Site failing its Core Web Vitals and not sure where to start? Get in touch — performance audits are something we do regularly, and the wins are usually more boring and more effective than people expect.