Largest Contentful Paint Optimization: A Deep Dive into Every LCP Sub-Part

Break LCP into its four sub-parts — TTFB, Resource Load Delay, Load Duration, and Render Delay — with targeted strategies, production-ready code examples, and a complete audit checklist for achieving sub-2.5s LCP scores.

Why LCP Is the Core Web Vital You Can't Afford to Ignore

Largest Contentful Paint (LCP) measures how long it takes for the biggest visible content element — usually a hero image, a prominent heading, or a large block of text — to render in the viewport. Google considers an LCP of 2.5 seconds or less as "good," between 2.5 and 4 seconds as "needs improvement," and anything above 4 seconds as "poor." These thresholds are measured at the 75th percentile across real user sessions, so three-quarters of your visitors need to experience fast loading for your site to pass.
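Those thresholds map to a simple classification. This tiny helper (the function name is ours, not part of any library) mirrors Google's buckets:

```javascript
// Classify an LCP value in milliseconds against Google's thresholds:
// <= 2500 ms is "good", <= 4000 ms "needs improvement", above that "poor".
function rateLCP(ms) {
  if (ms <= 2500) return 'good';
  if (ms <= 4000) return 'needs-improvement';
  return 'poor';
}

console.log(rateLCP(1800)); // "good"
```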

The business impact is stark.

According to Google's own research, when LCP improves from 2.7 seconds to 1.7 seconds, conversion rates can jump by up to 15%. Vodafone improved their LCP by 31% and saw an 8% increase in sales. The web performance team at Tokopedia reduced LCP by 55% and measured a 23% improvement in average session duration. These aren't small numbers — for any site that generates revenue, LCP is directly tied to the bottom line.

If you've already read our comprehensive guide on Interaction to Next Paint (INP), you know that INP governs the responsiveness of your page after load. LCP, by contrast, governs the perception of load speed — how quickly your users feel the page is ready. Together, they form two pillars of the Core Web Vitals trio (alongside Cumulative Layout Shift). In this guide, we're going to dissect LCP into its four sub-parts, walk through targeted optimizations for each one, explore modern image delivery strategies including AVIF, and provide production-ready code examples you can apply today.

The Four Sub-Parts of LCP

Understanding LCP as a single number is useful for benchmarking, but optimizing it requires breaking it down into the four sequential phases that make it up. Each phase has different root causes and different solutions. If you optimize the wrong phase, you'll waste time with no measurable improvement.

So, let's break them down one by one.

1. Time to First Byte (TTFB)

TTFB is the time from when the user kicks off navigation (clicking a link, typing a URL) to when the browser receives the first byte of HTML from the server. This phase covers DNS resolution, TCP connection, TLS handshake, server processing, and network transit time back to the client.

TTFB is the foundation. Nothing can happen on the frontend — no parsing, no rendering, no resource loading — until that first byte arrives. Google's recommended budget allocates roughly 40% of your total LCP time to TTFB. For a 2.5-second LCP target, that means TTFB should ideally be under 800ms (though lower is always better).
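You can see where TTFB time goes by slicing a PerformanceNavigationTiming entry into its network phases. A sketch (the function name is ours; in the browser, pass it performance.getEntriesByType('navigation')[0]):

```javascript
// Split a PerformanceNavigationTiming-like entry into TTFB's network phases.
// All timestamps are milliseconds relative to navigation start.
function ttfbPhases(nav) {
  return {
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    tcp: nav.connectEnd - nav.connectStart,
    // secureConnectionStart is 0 when no TLS handshake occurred
    tls: nav.secureConnectionStart > 0
      ? nav.connectEnd - nav.secureConnectionStart
      : 0,
    // Server processing plus network transit of the first byte
    serverAndTransit: nav.responseStart - nav.requestStart,
  };
}
```

Whichever phase dominates tells you whether to chase DNS/connection setup (CDN, connection reuse) or server processing (caching, edge rendering).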

2. Resource Load Delay

Resource Load Delay is the time between TTFB completing and the browser actually starting to download the LCP resource (typically an image). This delay happens for two reasons: either the browser discovers the LCP resource late — because it's hidden behind JavaScript rendering or CSS background-image declarations — or the resource gets low priority from the browser's loading heuristics.

This is honestly one of the most misunderstood phases. Many developers assume that once the HTML arrives, the browser immediately starts fetching images. That's only true if the image is in the raw HTML markup where the preload scanner can find it. If your hero image is injected by a JavaScript framework during client-side rendering, the browser can't discover it until after JavaScript has been downloaded, parsed, and executed — which can easily add hundreds of milliseconds.

3. Resource Load Duration

This one's more straightforward: it's the time it takes to actually download the LCP resource over the network. For images, this depends on file size, the user's connection speed, and whether the resource is being served from a nearby CDN edge node or a distant origin server. For text-based LCP elements (headings, paragraphs), this phase is effectively zero since the content is already in the HTML.

Google's budget allocates roughly 40% of total LCP time to this phase. For our 2.5-second target, that means the image download should complete in about one second. On a typical 4G connection (roughly 7 Mbps), that gives you a budget of approximately 875 KB for the LCP image — and that's before factoring in other concurrent downloads competing for bandwidth.
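The arithmetic behind that figure is worth making explicit: megabits per second divided by 8 gives megabytes per second. A rough sketch that ignores TCP slow start and competing requests:

```javascript
// Approximate download budget in kilobytes for a given link speed and time budget.
// 1 Mbps = 1000 kilobits/s; dividing by 8 converts kilobits to kilobytes.
function downloadBudgetKB(mbps, seconds) {
  return (mbps * 1000 * seconds) / 8;
}

console.log(downloadBudgetKB(7, 1)); // 875 — the ~875 KB figure above
```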

4. Element Render Delay

The final phase is the time between the LCP resource finishing its download and the browser actually painting it to the screen. Render delay occurs because the browser might still be processing render-blocking CSS, executing JavaScript on the main thread, or performing layout and compositing work that prevents it from painting the frame containing your LCP element.

Google recommends that Resource Load Delay and Element Render Delay combined should account for no more than 20% of total LCP — meaning each should be about 10%, or around 250ms at most. In practice, render delay is often the phase that catches teams off guard because the resource has technically loaded but the page still doesn't look ready.
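To check measured sub-parts against that rough 40/10/40/10 split, compute each phase's share of the total. A minimal sketch (the function name is ours):

```javascript
// Express each LCP sub-part as a percentage of the total LCP time, for
// comparison against the rough 40/10/40/10 budget described above.
function lcpPhaseShares({ ttfb, resourceLoadDelay, resourceLoadDuration, elementRenderDelay }) {
  const total = ttfb + resourceLoadDelay + resourceLoadDuration + elementRenderDelay;
  const pct = (v) => Math.round((v / total) * 100);
  return {
    ttfb: pct(ttfb),
    resourceLoadDelay: pct(resourceLoadDelay),
    resourceLoadDuration: pct(resourceLoadDuration),
    elementRenderDelay: pct(elementRenderDelay),
  };
}
```

A phase sitting far above its budgeted share is the one to attack first.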

Measuring LCP Sub-Parts in the Field

You can't optimize what you can't measure. While Chrome DevTools and Lighthouse give you lab data, understanding LCP sub-parts in the field across real users is what actually matters. The Performance API and the web-vitals library make this possible.

Using the PerformanceObserver API

The browser exposes LCP entries via the PerformanceObserver API. Each entry includes timestamps that let you calculate the sub-parts:

const observer = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lastEntry = entries[entries.length - 1];

  // Get navigation timing for TTFB
  const navEntry = performance.getEntriesByType('navigation')[0];
  const ttfb = navEntry.responseStart;

  // Get the LCP resource timing
  const lcpResource = performance.getEntriesByType('resource')
    .find(r => r.name === lastEntry.url);

  const subParts = {
    ttfb: ttfb,
    resourceLoadDelay: lcpResource
      ? lcpResource.requestStart - ttfb
      : 0,
    resourceLoadDuration: lcpResource
      ? lcpResource.responseEnd - lcpResource.requestStart
      : 0,
    elementRenderDelay: lcpResource
      ? lastEntry.startTime - lcpResource.responseEnd
      : lastEntry.startTime - ttfb,
    totalLCP: lastEntry.startTime,
  };

  console.log('LCP Sub-Parts:', subParts);
});

observer.observe({ type: 'largest-contentful-paint', buffered: true });

Using the web-vitals Library with Attribution

The web-vitals library (version 4+) provides LCP attribution out of the box, including sub-part breakdowns:

import { onLCP } from 'web-vitals/attribution';

onLCP((metric) => {
  const { attribution } = metric;

  console.log('LCP value:', metric.value);
  console.log('LCP element:', attribution.target); // CSS selector string
  console.log('LCP resource URL:', attribution.url);

  // Sub-part breakdown
  console.log('TTFB:', attribution.timeToFirstByte);
  console.log('Resource Load Delay:', attribution.resourceLoadDelay);
  console.log('Resource Load Duration:', attribution.resourceLoadDuration);
  console.log('Element Render Delay:', attribution.elementRenderDelay);

  // Beacon this data to your analytics endpoint
  navigator.sendBeacon('/analytics/lcp', JSON.stringify({
    value: metric.value,
    url: location.href,
    lcpElement: attribution.target, // selector string, safe to JSON-serialize
    lcpUrl: attribution.url,
    ttfb: attribution.timeToFirstByte,
    resourceLoadDelay: attribution.resourceLoadDelay,
    resourceLoadDuration: attribution.resourceLoadDuration,
    elementRenderDelay: attribution.elementRenderDelay,
  }));
});

With this data flowing into your analytics, you can pinpoint which sub-part is the bottleneck for most of your users and focus your optimization efforts where they'll actually matter.
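Since Core Web Vitals are assessed at the 75th percentile, the server-side aggregation should compute p75 per sub-part rather than an average. A minimal sketch of that calculation (function names are ours):

```javascript
// 75th percentile of a list of samples (nearest-rank method).
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil(sorted.length * 0.75) - 1];
}

// Given beaconed sub-part samples, return the phase whose p75 most exceeds
// its budget (budgets in ms, e.g. derived from the 40/10/40/10 split).
function worstPhase(samples, budgets) {
  let worst = null;
  for (const [phase, values] of Object.entries(samples)) {
    const ratio = p75(values) / budgets[phase];
    if (!worst || ratio > worst.ratio) worst = { phase, ratio };
  }
  return worst.phase;
}
```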

Optimizing TTFB: The Foundation of Fast LCP

If TTFB is slow, LCP can't be fast. Period. Every millisecond added to TTFB adds at least a millisecond to LCP. Here are the most effective strategies for reducing TTFB in 2026.

Deploy a CDN — and Actually Use It

A Content Delivery Network caches your content at edge locations around the world, serving responses from the nearest point of presence to each user. For static sites, this can reduce TTFB from hundreds of milliseconds to single digits. For dynamic sites, modern CDN platforms like Cloudflare, Fastly, and Vercel offer edge compute capabilities that let you run server-side logic right at the edge.

Cloudflare Workers, for instance, use V8 isolates with cold starts under 1ms and deploy to over 300 data centers worldwide. Vercel Edge Functions leverage bytecode caching and predictive instance warming to achieve near-zero cold starts. The result is that your server-side rendering or API calls execute within 10-50ms of your users instead of 100-300ms from a central origin.

// Example: Cloudflare Worker for edge-side rendering with caching
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    const cacheKey = new Request(url.toString(), request);
    const cache = caches.default;

    // Check edge cache first
    let response = await cache.match(cacheKey);
    if (response) {
      return response;
    }

    // Generate response at the edge (renderPage is your SSR function)
    const html = await renderPage(url.pathname, env);
    response = new Response(html, {
      headers: {
        'Content-Type': 'text/html; charset=utf-8',
        'Cache-Control': 'public, max-age=60, s-maxage=3600',
        'CDN-Cache-Control': 'max-age=3600',
      },
    });

    // Store in edge cache without delaying the response
    ctx.waitUntil(cache.put(cacheKey, response.clone()));
    return response;
  },
  },
};

Implement Efficient Server-Side Caching

Even with a CDN, dynamic pages that miss the edge cache still have to be generated at the origin. A multi-layer caching strategy on your origin server keeps that generation cost off most requests:

// Express.js example with multi-layer caching
const NodeCache = require('node-cache');
const Redis = require('ioredis'); // shared cache layer; assumes an ioredis client
const redis = new Redis();
const pageCache = new NodeCache({ stdTTL: 300 }); // 5-minute TTL

app.get('/products/:slug', async (req, res) => {
  const cacheKey = `page:${req.params.slug}`;

  // Layer 1: In-memory cache
  const cached = pageCache.get(cacheKey);
  if (cached) {
    res.set('X-Cache', 'HIT-MEMORY');
    return res.send(cached);
  }

  // Layer 2: Redis cache (shared across instances)
  const redisCached = await redis.get(cacheKey);
  if (redisCached) {
    pageCache.set(cacheKey, redisCached);
    res.set('X-Cache', 'HIT-REDIS');
    return res.send(redisCached);
  }

  // Layer 3: Generate fresh
  const html = await renderProductPage(req.params.slug);
  pageCache.set(cacheKey, html);
  await redis.set(cacheKey, html, 'EX', 300);
  res.set('X-Cache', 'MISS');
  res.send(html);
});

Use 103 Early Hints

HTTP 103 Early Hints is a feature that lets the server send preliminary headers (like preload directives) to the browser before the full response is ready. While the server is still generating the HTML, the browser can get a head start on preloading critical resources like fonts, CSS, and even the LCP image:

// Node.js example with 103 Early Hints
app.get('/', (req, res) => {
  // Send 103 Early Hints immediately (res.writeEarlyHints requires Node 18.11+)
  res.writeEarlyHints({
    link: [
      '</css/critical.css>; rel=preload; as=style',
      '</images/hero.avif>; rel=preload; as=image; type=image/avif',
      '</fonts/inter-var.woff2>; rel=preload; as=font; type=font/woff2; crossorigin',
    ],
  });

  // Continue generating the full response
  const html = renderHomePage();
  res.send(html);
});

Cloudflare supports 103 Early Hints natively at the CDN level, so even if your origin doesn't generate them, the CDN can learn which resources are needed based on previous responses and send hints automatically on subsequent requests. That's a nice freebie.

Optimizing Resource Load Delay: Help the Browser Find Your LCP Element Instantly

Resource Load Delay is the gap between TTFB and when the browser starts downloading the LCP resource. The two keys to minimizing it are early discovery and high priority.

Make Sure the LCP Image Is in the HTML Source

The browser's preload scanner is a secondary, high-speed HTML parser that scans the raw markup for resource URLs before the main parser even reaches them. It works on the raw HTML — it doesn't execute JavaScript or apply CSS. This means that if your LCP image is only rendered by a JavaScript framework (React, Vue, Angular doing client-side rendering), the preload scanner will never find it.

The fix? Make sure your LCP element is present in the server-rendered HTML:

<!-- GOOD: Image is in raw HTML, discoverable by preload scanner -->
<img
  src="/images/hero.avif"
  alt="Product showcase"
  width="1200"
  height="630"
  fetchpriority="high"
>

<!-- BAD: Image only exists after JS executes -->
<div id="hero-container"></div>
<script>
  // Preload scanner cannot see this image
  document.getElementById('hero-container').innerHTML =
    '<img src="/images/hero.avif" alt="Product showcase">';
</script>

For applications using frameworks like Next.js, Nuxt, or SvelteKit, server-side rendering (SSR) or static site generation (SSG) ensures the LCP element is in the initial HTML. If you're using a single-page application with client-side rendering only, your LCP image will always have a resource load delay equal to the time it takes to download, parse, and execute your JavaScript bundle. That's a pretty significant penalty.

Use fetchpriority="high" on the LCP Image

The fetchpriority attribute is perhaps the single highest-impact, lowest-effort optimization available for LCP. It tells the browser to prioritize downloading a specific resource over other competing resources. Despite how effective it is, data from the HTTP Archive shows that only about 15% of eligible pages use it. That's a lot of missed opportunity.

<!-- Apply fetchpriority="high" to your LCP image -->
<img
  src="/images/hero-product.avif"
  alt="Featured product"
  width="800"
  height="600"
  fetchpriority="high"
  decoding="async"
>

<!-- For CSS background images, use a preload link with fetchpriority -->
<link
  rel="preload"
  href="/images/hero-bg.avif"
  as="image"
  type="image/avif"
  fetchpriority="high"
>

Real-world impact: preloading a background image and using fetchpriority="high" has been shown to improve LCP from 3.4 seconds to 1.7 seconds — a 50% reduction from a single attribute. We've seen similar results across multiple client projects.

Never Lazy-Load the LCP Image

This seems obvious, but it's one of the most common mistakes we see in the wild. Many CMS platforms and component libraries apply loading="lazy" to all images by default. If your LCP image is above the fold, lazy loading will delay its download until the browser determines it's near the viewport — which for an above-the-fold image means an unnecessary round-trip delay.

<!-- WRONG: Do NOT lazy-load your LCP image -->
<img src="/hero.jpg" alt="Hero" loading="lazy">

<!-- CORRECT: Eager load (default) with high priority -->
<img src="/hero.jpg" alt="Hero" fetchpriority="high">

<!-- Lazy-load is perfect for below-the-fold images -->
<img src="/product-1.jpg" alt="Product 1" loading="lazy">
<img src="/product-2.jpg" alt="Product 2" loading="lazy">

Preload the LCP Image When It Can't Be in an img Tag

Sometimes the LCP element is a CSS background image, a video poster, or a dynamically loaded resource. In these cases, use <link rel="preload"> in the <head> to make it discoverable early:

<head>
  <!-- Preload hero background image with format hint -->
  <link
    rel="preload"
    href="/images/hero-bg.avif"
    as="image"
    type="image/avif"
    fetchpriority="high"
  >

  <!-- Preload with responsive images using imagesrcset -->
  <link
    rel="preload"
    as="image"
    fetchpriority="high"
    imagesrcset="
      /images/hero-400w.avif 400w,
      /images/hero-800w.avif 800w,
      /images/hero-1200w.avif 1200w"
    imagesizes="(max-width: 600px) 400px, (max-width: 1024px) 800px, 1200px"
  >
</head>

Optimizing Resource Load Duration: Deliver Smaller, Faster Resources

Once the browser discovers and starts downloading the LCP resource, the download time comes down to file size and network conditions. You can't control the user's network, but you absolutely can control the file size.

Adopt AVIF as Your Primary Image Format

AVIF (AV1 Image File Format) has matured significantly and in 2026 enjoys broad browser support across Chrome, Firefox, Safari (from version 16.1), and Edge. AVIF achieves approximately 50% smaller file sizes than JPEG at the same perceived visual quality, and 20-30% smaller than WebP. For LCP images, that translates directly to faster downloads.

Converting just your hero images to AVIF can shave 200-500ms off your LCP time on typical mobile connections. Here's a practical implementation using the <picture> element for progressive format delivery:

<picture>
  <!-- AVIF: smallest file size, best quality -->
  <source
    srcset="
      /images/hero-400w.avif 400w,
      /images/hero-800w.avif 800w,
      /images/hero-1200w.avif 1200w"
    sizes="(max-width: 600px) 400px, (max-width: 1024px) 800px, 1200px"
    type="image/avif"
  >
  <!-- WebP: fallback for older browsers -->
  <source
    srcset="
      /images/hero-400w.webp 400w,
      /images/hero-800w.webp 800w,
      /images/hero-1200w.webp 1200w"
    sizes="(max-width: 600px) 400px, (max-width: 1024px) 800px, 1200px"
    type="image/webp"
  >
  <!-- JPEG: universal fallback -->
  <img
    src="/images/hero-800w.jpg"
    srcset="
      /images/hero-400w.jpg 400w,
      /images/hero-800w.jpg 800w,
      /images/hero-1200w.jpg 1200w"
    sizes="(max-width: 600px) 400px, (max-width: 1024px) 800px, 1200px"
    alt="Featured product showcase"
    width="1200"
    height="630"
    fetchpriority="high"
    decoding="async"
  >
</picture>

Automate Image Optimization in Your Build Pipeline

Manually converting images isn't sustainable. Integrate image optimization into your build process instead. Here's an example using sharp in a Node.js build script:

const sharp = require('sharp');
const glob = require('fast-glob');
const path = require('path');

async function optimizeImages() {
  const images = await glob('src/images/**/*.{jpg,jpeg,png}');

  for (const imagePath of images) {
    const basename = path.basename(imagePath, path.extname(imagePath));
    const outputDir = 'dist/images';

    const widths = [400, 800, 1200];

    for (const width of widths) {
      // Resize to the target width without upscaling smaller sources
      const pipeline = sharp(imagePath).resize({ width, withoutEnlargement: true });

      // Generate AVIF
      await pipeline
        .clone()
        .avif({ quality: 65, effort: 6 })
        .toFile(`${outputDir}/${basename}-${width}w.avif`);

      // Generate WebP
      await pipeline
        .clone()
        .webp({ quality: 75, effort: 6 })
        .toFile(`${outputDir}/${basename}-${width}w.webp`);

      // Generate optimized JPEG fallback
      await pipeline
        .clone()
        .jpeg({ quality: 80, progressive: true, mozjpeg: true })
        .toFile(`${outputDir}/${basename}-${width}w.jpg`);
    }

    console.log(`Optimized: ${basename}`);
  }
}

optimizeImages();

Serve Images from a CDN with Automatic Format Negotiation

Modern image CDNs like Cloudflare Images, Imgix, and Cloudinary can automatically negotiate the best format based on the client's Accept header. This removes the need for <picture> fallback chains entirely:

<!-- With an image CDN, one URL serves the optimal format automatically -->
<img
  src="https://cdn.example.com/images/hero.jpg?w=1200&q=75&auto=format"
  alt="Hero image"
  width="1200"
  height="630"
  fetchpriority="high"
  decoding="async"
  srcset="
    https://cdn.example.com/images/hero.jpg?w=400&q=75&auto=format 400w,
    https://cdn.example.com/images/hero.jpg?w=800&q=75&auto=format 800w,
    https://cdn.example.com/images/hero.jpg?w=1200&q=75&auto=format 1200w"
  sizes="(max-width: 600px) 400px, (max-width: 1024px) 800px, 1200px"
>

The CDN inspects the browser's Accept header, sees that it supports AVIF, and serves the AVIF version — all from a single URL. No <picture> element needed, no build-time format generation required. The trade-off is cost and vendor dependency, but for teams that want to move fast, it's honestly the most pragmatic approach.
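If you template these URLs in many places, a small helper keeps the widths and query parameters consistent. A sketch assuming the same illustrative ?w=/?q=/auto=format parameters as the example above (real CDNs each have their own parameter names):

```javascript
// Build a srcset string for an image CDN that accepts width and quality
// query parameters (parameter names mirror the illustrative URLs above).
function buildSrcset(baseUrl, widths, quality = 75) {
  return widths
    .map((w) => `${baseUrl}?w=${w}&q=${quality}&auto=format ${w}w`)
    .join(', ');
}

console.log(buildSrcset('https://cdn.example.com/images/hero.jpg', [400, 800]));
```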

Optimizing Element Render Delay: Unblock the Painting Pipeline

Your LCP image has been discovered early, downloaded quickly, and is sitting in the browser's memory — but it still hasn't been painted to the screen. Render delay is the final obstacle, and it's almost always caused by render-blocking resources or main thread congestion.

Eliminate Render-Blocking CSS

CSS is render-blocking by default. The browser won't paint anything until all CSS in the <head> has been downloaded and parsed. If you have a large stylesheet that includes styles for your entire site, the browser has to download all of it before rendering any of your page — including the LCP element.

The solution is to inline the critical CSS needed for above-the-fold content and defer the rest:

<head>
  <!-- Inline critical CSS for above-the-fold content -->
  <style>
    /* Only styles needed for initial viewport render */
    .hero { position: relative; width: 100%; aspect-ratio: 16/9; }
    .hero img { width: 100%; height: 100%; object-fit: cover; }
    .nav { display: flex; align-items: center; padding: 1rem; }
    /* ... minimal critical styles ... */
  </style>

  <!-- Load full stylesheet without blocking render -->
  <link
    rel="preload"
    href="/css/main.css"
    as="style"
    onload="this.onload=null;this.rel='stylesheet'"
  >
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>

Tools like critters (for webpack/Vite) or critical (standalone) can automate critical CSS extraction during your build process.

Avoid Blocking JavaScript Before the LCP Element

Synchronous <script> tags in the <head> or before the LCP element in the body will block the parser and delay rendering. Use defer or async attributes on all non-critical scripts, and move scripts that must be synchronous to after the LCP element in the DOM:

<head>
  <!-- These scripts will NOT block rendering -->
  <script src="/js/app.js" defer></script>
  <script src="/js/analytics.js" async></script>
</head>
<body>
  <!-- LCP element renders without waiting for JS -->
  <section class="hero">
    <img src="/images/hero.avif" alt="Hero" fetchpriority="high">
  </section>

  <!-- Non-critical JS loads after LCP content -->
  <script src="/js/carousel.js" defer></script>
</body>

Optimize Web Font Loading

If your LCP element is a text block (heading, paragraph), web fonts can delay its rendering. The browser may wait for the font to download before painting text — the so-called "Flash of Invisible Text" or FOIT behavior. Use font-display: swap to ensure text renders immediately with a fallback font:

@font-face {
  font-family: 'Inter';
  src: url('/fonts/inter-var.woff2') format('woff2-variations');
  font-weight: 100 900;
  font-display: swap;
  unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+2000-206F;
}

/* Reduce layout shift from font swap with size-adjust */
@font-face {
  font-family: 'Inter Fallback';
  src: local('Arial');
  size-adjust: 107%;
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}

And don't forget to preload your critical font file to reduce the time the fallback font is visible:

<link
  rel="preload"
  href="/fonts/inter-var.woff2"
  as="font"
  type="font/woff2"
  crossorigin
>

LCP for Video Elements: What Changed in 2026

A significant change in how Chrome measures LCP is the inclusion of video elements as LCP candidates. Previously, only videos with poster images were considered. Now, the first frame of an auto-playing video can be the LCP element. This matters because hero videos are increasingly common, and their performance directly impacts your LCP score.

Here's how to optimize video LCP:

<!-- Provide a poster image as an immediate visual -->
<video
  autoplay
  muted
  loop
  playsinline
  poster="/images/hero-poster.avif"
  width="1920"
  height="1080"
>
  <source src="/video/hero.mp4" type="video/mp4">
</video>

<!-- Preload the poster image for fastest LCP -->
<link
  rel="preload"
  href="/images/hero-poster.avif"
  as="image"
  fetchpriority="high"
>

<!-- For video-as-LCP, preload the video itself -->
<link
  rel="preload"
  href="/video/hero.mp4"
  as="video"
  fetchpriority="high"
>

Keep hero videos short, compressed, and ideally under 2 MB. Use modern codecs like H.265 (HEVC) or AV1 where browser support allows, and always provide a poster image as a fast visual placeholder.
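To sanity-check that 2 MB target, estimate file size from bitrate and duration: megabits per second times seconds, divided by 8, gives megabytes. A rough sketch that ignores container overhead and any audio track:

```javascript
// Rough video file size in megabytes: bitrate (Mbps) x duration (s) / 8.
// Treat this as a lower bound — container overhead and audio add more.
function estimatedVideoMB(bitrateMbps, seconds) {
  return (bitrateMbps * seconds) / 8;
}

console.log(estimatedVideoMB(1.6, 10)); // 2 — a 10s loop at 1.6 Mbps hits the 2 MB ceiling
```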

Framework-Specific LCP Optimization

Modern frameworks have built-in features to help with LCP. Here's how to get the most out of them.

Next.js

Next.js provides the next/image component, which handles many LCP optimizations automatically. The key is to set the priority prop on your LCP image:

import Image from 'next/image';

export default function HeroSection() {
  return (
    <section className="hero">
      {/* The priority prop adds fetchpriority="high" and preload link */}
      <Image
        src="/images/hero.jpg"
        alt="Hero showcase"
        width={1200}
        height={630}
        priority
        sizes="(max-width: 768px) 100vw, (max-width: 1200px) 80vw, 1200px"
      />
    </section>
  );
}

The priority prop tells Next.js to add a <link rel="preload"> tag in the <head>, set fetchpriority="high", disable lazy loading, and prioritize the image in the loading waterfall. Without it, Next.js lazy-loads all images by default — which is exactly what you don't want for your LCP image.

Nuxt

Nuxt Image (@nuxt/image) provides similar capabilities for the Vue ecosystem:

<template>
  <section class="hero">
    <NuxtImg
      src="/images/hero.jpg"
      alt="Hero showcase"
      width="1200"
      height="630"
      preload
      loading="eager"
      fetchpriority="high"
      sizes="sm:400px md:800px lg:1200px"
      format="avif,webp"
    />
  </section>
</template>

Astro

Astro's zero-JS-by-default approach is inherently great for LCP. Its built-in <Image> component optimizes images at build time:

---
import { Image } from 'astro:assets';
import heroImage from '../assets/hero.jpg';
---

<section class="hero">
  <Image
    src={heroImage}
    alt="Hero showcase"
    widths={[400, 800, 1200]}
    sizes="(max-width: 600px) 400px, (max-width: 1024px) 800px, 1200px"
    format="avif"
    quality={75}
    loading="eager"
  />
</section>

A Complete LCP Audit Checklist

Use this checklist to systematically audit and improve your LCP score. Work through each section in order — the phases are listed from most impactful to least.

Identify Your LCP Element

  • Open Chrome DevTools, run a Lighthouse audit, or use the Performance panel to identify what your LCP element actually is
  • Check that it matches what you expect — sometimes the LCP element turns out to be a background image or a text block, not the hero image you were thinking of
  • Verify the LCP element across different viewport sizes (mobile vs. desktop) as it may differ

TTFB (Target: under 800ms)

  • Deploy a CDN and ensure HTML responses are cached at the edge
  • Implement server-side caching (in-memory, Redis) for dynamic pages
  • Use HTTP/2 or HTTP/3 for multiplexed connections
  • Enable 103 Early Hints if your CDN supports it
  • Consider edge-side rendering for dynamic pages with global audiences

Resource Load Delay (Target: under 250ms)

  • Ensure the LCP image is in the HTML source (not JavaScript-rendered)
  • Add fetchpriority="high" to the LCP image
  • Remove loading="lazy" from the LCP image
  • Use <link rel="preload"> for CSS background images or dynamically loaded images
  • Use SSR or SSG instead of client-side rendering for pages with image-based LCP

Resource Load Duration (Target: under 1000ms)

  • Serve images in AVIF format with WebP and JPEG fallbacks
  • Use responsive images with appropriate srcset and sizes
  • Compress images aggressively — AVIF at quality 65-75 typically looks identical to JPEG at quality 85
  • Serve images from a CDN edge location near the user
  • Set proper Cache-Control headers for aggressive browser caching

Element Render Delay (Target: under 250ms)

  • Inline critical CSS and defer the rest
  • Use defer or async on all non-critical scripts
  • Use font-display: swap and preload critical fonts
  • Minimize third-party scripts that execute before the LCP element renders
  • Ensure no synchronous JavaScript blocks the parser before the LCP element

Monitoring LCP in Production

Lab tools like Lighthouse give you a controlled environment, but real-user monitoring (RUM) reveals the true LCP experience across diverse devices and networks. Setting up continuous monitoring is how you catch regressions before they become problems.

import { onLCP } from 'web-vitals/attribution';

// Report LCP data with sub-part attribution
onLCP((metric) => {
  const data = {
    value: metric.value,
    rating: metric.rating,
    navigationType: metric.navigationType,
    url: location.href,
    subParts: {
      ttfb: metric.attribution.timeToFirstByte,
      resourceLoadDelay: metric.attribution.resourceLoadDelay,
      resourceLoadDuration: metric.attribution.resourceLoadDuration,
      elementRenderDelay: metric.attribution.elementRenderDelay,
    },
    lcpElement: metric.attribution.target, // selector string, safe to JSON-serialize
    lcpUrl: metric.attribution.url,
    deviceType: navigator.userAgentData?.mobile ? 'mobile' : 'desktop',
    connectionType: navigator.connection?.effectiveType || 'unknown',
  };

  // Send to your analytics endpoint
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/vitals', JSON.stringify(data));
  }
});

// Set up alerts for LCP regressions
// In your analytics dashboard, create alerts when:
// - p75 LCP exceeds 2500ms
// - p75 LCP increases by more than 10% week-over-week
// - Any LCP sub-part exceeds its budget allocation

Tools like DebugBear, SpeedCurve, Sentry Performance, and Google's CrUX dashboard provide pre-built LCP monitoring with sub-part breakdowns. If you're using Google Search Console, the Core Web Vitals report shows LCP at the origin level and flags pages that need improvement.

Wrapping Up: The LCP Optimization Mindset

Optimizing LCP isn't about finding a single silver bullet — it's about systematically addressing each of the four sub-parts. Start by measuring your LCP sub-parts in the field to identify your actual bottleneck. If TTFB is the problem, focus on server and CDN infrastructure. If resource load delay is the issue, fix your resource discovery and prioritization. If load duration is slow, compress and resize your images. If render delay is the culprit, address render-blocking resources.

The most impactful changes are often the simplest: adding fetchpriority="high" to your LCP image, removing an accidental loading="lazy", switching from JPEG to AVIF, or inlining critical CSS. Each of these can individually shave hundreds of milliseconds off your LCP. Combined, they can transform a "poor" LCP score into a "good" one.

And here's the thing worth remembering: LCP is measured at the 75th percentile of real user experiences. Your lab tests on a fast laptop with a fiber connection don't represent the experience of most of your users on mobile devices with variable network conditions. Always validate your optimizations against real-user data, and keep monitoring to catch regressions before your users — and Google's ranking algorithm — notice them.

About the Author

Editorial Team: our team of expert writers and editors.