Fix Poor INP Score Caused by JavaScript Long Tasks: Diagnosis and Repair Guide

Diagnose and fix poor INP scores caused by JavaScript long tasks. Covers scheduler.yield(), the LoAF API, DevTools profiling, framework-specific fixes for React, Angular, and Vue, plus a complete before-and-after verification workflow.

If your site's failing Google's Interaction to Next Paint threshold and you need to fix your INP score, the culprit is almost certainly JavaScript long tasks hogging the main thread. Long tasks — any chunk of JavaScript that runs for more than 50 milliseconds without yielding — block the browser from responding to clicks, taps, and key presses. The result? Sluggish interaction to next paint metrics that hurt both user experience and search rankings.

With INP firmly established as a Core Web Vital in 2026, understanding the relationship between JavaScript long tasks and INP isn't optional anymore. This guide walks you through a complete workflow: diagnosing exactly which long tasks are degrading your INP, breaking them apart using modern APIs, and verifying improvements in both lab and field data.

What Are JavaScript Long Tasks and Why Do They Destroy INP?

The browser's main thread handles everything users actually care about: parsing HTML, running JavaScript, processing input events, and painting pixels. It processes work in discrete units called tasks. When a single task runs longer than 50 ms, the browser labels it a long task. During that time, the event loop is completely blocked — no user interaction can be processed until the task finishes.

INP measures the worst-case latency between a user interaction (click, tap, keypress) and the next visual update. For a given page visit, the reported value is the single worst interaction; field tools like CrUX then aggregate visits at the 75th percentile to score a page or origin. Google considers scores at or below 200 ms "good," between 200 and 500 ms "needs improvement," and above 500 ms "poor."

Long tasks damage INP in all three of its phases:

  • Input Delay — If a long task is already running when the user interacts, the browser has to finish that task before it can even start processing the event handler. A 300 ms analytics script running at the wrong moment? That's 300 ms of pure input delay, right there.
  • Processing Time — If the event handler itself is a long task (say, a complex filter recalculation on a search page), the processing phase balloons.
  • Presentation Delay — After the handler completes, the browser must recalculate styles, perform layout, and paint. A bloated DOM or forced synchronous layout inside the handler extends this phase into long-task territory.

Here's the critical insight: a single 400 ms long task can torpedo your entire page's INP, even if every other interaction completes in under 100 ms. INP reports the worst interaction (or the 98th percentile on pages with 50+ interactions), so one bad actor is enough to ruin your score.

How to Diagnose Long Tasks Causing Poor INP

Before you fix anything, you need to know which long tasks are responsible. A scattershot approach wastes effort. Here's a systematic diagnosis workflow combining field data, lab tooling, and programmatic APIs.

Step 1: Identify the Worst Interactions with Field Data

Lab testing only captures interactions you manually perform. Real users, though — they interact in ways you can't predict. They click during hydration, tap while a third-party tag manager fires, type while an analytics beacon serializes data. So, start with field data.

Use the web-vitals JavaScript library (v4+) to capture INP along with attribution data in production:

import { onINP } from 'web-vitals/attribution';

onINP((metric) => {
  const attribution = metric.attribution;
  console.log('INP value:', metric.value, 'ms');
  console.log('Element:', attribution.interactionTarget);
  console.log('Input delay:', attribution.inputDelay, 'ms');
  console.log('Processing time:', attribution.processingDuration, 'ms');
  console.log('Presentation delay:', attribution.presentationDelay, 'ms');

  // LoAF entries that overlapped this interaction
  const loafs = attribution.longAnimationFrameEntries;
  loafs.forEach((loaf) => {
    loaf.scripts.forEach((script) => {
      console.log('Blocking script:', script.sourceURL);
      console.log('  Duration:', script.duration, 'ms');
      console.log('  Function:', script.sourceFunctionName);
    });
  });

  // Send to your analytics endpoint
  navigator.sendBeacon('/api/vitals', JSON.stringify({
    name: metric.name,
    value: metric.value,
    page: location.pathname,
    attribution: {
      target: attribution.interactionTarget,
      inputDelay: attribution.inputDelay,
      processingDuration: attribution.processingDuration,
      presentationDelay: attribution.presentationDelay,
      scripts: loafs.flatMap(l => l.scripts.map(s => ({
        url: s.sourceURL,
        duration: s.duration,
        function: s.sourceFunctionName
      })))
    }
  }));
});

After collecting data for a few days, aggregate by page template and interaction target. You'll almost always discover that INP problems cluster around a small number of templates — checkout flows, search result pages, pricing tables, and filter-heavy listings are the usual suspects.
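Aggregation doesn't need heavy tooling. Assuming the beacon payload shape from the snippet above, the server-side grouping can be sketched in a few lines (the percentile helper and function names here are illustrative, not part of the web-vitals library):

```javascript
// Group INP beacons by page + interaction target and report the p75
// per group, so the worst templates and elements surface first.
function percentile(sortedValues, p) {
  const idx = Math.ceil(p * sortedValues.length) - 1;
  return sortedValues[Math.max(0, idx)];
}

function worstInteractions(beacons) {
  const groups = new Map();
  for (const b of beacons) {
    const key = `${b.page} :: ${b.attribution.target}`;
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(b.value);
  }
  return [...groups]
    .map(([key, values]) => ({
      key,
      count: values.length,
      p75: percentile(values.slice().sort((a, b) => a - b), 0.75),
    }))
    .sort((a, b) => b.p75 - a.p75); // worst offenders first
}
```

Run this over a few days of beacons and the top of the list is your fix queue.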

Step 2: Reproduce in Chrome DevTools Performance Panel

Once you know which page and interaction to target, reproduce the issue in the lab:

  1. Open Chrome DevTools and navigate to the Performance panel.
  2. Enable CPU Throttling at 4x to simulate a mid-range mobile device. Desktop CPUs mask long tasks that are devastating on real phones.
  3. Click Record, perform the slow interaction, then stop recording.
  4. Locate the interaction in the Interactions track. Interactions exceeding 200 ms are highlighted with a red/orange warning.
  5. Hover over the interaction to see the three-phase breakdown: input delay, processing duration, and presentation delay.
  6. Look at the Main thread track directly above. Long tasks appear with a red triangle in the top-right corner. Tasks overlapping your interaction's input delay phase are the blockers causing the most damage.
  7. Click on the long task to see its call stack. Drill down to find the specific JavaScript function eating up the most time.

Pay special attention to the left whisker on the interaction bar — this represents input delay. If it's long, something was running before your handler even started. Look for Timer Fired events, script evaluation, or third-party callbacks in the main thread at that exact moment.
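The same three-phase breakdown can also be computed programmatically from Event Timing entries; the fields used below (startTime, processingStart, processingEnd, duration) all come from the PerformanceEventTiming interface, while the helper itself is a sketch:

```javascript
// Split a PerformanceEventTiming entry into INP's three phases.
function interactionPhases(entry) {
  return {
    inputDelay: entry.processingStart - entry.startTime,
    processingTime: entry.processingEnd - entry.processingStart,
    // duration spans from startTime to the next paint
    // (rounded to 8ms granularity by the browser)
    presentationDelay:
      entry.startTime + entry.duration - entry.processingEnd,
  };
}
```

Feed it entries from a `new PerformanceObserver(cb).observe({ type: 'event', durationThreshold: 16, buffered: true })` observer to log the breakdown for every slow interaction, not just the ones you reproduce by hand.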

Step 3: Use the Long Animation Frames (LoAF) API Programmatically

The LoAF API (shipped in Chrome 123, available in all Chromium browsers) replaces the older Long Tasks API with far richer attribution. It captures every script that executed for more than 5 ms within a long animation frame, including its source URL, function name, and character position.

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.duration > 100) { // Focus on frames > 100ms
      console.group(`Long Animation Frame: ${entry.duration.toFixed(0)}ms`);
      console.log('Start time:', entry.startTime.toFixed(0));
      console.log('Block duration:', entry.blockingDuration.toFixed(0), 'ms');

      for (const script of entry.scripts) {
        console.log(`  Script: ${script.sourceURL}`);
        console.log(`    Function: ${script.sourceFunctionName}`);
        console.log(`    Duration: ${script.duration.toFixed(0)}ms`);
        console.log(`    Type: ${script.invokerType}`);
      }
      console.groupEnd();
    }
  }
});

observer.observe({ type: 'long-animation-frame', buffered: true });

Quick tip: if third-party scripts appear without a sourceURL, the browser is withholding cross-origin attribution. Add the crossorigin="anonymous" attribute to those script tags (the third party must also serve an Access-Control-Allow-Origin header) to unlock full attribution.

Step 4: Correlate TBT with INP

Total Blocking Time (TBT) measures the total amount of time after First Contentful Paint where long tasks blocked the main thread. While TBT is a load-time metric and INP covers the full page lifecycle, a high TBT is a strong signal that long tasks during page load will also cause poor INP if users interact early.

Use Lighthouse or WebPageTest to check TBT, and treat anything above 300 ms as a red flag worth investigating.
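The arithmetic behind TBT is simple: each long task contributes only its time beyond the 50 ms budget. A sketch that computes it from task durations collected via a `longtask` PerformanceObserver:

```javascript
// TBT = sum of each task's time beyond the 50ms budget.
// A 30ms task contributes 0; an 80ms task contributes 30.
function totalBlockingTime(taskDurations) {
  return taskDurations.reduce(
    (total, duration) => total + Math.max(0, duration - 50),
    0
  );
}
```

So the 300 ms red flag is reached by a single 350 ms task just as easily as by ten 80 ms tasks; the fix priority differs, but both patterns deserve attention.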

Seven Techniques to Break Up Long Tasks and Fix INP

With the offending long tasks identified, let's get into the fixes. These are ordered by impact and ease of implementation.

Technique 1: Yield with scheduler.yield()

scheduler.yield() is honestly the single most important API for fixing INP in 2026. It yields control back to the main thread so the browser can process pending user interactions, then resumes your work as a prioritized continuation — meaning your code jumps to the front of the task queue instead of going to the back like setTimeout.

Browser support as of February 2026: Chrome (since 2024), Edge (Chromium-based), Firefox (since August 2025). Safari doesn't support it yet.

// Before: one monolithic long task
function processAllItems(items) {
  for (const item of items) {
    heavyComputation(item);   // 5ms per item
    updateDOM(item);          // 3ms per item
  }
  // 800ms total for 100 items — a brutal long task
}

// After: yielding every 40ms keeps tasks under 50ms
async function processAllItems(items) {
  let lastYield = performance.now();

  for (const item of items) {
    heavyComputation(item);
    updateDOM(item);

    // Yield if we have been running for 40ms+
    if (performance.now() - lastYield > 40) {
      await scheduler.yield();
      lastYield = performance.now();
    }
  }
}

Always include a polyfill for Safari and older browsers:

// scheduler.yield() polyfill — falls back to setTimeout
globalThis.scheduler ??= {};
globalThis.scheduler.yield ??= () => new Promise(resolve => setTimeout(resolve, 0));

Why 40 ms instead of 50? It's a safety margin. With slight timing variations, you want to make sure the browser never classifies the work as a long task. Those 10 ms of buffer have saved me from false positives more times than I can count.

Technique 2: Prioritize with scheduler.postTask()

scheduler.postTask() lets you schedule work at three explicit priority levels: user-blocking (highest), user-visible (default), and background (lowest). This is invaluable for separating critical interaction work from deferrable processing.

button.addEventListener('click', async () => {
  // Immediate visual feedback — user-blocking priority
  showLoadingSpinner();

  // Defer the expensive work to background priority
  await scheduler.postTask(async () => {
    const data = await fetchAndProcess();
    renderResults(data);
    hideLoadingSpinner();
  }, { priority: 'background' });
});

By moving expensive work to background priority, any subsequent user interactions get processed first, preventing input delay.
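A related pattern: scheduler.postTask() also accepts an abort signal (via TaskController), so background work can be cancelled when a newer interaction supersedes it. The cancellation idea is sketched below with a plain AbortController and a generic yield point so it runs anywhere; the helper name and shape are my own:

```javascript
// Each new batch aborts the previous one at its next yield point,
// so stale background work never competes with fresh interactions.
let currentController = null;

async function runLatestBatch(items, processOne) {
  currentController?.abort();
  currentController = new AbortController();
  const { signal } = currentController;

  const results = [];
  for (const item of items) {
    if (signal.aborted) return null; // superseded by a newer batch
    results.push(processOne(item));
    // Yield between items; prefer scheduler.yield() where available.
    await (globalThis.scheduler?.yield?.() ??
      new Promise((resolve) => setTimeout(resolve, 0)));
  }
  return results;
}
```

In a search-as-you-type UI, this means keystroke N+1 instantly retires the filtering work queued for keystroke N.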

Technique 3: Offload Pure Computation to Web Workers

If a long task is pure computation (no DOM access needed), move it entirely off the main thread with a Web Worker. This eliminates the task from the main thread completely — it's like it was never there.

// worker.js
self.addEventListener('message', (e) => {
  const { items } = e.data;
  const results = items.map(item => {
    // Expensive computation: sorting, filtering, data transformation
    return expensiveTransform(item);
  });
  self.postMessage({ results });
});

// main.js
const worker = new Worker('/worker.js');

button.addEventListener('click', () => {
  showLoadingSpinner();
  worker.postMessage({ items: largeDataSet });
});

worker.addEventListener('message', (e) => {
  renderResults(e.data.results);
  hideLoadingSpinner();
});

One thing to watch out for: serialization costs. Transferring large objects between the main thread and a Worker uses the structured clone algorithm, which can itself become a long task. For large ArrayBuffers, use transferable objects to avoid the copy:

// Transfer the buffer instead of copying it
const buffer = new ArrayBuffer(1024 * 1024);
worker.postMessage({ buffer }, [buffer]);

Technique 4: Lazy-Load and Code-Split JavaScript

Every kilobyte of JavaScript you load must be parsed, compiled, and often executed on the main thread. Script evaluation is itself a common source of long tasks, especially during page load when it overlaps with early user interactions.

// Before: eagerly importing a heavy charting library
import { renderChart } from './heavy-chart-library.js';

// After: dynamically import only when needed
document.querySelector('#show-chart').addEventListener('click', async () => {
  showLoadingSpinner();
  const { renderChart } = await import('./heavy-chart-library.js');
  renderChart(container, data);
  hideLoadingSpinner();
});

Aim to keep individual script files under 100 KB. Larger scripts are more likely to trigger long tasks during evaluation. Use your bundler's code-splitting features (Webpack, Vite, Rollup) to break your application into route-based or feature-based chunks — most modern bundlers make this pretty straightforward.
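Dynamic imports trade evaluation cost for first-click latency: the chunk now has to download when the user interacts. One mitigation is warming the chunk on an intent signal such as pointerenter. A sketch (the module path is a placeholder and the once helper is my own):

```javascript
// Memoize a dynamic import so intent signals and clicks share
// one in-flight load. './heavy-chart-library.js' is a placeholder.
function once(loader) {
  let promise;
  return () => (promise ??= loader());
}

const loadChart = once(() => import('./heavy-chart-library.js'));
```

Wire `loadChart` to both events: `trigger.addEventListener('pointerenter', () => { loadChart(); }, { once: true })` starts the download while the cursor is still approaching, and the click handler simply awaits the same promise.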

Technique 5: Debounce and Throttle High-Frequency Event Handlers

Events like scroll, resize, input, and mousemove can fire dozens of times per second. If your handler runs heavy logic on each event, you're generating dozens of long tasks per second. That's... not great.

// Before: filtering on every keystroke
searchInput.addEventListener('input', (e) => {
  filterAndRenderResults(e.target.value); // 80ms per call
});

// After: debounce to 150ms — the handler only fires once
// the user pauses typing
function debounce(fn, delay) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}

searchInput.addEventListener('input', debounce((e) => {
  filterAndRenderResults(e.target.value);
}, 150));

For scroll-driven animations or layout adjustments, use requestAnimationFrame instead of debounce to synchronize with the browser's rendering cycle and avoid layout thrashing.
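The requestAnimationFrame variant can be wrapped the same way as debounce. A sketch; the raf parameter is injectable purely so the helper can be exercised outside a browser:

```javascript
// Coalesce a high-frequency handler into one call per animation
// frame, always running with the most recent event's arguments.
function rafThrottle(fn, raf = globalThis.requestAnimationFrame) {
  let scheduled = false;
  let lastArgs;
  return (...args) => {
    lastArgs = args;
    if (scheduled) return; // already queued for this frame
    scheduled = true;
    raf(() => {
      scheduled = false;
      fn(...lastArgs); // run once with the latest arguments
    });
  };
}
```

Usage looks like `window.addEventListener('scroll', rafThrottle(updatePosition), { passive: true })`: no matter how many scroll events fire, the work runs at most once per frame, in sync with rendering.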

Technique 6: Eliminate Layout Thrashing

Layout thrashing occurs when JavaScript reads a layout property (like offsetHeight or getBoundingClientRect()) immediately after writing styles. This forces the browser to perform a synchronous layout calculation inside your event handler, which extends processing time significantly.

// BAD: layout thrashing — forced synchronous layout on every iteration
items.forEach(item => {
  item.style.width = container.offsetWidth + 'px'; // read triggers layout
  item.style.height = '100px';                      // write invalidates layout
});

// GOOD: batch reads, then batch writes
const containerWidth = container.offsetWidth; // single read
items.forEach(item => {
  item.style.width = containerWidth + 'px';   // writes only
  item.style.height = '100px';
});

When you need to interleave reads and writes across many elements, use requestAnimationFrame to schedule writes in the next frame, or grab the fastdom library to automatically batch DOM reads and writes. It's a small library that can save you a lot of headaches.
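The core of what fastdom does fits in a few lines. This is a simplified model, not the library's real API, with an injectable raf argument so the flush can be driven manually:

```javascript
// Queue DOM reads and writes separately, then flush all reads
// before all writes so the browser computes layout at most once.
const readQueue = [];
const writeQueue = [];
let flushScheduled = false;

function flush() {
  flushScheduled = false;
  const reads = readQueue.splice(0);
  const writes = writeQueue.splice(0);
  reads.forEach((fn) => fn());  // measurements first
  writes.forEach((fn) => fn()); // then mutations
}

function scheduleFlush(raf = globalThis.requestAnimationFrame) {
  if (flushScheduled) return;
  flushScheduled = true;
  raf(flush);
}

function measure(fn, raf) { readQueue.push(fn); scheduleFlush(raf); }
function mutate(fn, raf) { writeQueue.push(fn); scheduleFlush(raf); }
```

Because every read runs before every write within a frame, no write can invalidate layout ahead of a pending read, which is exactly the thrashing pattern from the "BAD" example above.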

Technique 7: Audit and Defer Third-Party Scripts

Third-party scripts — tag managers, analytics, chat widgets, A/B testing tools — are the most common source of input delay long tasks. They run on their schedule, not yours, and they often fire during critical interaction windows.

Use the LoAF API attribution data from Step 3 to identify which third-party scripts show up in your slow interactions. Then apply one or more of these strategies:

  • Defer loading: Add defer or async to the script tag, or load it dynamically after the page becomes interactive.
  • Load on interaction: For chat widgets and feedback tools, defer loading until the user actually clicks the trigger button. Why pay the cost upfront if most visitors never use it?
  • Move to a Web Worker or service worker: Tools like Partytown can run third-party scripts in a Worker, freeing the main thread entirely.
  • Offload to the edge: Platforms like Cloudflare Zaraz execute tag management logic at the CDN edge, so the JavaScript never runs in the user's browser at all.
  • Remove unused scripts: Audit your tag manager. Unused or redundant tags accumulate over time. I've seen cases where removing a single abandoned tag cut INP by hundreds of milliseconds.
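The load-on-interaction strategy generalizes into a tiny helper. A sketch; injectScriptTag is the browser path, and the injectable parameter exists only so the logic can be verified without a DOM:

```javascript
// Load a third-party script once, on first demand, instead of at
// page load. Repeated calls share the same in-flight promise.
function lazyThirdParty(src, inject = injectScriptTag) {
  let loading = null;
  return () => (loading ??= inject(src));
}

function injectScriptTag(src) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.async = true;
    script.onload = resolve;
    script.onerror = reject;
    document.head.append(script);
  });
}
```

For a chat widget, the trigger button's click handler becomes `await loadChat()` followed by whatever open() call the vendor exposes. Visitors who never click never pay for the script at all.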

Framework-Specific Long Task Patterns in 2026

Modern JavaScript frameworks introduce their own categories of long tasks. Here's how to address them in the big three.

React: Leverage the React Compiler and Concurrent Features

React 19's compiler (formerly React Forget) automatically inserts memoization at build time, eliminating the manual useMemo and useCallback ceremony that many teams skipped anyway. This cuts unnecessary re-renders by 25–40%, directly reducing processing-time long tasks.

For remaining heavy renders, use React's concurrent features:

import { useState, useTransition } from 'react';

function SearchResults() {
  const [isPending, startTransition] = useTransition();
  const [query, setQuery] = useState('');
  const [results, setResults] = useState([]);

  function handleSearch(newQuery) {
    // Immediate: update the input field (user-blocking)
    setQuery(newQuery);

    // Deferred: heavy filtering/rendering is interruptible
    startTransition(() => {
      const filtered = filterLargeDataset(newQuery);
      setResults(filtered);
    });
  }

  return (
    <div>
      <input value={query} onChange={(e) => handleSearch(e.target.value)} />
      {isPending ? <Spinner /> : <ResultsList items={results} />}
    </div>
  );
}

startTransition marks the state update as non-urgent, allowing React to yield to user interactions mid-render. This directly prevents the processing phase from becoming a long task.

Angular: Go Zoneless with Signals

Angular's zone.js-based change detection has historically been a notorious long-task generator — every async callback triggers a full change detection cycle. Angular 20+ supports a zoneless architecture using Signals for fine-grained reactivity, and the performance difference is real (20–30% runtime improvements).

import { Component, signal, computed } from '@angular/core';

@Component({
  selector: 'app-product-filter',
  template: `
    <input (input)="onFilter($event)" />
    <ul>
      @for (product of filteredProducts(); track product.id) {
        <li>{{ product.name }}</li>
      }
    </ul>
  `
})
export class ProductFilterComponent {
  private query = signal('');
  private products = signal<Product[]>([]);

  // Computed signal — only recalculates when query or products change
  filteredProducts = computed(() =>
    this.products().filter(p =>
      p.name.toLowerCase().includes(this.query().toLowerCase())
    )
  );

  onFilter(event: Event) {
    this.query.set((event.target as HTMLInputElement).value);
  }
}

Pair Signals with OnPush change detection and Ahead-of-Time (AOT) compilation to minimize runtime work. The combination eliminates entire categories of change-detection long tasks.

Vue: Explore Vapor Mode for Hot Paths

Vue's Vapor Mode (landing in the 3.6 line) compiles components into direct DOM manipulation code, bypassing the Virtual DOM diffing step entirely. For interaction-heavy components like data tables, autocompletes, and real-time dashboards, Vapor Mode can cut rendering long tasks dramatically.

Even without Vapor Mode, Vue's fine-grained reactivity system with the Composition API already limits re-renders to only the components whose reactive dependencies have changed. Make sure you're using shallowRef for large objects and computed properties instead of inline method calls in templates — it's an easy win that avoids unnecessary recalculations.

Verifying Your Fixes: A Before-and-After Workflow

You've applied fixes. Great. But how do you know they actually worked? You need to validate in both lab and field environments. Here's how.

Lab Validation with DevTools

  1. Record a performance trace of the same interaction you diagnosed earlier.
  2. Confirm that the long task (red triangle) is gone or reduced below 50 ms.
  3. Check the Interactions track: the interaction should no longer show a red/orange warning, and hovering should reveal all three phases under 200 ms total.
  4. Test with 4x CPU throttling enabled — your fix has to hold up under constrained conditions, not just on your beefy developer machine.

Field Validation with CrUX and RUM

Lab improvements don't always translate to field improvements (and vice versa). Monitor these after deploying your fix:

  • Chrome User Experience Report (CrUX): Check your origin's 75th-percentile INP in PageSpeed Insights or the CrUX Dashboard. CrUX data updates on a 28-day rolling window, so expect a 2–4 week lag before you see the full effect.
  • Real User Monitoring (RUM): Tools like DebugBear, SpeedCurve, Sentry, or your custom web-vitals integration will show improvements within hours of deployment. Compare the INP distribution before and after.
  • Search Console: The Core Web Vitals report in Google Search Console categorizes your URLs as Good, Needs Improvement, or Poor. Watch for your pages to shift from the red/yellow zones into green.

Continuous Monitoring

INP isn't a fix-it-and-forget-it metric. New feature code, updated third-party scripts, and changed content can reintroduce long tasks at any time. Set up automated alerts when your 75th-percentile INP crosses 200 ms on key pages, and run performance regression tests in CI using tools like Lighthouse CI or WebPageTest's API.
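For Lighthouse CI, a budget assertion on Total Blocking Time catches long-task regressions before they ship. A sketch of a lighthouserc.js; the URL, run count, and 300 ms budget are examples to adapt:

```javascript
// lighthouserc.js — fail the CI build if long tasks regress.
module.exports = {
  ci: {
    collect: {
      url: ['https://example.com/checkout'], // your key template
      numberOfRuns: 3, // median over several runs reduces noise
    },
    assert: {
      assertions: {
        // Lab proxy for INP risk: cap Total Blocking Time.
        'total-blocking-time': ['error', { maxNumericValue: 300 }],
      },
    },
  },
};
```

Lighthouse can't measure INP directly (it performs no interactions), so TBT is the budget to enforce in CI while RUM watches the real metric.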

Common Mistakes When Fixing INP Long Tasks

Even experienced developers trip up on these. Don't be that person:

  • Yielding too often: Calling scheduler.yield() after every micro-operation adds scheduling overhead that can actually make things slower. Use a time-based deadline (yield every 40–50 ms of accumulated work), not after every single item.
  • Testing only on desktop: Desktop CPUs are 4–8x faster than mid-range mobile devices. An interaction that takes 80 ms on your MacBook takes 320–640 ms on a real phone. Always test with CPU throttling.
  • Ignoring input delay: Many developers focus on optimizing their event handler (processing time) but completely miss input delay caused by other scripts running before the handler starts. LoAF attribution catches these — use it.
  • Using setTimeout for yielding without understanding the cost: setTimeout(fn, 0) pushes work to the back of the task queue, while scheduler.yield() puts it at the front. For sequential work that should resume quickly, scheduler.yield() is strictly better.
  • Fixing one interaction and declaring victory: INP reports the worst interaction. When you fix the slowest one, the second-slowest becomes your new INP. Keep iterating until your 75th percentile is safely under 200 ms.

Real-World Impact: Why Fixing Long Tasks Matters

The business case for fixing INP long tasks is well-documented. Preply improved their INP from 250 ms to 185 ms by eliminating unnecessary React re-renders and deferring non-critical callbacks — that correlated with an estimated $200,000 per year in additional organic search value. redBus saw 80–100% improvement in mobile conversion rates after reducing JavaScript execution time. Google's own research shows that improving INP from 500 ms to 200 ms correlates with up to 22% better user engagement.

These aren't edge cases. They're the predictable result of respecting the main thread. Every millisecond you shave off a long task is a millisecond the browser can spend responding to the people actually using your site.

Frequently Asked Questions

What is a good INP score in 2026?

Google considers an INP score of 200 milliseconds or less (measured at the 75th percentile) as "good." Scores between 200 and 500 ms need improvement, and anything above 500 ms is classified as poor. Since INP replaced First Input Delay (FID) as a Core Web Vital in March 2024, it directly affects your Core Web Vitals assessment in Search Console and can influence search rankings.

How do I find which JavaScript is causing my poor INP?

Use a combination of field data and lab tools. Deploy the web-vitals library (v4+) with attribution enabled to capture which element, page, and scripts are involved in your slowest interactions. Then reproduce the issue in Chrome DevTools' Performance panel with 4x CPU throttling. The Interactions track shows the three-phase breakdown, and the Main thread track reveals long tasks with red triangles. The Long Animation Frames (LoAF) API provides script-level attribution including source URLs and function names.

Is scheduler.yield() supported in all browsers?

As of February 2026, scheduler.yield() is supported in Chrome (since 2024), Edge (Chromium-based), and Firefox (since August 2025). Safari doesn't support it yet and has no announced timeline. Always include a polyfill that falls back to setTimeout(resolve, 0) so your code works across all browsers.

What is the difference between long tasks and long animation frames?

Long tasks are any main-thread tasks exceeding 50 ms, as reported by the older Long Tasks API. Long Animation Frames (LoAF) are a superset that measures the entire rendering cycle — from task start through style/layout recalculation to paint. LoAF also provides detailed script attribution (source URL, function name, duration per script) that the Long Tasks API lacks. For INP debugging, LoAF is strictly more useful and is the recommended API in Chromium browsers since Chrome 123.

Can third-party scripts cause poor INP even if my own code is optimized?

Absolutely. Third-party scripts — analytics, tag managers, chat widgets, A/B testing tools — run on the same main thread as your code. If a third-party script is executing a long task at the moment a user interacts, it creates input delay that directly inflates your INP score. Use LoAF attribution data to identify the offending scripts, then defer, remove, or offload them using tools like Partytown or Cloudflare Zaraz.

About the Author

Editorial Team: our team of expert writers and editors.