Performance Budgets with Lighthouse CI: Catch Regressions Before They Ship

Learn how to define performance budgets and enforce them with Lighthouse CI in GitHub Actions. Catch regressions automatically with budget.json config, lighthouserc.js setup, and real-world thresholds for e-commerce, blogs, and SaaS dashboards.

Why Your Site Gets Slower with Every Deploy

Performance degradation rarely happens in one dramatic failure. It creeps in — a new analytics script here, an unoptimized hero image there, a dependency upgrade that quietly adds 40 KB of JavaScript. Each change passes code review because it looks perfectly reasonable in isolation.

But over months, your Largest Contentful Paint drifts from 1.8 seconds to 3.4 seconds, and nobody can point to the single commit that caused it. I've seen this happen on teams that genuinely care about performance. It's not a people problem — it's a process problem.

Performance budgets solve this. They establish hard numeric thresholds for metrics that matter — page weight, script size, LCP, CLS — and enforce them automatically on every pull request. When a budget is exceeded, the build fails before the regression reaches production. Think of them as unit tests for speed.

In this guide, you'll learn how to define effective performance budgets, enforce them with Lighthouse CI in GitHub Actions (and other CI platforms), and build a workflow that keeps your site fast without slowing down your team.

What Is a Performance Budget?

A performance budget is a set of quantitative limits on metrics that affect user experience. Unlike a performance goal ("we want to be fast"), a budget is a constraint — it triggers automated action when exceeded. Goals are aspirational; budgets are enforceable.

The reason budgets work so well is simple: they shift performance from a periodic audit activity to a continuous integration concern. Every commit is measured. Every regression is caught. And the conversation about tradeoffs happens at the right time — during the pull request, not after users start complaining.

The Three Types of Performance Budgets

Effective budgets combine metrics from three categories:

  • Resource budgets (quantity-based): Limits on the size or count of resources loaded by the page. Examples: total JavaScript under 200 KB, images under 1 MB, no more than 25 HTTP requests. These are deterministic and fail early in the pipeline.
  • Timing budgets (milestone-based): Limits on user-centric timing metrics like LCP, First Contentful Paint (FCP), or Time to Interactive (TTI). These are closer to actual user experience but can fluctuate between runs.
  • Rule-based budgets: Limits on aggregate scores from tools like Lighthouse. For example: performance score must stay above 90. These give you a single-number summary but can obscure specific regressions.

A strong starter setup combines all three: one JavaScript size budget, one total page weight budget, and one or two timing metric budgets. Honestly, LCP and CLS are the highest-impact choices in 2026 — start there.

How to Choose the Right Budget Thresholds

Setting budgets too aggressively causes every build to fail, which trains your team to ignore them. Setting them too loosely catches nothing. Neither outcome is great.

Here's a practical approach:

Step 1: Measure Your Current Baseline

Run Lighthouse locally or in CI against your production site at least five times. Note the median values for key metrics. Your budget should start at or slightly below your current baseline — the immediate goal is to prevent further degradation, not to achieve perfection overnight.

# Run Lighthouse CLI 5 times and collect results
for i in {1..5}; do
  npx lighthouse https://yoursite.com \
    --output=json \
    --output-path=./report-$i.json \
    --chrome-flags="--headless --no-sandbox"
done

Step 2: Set Warning and Error Levels

Use two tiers of thresholds:

  • Warning level: Slightly above your baseline. Signals that performance is trending in the wrong direction. Doesn't block the build but creates visibility.
  • Error level: The hard ceiling. Exceeding this blocks the merge. Set this at the point where users would actually notice degradation — research from psychophysics suggests humans perceive differences of roughly 20% or more.

Step 3: Use Core Web Vitals Thresholds as Guardrails

For timing-based budgets, align your error thresholds with Google's "Good" thresholds at a minimum:

  • LCP: Under 2.5 seconds
  • INP: Under 200 milliseconds
  • CLS: Under 0.1

If your site currently exceeds these, don't panic. Start with your baseline and create a roadmap to tighten the budget over successive sprints.

Step 4: Set Per-Page Budgets

This one's easy to overlook. Different page types have wildly different resource profiles. Your homepage, a product listing page, and a blog post all load different amounts of JavaScript, images, and third-party scripts. Define separate budgets for each critical template — a one-size-fits-all budget will either be too loose for simple pages or too strict for complex ones.

Lighthouse CI: The Engine Behind Automated Budgets

Lighthouse CI (LHCI) is Google's official suite for running Lighthouse audits in continuous integration environments. It gives you three core capabilities:

  1. Collect: Run Lighthouse against one or more URLs with configurable settings (device emulation, network throttling, number of runs).
  2. Assert: Compare results against your budget thresholds. Fail the build on violations.
  3. Upload: Store reports for historical comparison — either to temporary public storage, a private LHCI server, or as CI artifacts.

Installing Lighthouse CI

# Install globally
npm install -g @lhci/cli

# Or as a dev dependency
npm install --save-dev @lhci/cli

The lighthouserc.js Configuration File

All Lighthouse CI behavior is controlled through a lighthouserc.js (or .lighthouserc.json) file at the root of your project. Here's a production-ready example:

module.exports = {
  ci: {
    collect: {
      url: [
        'http://localhost:3000/',
        'http://localhost:3000/blog',
        'http://localhost:3000/products/widget'
      ],
      numberOfRuns: 5,
      settings: {
        preset: 'desktop',
        chromeFlags: '--no-sandbox --headless'
      }
    },
    assert: {
      preset: 'lighthouse:recommended',
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['warn', { minScore: 0.95 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500, aggregationMethod: 'median' }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1, aggregationMethod: 'median' }],
        'total-blocking-time': ['error', { maxNumericValue: 300, aggregationMethod: 'median' }],
        'resource-summary:script:size': ['error', { maxNumericValue: 200000 }],
        'resource-summary:image:size': ['warn', { maxNumericValue: 500000 }],
        'resource-summary:total:size': ['error', { maxNumericValue: 800000 }]
      }
    },
    upload: {
      target: 'temporary-public-storage'
    }
  }
};

A few things worth calling out in this config:

  • numberOfRuns: 5 — Lighthouse scores fluctuate between runs due to CPU contention and network variability. Running five times and using the median (via aggregationMethod: 'median') reduces flaky failures significantly.
  • preset: 'lighthouse:recommended' — This applies sensible default assertions for all Lighthouse audits. You then override specific ones in the assertions block.
  • Resource assertions use bytes: maxNumericValue: 200000 means 200 KB for the resource-summary assertions. This is what catches JavaScript bloat before it ships.

Using a budget.json File

If you prefer a simpler, declarative approach to resource budgets, Lighthouse supports a standalone budget.json file. This format is also used by the Lighthouse panel in Chrome DevTools, which is a nice bonus for local testing.

[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 3000 },
      { "metric": "first-contentful-paint", "budget": 1500 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 200 },
      { "resourceType": "image", "budget": 400 },
      { "resourceType": "total", "budget": 800 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 5 }
    ]
  },
  {
    "path": "/blog/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 100 },
      { "resourceType": "total", "budget": 500 }
    ]
  }
]

One gotcha that trips people up: resource sizes in budget.json are in kilobytes, while Lighthouse CI assertion maxNumericValue for resource summaries uses bytes. Double-check your units — I've seen this cause confusion more than once.
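One way to sidestep the mismatch is to declare your budgets in KB once and derive both formats from the same source. A small sketch, using the article's 1 KB = 1000 bytes convention (an assumption of this snippet, not a Lighthouse rule):

```javascript
// Declare each budget once, in KB.
const budgetsKB = { script: 200, image: 500, total: 800 };

// budget.json resourceSizes take KB directly.
const resourceSizes = Object.entries(budgetsKB).map(
  ([resourceType, budget]) => ({ resourceType, budget })
);

// Lighthouse CI resource-summary assertions take bytes.
const assertions = Object.fromEntries(
  Object.entries(budgetsKB).map(([type, kb]) => [
    `resource-summary:${type}:size`,
    ['error', { maxNumericValue: kb * 1000 }]
  ])
);

console.log(JSON.stringify({ resourceSizes, assertions }, null, 2));
```

Generating both files from one set of numbers means the two configs can never silently drift apart.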

To reference the budget file in your Lighthouse CI config:

module.exports = {
  ci: {
    collect: { /* ... */ },
    assert: {
      budgetsFile: './budget.json'
    },
    upload: { /* ... */ }
  }
};

GitHub Actions Integration: Step by Step

GitHub Actions is the most common CI platform for Lighthouse CI, so let's dig into this one. Here are two approaches — using the official CLI directly, and using the community GitHub Action.

Option A: Using @lhci/cli Directly

This approach gives you maximum control and works with any CI platform.

# .github/workflows/lighthouse.yml
name: Lighthouse CI
on: [push, pull_request]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Install dependencies and build
        run: npm ci && npm run build

      - name: Start server
        run: npm run start &

      - name: Wait for server
        run: npx wait-on http://localhost:3000 --timeout 30000

      - name: Run Lighthouse CI
        run: |
          npm install -g @lhci/cli
          lhci autorun
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}

The lhci autorun command reads your lighthouserc.js, runs the collect/assert/upload steps in sequence, and exits with a non-zero code if any assertion fails — which causes the GitHub Actions job to fail and block the merge. Simple and effective.

Option B: Using treosh/lighthouse-ci-action

The community-maintained treosh/lighthouse-ci-action provides a simpler YAML-based setup:

# .github/workflows/lighthouse.yml
name: Lighthouse CI
on: [push, pull_request]

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build site
        run: npm ci && npm run build

      - name: Start server
        run: npm run start &

      - name: Wait for server
        run: npx wait-on http://localhost:3000 --timeout 30000

      - name: Run Lighthouse
        uses: treosh/lighthouse-ci-action@v12
        with:
          urls: |
            http://localhost:3000/
            http://localhost:3000/blog
          budgetPath: ./budget.json
          uploadArtifacts: true
          configPath: ./lighthouserc.js

Enabling PR Status Checks

To display Lighthouse results as a status check on pull requests, install the Lighthouse CI GitHub App on your repository. Copy the generated token and add it as a repository secret named LHCI_GITHUB_APP_TOKEN. This gives you a visual pass/fail indicator directly in the PR interface — reviewers can see at a glance whether a change regresses performance.

Beyond GitHub Actions: GitLab CI and Other Platforms

Lighthouse CI works on any CI platform that can run Node.js. Here's an equivalent GitLab CI configuration:

# .gitlab-ci.yml
lighthouse:
  image: node:20
  stage: test
  script:
    - npm ci && npm run build
    - npm run start &
    - npx wait-on http://localhost:3000 --timeout 30000
    - npm install -g @lhci/cli
    - lhci autorun
  artifacts:
    paths:
      - .lighthouseci/

The nice thing here is that the same lighthouserc.js and budget.json files work unchanged across platforms — you only modify the CI workflow syntax.

Handling Flaky Results

Let's talk about the elephant in the room. Lighthouse performance scores can fluctuate by five or more points between consecutive runs on the same page, even in a controlled CI environment. CPU contention on shared CI runners, network variability, and non-deterministic browser behavior all contribute to this.

It's frustrating, but manageable. Here are the strategies that actually work:

  • Run multiple times: Set numberOfRuns to at least 3 (5 is better). Use aggregationMethod: 'median' for metric assertions and 'pessimistic' for static audits like accessibility.
  • Use dedicated runners: If available, run Lighthouse on self-hosted runners with consistent hardware to reduce variance.
  • Don't alert on single-point drops: A score dropping from 92 to 89 in one run is noise. Budget thresholds should have enough margin to absorb natural variability — typically 3-5 points for score-based budgets.
  • Prefer metric assertions over score assertions: Asserting largest-contentful-paint < 2500 is more stable and actionable than asserting performance score > 0.9. This one change alone can dramatically reduce false failures.

Tracking Performance Over Time with LHCI Server

Temporary public storage is fine for quick feedback, but reports expire after seven days. For long-term tracking, you'll want to deploy the LHCI Server — a Node.js application that stores Lighthouse reports in a database and provides a dashboard for trend analysis.

# Install and start LHCI Server
npm install -g @lhci/server
lhci server --storage.storageMethod=sql \
  --storage.sqlDialect=sqlite \
  --storage.sqlDatabasePath=./lhci.db

Then point the upload target in your lighthouserc.js at the server:

// lighthouserc.js
module.exports = {
  ci: {
    upload: {
      target: 'lhci',
      serverBaseUrl: 'https://your-lhci-server.example.com',
      token: 'your-build-token'
    }
  }
};

The LHCI Server dashboard shows score trends across commits, highlights regressions with build diffs, and lets you compare any two reports side by side. This is invaluable for answering the question "when did we get slower?" — you can pinpoint the exact commit that caused a regression, which (in my experience) is worth the setup effort alone.

Real-World Performance Budget Examples

Theory is great, but what do actual budgets look like for real projects? Here are practical examples for common site types.

E-Commerce Product Page

E-commerce pages tend to be image-heavy, with a fair number of third-party scripts (analytics, A/B testing, chat widgets). The budget reflects that reality while still keeping things in check:

{
  "path": "/products/*",
  "timings": [
    { "metric": "largest-contentful-paint", "budget": 2500 },
    { "metric": "first-contentful-paint", "budget": 1800 },
    { "metric": "interactive", "budget": 3500 }
  ],
  "resourceSizes": [
    { "resourceType": "script", "budget": 250 },
    { "resourceType": "image", "budget": 600 },
    { "resourceType": "font", "budget": 80 },
    { "resourceType": "third-party", "budget": 120 },
    { "resourceType": "total", "budget": 1200 }
  ],
  "resourceCounts": [
    { "resourceType": "third-party", "budget": 8 }
  ]
}

Content Blog

Blog pages should be lean. There's really no excuse for a text-heavy article to load megabytes of JavaScript:

{
  "path": "/blog/*",
  "timings": [
    { "metric": "largest-contentful-paint", "budget": 2000 },
    { "metric": "first-contentful-paint", "budget": 1200 }
  ],
  "resourceSizes": [
    { "resourceType": "script", "budget": 120 },
    { "resourceType": "image", "budget": 400 },
    { "resourceType": "total", "budget": 600 }
  ]
}

SaaS Dashboard

Dashboards are a different beast — they're JavaScript-heavy by nature, so the script budget is more generous. But you should still keep total blocking time under control:

{
  "path": "/dashboard/*",
  "timings": [
    { "metric": "interactive", "budget": 4000 },
    { "metric": "total-blocking-time", "budget": 300 }
  ],
  "resourceSizes": [
    { "resourceType": "script", "budget": 400 },
    { "resourceType": "total", "budget": 1500 }
  ]
}

When Budgets Fail: A Decision Framework

A failing budget isn't always a reason to block the deploy. Before you reach for the "skip CI" button (please don't), walk through this framework:

  1. Is the regression real? Rerun the CI job. If it passes on retry, the failure was noise — consider increasing numberOfRuns or widening the threshold slightly.
  2. Is the regression intentional? Adding a necessary feature (like a chat widget) may legitimately increase script size. If the business value justifies it, update the budget and document the reason in the PR.
  3. Can you offset the regression? If you're adding 30 KB of new JavaScript, look for 30 KB to remove elsewhere — unused dependencies, unneeded polyfills, or images that can be compressed further.
  4. Is there a lighter alternative? The classic example: a developer adds moment.js (231 KB) for date formatting. CI fails because the JavaScript budget is exceeded. The fix? Switch to date-fns (13 KB) or the native Intl.DateTimeFormat API.
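For reference, the zero-dependency alternative from point 4 costs nothing in bundle size. A minimal sketch, with the time zone pinned to UTC so the output is deterministic:

```javascript
// Format a date without any library: Intl.DateTimeFormat is built into
// every modern browser and Node.js.
const formatter = new Intl.DateTimeFormat('en-US', {
  year: 'numeric',
  month: 'long',
  day: 'numeric',
  timeZone: 'UTC'
});

console.log(formatter.format(new Date('2026-01-15T12:00:00Z')));
// Covers the common moment(date).format('MMMM D, YYYY') use case.
```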

Integrating Performance Budgets into Team Culture

Tooling alone doesn't create a performance culture. I've seen teams set up Lighthouse CI perfectly and still ship slow sites because nobody looked at the results. Pair your setup with these practices:

  • Make budgets visible: Display the LHCI Server dashboard on a team monitor or share weekly snapshots in Slack. Visibility creates accountability.
  • Include designers: Share resource budgets with designers before they finalize mockups. Knowing that fonts are limited to 80 KB changes the conversation about typeface selection entirely.
  • Budget new features: Before adding a new third-party script, require a "performance impact assessment" — how many kilobytes does it add, and what's the expected impact on LCP and INP?
  • Celebrate improvements: When someone reduces JavaScript by 50 KB or shaves 200 ms off LCP, recognize it. Positive reinforcement is just as important as automated gates.

Frequently Asked Questions

What is a good performance budget for a website?

A good starting budget uses your current metrics as the baseline and targets Google's Core Web Vitals "Good" thresholds: LCP under 2.5 seconds, INP under 200 milliseconds, CLS below 0.1. For resource budgets, aim for total JavaScript under 200 KB and total page weight under 800 KB on mobile. Adjust based on your site type — an e-commerce page will need a higher image budget than a SaaS dashboard.

How do performance budgets differ from performance goals?

A performance goal is aspirational ("we want LCP under 1.5 seconds eventually"). A performance budget is a hard constraint enforced in CI ("the build fails if LCP exceeds 2.5 seconds"). Goals guide your roadmap; budgets prevent regressions. You need both — the budget protects your current state while you work toward the goal.

How do I prevent false failures from Lighthouse score fluctuations?

Run Lighthouse at least 3-5 times per URL in CI and use the aggregationMethod: 'median' option in your assertions. Prefer metric-based assertions (like largest-contentful-paint < 2500) over score-based assertions, as individual metrics fluctuate less than aggregate scores. Build in a 3-5 point margin for score-based budgets.

Can I use Lighthouse CI with frameworks like Next.js or Nuxt?

Absolutely. Lighthouse CI works with any framework that produces a servable build. For Next.js, run next build && next start before the Lighthouse audit step. For static site generators, use @lhci/cli's built-in static server with staticDistDir: './out' in your collect config — it'll serve the directory and audit the pages automatically without needing a separate server process.

Should I set the same performance budget for mobile and desktop?

No, and this is a common mistake. Mobile and desktop have very different performance characteristics. Over 60% of web traffic is mobile, and mobile devices have slower CPUs and less reliable networks. Set separate, stricter budgets for mobile. In Lighthouse CI, you can create separate configurations using the preset: 'desktop' setting for desktop runs and the default mobile emulation for mobile runs, each with their own assertion thresholds.
