Every byte you send over the wire costs time — that's not news to anyone who's worked on web performance. HTTP compression has been a foundational technique for decades, but the landscape shifted in a big way when Zstandard (zstd) gained native browser support in 2024. This algorithm, originally developed at Meta, compresses roughly 42% faster than Brotli at comparable ratios and outperforms gzip in both speed and size.
If your server still relies solely on gzip, you're leaving real performance on the table.
This guide covers everything you need to adopt zstd in production: how it stacks up against gzip and Brotli, when to use each algorithm, step-by-step configuration for Nginx, Caddy, Node.js, and Cloudflare, pre-compressing static assets at build time, and how all of this ties back to Core Web Vitals like LCP and TTFB.
How HTTP Compression Works
Before we get into the zstd-specific stuff, here's a quick refresher on how HTTP compression negotiation actually works:
- The browser sends a request with an Accept-Encoding header listing the compression algorithms it supports — something like Accept-Encoding: zstd, br, gzip.
- The server picks the best algorithm it supports from that list and compresses the response body.
- The server includes a Content-Encoding header in the response indicating which algorithm was used, for example Content-Encoding: zstd.
- The browser decompresses the response body before parsing it.
This negotiation is completely transparent to the user and happens on every text-based response. Browsers that don't support zstd simply won't advertise it, so the server falls back to Brotli or gzip automatically. There's zero compatibility risk in adding zstd support to your stack.
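The negotiation above can be sketched in a few lines of server-side JavaScript. This is a simplified illustration, not production-grade header parsing — real servers also handle wildcard `*` entries and malformed input — and the SERVER_PREFERENCE order here is an assumption of this sketch, not anything mandated by the spec:

```javascript
// Sketch of the server side of Accept-Encoding negotiation: parse the header,
// honor q-values, and return the server's most-preferred encoding the client accepts.
const SERVER_PREFERENCE = ['zstd', 'br', 'gzip'];

function negotiateEncoding(acceptEncoding) {
  // Build a map of encoding name -> q-value from the raw header.
  const accepted = new Map(
    (acceptEncoding || '').split(',').map((part) => {
      const [name, ...params] = part.trim().split(';');
      const qParam = params.find((p) => p.trim().startsWith('q='));
      const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1;
      return [name.trim().toLowerCase(), q];
    })
  );
  // A missing entry and an explicit q=0 are treated the same: don't serve it.
  return SERVER_PREFERENCE.find((enc) => (accepted.get(enc) ?? 0) > 0) || 'identity';
}

console.log(negotiateEncoding('zstd, br, gzip'));     // 'zstd'
console.log(negotiateEncoding('gzip, br'));           // 'br'
console.log(negotiateEncoding('gzip;q=1, zstd;q=0')); // 'gzip'
```

A q=0 entry means the client explicitly refuses that encoding, which is why it falls through to the next preference just like an absent entry.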
Why Zstandard Changes the Game
Gzip has been the web's default compression since the late 1990s. Brotli, released by Google in 2015, improved compression ratios by 15–25% over gzip but at significantly higher CPU cost — especially at elevated compression levels. Zstandard finds a genuinely compelling sweet spot between these two.
Compression Speed
Zstd compresses data approximately 42% faster than Brotli at equivalent compression ratios, according to Cloudflare's benchmarks. It's also roughly 43% faster than gzip while producing smaller output. This speed advantage matters most for dynamic content — HTML pages, JSON API responses, server-rendered markup — where compression has to happen on every single request.
Compression Ratio
At comparable speed settings, zstd reduces file sizes by about 11% more than gzip. On JSON payloads specifically, zstd achieves nearly 50% better compression than gzip, which is honestly impressive. While Brotli at its highest levels still wins on raw compression ratio, the gap is typically only 5–10% on web assets, and Brotli's highest levels are impractical for real-time use anyway.
Decompression Speed
Decompression — what happens in the user's browser — is fast across all three algorithms. Zstd's decompression speed is competitive with gzip and slightly faster than Brotli in most benchmarks. For end users, the differences are negligible here.
Comparative Benchmark Summary
Here's a typical benchmark comparison on a 226 KB JSON API response:
Algorithm | Level | Compressed Size | Compression Time | Size Reduction
-------------|-------|-----------------|------------------|---------------
gzip | 6 | 68.2 KB | 4.8 ms | 69.8%
Brotli | 4 | 61.5 KB | 6.1 ms | 72.8%
Brotli | 11 | 54.3 KB | 312 ms | 76.0%
zstd | 3 | 59.8 KB | 2.7 ms | 73.5%
zstd | 9 | 56.1 KB | 8.2 ms | 75.2%
The key takeaway: zstd at level 3 beats both gzip-6 and Brotli-4 on size while being the fastest of the three. At level 9, it approaches Brotli-11 ratios at a fraction of the CPU cost. That's a pretty remarkable tradeoff.
Browser Support in 2026
Zstd Content-Encoding is supported in all major Chromium-based browsers since Chrome 123 (March 2024), in Firefox since version 126 (May 2024), and in Safari since version 18 (September 2024). So as of 2026, zstd covers well over 95% of global browser traffic.
You can verify current support at caniuse.com/zstd. Any browser that doesn't support zstd simply won't include it in the Accept-Encoding header, and your server serves gzip or Brotli instead. No user-facing breakage, period.
When to Use Each Algorithm
The best compression strategy in 2026 isn't about picking one algorithm — it's about using all three strategically:
- Zstd for dynamic content: HTML responses, JSON API payloads, server-side rendered pages, and anything compressed in real time. Zstd's fast compression speed means lower TTFB and less CPU overhead per request.
- Brotli for static assets (pre-compressed): JavaScript bundles, CSS files, and SVGs that are compressed once at build time and served repeatedly. Use Brotli level 11 here — compression speed doesn't matter since only the ratio counts.
- Gzip as the universal fallback: Keep gzip enabled for any browser or client that doesn't support zstd or Brotli. It's the baseline that everything understands.
This layered approach ensures every visitor gets the best possible compression their browser supports, while your server spends the least CPU on real-time work.
Setting Up Zstd on Nginx
Nginx doesn't include zstd support in its core distribution. You'll need the nginx-module-zstd third-party dynamic module.
Step 1: Install the Module
On RHEL/CentOS/AlmaLinux systems with the GetPageSpeed repository:
sudo yum install -y https://extras.getpagespeed.com/release-latest.rpm
sudo yum install -y nginx-module-zstd
On Debian/Ubuntu, you can compile the module from source using the zstd-nginx-module repository on GitHub, or use a pre-built package if your distro provides one.
Step 2: Load the Module
Add these lines at the very top of your /etc/nginx/nginx.conf, before any http {} block:
load_module modules/ngx_http_zstd_filter_module.so;
load_module modules/ngx_http_zstd_static_module.so;
Step 3: Configure Multi-Algorithm Compression
Inside the http {} block, set up all three algorithms with zstd taking priority when supported:
http {
# Zstandard — highest priority for dynamic content
zstd on;
zstd_comp_level 3;
zstd_min_length 256;
zstd_types
text/plain
text/css
text/javascript
text/xml
application/javascript
application/json
application/xml
application/rss+xml
application/atom+xml
image/svg+xml;
# Serve pre-compressed .zst files for static assets
zstd_static on;
# Brotli — for browsers that support it but not zstd
brotli on;
brotli_comp_level 4;
brotli_min_length 256;
brotli_types
text/plain
text/css
text/javascript
text/xml
application/javascript
application/json
application/xml
application/rss+xml
application/atom+xml
image/svg+xml;
brotli_static on;
# Gzip — universal fallback
gzip on;
gzip_vary on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types
text/plain
text/css
text/javascript
text/xml
application/javascript
application/json
application/xml
application/rss+xml
application/atom+xml
image/svg+xml;
gzip_static on;
}
Step 4: Test and Reload
# Validate config
sudo nginx -t
# Reload without downtime
sudo systemctl reload nginx
Verify it works by sending a request with the zstd accept-encoding header:
curl -H "Accept-Encoding: zstd" -I https://yoursite.com/
# Look for: Content-Encoding: zstd
Setting Up Zstd on Caddy
Caddy has built-in zstd support with no extra modules required — one of the things I really appreciate about Caddy's design philosophy. It automatically negotiates the best encoding based on the client's Accept-Encoding header. A minimal Caddyfile for a static site:
yoursite.com {
encode zstd br gzip
root * /var/www/yoursite
file_server
}
The order in the encode directive sets priority — zstd is preferred when the browser supports it, then Brotli, then gzip. That's it. Caddy also supports serving pre-compressed files automatically if they exist alongside the original file.
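If you want tighter control, both encode and file_server accept subdirectives. The sketch below adds a minimum size threshold and explicit pre-compressed file serving — treat it as a starting point and confirm the options against your Caddy version's documentation:

```caddyfile
yoursite.com {
    encode zstd br gzip {
        # Skip tiny responses that would grow after compression
        minimum_length 256
    }
    root * /var/www/yoursite
    file_server {
        # Serve app.js.zst / app.js.br / app.js.gz when present on disk
        precompressed zstd br gzip
    }
}
```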
Setting Up Zstd in Node.js
Recent Node.js releases include native zstd support in the zlib module (added in v23.8.0 and backported to v22.15.0 — check your runtime version before relying on it). For Express applications, you can implement zstd compression as middleware:
import express from 'express';
import { zstdCompress } from 'node:zlib';
import { promisify } from 'node:util';
const compress = promisify(zstdCompress);
function zstdMiddleware(req, res, next) {
const acceptEncoding = req.headers['accept-encoding'] || '';
if (!acceptEncoding.includes('zstd')) {
return next(); // Fall through to other compression middleware
}
const originalSend = res.send.bind(res);
res.send = async function (body) {
if (typeof body === 'string' || Buffer.isBuffer(body)) {
try {
const compressed = await compress(Buffer.from(body));
res.setHeader('Content-Encoding', 'zstd');
res.setHeader('Vary', 'Accept-Encoding');
return originalSend(compressed);
} catch {
return originalSend(body);
}
}
return originalSend(body);
};
next();
}
const app = express();
app.use(zstdMiddleware);
For production use, consider the @napi-rs/zstd package, which provides bindings to the native zstd library with better performance and more configuration options than the built-in module.
Setting Up Zstd on Cloudflare
If you use Cloudflare as your CDN, enabling zstd is straightforward and doesn't require any server-side changes:
- Log in to your Cloudflare dashboard and select your domain.
- Navigate to Rules > Compression Rules.
- Create a new rule that matches all requests (or specific content types).
- Set the compression algorithm preference to include Zstandard.
- Deploy the rule.
Cloudflare handles the negotiation at the edge. When a browser sends Accept-Encoding: zstd, Cloudflare serves the zstd-compressed version. Browsers without zstd support automatically get Brotli or gzip. This is honestly the lowest-effort way to adopt zstd because Cloudflare compresses at the edge — your origin server doesn't need any changes at all.
One thing to note: as of early 2026, Cloudflare's connection between the edge and your origin server still primarily uses gzip and Brotli. The zstd compression happens between Cloudflare's edge and the end user's browser.
Pre-Compressing Static Assets at Build Time
For static assets like JavaScript bundles, CSS, and SVGs, you'll get the best compression ratios by pre-compressing at higher levels during your build process. The server just serves the pre-compressed file — no real-time CPU cost whatsoever.
Using the zstd CLI
# Compress at level 19 (high ratio, slow — but it's a one-time cost)
find ./dist -type f \( -name "*.js" -o -name "*.css" -o -name "*.svg" -o -name "*.html" \) \
-exec zstd -19 --rm -q {} \;
This creates .zst files alongside your originals. Nginx's zstd_static on directive and Caddy's file server both look for these pre-compressed files automatically.
Integrating into a Vite/Webpack Build
For Vite-based projects, the vite-plugin-compression plugin can emit pre-compressed variants at build time. One caveat: the plugin compresses via Node's zlib, so gzip and brotliCompress work everywhere, while zstd output depends on both your plugin version and a Node runtime with native zstd — verify before shipping:
// vite.config.js
import compression from 'vite-plugin-compression';
export default {
plugins: [
// Pre-compress with Brotli (still best ratio for static)
compression({
algorithm: 'brotliCompress',
ext: '.br',
threshold: 256,
}),
// zstd variant — requires plugin + Node support for 'zstdCompress' (verify)
compression({
algorithm: 'zstdCompress',
ext: '.zst',
threshold: 256,
}),
// Gzip fallback
compression({
algorithm: 'gzip',
ext: '.gz',
threshold: 256,
}),
],
};
This outputs three compressed versions of every asset. Your server picks the right one based on the browser's Accept-Encoding header, with zero runtime compression overhead.
Impact on Core Web Vitals
So, how does all this actually affect your performance metrics? Switching to zstd for dynamic content and pre-compressing static assets directly improves several Core Web Vitals.
Time to First Byte (TTFB)
Zstd's faster compression speed means the server starts streaming the response sooner. On a typical 50 KB HTML page, the difference is measurable: zstd at level 3 finishes compressing in roughly half the time of Brotli at level 4. For server-rendered pages, this translates directly to a lower processing time component of TTFB.
Largest Contentful Paint (LCP)
Smaller transfer sizes reduce the download time for the HTML document and any render-blocking CSS or JavaScript. The faster critical resources arrive, the earlier the browser can paint the largest content element. A 10–15% reduction in transfer size for CSS and JS can shave 50–150 ms from LCP on typical mobile connections — and that's the kind of improvement that shows up in your CrUX data.
Interaction to Next Paint (INP)
INP benefits indirectly. When the server compresses responses faster and uses less CPU per request, it handles more concurrent users without degradation. Under high traffic, this means more consistent response times and fewer long-queued requests that could delay subsequent interactions.
Server CPU Savings
With zstd's lower CPU overhead, your server can allocate more processing power to application logic rather than compression. This is especially significant for dynamic sites running server-side rendering frameworks like Next.js, Nuxt, or SvelteKit, where every millisecond of server processing time counts.
Verifying Your Compression Setup
After deploying zstd, you'll want to verify everything works correctly across all three algorithms:
# Test zstd
curl -s -H "Accept-Encoding: zstd" -o /dev/null -D - \
https://yoursite.com/ 2>&1 | grep -i "content-encoding"
# Test Brotli fallback
curl -s -H "Accept-Encoding: br" -o /dev/null -D - \
https://yoursite.com/ 2>&1 | grep -i "content-encoding"
# Test gzip fallback
curl -s -H "Accept-Encoding: gzip" -o /dev/null -D - \
https://yoursite.com/ 2>&1 | grep -i "content-encoding"
You should see Content-Encoding: zstd, Content-Encoding: br, and Content-Encoding: gzip respectively. Also check that the Vary: Accept-Encoding header is present — this ensures CDN caches store separate versions for each encoding.
In Chrome DevTools, open the Network tab, right-click the column header, and enable the "Content-Encoding" column. Load your page and confirm that responses show zstd for text-based resources.
Common Pitfalls and How to Avoid Them
- Compressing already-compressed formats: Don't apply zstd to JPEG, PNG, WebP, AVIF, WOFF2, or video files. These formats have their own compression built in. Adding HTTP compression on top wastes CPU and can even increase file size.
- Setting the compression level too high for dynamic content: Levels above 6 hit diminishing returns for real-time compression. Stick to level 3 as the default — trust me on this. Reserve high levels (15–19) for pre-compressed static assets only.
- Forgetting the Vary header: Without Vary: Accept-Encoding, CDNs and proxies may cache a zstd-compressed response and serve it to a browser that doesn't support zstd. Most server modules add this automatically, but always verify.
- Missing the min-length threshold: Very small responses (under 256 bytes) may actually grow larger after compression due to the algorithm's overhead. Set a minimum length to avoid this.
- Dictionary compression caveats: Shared-dictionary zstd (the dcz content-encoding) requires the dictionary to be negotiated between client and server via the Use-As-Dictionary mechanism, which adds operational complexity. For standard zstd content-encoding without dictionaries — everything covered in this guide — this isn't a concern.
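The first and fourth pitfalls reduce to a simple gate before compressing anything. A sketch — the skip list here is illustrative rather than exhaustive, and the 256-byte threshold mirrors the min-length settings in the configs above:

```javascript
// Content types that carry their own compression — recompressing them
// wastes CPU and can make the payload larger.
const SKIP_TYPES = new Set([
  'image/jpeg', 'image/png', 'image/webp', 'image/avif',
  'font/woff2', 'video/mp4',
]);
const MIN_LENGTH = 256; // below this, compression overhead tends to dominate

function shouldCompress(contentType, byteLength) {
  // Strip parameters like "; charset=utf-8" before checking the media type.
  const base = (contentType || '').split(';')[0].trim().toLowerCase();
  return byteLength >= MIN_LENGTH && !SKIP_TYPES.has(base);
}

console.log(shouldCompress('application/json', 4096)); // true
console.log(shouldCompress('image/webp', 4096));       // false — already compressed
console.log(shouldCompress('text/html', 100));         // false — under threshold
```

The nginx zstd_types/gzip_types lists and min_length directives earlier in this guide implement exactly this kind of gate declaratively.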
Measuring the Impact
To quantify the improvement from adopting zstd, measure these metrics before and after deployment:
- Transfer sizes in WebPageTest or Chrome DevTools: Compare total bytes transferred for text-based resources.
- TTFB in your RUM data: Look for reductions in the server processing time component, especially on dynamic pages.
- Server CPU utilization: Monitor CPU usage during peak traffic. You should see lower utilization with zstd replacing gzip for dynamic compression.
- Lighthouse performance scores: Run before/after audits. Expect improvements in TTFB and resource transfer size metrics.
- CrUX data over 28 days: Check the Chrome User Experience Report for LCP and TTFB improvements at the 75th percentile.
Frequently Asked Questions
Does zstd replace Brotli and gzip entirely?
No. The recommended strategy is to use all three algorithms together. Zstd excels at real-time compression of dynamic content. Brotli still produces the smallest files for static assets when pre-compressed at high levels. Gzip remains the universal fallback for any client that doesn't support the other two. Configure your server to negotiate the best algorithm per request.
Is zstd compression safe to enable in production?
Absolutely. The HTTP content negotiation mechanism ensures that only browsers advertising zstd support in their Accept-Encoding header will receive zstd-compressed responses. All other clients automatically fall back to Brotli or gzip. There's no risk of serving an unreadable response to an unsupported browser.
What compression level should I use for zstd?
For dynamic (on-the-fly) compression, level 3 offers the best balance of speed and ratio. For pre-compressed static assets, levels 15–19 produce smaller files at the cost of longer build times — which is totally fine since compression only happens once. Avoid level 22 in nearly all cases; it's extremely slow with minimal benefit over level 19.
Does Safari support zstd content-encoding?
Yes. Safari added zstd support in version 18, released in September 2024. As of 2026, all major browsers — Chrome, Firefox, Safari, and Edge — support zstd content-encoding. You can verify current support at caniuse.com/zstd.
How much bandwidth can zstd save compared to gzip?
On typical web content, zstd at level 3 produces files roughly 10–15% smaller than gzip at level 6. On JSON-heavy API responses, the improvement can reach 40–50%. Across a site serving millions of requests, that translates to significant bandwidth savings and measurably faster page loads for users on slower connections.