
CDNs Don’t Fix Web Vitals — Netflix Learned This the Hard Way


Everyone tells you to “just add a CDN” to fix Core Web Vitals.

  • Netflix tried that.
  • So did Airbnb.
  • So did Shopify.

And it still wasn’t enough.

This isn’t a CDN problem — it’s an architectural one. In this guide, we’ll break down why CDNs fail Web Vitals, what real companies discovered the hard way, and how to fix performance at the system level instead of chasing Lighthouse scores.


⚠️ Important Note

  • This content is for educational purposes based on industry case studies.
  • Performance results may vary based on your specific implementation.
  • Always test optimizations in staging before production deployment.

Why CDN Isn't the Magic Solution You Think It Is

Hey, welcome back to BeyondIT. Today we're tackling the web development industry's biggest performance myth: treating a CDN as a set-and-forget solution for web performance.

While 72% of global internet traffic is handled by CDNs, relying on a CDN alone for performance optimization is a fallacy. Let's break this down with data.

CDN Performance Study

The study "The limited impact of CDN on modern web performance" (University of Washington, 2023) A/B tested 10,000 websites. It concluded that a CDN improves metrics by only about 15%, while the remaining 85% of performance gains come from:

  • Server-Side Optimization (40%)
  • Client-Side Optimization (35%)
  • Asset Optimization (10%)

Netflix Case Study: When CDN Isn't Enough

Netflix was running a best-in-class CDN architecture, yet its engineers found that Time to Interactive (TTI) was still suffering. The issue was not the CDN or any delay in serving files; it was hydrating React components on their logged-out page.

The solution, therefore, was not to upgrade the CDN tier, because no CDN tier can fix hydration cost.

What Netflix Did:

  • Netflix removed React from the signup page and replaced it with vanilla JavaScript
  • This cut client-side JavaScript by 200KB
  • Time to Interactive improved by 50%

The Conclusion: If your application logic is bloated, a CDN cannot magically improve your performance metrics.
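As a toy sketch of that direction: when a page's only interactivity is something like form validation, plain DOM APIs can replace a framework runtime entirely. The element ids and the validation rule below are hypothetical illustrations, not Netflix's actual code.

```javascript
// A signup page with no framework: no hydration step, no runtime bundle.
// (Element ids and validation logic are hypothetical.)
function isValidEmail(value) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);
}

// Wire the form with plain DOM APIs; runs as soon as the script executes.
function wireSignupForm(doc) {
  const form = doc.getElementById('signup-form');
  form.addEventListener('submit', (event) => {
    const email = doc.getElementById('email').value;
    if (!isValidEmail(email)) {
      event.preventDefault();
      doc.getElementById('error').textContent = 'Please enter a valid email.';
    }
  });
}
```

The page is interactive the moment the HTML and this tiny script arrive; there is nothing to hydrate.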


Web Vitals Re-Explained — The Infrastructure vs. Architecture Gap

The reality is that we treat Core Web Vitals as a scorecard for ranking in Google's algorithm. That is the core problem: we rarely treat them as a measure of the friction users face while accessing our websites.

The role of CDN is to transfer files from server to client as fast as possible. The execution of those files on the client side cannot be optimized using CDN.

Let's understand the three core pillars of Core Web Vitals and why CDNs fail to optimize them.



1. LCP (Largest Contentful Paint): The Render-Blocking Trap

We assume that if the hero image sits on a CDN, LCP will be fast. In 2026, network speed and delivery are rarely the bottleneck; resource prioritization is.

Your CDN can serve the image in 50ms, but if large JavaScript blocks the browser from parsing the HTML that requests the hero image, your LCP problem remains exactly as it was.

A CDN can improve resource load time, but it cannot reduce resource load delay.

Airbnb Case Study


Airbnb found that their LCP was stuck at 4.2 seconds despite a sophisticated CDN setup. The hero image was delivered from the CDN but still was not rendering early.

ISSUE: The image was delivered quickly, but its render was buried under heavy client-side JavaScript execution.

SOLUTION: They implemented priority hints (fetchpriority="high") on the hero image. Yes, it was that simple: the browser now fetches and renders the hero image ahead of competing resources.

IMPACT: LCP dropped from 4.2 to 2.1 seconds, and bookings increased by 12%.

Lesson: Understand LCP as Four Components

  • Time to First Byte (TTFB) — CDN can optimize it
  • Resource load time — CDN can optimize it
  • Resource load delay — CDN cannot optimize it
  • Element Render Delay — CDN cannot optimize it

If your LCP is high, don't just focus on CDN—try to optimize the resource load delay and element render delay as Airbnb did.
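Airbnb's fix from the case study above comes down to a couple of attributes. A minimal sketch, where the file path and dimensions are illustrative:

```html
<!-- Tell the preload scanner the hero image is the LCP element -->
<link rel="preload" as="image" href="/images/hero.webp" fetchpriority="high">

<!-- Explicit width/height also reserve layout space (helps CLS) -->
<img src="/images/hero.webp" alt="Hero banner"
     fetchpriority="high" width="1200" height="600">
```

Without the hint, browsers typically fetch images at low priority until layout proves they are in the viewport; the hint removes that guessing step.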


2. INP (Interaction to Next Paint): The Hidden Business Killer

"My site loads fast, so it is fast." Not necessarily. A site can load fast yet be unresponsive in bursts because of a blocked main thread.

In simple terms, INP measures the delay between an interaction (a click, tap, or keystroke) and the next paint that responds to it. It is the time the browser spent frozen, executing JavaScript instead of handling input.

A CDN simply cannot solve this. It delivers JavaScript files fast, but it cannot execute them fast for you.

Case Study: Shopify's Checkout Investigation


Shopify found that their checkout button had an INP of 450ms. In the world of e-commerce, a half-second delay can lead to abandoned carts and huge revenue loss.

ISSUE: A third-party fraud detection script was blocking the main thread, leaving the checkout button unresponsive for 450ms.

SOLUTION: They moved the fraud detection logic off the main thread using Web Workers. They also used Partytown, a library that relocates resource-intensive third-party scripts into a worker.

RESULT: INP improved to 120ms, and cart abandonment fell by 7%.
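A minimal sketch of the Web Worker pattern, not Shopify's actual code: the scoring rules, worker file name, and thresholds are all hypothetical. The point is that the expensive computation runs off the main thread, so the click handler stays responsive.

```javascript
// fraud-worker.js -- the blocking computation, moved off the main thread
// (scoring rules are hypothetical)
function riskScore(order) {
  let score = 0;
  if (order.total > 1000) score += 40;
  if (order.isNewAccount) score += 30;
  return score;
}

// Register the message handler only when running inside a worker.
if (typeof self !== 'undefined' && typeof window === 'undefined') {
  self.onmessage = (event) => self.postMessage(riskScore(event.data));
}

// main.js -- the checkout page posts the order to the worker and awaits
// the score without ever blocking input handling
function checkOrderOffThread(order) {
  return new Promise((resolve) => {
    const worker = new Worker('/fraud-worker.js');
    worker.onmessage = (event) => {
      resolve(event.data);
      worker.terminate();
    };
    worker.postMessage(order);
  });
}
```

The checkout button's own handler now finishes in a few milliseconds; the fraud check resolves asynchronously.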

Lesson Learned: Yield the Main Thread

We need to break up long tasks (generally over 50ms) using setTimeout() or scheduler.yield(). Try to offload non-UI logic to Web Workers for more efficiency.
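A sketch of that yielding pattern, with a setTimeout fallback since scheduler.yield() is not yet available in every browser. The 50ms threshold mirrors the long-task definition above; the item-processing callback is hypothetical.

```javascript
// Yield control back to the browser so pending input can be handled.
// Falls back to setTimeout where scheduler.yield() is unavailable.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && scheduler.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list in chunks instead of one long task.
async function processItems(items, handleItem) {
  let lastYield = performance.now();
  for (const item of items) {
    handleItem(item);
    // Roughly every 50ms, let the browser paint and handle events.
    if (performance.now() - lastYield > 50) {
      await yieldToMain();
      lastYield = performance.now();
    }
  }
}
```

The total work is the same, but no single task monopolizes the main thread, so INP stays low even during heavy processing.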


3. CLS (Cumulative Layout Shift): The Revenue Leech

Generally, we treat layout shifts as a mere visual annoyance. In reality they are a symptom of race conditions in the rendering path, and they directly impact revenue.

CLS happens when visible elements move after the initial render. In most cases the cause is dynamic content: advertisements, late-loading embeds, and hydration.

Case Study: Wayfair's Revenue Impact Study


Wayfair conducted an elaborate study on the relationship between CLS and revenue. According to the study, a 0.1 increase in CLS led to a 1.2% loss in revenue.

ISSUE: Dynamic ad loading and promotional banner loading were causing CLS of 0.45.

SOLUTION: They implemented strict container reservation (setting explicit width and height for containers). They also used size negotiation APIs to ensure ads fit their defined container size.

RESULT: The CLS dropped to 0.02, and they recovered around $2.4 million monthly revenue.

Lesson Learned: Your CDN Can Worsen CLS

If the CDN delivers assets in unpredictable order, it can worsen CLS. You have to manage layout stability at the CSS level. Ensure all images and embeds have explicit dimensions and use font-display: swap carefully to prevent text reflows.
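A minimal CSS sketch of that reservation strategy; the class names and the 250px slot height are illustrative (taken from a common ad size), not from any specific site.

```css
/* Reserve the ad slot's box before the ad script injects anything,
   so a late-loading creative cannot push content down. */
.ad-slot {
  width: 100%;
  min-height: 250px; /* tallest creative this slot may serve */
}

/* Browsers reserve space automatically for <img width height>;
   aspect-ratio is a fallback when only the proportions are known. */
.hero-image {
  aspect-ratio: 16 / 9;
  width: 100%;
  height: auto;
}
```

Either way, the box exists in the layout before the asset arrives, so nothing shifts when it does.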


Actionable Next Steps: The Performance Transformation Roadmap

Phase 1: Immediate Actions (First 1–2 Hours)

Step 1: Run PageSpeed Insights Like a Pro


  • Run PageSpeed Insights on your important pages: homepage, product page, and checkout
  • Capture not just the scores but the full diagnostic details
    • Export data: save the Lighthouse report as JSON, or capture network timings in DevTools (Network tab → right-click a request → "Save all as HAR")
  • Run a comparison: test with and without the CDN, e.g. via a query parameter like "?bypass-cdn" that your CDN is configured to treat as a cache bypass

Step 2: CDN Cache Hit Ratio Analysis


  • Analyze CDN hit ratio vs. origin calls
    • Cloudflare: Analytics → Cache tab
    • Fastly: Real-time analytics → Cache performance
  • Cache TTL analysis
  • Personalization impact on caching
    • Custom metric: Add header "X-Cache: HIT/MISS" and log it

What if your CDN shows a 95% hit ratio but LCP is still poor? It may be caching the wrong assets, for example small JS files, while the large hero images miss the cache.

Your check: Are your hero images being cached or not?

curl -sI <your-image-url> | grep -iE "cache-control|age|x-cache"

Step 3: Install & Configure Web Vitals Extension


Track Web Vitals on every page navigation. Correlate with business metrics and set up alerts for degradation.

  • Install: Chrome Web Store → "Web Vitals"
  • Configure: Click extension → Options
  • Enable: Real-time monitoring and history
  • Test: Navigate through user journey path

Phase 2: Week 1 Actions (Strategic Foundation)


Step 1: Set Up RUM (Real User Monitoring) Like SpeedCurve

BBC uses performance monitoring to make sure their website is fast, smooth, and good for users.

They use a tool named "SpeedCurve" that collects real user performance data and alerts them if something goes wrong.

Day 1–2 — Add Performance Tracking to Your Site

Add this simple JavaScript code to the website header:

import {onCLS, onLCP, onINP, onFCP, onTTFB} from 'web-vitals';

// Note: ga('send', ...) is the legacy analytics.js API.
// With GA4, send the same fields via gtag('event', ...) instead.
function sendToAnalytics(metric) {
  ga('send', 'event', {
    eventCategory: 'Web Vitals',
    eventAction: metric.name,
    eventValue: Math.round(metric.value),
    nonInteraction: true,
  });
}

onCLS(sendToAnalytics);
onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onFCP(sendToAnalytics);
onTTFB(sendToAnalytics);

This script tracks 5 important performance signals:

| Metric | What it measures | Why it matters |
|--------|------------------|----------------|
| LCP | How fast main content loads | Slow = users leave |
| CLS | How much page layout shifts | Annoying jumps |
| INP | How fast site responds to clicks | Laggy buttons = bad UX |
| FCP | First paint time | Initial render speed |
| TTFB | Server response time | Backend performance |

| Performance | Business impact |
|-------------|-----------------|
| Slow LCP | Higher bounce rate |
| Poor INP | More cart abandonment |
| High CLS | Lower ad revenue |

  • The function "sendToAnalytics()" sends this data to Google Analytics (or SpeedCurve)
  • Every time a user loads or interacts with the page, real performance data is recorded

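If you don't use Google Analytics, the same metrics can be posted to your own collector. A sketch, where the "/api/rum" endpoint is hypothetical; navigator.sendBeacon is used because it still delivers data during page unload.

```javascript
// Build the payload for one metric sample (kept as a pure function).
function serializeMetric(metric, pathname) {
  return JSON.stringify({
    name: metric.name,               // e.g. 'LCP', 'INP', 'CLS'
    value: Math.round(metric.value), // milliseconds (or score for CLS)
    page: pathname,
  });
}

// Sink for the web-vitals callbacks: POST to your own endpoint.
// ('/api/rum' is a hypothetical collector URL.)
function sendToAnalytics(metric) {
  navigator.sendBeacon('/api/rum', serializeMetric(metric, location.pathname));
}
```

Plug this `sendToAnalytics` into the same `onLCP(...)` / `onINP(...)` registrations shown above.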
Day 3–4 — Build Dashboards

Now create dashboards for the collected data:

  • Worst LCP (95th percentile) → Show the slowest users
  • INP by interaction types → Find slow buttons
  • CLS by page templates → Identify broken layouts

This helps you understand where you are lacking.
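The "worst LCP" panel is just a percentile over your RUM samples. A sketch of the computation using the nearest-rank method; `samples` is whatever array of metric values your collector stored.

```javascript
// Nearest-rank percentile: sort the samples, pick the value at rank
// ceil(p/100 * n). Returns null for an empty sample set.
function percentile(samples, p) {
  if (samples.length === 0) return null;
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}
```

Dashboards built on p95 (rather than the average) surface the slow experiences that averages hide.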


Phase 3: Font Loading Optimization


Let's optimize font loading the way GitHub does. System fonts load first, so the page appears instantly. Custom fonts load in the background without delaying render. Layout shift is managed so text does not jump when the swap happens.

Step 1: Preload Your Main Font (Load Faster)

<link rel="preload" href="/fonts/inter.woff2" as="font" type="font/woff2" crossorigin>

This tells the browser to download this font early.

Step 2: Define the Font Safely

<style>
@font-face {
  font-family: 'Inter';
  src: url('/fonts/inter.woff2') format('woff2');
  font-display: swap; 
}
</style>

"font-display: swap" does the heavy lifting: it shows the fallback font immediately and swaps in Inter once it loads, so text is never invisible. Note that the swap itself can still shift layout if the fallback's metrics differ from Inter's, which is why the fallback stack in the next step matters.
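If the swap still causes a visible jump, CSS's size-adjust descriptor can scale the fallback font to occupy roughly the same space as the web font. The 107% figure below is illustrative; derive the real value from your font's metrics.

```css
/* A metric-matched fallback: Arial scaled so line lengths roughly
   match Inter's, minimizing shift at swap time (percentage illustrative). */
@font-face {
  font-family: 'Inter-fallback';
  src: local('Arial');
  size-adjust: 107%;
}

body {
  font-family: 'Inter', 'Inter-fallback', sans-serif;
}
```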

Step 3: Use System Font First, Then Inter

<style>
body {
  font-family: -apple-system, BlinkMacSystemFont, 
               'Segoe UI', Roboto, sans-serif;
}

/* When font is ready, switch to Inter */
.fonts-loaded body {
  font-family: 'Inter', -apple-system, sans-serif;
}
</style>

This code handles loading the system fonts initially and moves to Inter font only after it loads.

Step 4: Simple JavaScript to Apply Font

<script>
document.fonts.load('1em Inter').then(() => {
  document.documentElement.classList.add('fonts-loaded');
});
</script>

This waits for the Inter font to finish loading, then adds the class that switches the page over to it.

| Problem | Your solution |
|---------|---------------|
| Slow page load | System fonts first |
| Flash of unstyled text | Preload + swap |
| Layout jumping | Stable fallback fonts |
| Bad UX | Smooth transition |

Phase 4: Month 1 – Building a Performance-First Architecture

In this month, the goal is to build a strong performance foundation. We need not just to fix bugs but change how your team writes and ships code.


Step 1 — Set Performance Budgets (Like Walmart)

A performance budget simply defines the baseline for your website performance. If your site performs slower than this, the build should fail.

What Walmart Does: They set limits for:

  • Page speed (LCP, INP, CLS)
  • JavaScript size
  • Third-party scripts

If any developer breaks the limit, the code cannot be merged.

Step 2 — Create a File ".performance-budgets.js"

module.exports = {
  homepage: {
    lcp: 2500,        // must load under 2.5s
    inp: 200,         // must respond fast
    cls: 0.1,         // minimal layout shift
    bundle: 200000,   // max 200KB JS
    thirdParty: 300000 // max 300KB external scripts
  },
  product: {
    lcp: 2200,
    inp: 150,
    cls: 0.05,
    bundle: 150000,
    thirdParty: 200000
  }
};

If a change makes a page slower or heavier than the defined limits, it is blocked.
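The enforcement logic itself is small. A sketch of the comparison step, with the homepage budget inlined from the file above and the measured values invented for illustration:

```javascript
// Budget values mirror the homepage entry from .performance-budgets.js.
const budgets = {
  homepage: { lcp: 2500, inp: 200, cls: 0.1, bundle: 200000, thirdParty: 300000 },
};

// Compare measured metrics against the page's budget.
// Returns a list of violations; an empty array means the change can merge.
function checkBudget(page, measured) {
  const failures = [];
  for (const [metric, limit] of Object.entries(budgets[page])) {
    if (measured[metric] > limit) {
      failures.push(`${metric}: ${measured[metric]} exceeds budget ${limit}`);
    }
  }
  return failures;
}

// Example: a build that ships too much JavaScript fails the check.
// (Measured values are illustrative, not from a real Lighthouse run.)
const failures = checkBudget('homepage', {
  lcp: 2100, inp: 180, cls: 0.05, bundle: 250000, thirdParty: 100000,
});
```

In CI, a non-empty `failures` list would exit non-zero and block the merge.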

Step 3 — Add to GitHub (Automatic Check)

# .github/workflows/performance.yml
name: Performance Check
on: [pull_request]

jobs:
  performance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Lighthouse
        uses: treosh/lighthouse-ci-action@v3
        with:
          urls: |
            https://your-site.com/
            https://your-site.com/product
          budgetPath: ./.performance-budgets.js

Now every pull request is tested automatically. Note that the action's "budgetPath" input expects Lighthouse's own budget.json resource-budget format, so the limits from your budgets file need to be mirrored in that format.

Lighthouse CI Setup (Like Etsy)

Etsy checks performance the following way:

  • Before PR (locally)
  • During PR (Automated)
  • After Deployment (Continuous monitoring)

Install:

npm install -g @lhci/cli

Create "lighthouserc.json":

{
  "ci": {
    "collect": {
      "url": ["http://localhost:3000/"],
      "startServerCommand": "npm start",
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", {"minScore": 0.9}],
        "largest-contentful-paint": ["error", {"maxNumericValue": 2500}]
      }
    }
  }
}

Run locally:

lhci autorun

If you get a failing score, fix the issues before merging.

Improve JavaScript Architecture

First, you have to find the heavy JavaScript files:

npx source-map-explorer 'dist/*.js'

This will show what files are making your website slower.

Now, instead of loading everything at once, load pages only when they are needed:

import React, { Suspense } from 'react';

const ProductPage = React.lazy(() => import('./ProductPage'));
const Checkout = React.lazy(() => import('./Checkout'));

// Render lazy components inside a <Suspense fallback={...}> boundary.

Use Server or Edge (Like Netflix)

Example idea: Netflix performs personalization server-side and ships the rendered result to the client. This reduces client-side work and provides a faster experience.

export async function getServerSideProps(context) {
  // getUserData and getRecommendations are app-specific helpers,
  // not Next.js APIs.
  const userData = await getUserData(context.req);

  return {
    props: {
      recommendations: await getRecommendations(userData),
    }
  };
}

What Should You Expect

Month 1: Foundation (You are here)

  • Diagnostic setup
  • Monitoring established
  • Quick wins implemented

Month 2: Optimization

  • JavaScript architecture overhaul
  • Third-party script management
  • Build process optimization

Month 3: Business Integration

  • A/B testing performance changes
  • ROI measurement framework
  • Performance as core KPI

Expected Results (Based on Case Studies)

  • 40-60% LCP improvement (BBC: 2.1s → 1.4s)
  • 50-70% INP improvement (Shopify: 450ms → 120ms)
  • 90% CLS reduction (Wayfair: 0.45 → 0.02)
  • 5-15% conversion increase (Walmart: 5% for 500ms improvement)

Tools & Resources Quick Reference

Immediate (Free)

Week 1 (Free Tier)

  • SpeedCurve: 30-day trial, then free for basic
  • Calibre: Free for single site
  • Lighthouse CI: Open source

Month 1 (Investment)

  • Full RUM Suite: SpeedCurve Enterprise ($500+/month)
  • Advanced CI/CD: GitHub Enterprise + Actions
  • Edge Compute: Vercel Pro/Enterprise
