You ran a speed test. You got a number. Now what?
Most WordPress performance benchmarks you'll find online compare hosting providers against one another, measuring server capabilities in isolation. That's useful if you're choosing a host, but it doesn't answer the question you actually have: are my numbers good or bad for my type of site?
That question matters because a 2-second load time means something very different for a five-page brochure site than it does for a membership portal with logged-in users, CRM integrations, and a page builder theme. The brochure site should be doing better. The membership portal might be performing exactly as expected.
We manage over 200 WordPress sites across every configuration imaginable, from lightweight brochure sites to complex publishing platforms that produce multiple articles per day. What we've learned is that context determines whether a number is a problem or perfectly normal.
What Are Good Core Web Vitals Scores for WordPress?
Core Web Vitals are Google's primary performance metrics, and they're the only speed-related measurements that directly factor into search rankings. What good Core Web Vitals scores look like for WordPress depends entirely on your site type — so before you panic or celebrate your numbers, get the context right. If you're going to pay attention to any benchmarks, start here.
| Metric | Good | Needs Improvement | Poor |
|---|---|---|---|
| LCP (Largest Contentful Paint) | Under 2.5s | 2.5s to 4.0s | Over 4.0s |
| INP (Interaction to Next Paint) | Under 200ms | 200ms to 500ms | Over 500ms |
| CLS (Cumulative Layout Shift) | Under 0.1 | 0.1 to 0.25 | Over 0.25 |
A page passes Core Web Vitals when all three metrics fall in the "Good" range at the 75th percentile of real user data. That's the threshold that matters for SEO, and it's what Google Search Console reports on.
But passing is the floor, not the ceiling. For well-optimized WordPress sites, we target tighter numbers:
- LCP under 1.5s for sites with proper caching and optimized images
- INP under 100ms for sites with minimal JavaScript overhead
- CLS under 0.05 for sites with explicit image dimensions and stable font loading
Those tighter targets are achievable for many site types, but not for every WordPress configuration. And that's the point most benchmark articles miss entirely.
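Expressed as code, the pass criterion and our tighter targets look like this. This is a minimal sketch, not a library API: the function names are ours, and the thresholds come directly from the table and bullets above.

```python
def passes_core_web_vitals(lcp_seconds: float, inp_ms: float, cls: float) -> bool:
    """True when all three 75th-percentile field values fall in Google's "Good" range."""
    return lcp_seconds < 2.5 and inp_ms < 200 and cls < 0.1

def meets_tighter_targets(lcp_seconds: float, inp_ms: float, cls: float) -> bool:
    """The stricter targets we aim for on well-optimized WordPress sites."""
    return lcp_seconds < 1.5 and inp_ms < 100 and cls < 0.05
```

For example, a site measuring LCP 2.1s, INP 180ms, and CLS 0.08 passes Google's threshold, so it is safe for SEO purposes, but it misses the tighter targets and still has headroom.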
For a detailed guide to improving each of these metrics specifically in WordPress, see our Core Web Vitals optimization guide.
TTFB: The Metric That Reveals Your Infrastructure
Time to First Byte measures how long the server takes to begin responding to a request. It's not a Core Web Vital, but it's the foundation on which everything else builds. A slow TTFB puts a hard floor under every other metric.
These are the TTFB ranges we consider reasonable by site type:
| Site Type | Excellent | Good | Acceptable | Problem |
|---|---|---|---|---|
| Brochure site (5-20 pages, minimal plugins) | Under 100ms | 100-200ms | 200-400ms | Over 400ms |
| Blog or content site (moderate plugins) | Under 200ms | 200-400ms | 400-600ms | Over 600ms |
| Membership or LMS (logged-in users) | Under 300ms | 300-500ms | 500-800ms | Over 800ms |
| E-commerce (WooCommerce) | Under 300ms | 300-500ms | 500-800ms | Over 800ms |
| Large-scale publishing (thousands of posts, complex queries) | Under 200ms | 200-400ms | 400-700ms | Over 700ms |
These ranges assume appropriate caching for the site type. A brochure site should serve cached pages for nearly every request. A membership portal with logged-in users can't rely on page caching at all, because every page is dynamic and personalized. That's why the acceptable TTFB range is so different.
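You can spot-check TTFB yourself with nothing but Python's standard library. The sketch below times the gap between sending a request and receiving the first response bytes, then rates the result against the table above. The function and dictionary names are ours, and a single measurement includes DNS and TLS setup, so take the median of several runs rather than trusting one number.

```python
import time
import http.client
from urllib.parse import urlparse

# Upper bounds (ms) for Excellent / Good / Acceptable, taken from the table above.
TTFB_THRESHOLDS = {
    "brochure": (100, 200, 400),
    "blog": (200, 400, 600),
    "membership": (300, 500, 800),
    "ecommerce": (300, 500, 800),
    "publishing": (200, 400, 700),
}

def measure_ttfb_ms(url: str, timeout: float = 10.0) -> float:
    """Milliseconds from sending the request until the response status line arrives."""
    parsed = urlparse(url)
    conn_cls = (http.client.HTTPSConnection if parsed.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parsed.netloc, timeout=timeout)
    try:
        start = time.perf_counter()
        conn.request("GET", parsed.path or "/")
        conn.getresponse()  # returns once the first bytes of the response arrive
        return (time.perf_counter() - start) * 1000
    finally:
        conn.close()

def rate_ttfb(ms: float, site_type: str) -> str:
    """Classify a TTFB measurement against the site-type ranges above."""
    excellent, good, acceptable = TTFB_THRESHOLDS[site_type]
    if ms < excellent:
        return "excellent"
    if ms <= good:
        return "good"
    if ms <= acceptable:
        return "acceptable"
    return "problem"
```

Keep in mind this measures one request from one location, which is not the same thing as the 75th-percentile field data Google uses; it is a quick triage tool, not a verdict.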
The hosting tier determines your TTFB floor. Independent benchmark testing shows consistent patterns across hosting categories:
| Host Category | Average TTFB | Typical Range |
|---|---|---|
| Budget shared (GoDaddy, Bluehost) | 600-800ms | 500-1,200ms |
| Mid-tier shared (A2, GreenGeeks) | 395-450ms | 300-600ms |
| Managed WordPress (Cloudways, Kinsta, WP Engine) | 300-500ms | 200-700ms |
| Premium with edge caching (Cloudflare Enterprise, Fastly) | 20-100ms | 5-150ms |
No amount of plugin optimization overcomes a hosting floor of 600-800ms. Meanwhile, sites on managed hosting with edge caching regularly achieve TTFB under 100ms for cached content.
One thing we see regularly: SEO consultants will look at a site's TTFB and blame the server. But on the very same server, we can show them a different site with a dramatically lower TTFB. The difference isn't the hardware. It's what's on the page: page-builder bloat, unoptimized database queries, and heavy markup.
All of that adds to TTFB regardless of the server's speed. You can't isolate any single factor and call it the problem.
In our experience, TTFB is the weakest metric across WordPress sites globally. Industry data confirms this: only about 65% of WordPress sites pass even the generous 800ms threshold. It's primarily a hosting and infrastructure problem, which is why we focus on server-level and edge caching rather than relying on plugins to compensate.
For a deeper look at what TTFB means and how to address it, see our guide to WordPress TTFB. If you want to understand how different caching layers affect these numbers, our guide to WordPress caching covers the full stack.
What Is an Acceptable WordPress Load Time? Benchmarks by Site Type

"Load time" is one of the most commonly cited performance metrics, and also one of the most misleading. Different tools measure different things: DOM content loaded, fully loaded time, and time to interactive. For most practical purposes, what matters is when the main content becomes visible and usable, which is closer to what LCP measures.
That said, total load time still gives you a useful sense of overall page weight and complexity. The WordPress loading time benchmark that matters most is the one calibrated to your specific site type. Here are realistic targets:
| Site Type | Excellent | Good | Acceptable | Slow |
|---|---|---|---|---|
| Brochure site | Under 1.0s | 1.0-1.5s | 1.5-2.5s | Over 2.5s |
| Blog or content site | Under 1.5s | 1.5-2.5s | 2.5-3.5s | Over 3.5s |
| E-commerce product page | Under 2.0s | 2.0-3.0s | 3.0-4.0s | Over 4.0s |
| Membership or dynamic portal | Under 2.5s | 2.5-3.5s | 3.5-5.0s | Over 5.0s |
| Complex WooCommerce checkout | Under 2.5s | 2.5-4.0s | 4.0-5.0s | Over 5.0s |
The average WordPress site loads in about 2.5 seconds on desktop. On mobile, with simulated throttling, that number jumps to 8 seconds or more. What counts as an acceptable WordPress load time depends entirely on complexity. If your site falls within the "Good" range for its type, you're ahead of the majority of WordPress sites on the web.
It's simple: the more you load, the slower it will be. A site running 30 plugins, a page builder theme, analytics, a chat widget, and marketing pixels has a different performance ceiling than a lightweight site with a custom theme and five plugins. Both can be well-optimized for what they are. The benchmarks just look different.
What Is a Good WordPress Page Speed Score?
PageSpeed Insights is the tool most people reach for first when evaluating their WordPress page speed, and it's the tool most likely to cause unnecessary panic. Mobile scores are always lower than desktop, often by 20-30 points, and that's completely normal.
The reason is throttling. PageSpeed Insights simulates a mid-tier mobile device on a slower network connection. That simulation hits JavaScript-heavy sites hardest, and most WordPress sites with page builders, analytics, and interactive elements fall squarely into that category.
Realistic Desktop Score Ranges
| Site Type | Realistic Range | Excellent |
|---|---|---|
| Brochure site | 90-100 | 95+ |
| Blog (lightweight theme) | 85-98 | 90+ |
| Blog (page builder theme) | 65-85 | 80+ |
| WooCommerce (simple store) | 75-90 | 85+ |
| WooCommerce (complex store) | 60-80 | 75+ |
| Membership or LMS | 60-85 | 75+ |
Realistic Mobile Score Ranges
| Site Type | Realistic Range | Excellent |
|---|---|---|
| Brochure site | 75-95 | 85+ |
| Blog (lightweight theme) | 60-85 | 75+ |
| Blog (page builder theme) | 35-60 | 50+ |
| WooCommerce (simple store) | 45-70 | 60+ |
| WooCommerce (complex store) | 30-55 | 45+ |
| Membership or LMS | 30-60 | 45+ |
Look at the page builder blog row. A mobile score of 35-60 is the realistic range for a well-optimized site using Elementor, Divi, or WPBakery. If you're seeing a score of 45 on mobile with a page builder theme, that may be perfectly appropriate. A score of 45 on a lightweight brochure site with a custom theme, on the other hand, indicates a real problem.
This is the context that most speed-test articles leave out, and it's where we see the most unnecessary anxiety among site owners. Clients come to us convinced that something is broken because their mobile score is 50.
When we look at the site and see a complex page builder, 25 plugins, analytics, a chat widget, and marketing pixels, a score of 50 is not a bad score. It is the cost of that particular technology stack.
We also see the opposite: sites that score well on PageSpeed but feel slow to actual users. One client, a US-based organization, had a single remote team member working from France who constantly complained about site speed. The scores came back nearly perfect, and everything was optimized.
The issue was that the site didn't have a significant French audience, so the CDN wasn't holding cached assets in that region. Every visit required fetching from the origin server in the US. On top of that, he was testing while logged in as an administrator, bypassing all caching layers.
He was judging performance from an admin perspective in a geographic outlier, not from an anonymous visitor's perspective. The scores were accurate for the audience that mattered. His experience was real but not representative.
When to Stop Chasing Points
Diminishing returns set in fast:
- 40 to 70 on mobile: High-impact improvement. Fixes real performance problems that users can feel.
- 70 to 85: Moderate impact. Noticeable improvement, worth pursuing if practical.
- 85 to 95: Low impact. Technically better, but users won't notice the difference.
- 95 to 100: Zero practical impact. May require removing useful functionality to achieve.
We've seen resource-intensive pages with moderate speed scores that rank incredibly well because the content is strong and relevant. We've also seen lightweight sites with near-perfect scores that get no traffic because the content doesn't serve a real audience. Speed scores are one factor among many. They're not the difference between a site that performs well in search and one that doesn't.
For more on how to interpret what these scores actually tell you, see our guide to PageSpeed scores versus real-world performance.
GTmetrix Scores: A Different Lens
GTmetrix provides letter grades and its own set of metrics. One important difference: GTmetrix doesn't throttle connections by default, so its results tend to look more favorable than PageSpeed Insights mobile scores.
| GTmetrix Grade | What It Means |
|---|---|
| A (90-100%) | Excellent performance; fast load, minimal issues |
| B (80-89%) | Good performance; minor optimization opportunities |
| C (70-79%) | Acceptable; some notable performance issues |
| D (60-69%) | Below average; multiple optimization opportunities |
| E (50-59%) | Poor; significant performance problems |
| F (Below 50%) | Very poor; fundamental issues need addressing |
A rough translation between the two:
- GTmetrix Grade A often corresponds to a PSI mobile score of 65-80
- GTmetrix Grade B often corresponds to a PSI mobile score of 50-70
They're measuring different things under different conditions, and neither is "wrong." If you're tracking performance over time, pick one tool and use it consistently rather than comparing across tools. GTmetrix is particularly good for historical tracking because its methodology stays more consistent between updates.
GTmetrix also surfaces useful context that PageSpeed doesn't emphasize:
- Total page size: Under 1MB is lean. 1-3MB is average for WordPress. Anything over 5MB is a problem, usually due to unoptimized images.
- Total HTTP requests: Under 30 is lean. Over 100 is heavy and worth investigating.
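Those two heuristics translate directly into a quick triage function. This is a sketch using the thresholds from the bullets above; the function name is ours, and the "heavy" label for the unstated 3-5MB range is our interpolation.

```python
def page_weight_notes(total_mb: float, http_requests: int) -> list[str]:
    """Flag the page-size and request-count heuristics described above."""
    notes = []
    if total_mb < 1:
        notes.append("page size: lean")
    elif total_mb <= 3:
        notes.append("page size: average for WordPress")
    elif total_mb > 5:
        notes.append("page size: problem (check for unoptimized images)")
    else:
        notes.append("page size: heavy")  # 3-5MB: between average and problem

    if http_requests < 30:
        notes.append("requests: lean")
    elif http_requests > 100:
        notes.append("requests: heavy, worth investigating")
    else:
        notes.append("requests: moderate")
    return notes
```

Feed it the totals GTmetrix reports and you get a plain-language read: a 6MB page making 120 requests flags both problems at once, which usually points to unoptimized media plus third-party script sprawl.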
Why Your Numbers Vary Between Tests
If you've ever tested the same page twice and gotten different results, you're not doing it wrong. Scores can fluctuate 5-10 points between runs. This is normal and expected. Server load, CDN cache state, third-party script timing, and even which test server your request hits all introduce variability.
Common cross-tool discrepancies and what they mean:
| Situation | Likely Explanation |
|---|---|
| GTmetrix is much better than PSI mobile | Normal. GTmetrix doesn't throttle by default. |
| PSI desktop is much better than PSI mobile | Normal. Expected 20-30 point gap from mobile throttling. |
| Scores fluctuate 5-10 points between runs | Normal. Server and network variability. |
| Scores dropped suddenly without changes | Check whether Lighthouse updated its scoring methodology. Google periodically adjusts this. |
Which Tool to Use When
Different tools serve different purposes, and we use each one for a specific reason:
| Use Case | Recommended Tool |
|---|---|
| Quick health check | PageSpeed Insights |
| SEO assessment (CWV field data) | PageSpeed Insights (field data section) |
| Tracking performance over time | GTmetrix (consistent methodology, historical tracking) |
| Deep diagnostic investigation | WebPageTest (most configurable, most detail) |
| Development testing | Chrome DevTools Lighthouse (local, immediate) |
| Monitoring all pages | Google Search Console (CWV report) |
When we evaluate a site, we start with Google PageSpeed Insights and GTmetrix for the numbers, then open Chrome DevTools to see what's actually happening on the page. We look at the Network tab for oversized images, excessive third-party calls, and render-blocking resources. That combination gives us the scores and the context behind them.
The Metrics That Correlate With Business Outcomes
Not every performance metric matters equally for your bottom line. Some are technical indicators that help diagnose problems. Others directly predict whether visitors stay, engage, and convert.
Load time and bounce rate are closely linked. The research paints a clear picture:
| Load Time | Approximate Bounce Rate | Change from 1s Baseline |
|---|---|---|
| 1 second | ~7% | Baseline |
| 3 seconds | ~11% | +32% probability of bounce |
| 5 seconds | ~38% | +90% probability of bounce |
| 10 seconds | ~65%+ | Majority of visitors leave |
Conversion rates drop by approximately 4.4% for each additional second of load time between 0 and 5 seconds. To put that in concrete terms: for a site generating $100,000/month in e-commerce revenue, a 1-second improvement in load time could represent roughly $4,400/month in recovered revenue. A 2-second improvement could mean $8,800/month.
Those are estimates based on aggregate research, and actual results vary by industry and audience, but the scale of impact is real.
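The arithmetic behind those figures is simple enough to write down. This sketch uses the roughly 4.4%-per-second rate from the aggregate research cited above, which only applies in the 0-5 second range; the function name and default are ours.

```python
def estimated_recovered_revenue(monthly_revenue: float, seconds_saved: float,
                                rate_per_second: float = 0.044) -> float:
    """Rough monthly revenue recovered by shaving load time, per the aggregate estimate.

    Valid only for improvements within the 0-5 second load-time range.
    """
    return monthly_revenue * rate_per_second * seconds_saved

# The $100,000/month store from the text:
one_second = estimated_recovered_revenue(100_000, 1)   # roughly $4,400/month
two_seconds = estimated_recovered_revenue(100_000, 2)  # roughly $8,800/month
```

Treat the output as an order-of-magnitude estimate for prioritizing work, not a forecast; your actual conversion sensitivity depends on industry and audience.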
Core Web Vitals function as an SEO tiebreaker. When competing pages have similar content quality and authority, the faster page may rank higher. But the indirect effects of speed, like lower bounce rates and longer engagement, likely outweigh the direct ranking signal.
The metrics worth caring about most are the ones your visitors actually experience. LCP tells you when they see your content. INP tells you whether interactions feel responsive. CLS tells you whether the page is visually stable while they're trying to use it. Everything else is diagnostic context.
How WordPress Stacks Up: The Real Numbers
Industry data from large-scale studies gives useful context for where WordPress sites stand globally. These WordPress performance benchmarks represent aggregate data across millions of sites:
- About 44-46% of WordPress sites pass all three Core Web Vitals on mobile. That means more than half fail at least one.
- LCP is where most WordPress sites succeed: roughly 86% of pageviews pass, with a median LCP of just 1.1 seconds.
- INP is a strength: about 91% of WordPress pageviews pass. WordPress handles interactivity well.
- CLS is solid: about 94% of WordPress pageviews pass. Visual stability is not typically a WordPress problem.
- TTFB is the weak point: only about 65% pass. This is an infrastructure issue, not a WordPress code issue.
The trend is positive. Global CWV pass rates have improved year over year, with WordPress mobile pass rates climbing from about 44% in 2024 to 48% in 2025. Desktop rates run roughly 6-10 percentage points higher across the board.
WordPress trails platforms like Shopify and Squarespace on overall CWV pass rates, but that comparison is misleading. Those platforms control their hosting infrastructure end-to-end. WordPress runs on everything from a $5/month shared plan to enterprise-grade managed hosting.
That wide variance in hosting quality creates a broad performance distribution. A WordPress site on proper infrastructure with good caching outperforms most platforms, and WordPress itself beats Joomla and Drupal, which share the same open-hosting model, on CWV pass rates.
Our Approach: Per-Site Goals, Not Universal Targets
We don't maintain universal performance benchmarks for our client sites. Every site we manage is different: different developers built them, they run different plugin stacks, use different themes, and have different levels of complexity. Most of the sites we manage were built by someone else before the client came to us.
A site with 5 plugins and a custom ACF theme has a fundamentally different performance ceiling than one with 40 plugins and a page builder. Setting a single benchmark number across that portfolio would be meaningless.
What we do instead is approach every optimization by asking: what can we realistically improve for this particular site? We look at what is actually on the page, identify the easy wins, fix what is obvious, and see where the numbers land.
Sometimes there is a 6MB image in the hero section that a content editor uploaded. Sometimes there are fifteen third-party scripts loading analytics, chat widgets, and marketing pixels. Sometimes the hosting environment is the bottleneck.
We can tell a client a hundred times to upload optimized images, but nothing is stopping them from dropping a 6-megabyte photo into a space designed for something a fraction of that size. That's the reality of managing sites you didn't build.
Two Lenses: Scores and Experience
We frame performance for clients through two lenses: the technical scores and the actual user experience.
The technical scores may look great or alarming. But the scores don't always tell you what the real user is experiencing.
A site can score well on PageSpeed but feel sluggish to a logged-in admin working from a geographic outlier. A site can have moderate scores but load instantly for the anonymous visitors who make up 95% of the traffic. Both of those things can be true at the same time.
The user experience lens asks: how does this site feel to your average visitor, and where is the bulk of your audience? If 80% of your audience is on desktop, a lower mobile score is worth understanding but not worth panicking over.
More complex websites will produce lower scores. More marketing pixels, more analytics tools, and more interactive elements will all slow things down. Stack it all up and the numbers move accordingly.
The Consultative Approach
When a client comes to us for performance work, we put together a report with PageSpeed results, GTmetrix insights for a different perspective, and a list of the straightforward improvements we can address: image optimization, script consolidation, caching configuration, and hosting assessment. Then we say, "Let's do this first and see where we are."
That approach works because it sets realistic expectations and then delivers noticeable improvement. When you explain that they don't need to obsess over speed scores, show that you can still improve them, and then actually make the site feel faster, you get a very different outcome than chasing an arbitrary number.
We never try to scare clients into thinking their site is broken. That's not how we operate. We show them what we're seeing, what we think we can improve, and what that means for their users and their business goals.
The most honest thing we can tell you about WordPress performance benchmarks is this: the tables above give you a useful frame of reference, but your specific site has its own realistic ceiling based on how it was built and what it does.
The goal is not to hit an arbitrary number. The goal is to identify what is genuinely wrong, fix what can be fixed, and ensure the site performs well for the people who actually use it.
What This Looks Like in Practice
The AIER case study is a good example. The American Institute for Economic Research came to us managing three websites with daily content publication. Because they publish multiple times per day, full-page edge caching was not viable: the content changes too frequently.
We implemented a server-level caching stack with Varnish, Redis, and Memcached, optimized for high-frequency publishing. Through infrastructure changes alone, with no code or design modifications, we achieved an 84% speed improvement. That result came from matching the caching strategy to how the site actually operates, not from chasing a specific score target.
That's the difference between benchmarking hosting providers and benchmarking your actual site. A hosting response-time benchmark can tell you that a server responds in 44 milliseconds. But that number says nothing about what happens when WordPress loads 30 plugins, renders a page builder template, and processes a dozen database queries before it can serve a single page.
The benchmarks that matter are the ones measured against what your site actually does.
What to Do With Your Results
If you've run a speed test and are looking at your numbers, the next step depends on where they fall:
If your site passes all three Core Web Vitals in field data: Your performance is meeting Google's standard. There may still be room for improvement, but you're not in danger of a ranking penalty from speed alone.
If your scores fall within the "Good" range for your site type in the tables above: You're performing well relative to similar WordPress sites. Further optimization will produce diminishing returns.
If your scores fall in the "Acceptable" range: There are likely specific, addressable issues worth looking into. Image optimization, caching configuration, and reducing third-party scripts are the most common improvements.
If your scores are in the "Problem" or "Slow" range: Something needs attention. It could be hosting infrastructure, an unoptimized site build, or both. Our WordPress slow site diagnostic guide walks through how to identify the actual cause — because the fix depends entirely on what's driving the problem.
The most important thing to remember is that performance isn't a single-point problem. It's not just the server, not just the caching, not just how the site was built.
A poorly built website, even with the best caching, the most powerful server, and the greatest CDN, will still underperform compared to a well-developed website in the same environment. All of those layers work together, and that's what determines your speed test results.
If your WordPress site needs a performance evaluation, our optimization services include a full diagnostic assessment and prioritized recommendations based on your specific site configuration.
Speed test results only tell part of the story. For a complete diagnostic framework, see our guide: Why Is My WordPress Site So Slow?