Case Study: The Overcached Website
A marketing agency’s WordPress sites had four caching layers and were caching almost nothing — here’s how that happens.
A marketing agency called me in March with an odd complaint. They managed a portfolio of twenty-one WordPress sites for their clients (all on SiteGround shared hosting, all behind Sucuri’s web application firewall and CDN) and the sites had a persistent, slippery problem that nobody had been able to trace.
Someone would update a page, but the update wouldn't appear. They'd clear every cache they knew about: the browser, the caching plugin, the Sucuri dashboard. Still no update. Then someone on the team would append ?1 to the end of the URL, and the page would load perfectly, fresh and correct.
The team knew enough to read the signal correctly: if ?1 fixed the page, the underlying code was right and the problem was caching somewhere in the stack. What they didn’t know was which layer, or why, and with four potential caches sitting between WordPress and the visitor’s browser, “somewhere in the stack” wasn’t a narrow search.
They wanted two pilot sites audited. Fix these two, figure out the pattern, roll the fix out to the rest of the portfolio: a clean engagement with a clean goal.
What I found was a caching stack so layered that it had cancelled itself out.
What caching actually does
Before I tell you what I found, I need to explain what caching is, because the whole story turns on understanding it.
So, websites don’t hand you a finished page the moment you ask for one. When you type a URL and hit Enter, a small sequence of events unfolds before the page reaches your screen. On a WordPress site, the server wakes up, loads the WordPress code, reads the database, runs the theme and any active plugins, assembles the resulting HTML, and ships it back. Most of that work produces the same output for every visitor; the About page looks the same whether you visit it or I do. Doing all of it fresh on every request is wasteful.
So the web has built layers of “save a copy of what we just made” machinery, called caches, at every stage of the journey. A properly configured WordPress site might have four or five of them, one on top of another:
- The browser cache saves files to your local disk so you don’t re-download them when you navigate between pages.
- A CDN edge cache — like Sucuri, Cloudflare, or Fastly — holds copies of your pages on servers physically close to your visitors, so someone in Tokyo doesn’t have to reach all the way to a server in Texas to load your homepage.
- A server-level proxy cache sits at the door of your hosting provider’s infrastructure and holds pages between the CDN and WordPress itself. On SiteGround, this is their NGINX dynamic cache, branded as SuperCacher.
- A WordPress caching plugin — WP Rocket, LiteSpeed Cache, W3 Total Cache, or any of a dozen others — pre-builds complete HTML files and saves them to disk inside WordPress, so the PHP code doesn’t have to rebuild the page from scratch every time.
Each of these caches exists to make pages faster; each is a reasonable choice on its own. Stacked properly, they form a ladder: the visitor’s request travels down from their browser to your CDN, then to your server proxy, then to your WordPress plugin, and only falls all the way through to the WordPress PHP engine if every rung above is empty. Each rung only needs to build the page once; every other request gets a saved copy from whichever rung has one ready.
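The ladder can be sketched in a few lines of code. This is a toy model, not any real cache's implementation: each layer checks its own store before passing the request down, and the expensive origin rebuild happens only when every rung above is empty.

```python
# Toy sketch of a cache "ladder": each layer checks its own store
# before passing the request down a rung. Names are illustrative.

class CacheLayer:
    def __init__(self, name, nxt):
        self.name = name
        self.store = {}   # url -> saved HTML
        self.next = nxt   # the layer below (origin at the bottom)

    def get(self, url):
        if url in self.store:              # HIT: serve the saved copy
            return self.store[url], f"HIT at {self.name}"
        html, status = self.next.get(url)  # MISS: fall through a rung
        self.store[url] = html             # save a copy on the way up
        return html, status

class Origin:
    def get(self, url):
        # Only here does WordPress/PHP do the expensive rebuild.
        return f"<html>page for {url}</html>", "built at origin"

# browser -> CDN edge -> server proxy -> plugin cache -> origin
stack = CacheLayer("browser",
        CacheLayer("CDN edge",
        CacheLayer("server proxy",
        CacheLayer("plugin cache", Origin()))))

print(stack.get("/about")[1])  # first request: built at origin
print(stack.get("/about")[1])  # second request: HIT at browser
```

After the first request, every rung holds a copy, so the second request never leaves the browser's own cache.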
Stacked improperly, they don’t form a ladder at all…they form a traffic jam. Each layer has its own opinions about whether the page should be cached, for how long, and for whom. If those opinions disagree, nothing gets cached anywhere — or, worse, stale copies get stuck in the middle of the chain where nobody can clear them.
That’s what this agency had.
The first clue: a firewall that wasn’t caching anything
I started on the simpler of the two sites: a single-domain WordPress install. One website, one database, one theme; nothing fancy.
Before I had any admin credentials, I did what every investigator does first: I looked at the outside. I sent a handful of requests to the site from the command line and read the response headers that came back. Headers are the polite conversation that happens in the background between a visitor’s browser and the stack of caches between them and the server. They’re not visible when you visit the page normally, but they tell you exactly who cached what, at which layer.
Two headers were the tell. The first, x-sucuri-cache, reported whether Sucuri’s edge had served the page from its own cache. The second, x-proxy-cache, reported the same thing for SiteGround’s server-level NGINX cache.
The homepage came back HIT and HIT. Both caches were doing their jobs. Fine.
The About page came back MISS and MISS. Neither layer had a saved copy. Odd.
Products… MISS. Services… MISS. News… MISS. Every inner page on the site was reaching the origin server on every single request, being rebuilt from scratch, and returning uncached from both layers above. The homepage was the one exception. Every other page was doing all the work, every time.
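This kind of check doesn't need admin access. You capture the response headers (for example with curl -sI against the URL) and read the cache-status fields out of them. A minimal sketch; the header block here is illustrative, not a verbatim capture from the real site:

```python
# Sketch: pulling the cache-status headers out of a raw HTTP response,
# as captured with something like `curl -sI https://example.com/about`.
# The response below is illustrative, not from the actual site.

raw = """\
HTTP/2 200
content-type: text/html; charset=UTF-8
x-sucuri-cache: MISS
x-proxy-cache: MISS
cache-control: no-cache, must-revalidate, max-age=0, no-store, private
"""

headers = {}
for line in raw.splitlines()[1:]:  # skip the status line
    name, sep, value = line.partition(":")
    if sep:
        headers[name.strip().lower()] = value.strip()

print(headers["x-sucuri-cache"])  # MISS -> the edge rebuilt the page
print(headers["x-proxy-cache"])   # MISS -> so did the server proxy
```

Two MISSes on the same page, request after request, means the work is being redone at the origin every time.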
Why?
The answer was in a single header the origin server was sending back with every inner page. It’s called cache-control, and it’s the directive a website uses to tell caches what they’re allowed to do. Here’s what this site was sending:
cache-control: no-cache, must-revalidate, max-age=0, no-store, private
Five directives, all saying the same thing in different words: do not store this page anywhere. Not at the edge, not at the proxy, not in the browser. Every cache layer was dutifully obeying. Sucuri wasn't broken, and neither was SiteGround's proxy. They were all following orders; the orders were simply wrong.
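The decision a shared cache (a CDN or proxy) makes when it sees a header like the one above can be sketched in a few lines. This is a simplification of the rules in RFC 9111, which governs HTTP caching; real caches handle many more cases, but it captures why these directives shut everything down:

```python
# Simplified sketch of a shared cache's "may I store this?" decision,
# loosely following RFC 9111. no-store forbids storing outright;
# private forbids it for shared caches specifically. (no-cache is
# subtler in the real spec: it permits storing but forces
# revalidation before reuse.)

def shared_cache_may_store(cache_control: str) -> bool:
    directives = {d.strip().split("=")[0].lower()
                  for d in cache_control.split(",")}
    forbidden = {"no-store", "private"}
    return not (directives & forbidden)

print(shared_cache_may_store(
    "no-cache, must-revalidate, max-age=0, no-store, private"))  # False
print(shared_cache_may_store("public, max-age=600"))             # True
```

One forbidding directive is enough; this header had two, plus three more belts on the same pair of suspenders.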
A site running behind Sucuri’s premium firewall and CDN (a service the agency paid several hundred dollars a month for across the portfolio) was passing nearly every request straight through to the origin server. Sucuri was being paid to do nothing, or more precisely, it was being paid to receive instructions to do nothing and to faithfully execute them.
The question became: who’s sending the instructions?
The plugin archaeology
Once I got into the WordPress admin, the answer surfaced quickly. The site had two active caching plugins.
The first was WP Rocket — the well-known premium caching plugin the agency had licensed across its portfolio, the one that was supposed to be the authoritative caching layer. It was building HTML files correctly and saving them to disk. Those files were ready to be served.
The second was SG Speed Optimizer, the caching plugin SiteGround bundles automatically with every hosting account. Installed by default, enabled by default, never configured and never mentioned.
Both were running and trying to do the same job, but neither was aware of the other.
SG Speed Optimizer was the one sending the no-cache header. This wasn't a bug, though; it was intentional. SG Optimizer wanted SiteGround's own server-level cache (the NGINX SuperCacher layer) to be the authoritative cache. The no-cache header was its way of saying to Sucuri and the browser: not your job, mine. Which would have been fine if SiteGround's SuperCacher had actually been set up to cache these pages, but it wasn't. SG Optimizer was telling every other layer to stand down in favor of a cache that wasn't active. The directive kept Sucuri away from the page without providing a replacement.
Two caching plugins, but neither of them actually caching; both preventing any other layer from caching either.
SG Optimizer had left other artifacts too. Buried in the site’s .htaccess file was a stray directive: Header unset Vary. That one line strips a technical header CDNs rely on to decide whether two versions of a page are actually different (compressed versus uncompressed, for example). Without it, Sucuri got confused enough that it stopped caching certain static assets altogether. It was a quiet contribution to the silence.
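Why stripping Vary hurts: caches fold the request headers named by Vary into the cache key, which is how a compressed and an uncompressed copy of the same URL stay separate. Remove Vary and they collide under one key. A toy sketch of the mechanism:

```python
# Toy sketch of why the Vary header matters: the cache key includes
# the request headers that Vary names. Strip Vary and two different
# responses (e.g. gzip vs plain) collide under the same key.

def cache_key(url, request_headers, vary):
    varied = tuple(sorted((h, request_headers.get(h, ""))
                          for h in vary))
    return (url, varied)

gzip_client  = {"accept-encoding": "gzip"}
plain_client = {}

# With "Vary: Accept-Encoding", the two clients get separate entries:
assert cache_key("/style.css", gzip_client,  ["accept-encoding"]) != \
       cache_key("/style.css", plain_client, ["accept-encoding"])

# After `Header unset Vary`, the cache sees no Vary at all, and both
# clients map to one entry -- whichever copy landed first wins:
assert cache_key("/style.css", gzip_client,  []) == \
       cache_key("/style.css", plain_client, [])
print("vary demo ok")
```

Faced with responses it can't safely tell apart, a cautious CDN often just declines to cache them, which is what Sucuri appeared to be doing here.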
The .htaccess file told a longer story than I’d expected. It contained directives from plugins that had been installed, configured, and deactivated years earlier; security rules from three different eras; a handler declaration for PHP 7.4 (a version that reached end-of-life in November 2022 and is still somehow running in production). Every plugin that had ever been added to the site had left sediment behind, and nobody had ever cleaned any of it up. The site’s present-day behavior was the compound result of every decision anyone had made about it, going back years, with nothing in between to referee.
The second site: three plugins deep
The second pilot site was a WordPress Multisite: a single WordPress installation hosting three separate websites under one roof, in this case a landing page that routed visitors by country, a U.S.-English site, and a Canadian-English site. Multisite is a powerful feature and an unforgiving one: settings on one subsite can affect every other subsite, and a plugin active on one can interfere with another.
This site had three caching plugins active simultaneously.
The first was WP Rocket, running on the U.S. subsite — the same premium plugin from the first site.
The second was LiteSpeed Cache, running on the landing page and the Canadian subsite. LiteSpeed Cache is a powerful caching plugin, but it’s built specifically for the LiteSpeed web server. On any other server type (like SiteGround, which runs Apache behind an NGINX proxy, not LiteSpeed), its core caching engine simply cannot operate. It’s a bit like installing a diesel engine in a gasoline car: it doesn’t work, but it doesn’t always fail cleanly either. The plugin was installed anyway, active on two of the three subsites, doing an impression of caching without actually caching anything.
The third was SG Speed Optimizer, active across the whole multisite network.
LiteSpeed Cache couldn’t do its primary job on an Apache server, but its PHP code still loaded on every request, and it was still injecting its own version of “do not cache” headers — subtly different from SG Optimizer’s version, but equally fatal. LiteSpeed Cache was sending cache-control: private, s-maxage=0, which told Sucuri to treat the page as private and unique to each visitor — which it wasn’t, but Sucuri had no way to second-guess that.
Meanwhile, on the U.S. subsite, WP Rocket noticed LiteSpeed Cache active elsewhere on the network and quietly refused to write its own .htaccess rewrite rules. This was WP Rocket being well-behaved: it detected another caching plugin in the environment, assumed that plugin would be the authoritative cache, and stepped back to avoid a conflict. The problem, of course, was that LiteSpeed Cache couldn’t actually cache anything on this server.
So WP Rocket was running in a crippled mode: building cached HTML files on disk, but unable to serve them the fast way. Serving them the fast way requires writing a small block of rewrite rules into .htaccess, and WP Rocket refuses to write that block as long as LiteSpeed Cache is present. Every request on the U.S. subsite went through the full WordPress PHP process instead of being served directly from the pre-built file on disk.
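For context, the “fast way” works roughly like this: a block of mod_rewrite rules checks whether a pre-built HTML file exists on disk for the requested URL and, if one does, serves it directly, skipping PHP entirely. The sketch below is illustrative only — it is not WP Rocket’s actual rule block, and the cache directory path is hypothetical:

```apache
# Illustrative sketch only -- not WP Rocket's real rules, and the
# cache path is hypothetical. Idea: if a pre-built HTML file exists
# for this URL, serve it straight from disk and never start PHP.
RewriteEngine On
RewriteCond %{REQUEST_METHOD} =GET
RewriteCond %{QUERY_STRING} =""
RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/pages%{REQUEST_URI}index.html -f
RewriteRule .* /wp-content/cache/pages%{REQUEST_URI}index.html [L]
```

Note the query-string condition: any URL with a query string (including the team’s ?1 trick) falls through the rules and reaches PHP. Without a block like this, the pre-built files just sit on disk while PHP rebuilds every page anyway.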
Then SG Optimizer, on top of everything else, was minifying the site’s stylesheets on a rolling schedule. Every few hours it would regenerate them with new filenames, but WP Rocket’s cached HTML still referenced the old filenames. When visitors loaded a page, their browsers would ask for stylesheet files that no longer existed. The stylesheets would fail to load, the page would render as a mass of unstyled text, and the site would look broken.
This was the ?1 trick in action. Appending a query string to the URL forced the request past every cache layer and all the way back to the live PHP process, which rendered a fresh page referencing the current stylesheet filenames; the page then looked right, right up until the next time SG Optimizer quietly regenerated the stylesheets in the background, at which point the cycle started over.
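The trick works because most cache layers key on the full URL, query string included: a URL the caches have never seen can’t have a stale copy, so the request falls all the way through to PHP. A minimal sketch:

```python
# Why appending ?1 busts every cache: layers key on the full URL,
# and a never-before-seen URL can't have a stale entry.

cache = {"/pricing": "<html>stale copy with old stylesheet</html>"}

def fetch(url):
    if url in cache:
        return cache[url], "HIT (possibly stale)"
    fresh = "<html>freshly rendered by PHP</html>"
    cache[url] = fresh
    return fresh, "MISS -> rendered fresh"

print(fetch("/pricing")[1])    # HIT (possibly stale)
print(fetch("/pricing?1")[1])  # MISS -> rendered fresh
```

It’s a diagnostic, not a fix: the stale copy is still sitting in the cache under the original URL, waiting for the next visitor who doesn’t know the trick.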
The agency’s entire portfolio of twenty-one sites was running some version of this stack. Most were less severe than the multisite, but the pattern was the same: caching plugins accumulated over the years, none fully removed, each with its own ideas about how the site should be cached, and Sucuri at the edge patiently obeying whichever one shouted the loudest.
The fix
The fix, once the pattern was clear, was almost anticlimactic.
For each site, I picked one caching plugin — WP Rocket, since the agency was already paying for it — and deactivated the others. I cleaned the orphaned directives out of .htaccess, and changed Sucuri’s caching mode from “Site caching (honor origin headers)” to “Enabled (recommended),” which tells Sucuri to apply its own cache TTL to HTML regardless of what the origin says. In this case, that was the right call: the origin had lost the right to be trusted with caching decisions. Then I cleared every cache in the stack and let them repopulate from scratch.
Within hours, both pilot sites were returning x-sucuri-cache: HIT on every page. Visitors were being served from Sucuri’s global edge network instead of the SiteGround origin, and pages loaded faster. The stale-content problem disappeared because there was no longer a contested middle layer holding onto copies that nobody could clear.
The playbook I wrote for the remaining nineteen sites in the portfolio was, fundamentally, one instruction: pick one caching plugin and commit to it, then turn the CDN on properly. Don’t add a fourth cache to fix the problem; subtract the three that aren’t working.
The lesson under the lesson
The technical lesson here is straightforward: caching layers must coordinate. More caches is not more caching; a stack of plugins that each assume they own the cache will cancel each other out.
But there’s a deeper lesson about how small businesses end up in this situation in the first place, and that lesson is worth the longer explanation.
Every component in this stack was chosen because it was the easy option. SiteGround shared hosting, because setting up your own server is intimidating and you’re running a business, not a datacenter. SG Speed Optimizer, because SiteGround installed it for you and recommended it in the welcome email. WP Rocket, because a developer you hired years ago said it would make the site faster. LiteSpeed Cache, because someone probably read a blog post that said it was the best caching plugin on the market (true on a LiteSpeed server, but the blog post likely left that part out). Sucuri, because security matters and the sales page said it would speed things up too. None of these decisions were wrong in isolation. Each one, looked at on its own, was the sensible, responsible choice.
What the small business owner doesn’t see is that each of these “easy” choices introduces a stakeholder. SiteGround wants SG Optimizer running on your site, because that’s one of the ways they justify their monthly fee. WP Rocket wants to be in charge, because it’s designed around the assumption that it is in charge. LiteSpeed Cache wants to intercept every request, because that’s what a caching plugin does — but it can only do that job on the right server. Sucuri wants to respect whatever the origin tells it, because that’s what a good CDN does. All four positions are reasonable. Stacked on top of each other, though, they produce gridlock.
The promise of the “easy” path — install a plugin, buy a service, turn it on, get back to running your business — is that someone else will handle the complexity. What actually happens is that the complexity doesn’t disappear; it just moves somewhere the business owner can’t see it: into the interaction between a plugin and a server it wasn’t designed for, into a stray line in an Apache configuration file, into a header that gets sent on inner pages but not the homepage because a plugin’s author made a decision years ago that you’ll never read. And the day the site breaks (the day a content update doesn’t appear, the day the CSS stops loading and the marketing team starts passing around the ?1 trick) is the day the business owner finds out that those layers existed, and that they’ve been quietly fighting each other for a long time.
The agency I worked with is doing the right thing now: auditing what’s actually running on their sites, figuring out what’s providing value and what isn’t, and removing what isn’t. Two of their pilot sites are now serving from Sucuri the way a CDN is supposed to serve, and the playbook for the other nineteen is in hand.
The pattern I keep finding, across engagements like this one, is that the most impactful change is usually subtraction. Remove the plugin that doesn’t work on your server, cancel the service that hasn’t delivered value in three years, trust one layer to do its job instead of three layers doing it half-heartedly. A smaller, simpler stack isn’t just usually faster; it’s legible. You can see what’s happening, you can explain what’s happening to someone else, and when something breaks, you know where to look.
Websites don’t get faster by adding more caches. They get faster by letting the right one do its job.
Related Reading
- Case Study: Proving a WordPress Staging Site Is Production-Ready. Another page cache that wasn't caching — plus a single plugin silently adding 30 seconds to every admin request, and a 4-gigabyte memory ghost that crashed the server.
- Case Study: How to Audit WordPress Plugins Before a Server Migration. What happens when you actually inventory a WordPress site: 88 plugins, 395 MB of database bloat, and orphaned integrations still calling APIs three years after the vendor was replaced.
- Case Study: The Slow Accumulation of Overlooked Misconfigurations. The same sediment pattern, pointed at security instead of performance — a full-site backup publicly downloadable for six months, a contact form routing submissions to a stranger, and phantom logins nobody had noticed.