
Migrating a WordPress Site from Shared Hosting to a Self-Managed VPS

Building the server, moving the site, and proving the new environment is safe before trusting it with production.


Introduction: You Can’t Just Move

In Part 1 of this series, I walked through the pre-migration audit of a nonprofit ministry’s WordPress site: how a plugin cleanup revealed 88 plugins, a 650-megabyte database bloated by a three-year-old bug, and the kind of accumulated technical debt that makes migrations dangerous. That audit became the foundation for understanding what we were actually moving.

Now it was time to build the place we were moving it to.

The client’s site had been running on a managed hosting platform — a shared server with 34 other websites, a disk at 95% capacity, and performance throttled because neighboring sites were consuming the shared resources the client’s site needed. The plan was straightforward on paper: stand up a dedicated server, move the site, prove everything works. In practice, it was five distinct phases of work, each one surfacing problems that wouldn’t have been visible from the outside.

This is the story of building the new server, extracting the site from its old environment, deploying it to staging, and then doing the work that most migration guides skip entirely: verifying that the staging site wasn’t quietly talking to production systems (processing real donations, sending real emails, leaking data to the internet, etc).

A migration is much more than copying files; you’re doing a controlled transfer of a living system, and every assumption you don’t verify is a risk you’re carrying into production.


Part One: The Foundation

Choosing the Stack

The new server was a DigitalOcean droplet (8GB of RAM, 4 vCPUs, 160GB of SSD storage) running Debian 13 (Trixie), which at the time was the newest stable release. The choice of Debian 13 was deliberate: it would give the server the longest possible runway before the next major OS upgrade. But “newest” in server software often means “least tested,” and that tension surfaced immediately.

The server stack I was building would have five core components: OpenResty (an Nginx-based web server with built-in Lua scripting), PHP-FPM (the process manager that actually runs WordPress), MariaDB (the database engine), Redis (an in-memory cache), and CrowdSec (a community-driven intrusion detection system). Each needed to be installed, configured, and tuned to work together, and on a brand-new OS release, that meant verifying compatibility at every step.

Technical Note: OpenResty is a distribution of Nginx that bundles LuaJIT and a collection of Lua libraries. It’s functionally identical to Nginx for serving web traffic, but the Lua integration allows for things like running CrowdSec’s security decisions directly inside the web server — blocking malicious IPs before requests ever reach WordPress. This matters for performance: a blocked request handled at the web server layer costs almost nothing, while a blocked request that makes it to PHP costs memory and CPU.
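To make the mechanism concrete, here is a sketch of how a Lua hook sits in OpenResty’s request lifecycle. The bouncer module name and function below are hypothetical placeholders, not CrowdSec’s actual bouncer API; the point is only that the check runs in the access phase, before any request is handed to PHP-FPM:

```nginx
# Sketch only — "bouncer" is a hypothetical Lua module, not CrowdSec's real API.
http {
    lua_shared_dict decisions 10m;   # shared in-memory cache of blocked IPs

    server {
        location / {
            access_by_lua_block {
                local bouncer = require "bouncer"
                -- reject known-bad IPs here, before PHP-FPM spends any
                -- memory or CPU on the request
                if bouncer.is_blocked(ngx.var.remote_addr) then
                    return ngx.exit(ngx.HTTP_FORBIDDEN)
                end
            }
            # ...normal fastcgi/proxy handling continues here...
        }
    }
}
```

A request rejected in `access_by_lua_block` never reaches the upstream, which is exactly the cost asymmetry described above.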

The First Droplet

The first sign that this build wouldn’t be entirely routine came before I’d even installed the application stack. After installing CrowdSec on the fresh droplet, its Central API — the cloud service that distributes community threat intelligence to every CrowdSec instance — refused to register. The error message said the server’s IP was temporarily blocked for excessive login attempts, with a one-hour cooldown.

CrowdSec had introduced rate limiting on their Central API in late December 2025 to protect against misconfigured instances stuck in restart loops. The thresholds are generous for normal use — more than 20 logins in 50 minutes — but the rate limit is enforced per IP address, not per installation. DigitalOcean recycles IP addresses across droplets. A fresh server can inherit an IP that was previously assigned to someone else’s misconfigured CrowdSec deployment, and that IP’s reputation comes with it.

I could have waited an hour for the block to expire. But on a brand-new droplet with a fresh OS install and a single CrowdSec registration attempt, there was no legitimate reason for a rate limit. The IP itself was the problem — it had history I couldn’t see and didn’t trust. I destroyed the droplet and provisioned a new one with a clean IP. CrowdSec registered immediately.

It’s a small decision, but it reflects a principle I follow throughout server builds: if something is unexplained, don’t work around it. Start clean.

Compatibility on the Bleeding Edge

CrowdSec and OpenResty both required workarounds on Trixie. CrowdSec’s old installation script pointed to a package repository that didn’t support Debian 13, and had to be installed from their newer install.crowdsec.net endpoint. OpenResty didn’t have a Trixie repository at all — I had to use the Debian 12 (Bookworm) repository with a trust override because Trixie’s stricter package signing had deprecated the SHA1 cryptographic keys OpenResty was still using. Both workarounds were functional and clean, but they represented the kind of friction that a managed hosting platform hides from you — and that you have to understand when you’re building your own stack.
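For the OpenResty side, the workaround amounts to a one-line apt source pointing at the Bookworm suite. The file path is illustrative, and `trusted=yes` is the blunt form of the trust override — it disables signature verification for this one source, trading safety for compatibility until OpenResty publishes a Trixie repository with a modern signing key:

```
# /etc/apt/sources.list.d/openresty.list  (sketch; path and options illustrative)
# Trixie has no OpenResty suite yet, so use Bookworm's and override trust,
# since the repo's SHA1-signed key fails Trixie's stricter apt signing policy.
deb [trusted=yes] http://openresty.org/package/debian bookworm openresty
```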

PHP and MariaDB, by contrast, were both available natively in Trixie’s repositories. PHP 8.4 and MariaDB 11.8 installed cleanly with no workarounds needed. This mattered because the client’s old server was running PHP 7.4 — a version that had reached end-of-life in November 2022, meaning it had gone over three years without security patches. The migration would be jumping two major PHP versions, and I needed the cleanest possible install to isolate any compatibility issues that jump would create.

Tuning for the Workload

Installing software is the easy part. Configuring it for a specific workload is where the knowledge lives.

Think of PHP-FPM as a team of workers standing by to handle requests. When someone visits a WordPress page, one worker picks up the job, processes the PHP code, and returns the result. The pool settings control how many workers are available at any given time: how many start when the server boots, how many stay on standby during quiet periods, and the maximum number that can spin up during a traffic rush. Get these numbers wrong in either direction and you either waste memory paying idle workers to stand around, or you run out of workers during a busy period and visitors see errors.

The audit from Part 1 had given me concrete data about this site’s actual traffic patterns — roughly 12,000 to 17,000 requests per day, peaking at about 40 requests per minute during the evening hours. Each worker consumed approximately 50 megabytes of memory. For an 8GB server running a single WordPress site, I configured the pool to allow up to 30 workers at peak (consuming roughly 1.5GB at full capacity), with 8 workers starting immediately and the pool scaling dynamically based on demand. That left ample headroom for the database, cache, web server, and operating system.
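In PHP-FPM pool terms, that sizing looks roughly like the excerpt below. Only the ceiling (30) and the boot count (8) come from the numbers above; the spare-server and recycling values are illustrative defaults, not the exact production config:

```ini
; /etc/php/8.4/fpm/pool.d/www.conf (excerpt — spare/recycle values illustrative)
pm = dynamic
pm.max_children = 30      ; hard ceiling: ~30 x 50MB = ~1.5GB at full load
pm.start_servers = 8      ; workers launched when the service starts
pm.min_spare_servers = 4  ; keep a few idle workers warm during quiet hours
pm.max_spare_servers = 12
pm.max_requests = 500     ; recycle workers periodically to contain slow leaks
```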

I also configured PHP’s bytecode cache — a mechanism that compiles each PHP script once and reuses the compiled version for subsequent requests instead of re-reading and re-compiling the source file every time. For a production WordPress site that isn’t being actively developed, this eliminates thousands of redundant file operations per hour.
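The bytecode cache in question is OPcache. The values below are typical settings for a production site that isn’t under active development, not the exact ones deployed; the notable trade-off is the last line:

```ini
; /etc/php/8.4/fpm/conf.d/10-opcache.ini (excerpt; values illustrative)
opcache.enable = 1
opcache.memory_consumption = 192       ; MB of shared memory for compiled scripts
opcache.max_accelerated_files = 20000  ; WordPress core + 56 plugins is many files
opcache.validate_timestamps = 0        ; never re-stat source files on request...
; ...which means deployments must reload PHP-FPM to pick up code changes
```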

The Web Server Configuration

OpenResty’s default configuration file ships full of commented-out examples; it’s more of a teaching document than it is a production config. I replaced it entirely with a clean configuration built for this specific deployment.

The web server needed to do three things well: serve WordPress pages efficiently, enforce security rules before requests reach PHP, and handle the connection volume a production site generates. I configured it to run one worker process per CPU core (four, matching the server’s vCPUs), set appropriate connection limits, and enabled compression for common web content types to reduce bandwidth.

The site-specific configuration included the kind of security hardening that should be standard but rarely is. Direct access to the WordPress configuration file — which contains database credentials — was blocked. PHP execution inside the uploads directory was blocked, because that’s one of the most common vectors attackers use: upload a PHP file disguised as an image, then execute it by requesting its URL directly. The legacy XML-RPC endpoint (which is now almost exclusively used for brute-force attacks and DDoS amplification) was blocked. Access to backup files, SQL dumps, and other sensitive file patterns that occasionally end up in the web root was also blocked.
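In Nginx/OpenResty configuration, those hardening rules take the shape below. The patterns are illustrative sketches of the categories described, not the exact production rules (the backup-file pattern in particular would need tuning if the site legitimately serves archives):

```nginx
# Hardening sketch — patterns illustrative, adjust per site
location ~* /wp-config\.php { deny all; }                # database credentials
location ~* ^/wp-content/uploads/.*\.php$ { deny all; }  # no PHP from uploads
location = /xmlrpc.php { deny all; }                     # brute-force/DDoS vector
location ~* \.(sql|bak|tar|gz|zip)$ { deny all; }        # dumps and stray backups
```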

This first pass was HTTP-only; SSL would come later with Cloudflare origin certificates. But even getting to this point surfaced the first of many issues.

The First Cascade

When I ran nginx -t to validate the configuration, OpenResty warned that 4,096 worker connections exceeded the operating system’s default file descriptor limit of 1,024. The fix was a systemd drop-in file that raised the limit to 65,536.
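A systemd drop-in for this is a three-line file (path shown for an `openresty.service` unit; the drop-in directory name is the standard convention), followed by `systemctl daemon-reload` and a service restart:

```ini
# /etc/systemd/system/openresty.service.d/limits.conf
[Service]
LimitNOFILE=65536
```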

After applying that fix, I restarted OpenResty, and the restart command hung; it never returned. Investigating with ps aux showed that OpenResty was actually running fine, but systemd was stuck in an activating state. The root cause was a PID file mismatch: my configuration specified one PID file location, but the systemd unit was looking for a different one. Systemd was waiting forever for a PID file that would never appear at the path it expected.

The fix was a one-line change in the configuration. But the recovery was complicated by the fact that nginx -s stop also relies on the PID file which didn’t exist at the path it was looking for. I had to kill the processes manually before systemd could start fresh.

This is a characteristic pattern in server administration: a simple configuration choice (where to write the PID file) creates a cascading failure (hung restart, unable to gracefully stop) that requires understanding how multiple systems interact (nginx, systemd, PID files, process signals). On a managed hosting platform, you never see this. When you’re building your own stack, it’s Tuesday.


Part Two: The Extraction

Getting the Data Out

With the server built and waiting, I needed to extract the site from its old home — a managed hosting environment that, like most managed platforms, makes getting in easy and getting out interesting.

The database export was straightforward: a mysqldump with --single-transaction to ensure a consistent snapshot without locking tables. The site was live with donations actively being processed, so a table lock during export could have caused a donor’s payment to fail. The dump produced a 27MB compressed file representing 170MB of database content — 105 tables, 19,148 posts, 911 users, and three years of donation records.
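The export command looks roughly like this — user and database names are placeholders, and `--quick` is an addition I’d consider standard practice rather than something stated above:

```shell
# Sketch; credentials and names are placeholders.
# --single-transaction: consistent InnoDB snapshot with no table locks,
#   so live donation writes proceed untouched during the export.
# --quick: stream rows to the client instead of buffering tables in RAM.
mysqldump --single-transaction --quick \
  -u wp_user -p wp_database | gzip -c > db-export-$(date +%F).sql.gz
```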

The file export was less straightforward. The site’s total footprint was enormous — mostly because of the backup situation documented in Part 1. The wp-content directory alone contained 37GB of local backup archives, 19GB of uploaded media, 599MB of old Duplicator migration backups, and 498MB of cache files. The actual WordPress core, themes, and plugins (the parts I needed for staging) totaled roughly 600MB.

I excluded the large directories from the archive. Media files weren’t needed for staging validation; I’d configure the new server to proxy missing images from the production site, so pages would render normally while keeping the staging deployment small. The backups and cache were waste. What I needed was the code: WordPress core, themes (especially the Avada theme and its Fusion Builder companion), all 56 active plugins, and the configuration files.
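The exclusion itself is a few tar flags. This sketch builds a throwaway miniature site tree so the effect is visible (all file names are illustrative):

```shell
# Build a tiny stand-in for the site, then archive it minus backups/uploads/cache
mkdir -p demo/wp-content/updraft demo/wp-content/uploads \
         demo/wp-content/cache demo/wp-includes
echo backup > demo/wp-content/updraft/backup_2024.zip
echo media  > demo/wp-content/uploads/photo.jpg
echo core   > demo/wp-includes/version.php

# Excluding a directory excludes everything under it
tar -czf code-only.tar.gz \
  --exclude='wp-content/updraft' \
  --exclude='wp-content/uploads' \
  --exclude='wp-content/cache' \
  -C demo .

tar -tzf code-only.tar.gz   # lists core files; excluded directories are absent
```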

Technical Note: “Proxy missing images from production” means configuring the web server so that when a browser requests an image that doesn’t exist on the staging server, instead of returning a 404 error, the server quietly fetches the image from the live production site and passes it through. The visitor never knows the image isn’t stored locally. This is a standard technique for staging environments that lets you validate layout and functionality without transferring gigabytes of media files.
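In Nginx/OpenResty terms, the fallback is a `try_files` directive pointing at a named location (domains below are placeholders, not the client’s):

```nginx
# Sketch — production origin is a placeholder. Images missing locally fall
# back to the live site instead of returning 404.
location ~* \.(jpe?g|png|gif|webp|svg)$ {
    try_files $uri @production_media;
}
location @production_media {
    resolver 1.1.1.1 ipv6=off;        # OpenResty needs an explicit DNS resolver
    proxy_pass https://www.example.org;
    proxy_ssl_server_name on;         # send SNI so the origin's TLS cert matches
    proxy_set_header Host www.example.org;
}
```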

Getting the Files Out

The managed hosting platform’s user isolation model created some friction during the export — different SSH and application users with different file permissions, transfer protocol quirks specific to the platform’s shell configuration — but nothing that couldn’t be worked through. I downloaded both the database dump and the file archive to my local machine, then pushed them to the new droplet.

The Finding in the Backup Directory

While investigating the bloated backup directory, I noticed something that changed the scope of the engagement.

The 37GB updraft/ directory had standard directory permissions: world-readable. UpdraftPlus protects its backup files with an .htaccess file containing deny from all. This is an Apache directive. The managed hosting platform serves static files through Nginx.

Nginx does not read .htaccess files. It ignores them completely.

I navigated directly to a backup URL in a browser and the file downloaded immediately. No authentication, no access control, no challenge. Two complete backup sets (database dumps, plugin archives, theme archives, and media uploads split across nearly 100 zip files) were publicly downloadable by anyone who could guess or enumerate the predictable UpdraftPlus filename pattern.

The database dumps contained everything: donor names, email addresses, physical addresses, payment transaction metadata, Authorize.net API configuration, PayPal Commerce credentials, Zapier webhook URLs, Zoho integration tokens, and WordPress user credentials.

I deleted the backup files immediately and documented the finding for disclosure to the client. The root cause was a design assumption baked into one of the most popular WordPress backup plugins: UpdraftPlus assumes Apache. On any server running Nginx, LiteSpeed, or OpenResty with default UpdraftPlus settings, backup files stored locally are unprotected.

This finding reinforced a decision I’d already made: on the new server, backups would be handled at the server level with remote storage, not by a WordPress plugin writing archives into the web root.


Part Three: Deployment Day

From Archive to Running Site

With the archives transferred to the new droplet, deployment was a sequence of carefully ordered steps. Extract the WordPress files to the staging web root, import the database, configure wp-config.php, install WP-CLI, run search-replace to update all URLs, configure SSL, point DNS, test.

Each step sounds simple…but each one had at least one complication.

The file extraction revealed Cloudways artifacts that had no business on the new server: a MalCare WAF file (the managed platform’s security scanner), a test HTML file, and a separate wp-salt.php file. Cloudways had externalized the WordPress security salts into a standalone file, with wp-config.php containing a require('wp-salt.php') statement instead of the standard inline salt definitions. I generated fresh salts, pasted them inline, and removed the external file and its require statement.

Technical Note: Externalizing salts into a separate file is a legitimate practice; it keeps credentials out of wp-config.php in case that file is ever exposed through a backup leak or misconfiguration. But on a single-site VPS where you control the filesystem, the practical security difference is negligible: the attack scenarios where a separate file helps are largely the same scenarios in which your server is already deeply compromised. So I went with inline salts: it’s the WordPress default, it’s one fewer file to track, and it’s one fewer require() statement that can break during a migration, as this one just had. Simplicity has its own security value.

The database import was clean: 105 tables, all the expected row counts matching the production audit from Part 1. I replaced the entire Cloudways-specific wp-config.php with a staging-appropriate configuration: new database credentials, staging URLs forced via WP_HOME and WP_SITEURL, debug logging enabled, auto-updates disabled, and a clean Redis configuration to replace the old platform’s custom Redis setup.

44,696 Replacements

The URL search-replace is where migrations succeed or fail for WordPress sites. Every reference to the production domain — in post content, in serialized plugin data, in theme settings, in metadata — needs to be updated to the staging domain. Miss any and you get mixed-content warnings, broken redirects, or worse: staging pages that silently load assets from production.

WP-CLI’s search-replace command handles this, including the tricky part: serialized data. WordPress plugins frequently store configuration as serialized PHP arrays in the database. A naive find-and-replace would break the serialization headers (which include string length counts), corrupting the data. WP-CLI recalculates the serialization after replacement.

I ran three passes, each with a dry run first:

  1. The production HTTPS URL to the staging HTTPS URL — 44,667 replacements
  2. Any stray HTTP references to the staging HTTPS URL — 3 replacements
  3. The www. variant to the staging URL — 26 replacements
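Each pass followed the same shape (domains below are placeholders for the real production and staging hostnames):

```shell
# Always dry-run first, then run for real. --all-tables is what reaches the
# custom tables plugins create, not just the WordPress core tables.
wp search-replace 'https://www.example.org' 'https://staging.example.org' \
  --all-tables --dry-run
wp search-replace 'https://www.example.org' 'https://staging.example.org' \
  --all-tables --report-changed-only
```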

The first pass told the story of just how deeply a domain name embeds itself in a WordPress database. GiveWP donation metadata alone accounted for 12,218 replacements; Yoast SEO indexable permalinks contributed 2,939; WP Rocket’s cache configuration held 1,009; Smart Slider slides had 429; serialized Avada theme options, WPForms entries, and post GUIDs held much of the rest. The domain was everywhere.

Technical Note: The --all-tables flag is critical here. WordPress core only uses a handful of tables, but plugins like GiveWP, WPForms, and Yoast SEO create their own custom tables. Without --all-tables, search-replace only touches the WordPress core tables, and every plugin table retains production URLs. This is one of the most common migration mistakes I see, and it creates subtle, hard-to-diagnose problems: a page loads fine but a donation form submits to the wrong domain, or an SEO plugin generates a sitemap with production URLs.

The Cascade of Issues

The first search-replace attempt failed immediately with “Error establishing a Redis connection.” The old Cloudways object-cache.php drop-in (a file that WordPress loads automatically from wp-content/) was still present, trying to connect to Redis with the old platform’s credentials. I removed it and re-ran successfully.

After configuring Cloudflare origin certificates for SSL and pointing the staging DNS record, the site returned a 522 error — Cloudflare’s code for “I can’t reach your server.” The cause was the firewall. During the initial server build, I’d configured UFW to only allow connections from the administrative IP address, but with Cloudflare proxying traffic (the standard configuration for any site using Cloudflare), connections arrive from Cloudflare’s edge IP addresses, not the visitor’s IP. I added all 15 of Cloudflare’s IPv4 ranges to the firewall rules for ports 80 and 443.
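Cloudflare publishes its current edge ranges at a stable URL, so the firewall rules can be scripted rather than typed by hand — a sketch, assuming UFW and the official ips-v4 endpoint:

```shell
# Allow web traffic only from Cloudflare's published IPv4 edge ranges.
for range in $(curl -s https://www.cloudflare.com/ips-v4); do
    ufw allow from "$range" to any port 80,443 proto tcp
done
```

Worth noting: these ranges change occasionally, so a periodic re-sync (or ipset-based approach) keeps the rules from drifting stale.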

After fixing the firewall, the site returned a 500 error. The OpenResty error log showed a PHP fatal error: it was trying to load a file from the old managed hosting server’s filesystem path which no longer existed. Even though I’d already deleted the MalCare WAF file from the web root, the managed platform’s security scanner had also configured itself as a PHP auto_prepend_file in the .user.ini file, which is a directive that tells PHP to execute a specific file before every request. The path was hardcoded to the old server’s filesystem. I removed the directive, and the site came up.

A DNS resolver issue caused the uploads proxy to return 502 errors until I added an explicit resolver directive to OpenResty; unlike Apache, OpenResty requires you to specify DNS servers for resolving hostnames in proxy configurations.

Four issues, four different root causes, all surfacing in sequence because each one masked the next. This is typical of migration deployments. The site was now serving pages, Avada was rendering, images were proxying from production, and Redis was connected. But…“serving pages” and “safe to use” are very different things.


Part Four: The Isolation Audit

When Staging Can Charge Credit Cards

The staging site was a clone of production, and that’s obviously the point: you need an identical copy to test against. But “identical copy” means every credential, API key, webhook URL, and payment gateway configuration came along for the ride. The database didn’t know it was on a staging server. As far as every plugin was concerned, this was the live site.

My first priority was the donation system. The client used GiveWP with Authorize.net as the primary payment gateway. I needed to know: if someone submitted a donation form on staging right now, would it attempt to process a real credit card charge?

GiveWP stores all its settings in a single serialized option rather than individual database rows, so I couldn’t just look up give_authorize_net_sandbox as a standalone value. I dumped the full settings JSON and searched for gateway-related keys.
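The inspection itself is a one-liner with WP-CLI — the option name below reflects GiveWP’s single-option storage as I understand it, and the grep keys are illustrative of what to look for rather than an exhaustive list:

```shell
# Dump the serialized settings blob as JSON and scan for gateway state.
# Option name and key patterns are illustrative.
wp option get give_settings --format=json | grep -Ei 'test_mode|sandbox|gateway'
</shell>
```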

The answer was unambiguous: test_mode: disabled. GiveWP was configured for live transaction processing. The Authorize.net integration had a live webhook ID, a live public client key, and a direct connection to the production payment processor. If a donation form had been submitted on the staging site before this audit, it would have charged a real credit card through the production Authorize.net account.

I deactivated the Authorize.net gateway plugin immediately. PayPal Commerce wasn’t actually installed on the staging server; its settings existed in the database, but the plugin code wasn’t present, so nothing could act on them. WPForms had a PayPal Standard integration but no credentials configured.

The Full Audit

With the most critical vector closed, I systematically checked every active plugin for external production connections:

WP Mail SMTP had a live Gmail OAuth token — a client ID, client secret, access token, and refresh token — all authenticated as the organization’s real Gmail account with full mail.google.com scope. Every WordPress email the staging site generated would have been sent from the organization’s real email address to real recipients, so I deactivated it.

But…deactivating WP Mail SMTP wasn’t enough. WordPress has a fallback: if no SMTP plugin is configured, it uses PHP’s native mail() function, which routes through whatever mail transfer agent the operating system provides. Debian 13 ships with Exim4 enabled by default. Even with WP Mail SMTP deactivated, emails would have continued flowing, just through a different path. I stopped and disabled Exim4, creating a dual-layer block: no SMTP plugin and no local mail server.
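The server-side half of that block is two commands (a third, `mask`, is my usual belt-and-suspenders addition — it prevents another package from re-enabling the unit as a dependency — not something stated above):

```shell
# No local MTA means PHP's mail() has nothing to hand messages to
systemctl stop exim4
systemctl disable exim4
systemctl mask exim4   # optional: block re-enablement by other packages
```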

The Facebook Pixel plugin was sending staging page views to the production Meta advertising pixel. StatCounter was polluting production analytics. Akismet was consuming the production API key quota. The WPMUDEV Dashboard was hammering the WPMUDEV API with the production license key, generating 429 rate-limit errors. UpdraftPlus could have triggered scheduled backup jobs. The Cloudflare plugin was making unnecessary API calls since Cloudflare was managed at the DNS level.

I deactivated all of them.

The Zoho integrations had no stored configuration in the database, so I left them active. WP Rocket had no CDN or external API configured; it was doing local caching only. Yoast SEO had no active API connections. WPForms PayPal had no credentials. These were investigated and deliberately cleared.

After the audit, 48 plugins remained active, all content-rendering or site-functionality plugins with no identified production dependencies.

Technical Note: This kind of staging isolation audit is rarely discussed in migration guides, but it’s one of the most important steps. A staging site that can process payments, send emails, sync CRM data, or fire webhooks is more like a second production site that nobody is monitoring. The consequences range from confusing (duplicate analytics data) to financial (real charges on real credit cards) to legal (unauthorized email from the organization’s account). And the window of risk isn’t theoretical. Automated scanners, bots, and attackers don’t operate on a schedule; they probe every publicly accessible URL continuously. A staging site with a DNS record and an open port is being discovered and tested within hours, not days. Thinking “the odds are small, it should be fine” is how real charges end up on real credit cards from a site that was never supposed to be processing transactions.


Part Five: Proving the Build

Functional Testing

With the site deployed and isolated, I tested nine key pages via HTTP fetch, inspecting the raw HTML for correct URLs, proper rendering, and any lingering production references.

Every page returned HTTP 200. The Avada theme rendered correctly, Fusion Builder layouts loaded, navigation menus pointed to staging URLs across all tested pages, the uploads proxy served images from production transparently, external links were correctly untouched by the search-replace (they should have been, since they’re different domains, but verifying this is part of the process), etc.

The search-replace was clean: zero references to the production domain across all tested pages. All 44,696 replacements had landed correctly.

Search Engine Prevention

A staging site that gets indexed by Google is a problem — duplicate content penalties, confused search results, and potentially exposing a site that isn’t ready for public consumption. I set the WordPress blog_public option to 0 (which adds a noindex, nofollow meta tag to every page) and confirmed a static robots.txt was in place with Disallow: /.
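Both measures are scriptable — the web root path below is a placeholder:

```shell
# blog_public=0 makes WordPress emit noindex,nofollow meta tags on every page
wp option update blog_public 0
# Static robots.txt catches crawlers that fetch robots before rendering HTML
printf 'User-agent: *\nDisallow: /\n' > /var/www/staging/robots.txt
```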

The Debug Log and the Six-Second Problem

The WordPress debug log, which I’d enabled during deployment, was supposed to be my early warning system for PHP 8.4 compatibility issues, and it was showing exactly what I expected: deprecation notices from GiveWP, Easy Social Share Buttons, Visual Form Builder, WP Rocket, and a few others; all non-breaking, all consistent with the PHP 7.4 → 8.4 jump.

But the log entries were repeating every six seconds even though nobody was browsing the site.

The culprit was WP Rocket’s preload feature. It was set to manual_preload, which crawled the site’s pages on a recurring schedule to keep the cache warm. Each crawled page loaded the full WordPress stack, which triggered every PHP 8.4 deprecation notice, which wrote to the debug log. The preloader was essentially DDoS-ing the debug log with noise.

I disabled the preload via WP-CLI, and the log spam dropped from every 6 seconds to roughly every 60 seconds; the remaining entries came from action_scheduler_run_queue, which GiveWP uses for background donation processing. That needed to stay active.

Twenty-Four Ghosts

The final piece of the staging validation was a complete audit of WordPress’s scheduled task system, WP-Cron. When you clone a production database, every scheduled event comes with it. Plugins that aren’t installed on the staging server still have cron hooks registered. Plugins that were deactivated still have pending scheduled tasks, and some of those tasks do things you really don’t want happening on a staging environment.

The audit found 24 orphaned or inappropriate cron events:

Nine WooCommerce tasks were actively scheduled despite WooCommerce not being installed on the server at all. These were database artifacts from the production site, ghosts of the WooCommerce stack I’d removed during the Part 1 audit. One of them, woocommerce_tracker_send_event, was designed to phone home to WooCommerce’s servers with site telemetry.

Nine tasks from plugins that weren’t active on staging: two from Jetpack, one from Gravity Forms, three from ExactMetrics, one from Mailchimp for WP, one from the All-in-One WP Migration DigitalOcean extension (which could have attempted automated backup exports), and one from Popup Maker.

Two usage-tracking tasks from WPCode and Duplicator that send environment data to plugin vendors.

Two tasks from plugins I’d intentionally deactivated during isolation: UpdraftPlus and WPMUDEV.

One production-leakage task that was the most concerning: give_paypal_commerce_refresh_live_token. Even though the PayPal Commerce plugin wasn’t installed, GiveWP’s core had registered a cron job to refresh the live PayPal OAuth token using credentials stored in the database. Had this fired, it would have reached out to PayPal’s API from the staging server using production credentials.

I deleted all 24.
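The audit-and-delete cycle is straightforward with WP-CLI — the two hooks shown are examples from the list above, and `wp cron event delete` removes every scheduled instance of the named hook:

```shell
# Survey everything scheduled, then remove ghosts by hook name
wp cron event list --fields=hook,next_run_relative,recurrence
wp cron event delete woocommerce_tracker_send_event
wp cron event delete give_paypal_commerce_refresh_live_token
```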


Lessons for Site Owners

Managed Hosting Hides Complexity — Until It Can’t

Managed hosting platforms abstract away server configuration, security updates, and infrastructure decisions. That abstraction is valuable…up until you need to leave. The user isolation quirks and platform-specific friction I encountered during the export are byproducts of design decisions that work fine until you need to move data out. If you’re on a managed platform, make sure you’ve tested your ability to export a complete backup to an external location. Don’t wait until migration day to discover your platform’s quirks.

A Staging Site Is a Loaded Gun

Cloning a production database creates a second live environment. Payment gateways, email accounts, CRM webhooks, analytics pixels, and API keys don’t know they’re on a staging server. Before you test anything on a staging clone, audit every plugin for external connections and deactivate anything that talks to production systems. The five minutes this takes can prevent real financial transactions, email deliverability damage, and data pollution that’s difficult to undo.

Backup Plugins Aren’t Backup Systems

UpdraftPlus is one of the most popular WordPress backup plugins in the world, and its default configuration stores backup archives inside the web root with protection that only works on Apache. If your site runs on Nginx, LiteSpeed, OpenResty, or any non-Apache web server, your local UpdraftPlus backups may be publicly downloadable. Server-level backups with remote storage are fundamentally more reliable than plugin-based approaches that depend on web server configuration for access control.

Cron Jobs Survive Everything

WordPress’s scheduled task system persists in the database. Uninstalling a plugin doesn’t remove its cron hooks, deactivating a plugin doesn’t cancel its scheduled events, and migrating to a new server carries every cron registration from the old environment. After any migration, audit your cron events and remove anything that references plugins that are no longer active. Tools like WP-CLI’s wp cron event list make this straightforward.

PHP Version Jumps Are Survivable — If You Prepare

The jump from PHP 7.4 to 8.4 produced zero fatal errors across 56 active plugins. Every issue was a deprecation notice or a non-breaking warning. That wasn’t luck; it was the result of having done the plugin audit in Part 1, removing dead plugins before the migration, and having a staging environment where deprecation notices could be catalogued and assessed without affecting a live site. The lesson isn’t “PHP upgrades are safe.” The lesson is: the preparation you do before the upgrade determines whether it’s a non-event or a crisis.


Conclusion

The staging site was live, functional, and isolated. WordPress was rendering through Avada on a clean Debian 13 stack. PHP 8.4 was running without fatal errors. Redis was connected. The debug log was clean of anything that wasn’t a known, non-breaking deprecation. Payment processing, outbound email, CRM synchronization, analytics tracking, and API consumption were all verified blocked. Twenty-four orphaned cron events had been cleaned up. The search-replace was confirmed clean across nine key pages.

But “staging works” and “ready for production” are different milestones. The browser-dependent elements (wp-admin functionality, JavaScript-rendered donation forms, the Avada Fusion Builder editor, etc) still needed testing, the 10GB media library still needed to be transferred, and a final fresh database export from production would need to happen close to cutover to minimize the delta of new donations and content changes.

The server was built, the site was deployed, the isolation was verified. Now it needed to be proven ready for the traffic, the transactions, and the trust that production demands.

That work continues in Part 2b.


This case study describes real work performed by Stonegate Web Security. Client details have been anonymized and certain identifying specifics altered. Technical details, methodologies, and findings are reported accurately.

