It Started with a Spoofing Complaint
A real estate agent reached out about spoofed emails. Underneath was a two-year-old compromise with four hidden backdoors, active attacker sessions, wire fraud targeting his closings, and a Safe Browsing flag that was poisoning his email from a direction nobody was looking.
The message came through Upwork. A real estate agent who worked with a national franchise said he was receiving spoofed emails to himself that appeared to come from his domain and he wanted it fixed.
For a real estate agent, this isn’t a nuisance; it’s a threat to his livelihood. Real estate closings involve wire transfers, often hundreds of thousands of dollars moving on the instructions in a single email, and Business Email Compromise (BEC) is the single most expensive category of cybercrime the FBI tracks; real estate professionals are among its most targeted victims. A spoofed email arriving at the right moment, referencing a real transaction with real names and real dollar amounts, can redirect a wire transfer to an attacker’s account. The money is usually gone within hours.
So when this agent said he was getting spoofed emails from his domain (which likely meant others were too), I took it seriously. I asked him to send me his DNS records and access to his domain registrar so I could see what was configured.
What I expected to find was an email authentication gap — missing or misconfigured SPF, DKIM, and DMARC records. That’s almost always the answer when someone reports domain spoofing…fix the records, enforce the policy, and the spoofing stops being deliverable.
What I actually found was something much larger.
The DNS Audit
His DNS records confirmed what I expected. SPF was present but weak: too many DNS lookups in its include chain, and a ~all softfail qualifier instead of the -all hardfail that actually tells receiving servers to reject unauthorized senders. DKIM wasn’t published at all, and DMARC was set to p=none, which is monitor-only mode; it watches spoofing happen but does nothing to stop it.
These three gaps together meant that anyone, anywhere, could send an email claiming to be from his domain, and the receiving server would have no authentication signal telling it to reject the message. For a domain attached to real estate transactions, this was an open door.
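To make the gap concrete, here is what the weak and hardened versions of those records look like as zone entries. The domain and report address are placeholders; the include host is the standard Microsoft 365 one, but the exact values should always come from the tenant’s own configuration:

```
; Before: softfail SPF, no DKIM, monitor-only DMARC (illustrative values)
example.com.         IN TXT  "v=spf1 include:spf.protection.outlook.com ~all"
_dmarc.example.com.  IN TXT  "v=DMARC1; p=none"

; After: hardfail SPF, DMARC at enforcement with aggregate reporting
example.com.         IN TXT  "v=spf1 include:spf.protection.outlook.com -all"
_dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

DKIM is published separately, as CNAME records pointing at the mail provider’s signing keys.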
I drafted a proposal to assess and harden his email authentication, a focused engagement: audit the DNS, configure SPF properly, enable DKIM through his Microsoft 365 tenant, set DMARC to an enforcement policy, and verify the whole chain. A few hours of work. Straightforward.
He agreed, gave me access to his GoDaddy account, and I started looking around.
That’s when the engagement changed.
The Site Nobody Was Watching
His domain hosted a WordPress site — a team page for his real estate practice, built a few years earlier by a marketing agency. He told me the site was outdated and not something he actively used. He just wanted the domain to redirect to his free agent page on his brokerage’s platform; the WordPress site was an afterthought.
The first hint that something was wrong didn’t come from a scan or an audit; it came from Upwork. I sent the agent a message that included his email domain, and Upwork refused to deliver it, flagging it as containing a malicious link. I hadn’t sent a malicious link; I’d simply typed his domain name.
That was strange enough to make me visit the site directly. Chrome immediately threw a full-page Safe Browsing warning — the red interstitial telling visitors the site ahead contained harmful programs. I ran Sucuri SiteCheck to confirm and it came back positive for malware.
The site was serving SocGholish (also known as FakeUpdates) sitewide. This is a well-documented malware family that injects JavaScript into every page of a compromised site. When a visitor loads the page, the injected script contacts a command-and-control server and serves a fake browser update prompt. The visitor sees what looks like a legitimate Chrome or Firefox update notification. If they click it, they download a malicious executable. The injected scripts were reaching out to three known C2 domains on every page load.
I told the agent what I’d found. The “fix my email spoofing” engagement had just become something else entirely. He approved the expanded scope, and I got SSH access to the server.
Inside the Server
The site was running on GoDaddy’s Managed WordPress hosting, a containerized environment with no root access, no cPanel, and a restricted SSH shell that mostly limits you to wp-cli commands. The WordPress installation was running the Avada theme, significantly outdated, and the database had over four gigabytes of overhead. The whole thing had the feel of a site that had been built, launched, and never touched again.
I started methodically working through the filesystem.
The first thing I found was a file in the webroot named with a Unix timestamp from December 2023: a complete copy of an old wp-config.php sitting in a publicly accessible location. It contained database credentials in plaintext…username, password, and the IP address of the database server with its port number. Anyone with a browser could have retrieved it; I deleted it immediately.
The second thing I found was worse. A hidden directory (the name started with a dot, making it invisible to casual browsing) contained a 52-megabyte SQL dump of the entire WordPress database. It was dated November 2023 and it was publicly accessible: a full database export, sitting in a web-accessible directory for over two years, containing every user record, every email address, every piece of content the site had ever stored. This had implications under his state’s data breach notification statute.
Then I found the backdoors.
Four Dropper Plugins
Inside the wp-content/plugins/ directory were four plugins with names that looked like they’d been generated by concatenating random English words, the kind of naming pattern that automated malware deployment tools produce. Each one was a different flavor of backdoor:
The first contained a shell executor — a PHP function that takes arbitrary commands from an attacker and runs them directly on the server. It also contained a decoder that could unpack obfuscated payloads on the fly. But its most important feature wasn’t offensive; it was defensive: it hooked into WordPress’s all_plugins filter (the internal mechanism WordPress uses to enumerate installed plugins) and removed all four dropper plugins from the list. This meant they were invisible; they didn’t show up in the WordPress admin panel, and they didn’t show up when you ran wp plugin list from the command line. As far as WordPress was concerned, they didn’t exist.
The second plugin contained deserialization functions that could process untrusted input and a file writer that could drop new files anywhere on the server.
The third was the most dangerous: it had two independent entry points for remote code execution. The first was an unauthenticated WordPress AJAX hook — meaning an attacker could trigger it over the internet without logging in, without a session, without any credentials at all, just by hitting a specific URL. The second was a standalone PHP file that bootstrapped WordPress independently, bypassing the plugin activation system entirely. Even if you deactivated the plugin, the standalone file would still work. It had a hardcoded authentication token and a hardcoded command-and-control domain.
The fourth plugin had its own persistence mechanisms: activation hooks that would reinstall it if removed, and file-writing capabilities to maintain its foothold.
Four plugins: all invisible, all active, all providing different methods of remote access to the server.
Deactivating the Invisible
Disabling them was its own challenge. The standard approach — wp plugin deactivate <slug> — didn’t work, because the all_plugins filter hook made the plugins invisible to wp-cli by slug. WordPress couldn’t deactivate what it couldn’t see.
The fix was to bypass WordPress’s plugin enumeration entirely. I used wp eval with the --skip-plugins flag to directly modify the active_plugins option in the database, surgically removing the four entries from the serialized array. The active plugin count dropped from fifteen to eleven. The droppers were down.
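WordPress stores active_plugins as a PHP-serialized array, so the surgical edit amounts to removing entries and re-indexing the array so the count and indices stay consistent. In the engagement this ran as PHP via wp eval --skip-plugins; the sketch below shows the same transformation in Python, with hypothetical plugin slugs:

```python
import re

def remove_plugins(serialized: str, bad: set) -> str:
    """Drop entries from a PHP-serialized array of strings,
    re-indexing and re-counting so the result stays valid."""
    values = re.findall(r's:\d+:"([^"]*)";', serialized)
    kept = [v for v in values if v not in bad]
    body = "".join(f'i:{i};s:{len(v)}:"{v}";' for i, v in enumerate(kept))
    return f"a:{len(kept)}:{{{body}}}"

# Hypothetical slugs: two legitimate plugins and one dropper
before = 'a:3:{i:0;s:7:"a/a.php";i:1;s:19:"dropper/dropper.php";i:2;s:7:"b/b.php";}'
after = remove_plugins(before, {"dropper/dropper.php"})
print(after)  # a:2:{i:0;s:7:"a/a.php";i:1;s:7:"b/b.php";}
```

The real edit ran against the live option with WordPress’s plugin loading skipped, so the all_plugins filter never had a chance to hide the entries.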
But the standalone backdoor file (the one that bootstrapped WordPress independently) was still physically sitting on the server. I couldn’t delete it: the WordPress core files and plugin directories were owned by root, and the SSH user had no sudo access. On GoDaddy’s Managed WordPress platform, you can modify files you own, but you can’t touch anything the platform installed, and the backdoor file was sitting in a root-owned directory.
I could disable its WordPress integration, but I couldn’t remove it from disk. That constraint would matter later.
The Database
I exported the full database, transferred it to my local forensic workstation, verified the SHA256 hash, and shredded the export from the server.
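The verify step is worth spelling out: hash the export on the server, hash it again after transfer, and only shred the original once the digests match. A minimal sketch with Python’s hashlib (the file contents here are a stand-in for the real export):

```python
import hashlib

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-gigabyte dumps
    never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for the real export
with open("dump.sql", "wb") as f:
    f.write(b"-- sample export\n")

source_hash = sha256_file("dump.sql")   # computed before transfer
copy_hash = sha256_file("dump.sql")     # in practice: computed on the receiving side
assert source_hash == copy_hash, "transfer corrupted the export"
```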
The forensic analysis of the database revealed something I wasn’t expecting. The session tokens table showed four concurrent active sessions for the agent’s WordPress admin account during a suspicious time window. The sessions originated from IP addresses associated with bulletproof hosting providers — infrastructure specifically designed to be resistant to law enforcement takedowns and abuse complaints. These were not the agent’s sessions…someone was actively logged into his WordPress site as him, from multiple locations simultaneously, during a window where the standalone backdoor was reachable and operational.
This wasn’t an old infection sitting dormant; it was an active compromise with live attacker sessions.
Emergency Containment
I moved fast.
First, I wiped every active WordPress session from the database — all of them, legitimate and illegitimate alike.
Then I rotated all eight WordPress authentication salts in the configuration file. These salts are used to generate session cookies, so rotating them instantly invalidates every existing session across the entire site, even sessions that hadn’t been wiped from the database. Anyone holding a stolen cookie would find it worthless.
I reset the passwords for all three admin accounts — the agent’s account, and two accounts belonging to the marketing agency that had originally built the site. I revoked all application passwords, verified the cron schedule was clean, confirmed the active plugin count was still eleven, not fifteen, and then I re-checked the live site HTML and confirmed the SocGholish injection strings were no longer being served.
Nine verification checkpoints…all clean. Containment held.
I drafted a comprehensive incident report for the client and sent it through Upwork, explaining what I’d found, what I’d done, and what remained.
Preserving the Evidence
Before any further destructive steps, I needed to preserve the forensic evidence. The infection timeline pointed to late 2023 (the timestamp-named files, the SQL dump dated November 2023), meaning this compromise had possibly been running for over two years. If the agent needed to involve law enforcement, insurance, or attorneys, the evidence had to exist in a verifiable state.
I created three forensic archives on my local workstation: the database export, a complete archive of the web root (1.4 gigabytes, over 38,000 files), and the server configuration files. Each archive was checksummed with SHA256 and the hashes verified. The symlink in the home directory had caused an initial archiving mistake — tar archived the symlink itself (109 bytes) instead of following it to the actual content — which I caught and fixed by using the dereference flag.
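The symlink pitfall is easy to reproduce. By default an archiver stores the link itself, a few bytes, rather than the tree it points to; dereferencing makes it follow the link. A sketch using Python’s tarfile module (with GNU tar, the equivalent is the -h/--dereference flag):

```python
import os
import tarfile

# Build a directory with real content, plus a symlink to it
os.makedirs("real", exist_ok=True)
with open("real/file.txt", "w") as f:
    f.write("content\n")
if not os.path.islink("link"):
    os.symlink("real", "link")

# Default behavior: the archive stores only the symlink entry
with tarfile.open("bad.tar", "w") as t:
    t.add("link")
with tarfile.open("bad.tar") as t:
    assert t.getmember("link").issym()  # the content was never archived

# dereference=True follows the link and archives the actual files
with tarfile.open("good.tar", "w", dereference=True) as t:
    t.add("link")
with tarfile.open("good.tar") as t:
    names = t.getnames()
assert "link/file.txt" in names
```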
The Email Side
With the immediate server threat contained, I turned back to the original engagement: email authentication.
The agent’s email ran through Microsoft 365 (a GoDaddy-reseller tenant) with Proofpoint Essentials as the inbound gateway. Mail came in through Proofpoint’s MX servers, got filtered, and was delivered to Exchange Online. Outbound mail went directly through Microsoft’s infrastructure.
I logged into the M365 admin center and navigated to the DKIM configuration. Both selectors showed a status of “CNAME Missing / Disabled,” last checked in January 2016. DKIM had never been enabled for this domain.
I published both DKIM CNAME records in the GoDaddy DNS zone, pointed them at the correct Microsoft endpoints, and enabled DKIM signing in the M365 Security portal. Both selectors went active.
Then I hardened DMARC. I updated the record from p=none — the monitor-only policy that had been doing nothing — to p=quarantine, which instructs receiving servers to treat unauthenticated messages with suspicion. I also added a reporting address pointed at my own infrastructure, so aggregate DMARC reports would flow to me for ongoing monitoring.
I also found something in the DNS zone that shouldn’t have been there: an orphaned DKIM selector TXT record from a previous provider that no longer resolved to anything…a dangling record. I flagged it but left it in place; removing records from a zone that’s actively serving email requires care.
One more thing while I was in the admin center: audit logging was disabled in the compliance portal…no historical audit data existed prior to my engagement. I enabled it so whatever happened going forward would at least be recorded.
The email authentication work (the original scope of the engagement) was done. SPF tightened, DKIM enabled on both selectors, DMARC at enforcement with reporting. A clean chain.
But…I wasn’t done looking at the email.
The Wire Fraud
I ran a message trace on the agent’s inbox, a standard step when hardening email for a high-risk domain. Message traces show you the path every inbound and outbound email took, including source IP addresses, authentication results, and delivery status.
One message jumped out: the subject line referenced a payoff quote for a real estate closing, with a deliberate misspelling, the kind of subtle error that’s a hallmark of phishing. The source IP traced back to a hosting provider in Germany known for bulletproof infrastructure, the same category of provider I’d seen in the WordPress session analysis.
I checked whether Proofpoint had a record of this message. It didn’t; the message had never passed through Proofpoint at all.
This was an MX bypass attack. The attacker had sent the email directly to Exchange Online’s public SMTP endpoint — the mail.protection.outlook.com address that every Microsoft 365 tenant exposes — completely skipping the Proofpoint gateway that was supposed to be filtering inbound mail. Exchange Online accepted it because there was no inbound connector restricting which sources were allowed to deliver mail to the tenant. The front door had a security guard, but the side door was propped open.
I dug deeper into the message trace logs and found multiple BEC campaigns targeting the agent’s inbox, each from a different attacker IP address, all using the same MX bypass technique. All of them referenced real estate transaction language (payoffs, wire transfers, closings), and all had been delivered successfully to his inbox before my engagement began.
The agent confirmed he recognized several of these as spoofing attempts that had targeted active real estate transactions.
Closing the Side Door
The fix required two components in the Exchange admin center.
First, an inbound connector that whitelisted only Proofpoint’s IP ranges as authorized mail sources. I gathered all forty-two of Proofpoint’s published CIDR blocks and added them to the connector one at a time (the admin interface had no bulk import option).
Second, a transport rule: if a message arrives and did not come through the Proofpoint connector, quarantine it. I chose quarantine over reject deliberately, because a reject action bounces the message back to the sender, which, in a BEC scenario, means the attacker gets confirmation that the target’s mailbox exists and that the bypass was detected. Quarantine silently silos the message where the admin can review it without giving the attacker any signal. I also didn’t want to risk having legitimate mail rejected at this stage.
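The decision the connector-plus-rule pair implements is easy to express: if the connecting IP isn’t inside one of the gateway’s published CIDR blocks, quarantine the message instead of rejecting it. A sketch of that logic with Python’s ipaddress module (the ranges below are documentation-reserved examples, not Proofpoint’s real blocks):

```python
import ipaddress

# Stand-ins for the gateway's published ranges (42 CIDR blocks in
# the real engagement); these are documentation-reserved networks.
GATEWAY_RANGES = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]

def disposition(connecting_ip: str) -> str:
    """Quarantine, not reject: a bounce would tell the attacker
    the mailbox exists and the bypass was noticed."""
    ip = ipaddress.ip_address(connecting_ip)
    if any(ip in net for net in GATEWAY_RANGES):
        return "deliver"
    return "quarantine"

print(disposition("203.0.113.25"))  # came through the gateway
print(disposition("192.0.2.77"))    # direct-to-Exchange bypass
```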
I verified the rule against the message trace logs. After the rule took effect, messages from the German bulletproof hosting IP that had previously delivered BEC messages to the inbox showed a status of “Quarantined.” Every one of the previously successful BEC campaigns I could find in the logs would now be caught.
The email infrastructure was locked down…authentication on the outbound side, gateway enforcement on the inbound side, and reporting flowing to me for ongoing visibility.
Burning It Down
Now came the problem of the website itself.
The WordPress installation was contained but not clean: the dropper plugins were deactivated but still on disk, and the standalone backdoor file was still present and couldn’t be deleted without root access. The SocGholish injection had been suppressed by the containment steps, but the malware was still physically in the filesystem. And remember, Google had already flagged the domain on its Safe Browsing list, so anyone visiting the site in Chrome, Firefox, or Safari would see a full-page warning that the site was dangerous.
The agent didn’t want the WordPress site anymore; he just wanted his domain to redirect to his brokerage’s agent page. The right move was to eliminate the entire WordPress installation, deploy a clean static page, and submit a security review to Google to get the Safe Browsing flag removed.
Simple in concept, harder in practice.
The Constraints
I couldn’t delete the WordPress files: they were root-owned, and the SSH user had no sudo access on GoDaddy’s Managed WordPress platform.
I tried GoDaddy’s web-based file manager instead, but my home IP had been rate-limited, probably triggered by the intensive SSH session during the forensic work. That route was blocked too.
I tried an .htaccess redirect to bypass WordPress entirely and send all traffic to the brokerage page, but it turns out GoDaddy’s Managed WordPress runs nginx, not Apache, and nginx ignores .htaccess files completely.
What did (partially) work was a PHP-level workaround: a .user.ini directive that told PHP-FPM to prepend a redirect script before WordPress ever loaded. Every browser request that hit PHP would execute a 301 redirect before WordPress could run. It was working, but it was a band-aid. Nginx could still serve static files directly without hitting PHP, the malware was still on disk, and Google’s Safe Browsing crawler might not see the redirect. And while I was testing, Firefox briefly showed “Connecting to proxyreflecttools.com” in the status bar — one of the confirmed SocGholish C2 domains. The infection was trying to reactivate.
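For reference, the mechanism here is PHP’s auto_prepend_file directive, which .user.ini files can set per directory under PHP-FPM. The path below is illustrative; the prepended script simply issued a 301 Location header and exited before WordPress loaded:

```ini
; .user.ini in the webroot -- PHP-FPM rereads this per directory
; (after its user_ini.cache_ttl interval, 300 seconds by default)
auto_prepend_file = /home/sitename/redirect.php
```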
The only real solution was to remove the hosting product entirely from GoDaddy’s platform and replace it with something clean. But I wasn’t going to nuke a client’s hosting product without explicit permission, and before I could even ask, I needed to figure out whether removing it would disrupt his other services. His email, his Microsoft 365 tenant, his DNS…all of it lived under the same GoDaddy account so pulling the wrong thread could take down more than the website.
I drafted a message explaining exactly what was happening, what had been tried, what had failed, and what I needed from him, including a clear yes-or-no on removing the hosting. Then I waited.
The Infrastructure Chess Match
When the agent was available, we moved to the decommission, but the hosting removal had a constraint that made it delicate: his DNS zone.
The domain’s DNS was managed at GoDaddy, and the zone contained over twenty records, not just the A record pointing to the WordPress server but the entire email infrastructure. MX records routing inbound mail through Proofpoint, SPF, DKIM, and DMARC records that had just been configured, Microsoft 365 autodiscover and SIP records, SRV records for federation, the DKIM CNAMEs I’d published hours earlier…all of it lived in the same GoDaddy DNS zone as the A record for the compromised WordPress server.
The question was: if we canceled the Managed WordPress hosting product, would GoDaddy touch the DNS zone? Would it delete records? Inject a “parked” placeholder? Reset the zone entirely?
I studied the GoDaddy account and confirmed that the domain registration and the Managed WordPress hosting were two separate products. The DNS zone belonged to the domain registration, not the hosting. Canceling hosting should leave DNS untouched, but “should” isn’t good enough when twenty-one records are protecting live email infrastructure for a domain under active BEC attack.
Why Not Cloudflare
The original plan was to deploy the static redirect page on Cloudflare Pages. It’s free, it’s fast, and it’s what I’d normally reach for. But Cloudflare Pages requires moving your nameservers to Cloudflare, and that would mean manually recreating every one of those twenty-plus DNS records in Cloudflare’s interface, with no room for error, for a domain where a single missing record could break email delivery.
One typo in an MX record, one missed DKIM CNAME, and inbound mail stops arriving or outbound mail starts failing authentication. For a client who was actively being targeted by BEC attackers exploiting exactly those pathways, the risk wasn’t worth it.
GitHub Pages
GitHub Pages works differently. It doesn’t require moving nameservers: you just point standard A and AAAA records at GitHub’s servers, set a CNAME for the www subdomain, and it serves your content. The rest of the DNS zone stays exactly where it is, exactly as it is.
I created a small repository containing a single index.html — a clean static page with a JavaScript redirect to the agent’s brokerage page. No scripts loading from external domains, no PHP, no database, no attack surface.
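A minimal version of that page looks like the sketch below (the brokerage URL is a placeholder). Everything is inline, so there is nothing for a scanner to flag and nothing for an attacker to inject into:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Redirecting…</title>
  <!-- Fallback for browsers with JavaScript disabled -->
  <meta http-equiv="refresh" content="0; url=https://brokerage.example/agent-page">
</head>
<body>
  <script>
    // One inline redirect: no external scripts, no PHP, no database
    window.location.replace("https://brokerage.example/agent-page");
  </script>
  <p><a href="https://brokerage.example/agent-page">Continue to the agent page</a></p>
</body>
</html>
```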
Getting the repository pushed required its own detour. My workstation’s default SSH key authenticated to a different GitHub account, and the server had three SSH keys configured for different purposes. I generated a new dedicated key for the correct account and pushed with an explicit SSH command specifying which key to use.
I configured GitHub Pages, set the custom domain, added a domain verification TXT record to the GoDaddy DNS zone, and updated the DNS: deleted the old A record pointing to the GoDaddy WordPress server, added four new A records pointing to GitHub Pages, and changed the www CNAME to point to GitHub’s Pages endpoint.
Everything propagated. DNS queries against both local and Google resolvers returned the correct GitHub Pages IPs. A curl to the domain returned 200 OK, Server: GitHub.com. The static page was live.
But GitHub Pages wouldn’t issue an HTTPS certificate.
The Missing Records
The GitHub Pages settings panel showed an error: “Domain does not resolve to the GitHub Pages server.” This was confusing because the A records were correct, DNS had propagated, and the site was actually serving HTTP traffic successfully. But without the DNS check passing, Let’s Encrypt wouldn’t provision a certificate, and HTTPS couldn’t be enforced.
I ran a systematic investigation. I queried the authoritative nameservers directly — no cached results, no resolver artifacts. All four A records were present and correct. No stale “parked” records. No CAA records blocking Let’s Encrypt. GitHub’s status page showed all systems operational.
The answer turned out to be straightforward but underdocumented: GitHub Pages requires AAAA records (IPv6) in addition to the A records; their DNS checker queries both record types. With only A records present, the IPv6 query came back empty, and the checker interpreted that as “not served by Pages.”
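For reference, GitHub’s documentation lists four A and four AAAA addresses for apex domains, so the relevant slice of the zone ends up looking like this (example.com and username are placeholders, and the current values should always be checked against GitHub’s docs before publishing):

```
example.com.      IN A      185.199.108.153
example.com.      IN A      185.199.109.153
example.com.      IN A      185.199.110.153
example.com.      IN A      185.199.111.153
example.com.      IN AAAA   2606:50c0:8000::153
example.com.      IN AAAA   2606:50c0:8001::153
example.com.      IN AAAA   2606:50c0:8002::153
example.com.      IN AAAA   2606:50c0:8003::153
www.example.com.  IN CNAME  username.github.io.
```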
I added four AAAA records to the GoDaddy zone, verified propagation at the authoritative nameservers, and the DNS check passed. Let’s Encrypt provisioned a certificate within minutes. I enabled the “Enforce HTTPS” toggle, and confirmed the full redirect chain: HTTP requests returned a 301 to HTTPS, HTTPS served the clean static page with a valid certificate, and www redirected to the apex domain.
The compromised WordPress installation was completely disconnected from the internet.
Removing the Hosting
With the static page live and HTTPS enforced, and the agent’s go-ahead in hand, I removed the GoDaddy Managed WordPress hosting product. I monitored DNS immediately afterward, watching for the “parked” record injection that GoDaddy’s documentation said their system might trigger automatically.
Nothing changed. All four A records and all four AAAA records remained intact, and every email infrastructure record was untouched. The compromised WordPress environment, the hosting product, the server: all gone. Only the clean static page remained.
The Final Twist
There was one more thread to close. Earlier in the engagement, the agent had mentioned that his emails were landing in recipients’ spam folders. I knew the email authentication work would help, but I also suspected something else was at play. A domain on Google’s Safe Browsing blocklist doesn’t just get browser warnings; the flag can bleed into email.
With the site decommissioned and the static page live, I submitted a security review through Google Search Console.
Google had categorized the security issue as “Links to harmful downloads” — the classification for SocGholish’s fake browser update prompts. The flagged URLs included pages from the old WordPress site that no longer existed. On the current GitHub Pages deployment, those paths returned a 404 or the clean static redirect page. No scripts, no third-party includes, no dynamic content.
The review description I submitted explained the full remediation: the WordPress installation had been permanently removed, the hosting product canceled, the domain now served a single static HTML file on GitHub Pages with zero injection vectors, and HTTPS was enforced via Let’s Encrypt.
While waiting for Google’s review, I wanted to confirm my theory about the email. I had the agent send me test emails, one to a non-Gmail address and one to Gmail.
The non-Gmail message arrived in my inbox cleanly. The Gmail message landed in spam with a red “This message might be dangerous” banner.
I pulled the headers. SPF: pass. DKIM: pass on two separate selectors, both the Microsoft 365 key and a Proofpoint outbound signing key, both on his domain. DMARC: pass, disposition: none. ARC chain intact. Microsoft’s internal spam scoring: clean across every category.
The email authentication was perfect; there was nothing wrong with his sending infrastructure at all.
But Gmail was flagging his messages and the red banner wasn’t a spam classification — it was a safety warning. Gmail was telling recipients that the message contained something dangerous.
The something dangerous was his email signature.
His signature contained his email address, which contained his domain name. Gmail’s content scanner recognized the domain as being on Google’s Safe Browsing blocklist and flagged the message. Not because the email failed authentication or contained malware, but because the text of the message referenced a domain that Google currently considered dangerous.
Safe Browsing was not only blocking his website, but it was also bleeding into his email. Every message he sent that contained his own domain name (in his signature, in a link, in plain text) was being flagged as a threat by Gmail.
The temporary workaround was simple: remove the domain from his email signature until Google cleared the Safe Browsing flag. The permanent fix would come from Google’s review; Gmail pulls from the same Safe Browsing API, and once the domain was cleared from the blocklist, emails referencing it would stop triggering the warning.
Less than 24 hours later, a quick turnaround, Google approved the review and cleared the domain. The browser warnings disappeared, the Gmail spam flags stopped, and his email started reaching inboxes again.
What Was Underneath
The agent had reached out about email spoofing. That was the whole engagement, as far as he knew. Someone was sending emails that looked like they came from his domain, and he wanted it stopped.
Underneath that spoofing complaint was a WordPress site that hadn’t been updated in years, running a significantly outdated theme, with four malicious plugins that had made themselves invisible to every administrative tool WordPress provides. Underneath that was a standalone backdoor that operated independently of WordPress’s activation system; it couldn’t be disabled by deactivating the plugin, because it didn’t need the plugin to run. Underneath that was a database dump containing every record the site had ever stored, sitting in a web-accessible directory for over two years. Underneath that were four concurrent attacker sessions from bulletproof hosting providers, operating the backdoors in real time during the investigation.
And running parallel to all of it, on the email side, were BEC campaigns targeting his real estate closings…wire fraud attempts that were reaching his inbox by bypassing his email security gateway entirely, slipping in through a side door that no one knew existed.
The spoofing complaint and the website compromise and the BEC campaigns were all happening to the same person, on the same domain, at the same time. But they weren’t the same attack; they were different attackers exploiting different gaps, gaps that had been open for years because the site was built, launched, and forgotten.
The technical work across this engagement (containment, forensics, email authentication, MX bypass remediation, infrastructure decommission, Safe Browsing clearance) was spread across multiple sessions and touched half a dozen different systems, but the lesson is simpler than the work.
A website you’ve forgotten about is not a website that’s been forgotten about by everyone. The less attention you pay to it, the more useful it becomes to someone else. And the damage doesn’t stay contained to the website; it bleeds into email, into your domain’s reputation, into every transaction your business conducts through that domain name.
The agent’s email was never the problem. It was the canary.
This case study describes real work performed by Stonegate Web Security. Client details have been anonymized and certain identifying specifics altered. Technical details, methodologies, and findings are reported accurately.
Related Reading
- Case Study: The Silent Impersonator — How One Local Business Lost Trust to Fake Emails. A domain impersonation case study showing how missing email authentication can turn customer trust into payment fraud.
- Case Study: What Modern Email Account Compromises Actually Look Like. A practical look at inbox takeover, attacker persistence, and the recovery steps that matter after email access is abused.
- Your Small Site: The Unwitting Host for 2025's Fake Amazon Phishing Boom. How compromised small business sites get repurposed into phishing and malware infrastructure while the real site appears normal.
- Case Study: The Slow Accumulation of Overlooked Misconfigurations. A routine WordPress audit uncovered full-site backups publicly downloadable for six months, a contact form routing submissions to a stranger, phantom login entries, and email authentication that looked fine until it wasn't.