
The Fix That Made It Worse

A Case Study in Legacy Configuration, Email Authentication, and the Stakes of Getting It Right


Introduction: When the Alerts Say “Compromised”

The email from the agency was blunt: their DMARC monitoring service had flagged one of their clients’ servers as a “misconfigured server with high probability of compromised host.” Thirty-seven messages had failed authentication. Fourteen more were categorized as threats. The server in question belonged to a nonprofit that delivers meals to homebound elderly residents — not the kind of organization that should be sending suspicious email.

The agency had built the nonprofit’s website years ago and still hosted it on their AWS infrastructure. They needed someone to investigate whether the server had been compromised, and if so, how deep the damage went. Adding to the urgency: the agency’s own emails had started landing in spam folders when reaching out to prospective clients. If the nonprofit’s compromised server was somehow damaging the agency’s domain reputation, their business development was at stake too.

What I found wasn’t a compromise at all. It was something more subtle: a configuration that had been perfectly appropriate when the site was built but had never been updated as the relationship evolved. The site worked. The forms worked. The donations worked. Nobody had reason to dig into the email settings until the authentication failures started triggering alerts. And buried in the server’s mail queue, I discovered something that made the stakes personal — real referrals from social workers, containing names, birth dates, and addresses of elderly residents who needed meal delivery services. Those referrals had been stuck for days, undelivered.

This is the story of an investigation that started with fears of malware and ended with a lesson about the hidden costs of technical debt.


Part One: The Initial Assessment

The Client and the Problem

The nonprofit in question serves a major metropolitan area, coordinating meal deliveries to thousands of homebound seniors. Their website handles donation processing, volunteer signups, and — critically — intake forms for new client referrals. Social workers, hospital discharge planners, and family members use these forms to request services for elderly residents who can’t prepare their own meals.

The agency that built and hosts the site reached out after their DMARC monitoring dashboard lit up with warnings. DMARC (Domain-based Message Authentication, Reporting, and Conformance) is an email security protocol that helps prevent spoofing. When properly configured, it tells receiving mail servers how to handle messages that fail authentication checks. The agency had invested in proper email security for their own domain — a strict “reject” policy that blocked unauthenticated messages entirely.
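
For readers who want to see what the protocol looks like on the wire: a DMARC policy is just a DNS TXT record published at _dmarc.<domain>, containing tag=value pairs. A minimal parsing sketch (the record shown is illustrative, not the agency’s actual policy):

```python
# Sketch: split a DMARC TXT record into its tag=value pairs.
# The record below is a made-up example, not the agency's real policy.
def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc("v=DMARC1; p=reject; rua=mailto:reports@example.com; pct=100")
# policy["p"] is the disposition: "none", "quarantine", or "reject"
```

A p=reject policy, like the agency’s, tells receiving servers to refuse outright any message that fails both SPF and DKIM alignment.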

But the alerts weren’t about the agency’s domain. They were about messages claiming to be from the agency’s domain, sent from the nonprofit’s server. That’s what triggered the “compromised host” warning.

Understanding the DMARC Data

Before touching the server, I analyzed the DMARC reports. The monitoring service had collected data from major email providers about messages claiming to originate from the agency’s domain. The results told an interesting story:

Legitimate traffic (all passing authentication):

Source             Volume   Authentication Status
Google Workspace   975      100% pass
Campaign Monitor   11       100% pass
Amazon SES         1        100% pass

Problematic traffic:

Source               Volume   Authentication Status            Action Taken
Nonprofit’s server   37       0% pass (SPF fail, DKIM fail)    Rejected
Unknown servers      8        0% pass                          Rejected

The pattern was clear. The agency’s legitimate email infrastructure — their Google Workspace accounts, their marketing platform, their transactional email service — all passed authentication perfectly. But 37 messages from the nonprofit’s web server had failed completely and been rejected by the receiving mail servers.

The key question: were those 37 messages the result of malware sending spam, or something else entirely?

DNS Analysis: Two Domains, Two Stories

I examined the DNS configuration for both domains before requesting server access.

The agency’s domain was textbook-perfect:

  • SPF record authorizing Google, Amazon SES, and their marketing platform
  • DKIM signatures for multiple services
  • DMARC policy set to “reject” with full reporting enabled
  • A specific IP address authorized in their SPF that I noted for later investigation

The nonprofit’s domain was a mixed bag:

  • Primary mail handling pointed to Microsoft 365 (the nonprofit’s actual email provider)
  • A secondary mail server listed at priority 50, pointing to a subdomain that resolved to a different IP
  • SPF record that only authorized Microsoft 365 — the web server wasn’t included
  • DMARC policy set to “quarantine” with no reporting addresses configured
  • A malformed DKIM-like record in the root zone containing invalid characters

The secondary mail server caught my attention. It pointed to a subdomain that suggested an old board portal or committee system. I filed that away as something to investigate.

Further digging revealed even more complexity: a third subdomain existed for [redacted] integration (their donor management platform), with its own separate SPF record. What looked like a simple WordPress site was actually sitting atop a surprisingly complex multi-domain email infrastructure.

But the real problem was already visible: the nonprofit’s web server was sending email, and it wasn’t in anyone’s SPF record. The client explained how this came about — at some point, messages from the nonprofit’s domain had started being flagged as suspicious, so the IT contact changed the From address to the agency’s domain as a workaround. It hadn’t fixed the underlying issue; it just shifted which domain’s authentication was failing.

Before fixing the configuration, though, I needed to rule out the possibility that the DMARC alerts were right — that the server actually had been compromised.


Part Two: The Security Audit

Initial Server Access

With SSH credentials provided, I connected to the server and began the investigation. The environment was typical for a managed WordPress site: Ubuntu on AWS, Apache, PHP, MySQL hosted on Amazon RDS. The WordPress installation used a non-default table prefix — a minor but positive security indicator.

One thing stood out immediately: file ownership. Instead of the standard www-data user, all web files were owned by a user called [redacted]. This suggested either a custom deployment setup or historical changes to the server configuration.

Before diving into the email issue, I needed to rule out the possibility of actual compromise. The DMARC monitoring service had flagged the server as potentially compromised, and I’d been hired to investigate that claim thoroughly.

Developing the Audit Methodology

I developed a comprehensive security audit broken into seven sections, each targeting different indicators of compromise:

Section 1: Environment Verification Confirming tool availability, database connectivity, and establishing a baseline of the server configuration.

Section 2: Malicious Code Detection Scanning for webshells, backdoors, and injected code using a tiered approach based on signal quality.

Section 3: WordPress User and Database Analysis Checking for rogue administrator accounts, database-stored malware, and unauthorized modifications.

Section 4: File Integrity Verification Comparing core WordPress files against known-good checksums and identifying unauthorized modifications.

Section 5: System-Level Persistence Examining SSH keys, sudo access, cron jobs, and other mechanisms that could survive a WordPress reinstall.

Section 6: Network and Process Analysis Looking for suspicious connections, listening services, and unauthorized processes.

Section 7: Access Log Analysis Reconstructing recent activity to identify attack patterns or successful intrusions.

The Tiered Detection Approach

For readers curious how we ruled out compromise with confidence, this section explains the methodology. If you’re more interested in what we found, feel free to skip ahead to “What the Audit Found.”

The site already had Wordfence installed — a solid security plugin that provides ongoing monitoring and signature-based malware detection. But for incident response work, I complement automated scanning with targeted manual analysis. Here’s why: automated scanners are excellent at catching known malware signatures and providing continuous protection, but they can produce significant noise in a WordPress environment. Legitimate plugins use functions like eval(), base64_decode(), and dynamic function calls for valid purposes. When you’re trying to definitively rule out compromise, you need to understand what each finding actually means, not just see a list of flagged files.

I developed a tiered approach based on signal quality:

Tier 1 (High Signal) — Patterns that almost always indicate malware:

  • eval(base64_decode(...)) — The classic obfuscation combo
  • gzinflate or gzuncompress chained with base64 decoding
  • preg_replace with the /e modifier (deprecated but still dangerous)
  • Shell execution functions (system, exec, passthru) with user input
  • assert() with variable input
  • create_function() with variable input

Tier 2 (Medium Signal) — Patterns requiring context:

  • Standalone eval() calls
  • str_rot13() usage
  • hex2bin() outside of known libraries

Whitelist Approach for Root PHP Files: Rather than excluding files matching wp-*.php patterns (which malware often mimics), I maintained an explicit whitelist of the 15 legitimate core PHP files that should exist in the web root. Anything else would be flagged for review.
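
In practice, the Tier 1 scan is a set of regular expressions run across every PHP file. A simplified sketch of that scan (patterns abbreviated; the actual engagement used a longer list):

```python
import re

# Sketch of the Tier 1 (high-signal) scan: patterns that almost always
# indicate obfuscated malware when they appear in PHP source.
TIER1_PATTERNS = [
    re.compile(r"eval\s*\(\s*base64_decode\s*\("),                 # classic combo
    re.compile(r"(gzinflate|gzuncompress)\s*\(\s*base64_decode\s*\("),
    re.compile(r"preg_replace\s*\([^,]*/[a-z]*e[a-z]*['\"]"),      # /e modifier
    re.compile(r"\b(system|exec|passthru)\s*\(\s*\$_(GET|POST|REQUEST)"),
    re.compile(r"\bassert\s*\(\s*\$"),                             # assert with variable
    re.compile(r"create_function\s*\(\s*\$"),
]

def scan_source(php_source: str) -> list:
    """Return the patterns that matched; an empty list means no Tier 1 hits."""
    return [p.pattern for p in TIER1_PATTERNS if p.search(php_source)]
```

The point of the tiers is triage: a Tier 1 hit goes straight to manual review, while Tier 2 hits are examined only after checking whether the file belongs to a known plugin or library.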

What the Audit Found

Section 2 Results: No Malware Detected

The tiered scan completed with no high-signal hits. Every pattern match traced back to legitimate code:

  • preg_replace with /e modifier: Found in a widgets plugin and WordPress core compatibility layers
  • assert() with variables: HTMLPurifier, SimplePie, and PHPSecLib — all legitimate libraries
  • Shell execution: Adminer database tool, WordPress core, ManageWP worker plugin

The root directory scan found three PHP files outside the standard whitelist:

  1. adminersn.php — Adminer 4.8.1, a legitimate database management tool (but a security risk if discovered)
  2. test.php — A donation widget iframe embed, benign
  3. wordfence-waf.php — Wordfence firewall bootstrap, legitimate

One fun false positive: the webshell name pattern scan flagged multiple files containing “shell” in the filename. They turned out to be food photos — Italian shells and cheese — entirely appropriate for an organization that delivers meals.

Section 3 Results: 14 Administrator Accounts, All Legitimate

The WordPress user audit revealed 14 administrator accounts, created between 2014 and 2026. The oldest belonged to the agency’s original developers. The pattern told the story of a site that had been actively maintained over the years, with new team members getting access as needed.

The database checks found no evidence of stored malware:

  • No base64-encoded payloads in the options table
  • No script or iframe injection in posts or pages
  • Site URL and home URL intact (common hijacking targets)
  • All scheduled tasks belonged to legitimate plugins

Section 4 Results: Core Files Intact

WordPress core verification returned one discrepancy: readme.html was missing. This is actually intentional security hardening — the file discloses the WordPress version and is commonly removed. All actual PHP files matched their expected checksums.
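
Core verification here used the standard tooling (wp core verify-checksums in wp-cli), which compares each core file’s hash against the manifest WordPress.org publishes for that version. The underlying idea, sketched with hypothetical manifest contents:

```python
import hashlib
import os

def file_md5(path: str) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: dict, root: str) -> list:
    """Return files that are missing or whose on-disk hash differs from the manifest."""
    bad = []
    for rel_path, expected in manifest.items():
        full = os.path.join(root, rel_path)
        if not os.path.exists(full) or file_md5(full) != expected:
            bad.append(rel_path)
    return bad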

The file integrity scan did find several items worth noting:

SSH Keys in the Web Root: A directory at /var/www/wordpress/.ssh/ contained:

  • An authorized_keys file with two SSH public keys
  • A private key file (id_rsa)
  • The corresponding public key

This was concerning. SSH keys in a web-accessible directory could potentially be exposed through directory listing vulnerabilities or path traversal attacks. The presence of a private key meant the server could SSH outbound to other systems. I noted this for immediate remediation.

Adminer Database Tool: In addition to adminersn.php in the web root, I found a second copy at /annual-report/adminer.php, created in December 2020. Adminer provides direct database access through a web interface — if an attacker discovered either file, they’d have complete control over the database.

World-Writable Directories: The /wp-content/uploads/filebase/ directory tree had permissions of 777 (world-writable), containing annual reports, financial documents, and tax filings. WP-Filebase (the download manager plugin) doesn’t actually require these permissive settings — 755 for directories works fine as long as the web server user has proper ownership. This was likely a legacy configuration from initial setup. Testing confirmed that correcting the permissions to 755 didn’t break any functionality.
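
Finding these is mechanical — the shell equivalent is find with a permission test — but the same check can be sketched in a few lines (paths hypothetical):

```python
import os
import stat

def world_writable_dirs(root: str) -> list:
    """Walk a tree and report directories with the world-write bit set (e.g. 777)."""
    found = []
    for dirpath, dirnames, _ in os.walk(root):
        for d in dirnames:
            full = os.path.join(dirpath, d)
            if os.stat(full).st_mode & stat.S_IWOTH:
                found.append(full)
    return found

# Remediation is then a matter of tightening each hit:
# os.chmod(path, 0o755)
```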

Section 5 Results: SSH Keys Need Verification

The system-level persistence check found five SSH keys in /home/ubuntu/.ssh/authorized_keys:

  1. The AWS default EC2 key — expected
  2. A key from an internal AWS IP — likely deployment automation
  3. Three keys with personal machine identifiers — needed client verification

None of these indicated compromise, but they did indicate that multiple people had SSH access to the server over the years, and nobody had audited who still needed it.

Section 6 Results: Clean Network Profile

The network scan showed expected services:

  • SSH on ports 22 and 2222 (the alternate port is common hardening)
  • HTTP/HTTPS for web traffic
  • SMTP on port 25 for local mail delivery
  • FTP on port 21 — unusual for a modern setup, worth questioning

All established connections at audit time were legitimate: my SSH session and normal web traffic.

Section 7 Results: Adminer Never Discovered

The access log analysis revealed the most reassuring finding of the audit. While automated scanners had probed for common Adminer filenames:

  • adminer.php — blocked by existing security rules
  • adminer-4.5.0-mysql.php — not found

The actual files (adminersn.php and annual-report/adminer.php) had never been accessed by attackers — the only access in the logs was my own testing during the audit. The obscure naming had provided accidental security-through-obscurity, but this was luck, not protection.

Extended Security Checks

Beyond the core seven sections, I ran additional checks for:

  • Theme file inspection (functions.php, header.php, footer.php) — clean
  • JavaScript malware in frontend assets — none found
  • wp-config.php audit — found a duplicate configuration line but no malware
  • Mail queue status — this is where things got interesting
  • Conditional malware testing (user-agent or referrer-based redirects) — none detected
  • Auto-prepend file persistence — not present
  • DNS blacklist status — clean (a false positive from Spamhaus was explained by cloud IP query blocking)

Part Three: The Real Problem

Understanding the Full Picture

With compromise ruled out, I could focus on the email configuration. We knew the basic outline — the IT contact’s workaround had shifted the authentication failure from one domain to another. But the exact timeline was murky, and the client wasn’t entirely sure when things had broken or how long they’d been broken.

Here’s what we could piece together: the server IP had never been in anyone’s SPF record. Not the nonprofit’s, not the agency’s. So emails from this server had likely been failing SPF checks for as long as it had been sending mail — possibly years. At some point, inboxes started flagging messages sent from the nonprofit’s domain as suspicious. The IT contact’s fix was to change the From address to the agency’s domain, thinking a more established domain would have better deliverability. But this didn’t address the root cause; it just changed which domain’s SPF was failing.

What we don’t know is exactly when this workaround was implemented, or when it went from “limping along” to “completely broken.” It’s possible that the February 2024 enforcement changes — when Google and Yahoo began strictly rejecting bulk messages that failed authentication — turned a soft failure into a hard one. Or it’s possible the problem had been silently worsening for months before anyone noticed the DMARC alerts. Either way, by the time I was brought in, 100% of outbound mail was being rejected.

The WordPress database confirmed the scope of the problem. A query for the agency’s domain returned 907 matches in the options table:

admin_email = wordpress@[agency-domain].com

The configuration touched everything — the core admin email, plugin notification settings, e-commerce receipts, security alerts. Every outbound email was guaranteed to fail authentication because the server’s IP address wasn’t in the agency’s SPF record, and it couldn’t sign messages with the agency’s DKIM keys.

The Catch-22

The database also revealed something poignant. Someone had tried to fix this:

new_admin_email = [nonprofit]webmaster@gmail.com

WordPress requires email confirmation to change the admin address. When someone tried to update it to a Gmail account, WordPress sent a confirmation email. That confirmation was sent from wordpress@[agency-domain].com. It failed DMARC. It was rejected. The change could never be completed.

The system was broken in a way that prevented itself from being fixed through normal means.

The Stuck Queue

I checked the Postfix mail queue and found 85 messages waiting to be delivered. Some had been stuck for days. The errors fell into two categories:

Messages to external domains (like the agency’s Gmail-hosted email):

connect to alt4.aspmx.l.google.com[...]:25: Connection timed out

Messages to the nonprofit’s own addresses:

connect to [board-subdomain]:25: Connection timed out

The first failure was the intentional port 25 block — the precaution the AWS manager had taken when compromise was suspected. The second was that dead secondary MX record I’d noticed in the DNS analysis. Mail destined for the nonprofit’s domain was trying to route through an old board portal server that no longer responded.
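
Why a dead secondary MX causes timeouts rather than a clean failure comes down to standard delivery logic: MX records are tried in ascending preference order, and every candidate must fail before the message is deferred back into the queue. A simplified sketch (hostnames are placeholders, not the real records):

```python
def pick_delivery_order(mx_records: list) -> list:
    """MX records are (preference, host) pairs, tried lowest preference first.
    A dead secondary still gets attempted, and times out, whenever the
    primary is unreachable."""
    return [host for _, host in sorted(mx_records)]

# Hypothetical records mirroring the nonprofit's setup:
order = pick_delivery_order([
    (50, "board.nonprofit.example"),                 # dead legacy portal
    (0, "nonprofit.mail.protection.outlook.com"),    # Microsoft 365
])
# With outbound port 25 blocked, the primary attempt fails, Postfix falls
# back to the dead secondary, and the message waits through a second full
# timeout before being requeued.
```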

The Human Cost

I extracted one of the stuck messages to understand what was being lost. The content made the stakes viscerally clear:

From: [Hospital System] Social Services
Subject: New Client Referral

Client Name: [Redacted]
Date of Birth: [1950s]
Address: [Redacted]
Phone: [Redacted]

Reason for referral: Recently discharged, lives alone, 
unable to prepare meals independently.

These weren’t spam. They weren’t test messages. They were referrals from hospital social workers and case managers, trying to connect elderly patients with meal delivery services. Each stuck message represented a real person — someone recently discharged from the hospital, living alone, unable to cook for themselves — whose request for help was sitting in a queue, undelivered.

The email system had been silently failing for an unknown period. Form submissions appeared to succeed from the user’s perspective. The “thank you” page loaded. But the actual notification never reached anyone who could act on it.


Part Four: The Fix

DNS Changes

The remediation required changes to the nonprofit’s DNS, which was managed through AWS Route 53:

1. Update SPF to authorize the web server:

Before: v=spf1 include:spf.protection.outlook.com -all
After:  v=spf1 ip4:[server-IP] include:spf.protection.outlook.com -all

2. Remove the dead secondary MX record: The old board portal server was no longer responding. Leaving it in the MX records caused mail delivery to timeout before falling back to the primary (Microsoft 365).

3. Fix a recursive SPF loop on the board subdomain: While investigating, I discovered the board portal subdomain had a broken SPF record that referenced itself, causing validation loops. This was corrected as well.
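
The SPF edit itself is mechanical, but placement matters: the new mechanism must sit between the v=spf1 version tag and the terminal -all, or it will never be evaluated. A sketch of the transformation (the IP is a documentation placeholder, not the real server address):

```python
def authorize_ip(spf: str, ip: str) -> str:
    """Insert an ip4 mechanism right after the v=spf1 version tag,
    leaving existing includes and the -all terminator in place."""
    mechanisms = spf.split()
    assert mechanisms[0] == "v=spf1", "not an SPF record"
    return " ".join([mechanisms[0], f"ip4:{ip}"] + mechanisms[1:])

before = "v=spf1 include:spf.protection.outlook.com -all"
after = authorize_ip(before, "203.0.113.10")  # TEST-NET address as a stand-in
```

Mechanisms are evaluated left to right, and -all matches everything, so anything appended after it is dead text.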

WordPress Configuration

With the DNS in place, the WordPress changes were straightforward:

1. Update admin_email: Changed from wordpress@[agency-domain].com to info@[nonprofit-domain].org, matching the contact address on their privacy policy.

2. Clear the stuck pending change: Deleted the new_admin_email option that had been trapped in confirmation limbo.

3. Database cleanup: Used Better Search Replace (already installed) to update the remaining agency domain references in plugin settings throughout the database.

Security Hardening

While the email issue was the primary engagement, the security audit had identified several items requiring remediation:

Immediate removals:

  • Both Adminer files (security risk if discovered)
  • The .ssh directory from the web root (backed up first)
  • Orphaned test files
  • Backup .htaccess files

Permission fixes:

  • Changed wp-config.php from 644 to 640
  • Fixed world-writable directories in the uploads folder

Configuration cleanup:

  • Removed duplicate FORCE_SSL_ADMIN define in wp-config.php

The Network Block

Testing revealed an unexpected complication. The server couldn’t establish outbound connections on port 25 to any mail server.

When I raised this with the client, the explanation made perfect sense: “When this all started, we were concerned that the site might be infected with malware, so our AWS manager shut down the port just in case it was being used to relay spam.”

This was exactly the right precaution to take when compromise was suspected. Now that the security audit had ruled out malware, they reopened the port. The approximately 18 recently queued messages with the old sender address flushed out — all destined for [nonprofit-domain].org addresses, where they were rejected by Microsoft due to the authentication failure. But after that final batch, the system was clean.

Exporting the Queue

Before flushing the stuck mail queue, I exported all 85 messages to a compressed archive. These contained client referrals with protected health information. The nonprofit would need to manually process these to ensure no one fell through the cracks during the period of email failure.


Part Five: Resolution and Follow-Up

Confirming the Fix

With the port reopened and the WordPress configuration updated, I sent test emails. The first attempt revealed one more issue: Postfix wasn’t configured for TLS encryption, meaning emails had been transmitted in plaintext between servers. For a system carrying referrals with names, birth dates, and addresses of elderly residents, that’s a real exposure — anyone positioned to intercept traffic between mail servers could have read the contents. A quick configuration change (smtp_tls_security_level=may) and retest confirmed everything was working:

  • SPF: Pass
  • DKIM: Pass (via Microsoft 365 for replies)
  • DMARC: Pass
  • TLS: Encrypted
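
For context on the smtp_tls_security_level=may fix: “may” is Postfix’s opportunistic mode, meaning use STARTTLS when the remote server offers it and fall back to plaintext when it doesn’t, while “encrypt” makes TLS mandatory. The decision reduces to roughly this (a sketch of the semantics, not Postfix’s actual code):

```python
def should_start_tls(security_level: str, server_offers_starttls: bool) -> bool:
    """Opportunistic ("may") uses TLS only when offered; mandatory ("encrypt")
    refuses to proceed without it; "none" never negotiates TLS."""
    if security_level == "none":
        return False
    if security_level == "may":
        return server_offers_starttls
    if security_level == "encrypt":
        if not server_offers_starttls:
            raise ConnectionError("TLS required but not offered; deferring mail")
        return True
    raise ValueError(f"unknown security level: {security_level}")
```

Opportunistic TLS was the right call here: it encrypts the referral traffic to every modern mail server without risking bounced mail if some legacy receiver lacks STARTTLS.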

The database cleanup was complete: six plugin settings changed from the agency’s domain to info@[nonprofit-domain].org. No code path remained that would send mail from a [agency-domain].com address. The only remaining references were either recipient addresses (who receives notifications) or non-email plugin data.

Blocklist checks across MXToolbox, Spamhaus, SpamCop, Barracuda, and others all came back clean.

Circling Back to the Agency’s Deliverability

With the nonprofit’s email fixed, it was time to address the other half of the original concern. I ran comprehensive blocklist checks on the agency’s domain and server IP across multiple services (MXToolbox, Spamhaus, SpamCop, Barracuda, etc.), and everything came back clean. But blocklist status doesn’t always tell the whole story, so I asked the agency to send test emails to three different inboxes. This would let me verify real-world delivery rather than relying solely on clean technical results.

The results came back, and unfortunately they weren’t what we’d hoped: their emails were landing in spam at Gmail and Outlook, even though all authentication was passing perfectly.

The initial assumption was that the [nonprofit-domain].org misconfiguration had damaged the agency’s sender reputation. But something about that explanation didn’t sit right.

When WordPress had sent emails claiming to be from [agency-domain].com, Gmail and Outlook checked the authentication, saw it failed, and rejected those messages outright per the p=reject DMARC policy. They were never delivered — no one ever saw them in their inbox or spam folder. That’s DMARC working exactly as designed.

If failed authentication attempts caused reputation damage, DMARC would be useless. Every domain would be vulnerable to sabotage by anyone willing to send a few spoofed emails. The system can’t work that way.

Gmail’s spam classification message provided the clue: “previous messages from [agency-domain].com were marked as spam.” That implies actual Gmail users received emails from [agency-domain].com and clicked “Mark as Spam.” You can’t mark something as spam if it was never delivered to you in the first place.

This pointed to a different problem entirely: the reputation damage was likely coming from the agency’s legitimate email, not the [nonprofit-domain].org misconfiguration.

The Limits of Visibility

What makes issues like this notoriously difficult to diagnose is that a sending domain’s “reputation” is not a single score, and it’s not fully transparent. Each major mailbox provider maintains its own internal reputation models, built from dozens of signals evaluated continuously and weighted dynamically. We can observe some of those signals — authentication results, complaint rates, bounce behavior, engagement patterns — and providers publish limited guidance around best practices.

What we don’t have is access to the full model: the exact weight of each signal, how they interact, how quickly they decay, or how they differ by recipient population and traffic context. Those details are intentionally opaque, both to protect the integrity of the filtering systems and to prevent adversarial gaming. As a result, reputation problems are rarely solved by a single configuration change; they require careful inference, controlled adjustments, and time to observe how the system responds.

Google’s spam complaint threshold is 0.3% — just 3 people out of 1,000 clicking “spam” instead of unsubscribing or deleting can trigger penalties. Even a small amount of cold outreach or recipients who habitually mark newsletters as spam (rather than unsubscribing) could explain what we were seeing.

Resolution

After ruling out technical causes — the agency’s DNS was solid, authentication was working correctly, and they weren’t on any blocklists — it seemed the issue was sender reputation driven by recipient engagement on legitimate mail.

I recommended setting up Google Postmaster Tools to get visibility into their domain reputation score and spam complaint rate. It’s free, takes minutes to configure, and would provide the data needed to diagnose the issue definitively. The client’s team declined, preferring to monitor the situation and see if reputation recovered on its own now that the nonprofit’s misconfiguration was resolved.

The engagement ended there. The nonprofit’s email was fixed. The agency’s deliverability issue remained uninvestigated.


Lessons Learned

For Organizations

1. Email configuration doesn’t update itself when relationships change. When an agency builds your site, the email settings they configure make sense for that relationship — they’re managing the site, so sending notifications from their domain is logical. But when the engagement evolves, those settings remain. The admin_email in WordPress determines the “From” address for all system messages. If it still points to your vendor’s domain years later, your emails will fail authentication — and you may never know. This isn’t anyone’s fault; it’s just how handoffs work unless someone explicitly audits the configuration.

2. “It just works” can mask silent failures. Form submissions that display a success message may not actually send email. Without delivery monitoring or test submissions to addresses you control, failures can go unnoticed for months or years.

3. Dead DNS records cause cascading failures. An old MX record pointing to a decommissioned server doesn’t just fail — it causes timeouts that delay or prevent delivery to your working mail server. Audit your DNS periodically.

4. Security tools left behind become liabilities. Adminer is a powerful database tool. Left on a production server with an obscure filename, it’s a ticking time bomb. The only reason it wasn’t exploited was luck.

5. DMARC p=reject protects your reputation from spoofing. When investigating the agency’s subsequent deliverability issues, a key insight emerged: failed authentication attempts from unauthorized servers don’t damage your domain’s reputation if you have a reject policy. Those messages are blocked before delivery — no recipient ever sees them, so no one can mark them as spam. If your domain’s reputation is suffering, look at your legitimate mail first.

For Investigators

1. “Compromised host” alerts deserve investigation, not panic. The DMARC monitoring service was doing its job — flagging unusual authentication failures. But “SPF fail + DKIM fail” can mean malware sending spam, or it can mean a misconfigured legitimate server. The investigation determines which.

2. Ruling out compromise is as valuable as finding it. This engagement found no malware. That’s not a failure — it’s a definitive answer to the client’s question. The comprehensive audit methodology documented exactly what was checked and what was found, providing confidence in the “all clear” verdict.

3. The human stakes matter. Finding stuck referrals for elderly meal recipients changed this from a technical exercise to an urgent operational issue. Understanding what the system is supposed to do helps prioritize remediation.

4. Legacy configurations have context. The wp-config.php file contained a comment: // Added by [Agency]. The original developers had done their job correctly — the configuration was appropriate for when they were actively managing the site. The issue arose not from negligence, but from the natural evolution of a client relationship where the technical details didn’t get revisited.

5. Question your initial hypothesis. When the agency’s own email started landing in spam, the obvious assumption was that the [nonprofit-domain].org misconfiguration had damaged their reputation. But that didn’t hold up to scrutiny: DMARC p=reject means failed messages are never delivered, so they can’t generate spam complaints. Gmail’s error message — “previous messages were marked as spam” — pointed to a different cause entirely. The reputation damage was coming from legitimate mail, not the misconfiguration. Always trace the logic through before committing to an explanation.

6. Free diagnostic tools exist for a reason. Google Postmaster Tools takes minutes to set up and provides definitive data on domain reputation and spam complaint rates. When deliverability issues persist after technical fixes are confirmed, visibility into the actual reputation scores is the logical next step — not waiting and hoping.


Technical Appendix: Audit Methodology

Section 1: Environment & Dependencies

  • Verify required tools (wp-cli, stat, systemctl)
  • Confirm database connectivity
  • Document user context and sudo access

Section 2: Malicious Code Detection

  • Tier 1 scans for high-signal obfuscation patterns
  • Tier 2 scans for medium-signal indicators
  • Whitelist comparison for root PHP files
  • Webshell name pattern matching
  • PHP files in uploads directory
  • Image file PHP injection detection
  • Variable function calls in high-risk directories

Section 3: WordPress Users & Database

  • Administrator account enumeration
  • Recent user creation review
  • Site URL/home URL integrity
  • Options table malware patterns
  • Large option inspection
  • Scheduled task audit

Section 4: File Integrity

  • WordPress core checksum verification
  • Recent PHP file modifications
  • Must-use plugins inspection
  • Drop-in file analysis
  • Theme file review
  • Permission audit
  • Hidden file detection

Section 5: System Persistence

  • SSH authorized_keys (all users)
  • UID 0 and sudo group membership
  • System and user crontabs
  • PHP auto_prepend_file directives

Section 6: Network & Processes

  • Listening services enumeration
  • Established connection analysis
  • Process ownership review

Section 7: Access Logs

  • Sensitive file access history
  • Attack pattern identification
  • Blocked IP correlation
  • User agent analysis

Section 8: Extended Checks (Supplementary)

  • Theme functions.php inspection
  • JavaScript malware detection
  • wp-config.php security audit
  • Mail queue analysis
  • Conditional malware testing
  • External reputation verification

This case study documents a real engagement with client details anonymized. The investigation, methodology, and findings are presented accurately to illustrate the process of security assessment and email deliverability troubleshooting.
