Introduction: When the Declines Won’t Stop
The first sign of trouble was the declines. Lots of them.
The client, a regional nonprofit ministry association, noticed something wrong in their Authorize.net dashboard. Their [redacted] donation form was receiving a surge of submissions, all for exactly $500, all being declined by the payment gateway. No money was actually being stolen, but the pattern was unmistakable: someone was testing stolen credit cards against their donation form.
They were worried. Would this damage their email reputation if failure notifications were going out? Would Authorize.net suspend their account for suspicious activity? They knew they needed bot protection, so they attempted to deploy a new donation form protected by Cloudflare Turnstile, but they couldn't get it to display correctly on their site. The old form, the one being attacked, had no protection at all.
What followed was an investigation that revealed nearly two thousand fraudulent attempts over twelve days, a surprisingly amateur attacker with a predictable schedule, and a series of defensive gaps that let the whole thing happen in the first place. Along the way, I discovered why their new form wouldn’t render, and why the protection they’d been trying to deploy had been blocked by an unrelated configuration issue the whole time.
Part One: Understanding the Attack
What Is Card Testing?
Before diving into the technical details, it helps to understand what card testing actually is and why attackers do it.
Credit card numbers get stolen constantly through data breaches, phishing attacks, compromised point-of-sale systems, and countless other methods. A rogue restaurant server snapping photos of customers’ cards during their shift; a retail worker memorizing numbers during checkout; a compromised e-commerce database leaking millions of records at once; the supply of stolen card numbers is endless. But a stolen card number isn’t immediately useful. The attacker doesn’t know if the card is still active, if it’s been reported stolen, if it has available credit, or if it will trigger fraud alerts when used.
Card testing solves this problem. Attackers submit small transactions through legitimate payment forms (often donation forms on nonprofit websites) to see which cards get approved. Cards that pass the test are “validated” and become significantly more valuable on the black market. A validated card might sell for ten times what an untested number fetches.
The damage to the nonprofit isn’t just reputational. High volumes of declined transactions can trigger scrutiny from payment processors. Excessive chargebacks (when cardholders dispute fraudulent charges that did go through) can result in account termination or being placed on industry blacklists. And the operational burden of sorting legitimate donations from fraud consumes staff time that should be spent on the organization’s actual mission.
The Initial Data
I started by requesting an export of the failed donations from GiveWP. What came back was eye-opening.
```
$ analyze-donations --failed --export

CARD TESTING ATTACK ANALYSIS
============================
Total Fraudulent Attempts ...... 1,873
Attack Period .................. January 19–30, 2026 (12+ days)
Unique IP Addresses ............ 1,746
Target Amount .................. $500.00 (99.8%)
Billing Address Submitted ...... 0%

[!] Pattern detected: automated card testing
```
Nearly two thousand attempts, almost all for exactly $500, from over seventeen hundred different IP addresses. And not a single one included billing address information.
That last detail turned out to be the key to stopping everything.
Following the Patterns
The data revealed several distinct signatures that marked this as automated fraud rather than legitimate donation attempts gone wrong.
The email pattern was remarkably consistent. Every single fraudulent submission used an email address in the format firstname.lastname##@domain.com, where ## was a two-digit number. Addresses matching this pattern appeared throughout the dataset. Only four email domains were used: Gmail, AOL, Yahoo, and Outlook.
The names came from a dictionary. The first and last names weren’t random strings — they were common American names pulled from what appeared to be a census list. Michael, John, David, Jennifer. Johnson, Smith, Williams, Brown. Realistic enough to pass a glance test, but the combination of pattern plus volume exposed them.
The IP addresses were sophisticated…and useless to block. Ninety-three percent of IP addresses appeared exactly once. The attacker was rotating through a residential proxy network, using IP addresses that belonged to real home internet connections and mobile carriers. T-Mobile, Comcast, Verizon. These weren’t datacenter IPs that could be easily blacklisted.
The IP distribution told me something important: traditional IP blocking wasn’t going to solve this problem. With 93% of addresses being single-use, any blocklist would be chasing yesterday’s attack while today’s submissions came from entirely new addresses.
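The signals above (patterned emails, single-use residential IPs) are easy to quantify once you have a failed-transaction export. Here's a minimal sketch of that analysis; the column names `email` and `ip` are assumptions about the export format, not anything GiveWP or Authorize.net guarantees.

```python
import re
from collections import Counter

# Matches firstname.lastname## at the four observed free-mail domains.
EMAIL_RE = re.compile(r"^[a-z]+\.[a-z]+\d{2}@(gmail|aol|yahoo|outlook)\.com$")

def profile_failed_donations(rows):
    """Count pattern-matching emails and single-use IPs among failed donations.

    `rows` is a list of dicts with hypothetical keys "email" and "ip";
    adjust the keys to match your actual export.
    """
    ip_counts = Counter(row["ip"] for row in rows)
    pattern_hits = sum(1 for row in rows if EMAIL_RE.match(row["email"].lower()))
    single_use = sum(1 for count in ip_counts.values() if count == 1)
    return {
        "total": len(rows),
        "email_pattern_hits": pattern_hits,
        "unique_ips": len(ip_counts),
        "single_use_ip_pct": round(100 * single_use / max(len(ip_counts), 1), 1),
    }
```

A high `single_use_ip_pct` alongside a high `email_pattern_hits` ratio is the combination that distinguishes this kind of automated testing from a burst of legitimate but failing donations.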
Why $500?
The amount caught my attention. Traditional card testing (the kind most people have heard about) uses micro-transactions: $1.00, $0.99, $2.00. These small amounts fly under the radar, trigger fewer fraud alerts, and minimize damage if the card turns out to be dead. That's still common on retail and e-commerce sites.
But donation forms are different, and attackers know it.
On a nonprofit donation page, $500 isn’t suspicious; it’s normal. Donation forms routinely offer preset amounts like $100, $250, $500, $1,000. A $500 charge doesn’t trigger “micro-transaction testing” heuristics; it doesn’t look like fraud to pattern-matching algorithms; it looks like a generous donor.
The amount sits in a strategic sweet spot: below the thresholds that trigger manual review, above the amounts that scream “card testing,” and squarely within the range that legitimate donors actually give. On a retail checkout, a $500 purchase of nothing would stand out. On a donation form, it’s invisible.
There’s another reason to test at higher amounts. Card testing is about answering one question: does this card work for real transactions? A successful high-value authorization proves the card is valid, active, not locked, and capable of handling significant charges. That card becomes valuable. It can be sold on dark web marketplaces or used for actual fraud elsewhere. A $1 test sometimes passes when higher charges won’t; a $500 success is a stronger signal.
This also explains why nonprofits, churches, and small organizations get targeted disproportionately. They’re running older plugins, lighter monitoring, less aggressive velocity rules, and their fraud tuning is minimal because fraud wasn’t supposed to be their problem. Attackers know this. They seek out the path of least resistance, and a regional ministry association running a legacy donation form is exactly that.
The $500 amount reflects a deliberate calculation rather than a random figure.
Part Two: Profiling the Attacker
Here’s where things got interesting. The timing data painted a remarkably clear picture of who I was dealing with.
The Sleep Schedule
I analyzed when the attacks occurred, hour by hour, across all twelve days of data. The pattern was unmistakable:
| Time Window | Attack Volume | Activity Level |
|---|---|---|
| 10pm – 4am | 21 attempts | Near zero |
| 4am – 9am | 312 attempts | Moderate |
| 9am – 3pm | 376 attempts | Sporadic |
| 3pm – 4pm | 8 attempts | Break |
| 4pm – 6pm | 111 attempts | Building |
| 6pm – 9pm | 1,022 attempts | Peak (55% of all attacks) |
| 9pm – 10pm | 37 attempts | Winding down |
Over half of all attacks occurred between 6pm and 9pm Central time. Activity dropped to nearly nothing between 10pm and 4am. There was even a consistent dip around 3-4pm…an afternoon break.
This wasn't a botnet. It wasn't a sophisticated criminal operation running 24/7 from servers around the world. It was one person, almost certainly based in the Eastern or Central US time zone, running scripts during their evening leisure hours.
They went to bed around 10pm, woke up and occasionally ran the script in the early morning, took breaks in the afternoon, and every evening after what was probably dinner, they’d fire up their card testing operation and let it run while they did whatever else people do with their evenings.
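This kind of timing analysis is simple to reproduce from any log that includes timestamps. A sketch, assuming timestamps are already ISO-8601 strings in the local (Central) time zone; the windowing approach and three-hour width are my choices, not a standard:

```python
from collections import Counter
from datetime import datetime

def hourly_histogram(timestamps):
    """Bucket ISO-8601 timestamps by hour of day (0-23)."""
    return Counter(datetime.fromisoformat(ts).hour for ts in timestamps)

def peak_window(hist, width=3):
    """Return the start hour of the busiest `width`-hour window, wrapping midnight."""
    totals = {h: sum(hist.get((h + i) % 24, 0) for i in range(width))
              for h in range(24)}
    return max(totals, key=totals.get)
```

Running this over the twelve days of data is what surfaced the 6pm-9pm peak and the near-silent overnight hours that pointed to a single domestic operator.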
The Bot Signatures
Server log analysis revealed additional details about the attack methodology. The attacker operated in two phases:
Phase 1: Reconnaissance. The attacker’s script first visited the donation page to scrape the form structure. This request came with a telling user agent: python-requests/2.25.1. They weren’t even trying to hide that this was an automated script.
Phase 2: Submission. The actual donation submissions came with spoofed browser user agents, attempting to look like legitimate Chrome traffic. But the version numbers were fabricated, things like Chrome/54.0.1470.1951 that don’t correspond to any real Chrome release. Real Chrome version numbers follow a specific pattern; these were obviously generated by concatenating random digits.
The submissions included proper referrer headers (claiming to come from the donation page) and attempted to mimic real browser behavior. But the tells were everywhere if you knew where to look.
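Fabricated Chrome versions can be flagged with a simple heuristic. Real Chrome user agents use a zero minor component and comparatively small patch numbers; the thresholds below are rough assumptions of mine, not official Chrome release rules, so treat hits as a signal rather than proof.

```python
import re

CHROME_RE = re.compile(r"Chrome/(\d+)\.(\d+)\.(\d+)\.(\d+)")

def looks_fabricated(user_agent):
    """Heuristic: flag Chrome version strings that violate the real-world
    pattern (minor component is always 0, patch numbers stay small)."""
    m = CHROME_RE.search(user_agent)
    if not m:
        return False  # not claiming to be Chrome at all
    major, minor, build, patch = (int(g) for g in m.groups())
    # Threshold values are illustrative assumptions, tuned loosely.
    return minor != 0 or patch > 500 or not (40 <= major <= 200)
```

The attacker's `Chrome/54.0.1470.1951` fails this check immediately: no real Chrome build has a four-digit patch number.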
Why This Profile Matters
The attacker profile shaped my remediation strategy. A sophisticated 24/7 botnet operation would require different countermeasures than what I was facing. This attacker was using residential proxies (showing some technical awareness) but making basic operational security mistakes (the Python user agent, the fabricated Chrome versions, the obvious email pattern).
More importantly, this profile suggested the attack would likely stop once I added significant friction. A professional operation would adapt, find workarounds, probe for weaknesses in new defenses. An evening hobbyist running semi-automated scripts was more likely to simply move on to an easier target.
That shaped how I prioritized the fixes.
Part Three: Why the Existing Defenses Failed
Before implementing new protections, I needed to understand why the client’s existing security measures weren’t working.
The CAPTCHA That Wasn’t
The client believed they had eventually successfully configured Cloudflare Turnstile to protect their donation forms. They were half right.
Turnstile was installed and configured…on their new donation form, built with GiveWP’s Visual Form Builder. But the form being attacked was a legacy form, built with GiveWP’s older Options-Based form builder. The legacy form architecture didn’t support Turnstile integration.
The client had enabled Turnstile thinking it would protect all their forms. Instead, it protected only the new ones they hadn’t fully deployed yet. The legacy form (the one with the established URL, the one linked from their emails and printed materials, the one that actually received donations) was completely unprotected.
The WAF That Did Nothing
Cloudflare was in place, proxying traffic to the site. But reviewing the security analytics revealed almost no protective activity. Over a 24-hour period with thousands of malicious requests, Cloudflare’s security features had mitigated exactly five.
The site was configured for Cloudflare’s CDN and performance benefits, but no custom security rules had been implemented. The default protections weren’t catching anything because the attack traffic looked superficially normal; it came from residential IPs, spoofed legitimate browser characteristics, and targeted a legitimate endpoint.
Without custom rules tuned to this specific threat, Cloudflare was essentially in pass-through mode.
Part Four: Shutting It Down
With a clear picture of the attack and why existing defenses had failed, I implemented a multi-layered response targeting different parts of the attack chain.
The Kill Switch: Payment Gateway Hardening
The fastest and most effective fix came from the payment gateway itself.
Remember that 100% of fraudulent submissions included zero billing address data? Authorize.net has a feature called Address Verification Service (AVS) that checks whether the billing address submitted with a transaction matches what the card issuer has on file. When no address is provided at all, the transaction returns AVS Code B.
The client’s Authorize.net account was configured to “authorize and hold for review” transactions that triggered AVS Code B. This sounds protective, but it’s not. “Authorize and hold” means the transaction is still sent to the card issuer; the attacker still gets the validation data they’re seeking. The transaction is simply flagged for later human review.
I changed this setting to “Decline”, but that alone wasn’t enough. If the form didn’t collect billing address data, legitimate donors would be declined too. So alongside the gateway change, I ensured the donation form actually required billing address fields. The attacker’s script submitted nothing in those fields; now it would fail validation before even reaching the gateway.
One configuration change at the gateway level, plus required fields on the form, and 100% of the observed attack pattern was blocked at the source.
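The form-side half of that fix is ordinary server-side required-field validation. GiveWP itself is PHP, but the principle is stack-agnostic; here's a minimal sketch in Python, where the field names are hypothetical placeholders for whatever your form actually submits:

```python
# Hypothetical field names; substitute the keys your donation form posts.
REQUIRED_BILLING_FIELDS = ("address_line1", "city", "state", "zip")

def validate_billing(submission):
    """Reject a donation submission server-side when any billing field is
    empty, before it is ever forwarded to the payment gateway."""
    missing = [field for field in REQUIRED_BILLING_FIELDS
               if not str(submission.get(field, "")).strip()]
    if missing:
        return False, "Missing required billing fields: " + ", ".join(missing)
    return True, "ok"
```

The point is that validation happens on the server: a script that never loads the page, like this attacker's, cannot skip it the way it can skip browser-side checks.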
I applied similar changes to CVV (Card Verification Value) settings:
```
$ authnet-config --compare gateway-settings

AUTHORIZE.NET FRAUD SETTINGS
────────────────────────────────────────────────────────
AVS Code B (no address)
  - Hold for review
  + Decline
CVV Code N (mismatch)
  - Hold for review
  + Decline
CVV Code S (not provided)
  - Allow
  + Decline
Velocity Filters
  - Hold for review
  + Decline

[✓] 4 settings updated
```
The velocity filter change was notable. Authorize.net’s logs showed this filter had triggered 480 times during the attack period, clear evidence the attack was reaching the gateway in volume. But because the action was set to “hold,” all 480 transactions were processed and the attackers received their validation data.
Cloudflare WAF: Blocking the Scout
With the gateway hardened, I added Cloudflare rules to stop attacks earlier in the chain.
The most obvious target was the Python user agent appearing in the reconnaissance phase. I created a WAF rule to block any request containing python-requests in the user agent when accessing donation-related paths.
Now to be clear, this rule is trivially easy for a sophisticated attacker to bypass; they just need to spoof a different user agent. But remember the attacker profile: an evening hobbyist, not a professional. Breaking their existing script creates friction. They might adapt, or they might move on. Given the profile, I bet on the latter.
I also implemented a more surgical rule blocking POST requests to the donation processing endpoint that didn’t include a valid referrer header from the client’s domain. This wouldn’t stop attacks that properly spoofed the referrer (as the submissions did), but it provided another layer and caught the reconnaissance requests.
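Both rules can be written in Cloudflare's rule expression language. A sketch under assumptions: the `/donations` path and `example.org` domain below are placeholders, not the client's actual values, and you'd scope them to your own donation endpoints.

```
# Rule 1: block the reconnaissance script's user agent on donation paths
(http.request.uri.path contains "/donations"
 and http.user_agent contains "python-requests")

# Rule 2: block donation POSTs lacking a same-site referrer
(http.request.method eq "POST"
 and http.request.uri.path contains "/donations"
 and not http.referer contains "example.org")
```

Referrer checks like rule 2 are weak on their own, since the header is attacker-controlled, but they're cheap to add and catch the least careful scripts.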
The Form Migration: Closing the Real Gap
Gateway hardening provided immediate protection, but the underlying vulnerability remained: a legacy donation form with no bot protection, no CAPTCHA, no way to distinguish humans from scripts at the application layer.
The client had already built a new donation form using GiveWP’s Visual Form Builder with Turnstile enabled. But it wasn’t deployed; they’d encountered display issues that made the form appear cut off and broken on their site.
Diagnosing the display problem led to a separate but related discovery: Cloudflare’s Rocket Loader feature was breaking GiveWP’s iframe-based form rendering.
The fix required Cloudflare Page Rules to disable Rocket Loader specifically for GiveWP routes:
| URL Pattern | Setting |
|---|---|
| `*/donations/*` | Rocket Loader OFF |
| `*?givewp-route=*` | Rocket Loader OFF |
With Rocket Loader disabled for donation pages, the new form rendered correctly. I also added responsive CSS fixes to handle edge cases where the theme’s mobile breakpoints caused the iframe to collapse.
The client could now deploy their Turnstile-protected form and retire the vulnerable legacy form.
Part Five: Outcomes and Lessons
The Resolution
Once the gateway hardening was in place, the attack stopped. Completely.
The combination of declining transactions without billing addresses and requiring the address field to be filled out on the form meant the attacker’s script simply stopped working. Every attempt would be rejected…not held for review, not flagged for later, just declined. No validation data. No value.
The attacker could have adapted. They could have updated their script to generate fake addresses, tried different amounts, probed for other weaknesses…but they didn’t. Whether they figured they’d been caught, decided it wasn’t worth the effort to retool, or simply moved on to an easier target, the traffic ceased entirely once the protections went live.
That’s consistent with the evening hobbyist profile. A professional operation would have tested the new defenses, looked for gaps, iterated on their approach. Someone running card testing scripts as a side activity during their free time? They move on; there are easier marks.
The client deployed their new Turnstile-enabled form, retired the legacy form, and documented the configuration requirements for their hosting setup (Rocket Loader exceptions, responsive CSS). Their email reputation remained intact; a blacklist check against 70 monitored lists came back clean.
The evening hobbyist, presumably, found someone else’s donation form to abuse.
What This Case Teaches
Frontend security isn’t security. A CAPTCHA that only runs in the browser can be bypassed by anyone who doesn’t load the page. Server-side validation (at the application layer, at the gateway layer, or both) is where real protection happens.
Legacy systems are security debt. The vulnerable form had been in production for years. It worked fine, it collected donations, but it couldn’t be retrofitted with modern protections. Migration was the only real fix, and migration had been deferred because “it works.”
URL obscurity is not protection. Moving pages doesn’t protect endpoints. Attackers target the processing infrastructure, not the frontend presentation.
Gateway configuration deserves attention. Most organizations set up their payment gateway once and never revisit it. Settings that sound protective (“hold for review”) may provide no actual protection against the threats you face.
Attacker profiling informs response. The same technical attack from a professional operation would require different countermeasures than one from an evening hobbyist. Understanding who you’re dealing with helps you allocate resources appropriately.
Logs tell the story. The attack methodology, the bypass technique, the attacker’s schedule…all of this was visible in the data. Without log analysis, I would have been guessing at solutions; with it, I could target specific behaviors with specific countermeasures.
Technical Appendix: Detection Patterns
For organizations wanting to identify similar attacks in their own data, here are the patterns I documented:
Email regex pattern: `^[a-z]+\.[a-z]+\d{2}@(gmail|aol|yahoo|outlook)\.com$`
Bot user agent (reconnaissance phase): `python-requests/2.25.1`
Fabricated Chrome versions (submission phase):
Look for Chrome version numbers that don’t match real releases, particularly patterns like Chrome/XX.0.XXXX.XXXX where the build numbers exceed realistic values.
Billing address indicator: AVS Code B (no address provided) appearing on high volumes of similar transactions.
Timing signature: Near-zero activity during US nighttime hours combined with evening peaks suggests a single domestic operator rather than a distributed botnet.
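The patterns above can be combined into one screening function. A sketch: the field names, the AVS code handling, and the 6pm-9pm window are taken from this case; the overall structure is my illustration, not a product feature.

```python
import re
from datetime import datetime

EMAIL_RE = re.compile(r"^[a-z]+\.[a-z]+\d{2}@(gmail|aol|yahoo|outlook)\.com$")

def fraud_signals(email, user_agent, avs_code, timestamp):
    """Report which documented card-testing indicators a submission matches.

    `timestamp` is an ISO-8601 string assumed to be in local (Central) time.
    Any single True is weak evidence; several together are the real signal.
    """
    hour = datetime.fromisoformat(timestamp).hour
    return {
        "email_pattern": bool(EMAIL_RE.match(email.lower())),
        "python_client": "python-requests" in user_agent,
        "no_billing_address": avs_code == "B",   # AVS Code B: no address given
        "evening_window": 18 <= hour < 21,       # the observed 6pm-9pm peak
    }
```

In this case, every fraudulent submission matched at least two of these indicators, and most matched three or four.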
The client’s donation infrastructure is now protected by layered defenses: gateway-level address verification, Cloudflare WAF rules, and Turnstile-enabled forms with server-side validation. The attack was opportunistic, the attacker was unsophisticated, and the fix was relatively straightforward once I understood what was actually happening.
The harder problem, the one that let this happen in the first place, was the gap between perceived protection and actual protection. The client thought they had a CAPTCHA, they thought changing URLs would help, they thought their gateway settings were secure. None of that was true, and without investigation, they wouldn't have known until something worse happened.
That’s usually how it goes.
Related Reading
- Case Study: The Anatomy of an E-Commerce Breach. When payment security fails, attackers don't just test cards; they steal them.
- Case Study: Click Fraud Investigation. Another case where Cloudflare protections were completely bypassed, and why.
- The Rise of Automated Attacks: Why Small Businesses Are Prime Targets. Learn why small organizations face the same threats as enterprise targets.