The Cost of Sending to Everyone
A Case Study in Email Deliverability, Apple Mail Privacy Illusions, and the Gmail Blind Spot That Cost a Nonprofit Its Inbox
Introduction: Two Million Emails a Year and Falling
The organization — a U.S.-based nonprofit — depended on email the way most nonprofits do: for everything. Newsletters, trip updates, interest-based communications, fundraising appeals, event invitations, podcast announcements…email was the connective tissue between the organization and its community of supporters.
And it was breaking.
The symptoms were subtle at first. Open rates that had once hovered around 31% on the flagship newsletter had been quietly sliding for over a year. By late 2025, they were down to 21%…same content, same sending patterns. Fundraising emails to the full contact list were performing worse with each cycle. Supporters who had donated faithfully for years were telling staff they hadn’t seen an email from the organization in months.
When I was brought in, the organization was sending from two platforms simultaneously: a legacy email marketing platform handling 92% of volume, and a newer CRM-integrated platform handling the rest. However, there was no engagement-based segmentation on either. The combined mailing list held roughly 15,000 contacts, and on any given send to the full list, 75–80% of recipients never opened the email.
Nearly two million emails a year were going out, but the vast majority were either landing in spam folders or being ignored entirely, and every one of those ignored emails was quietly teaching Gmail that this organization sent mail nobody wanted.
Part One: Ruling Out the Usual Suspects
Before examining sending behavior, I needed to confirm that the deliverability problems weren’t rooted in something more fundamental. A misconfigured DNS record, a compromised sending IP, or an authentication failure can tank inbox placement regardless of how good the content or segmentation is, so the first phase of the engagement was a full infrastructure audit.
DNS Authentication Cleanup
An audit of the organization’s DNS authentication records turned up several issues that needed correcting.
Two unauthorized IP addresses were sitting in the SPF record: the first traced to an unconfigured address in the UK with no reverse DNS, and the second traced to a cloud platform instance that appeared on six blacklists. Neither had any business being there, and neither showed up in DMARC report data as an active sender, confirming both were stale entries of unknown origin. They were removed.
Three obsolete DNS records were left behind from previous hosting environments: an old 1024-bit DKIM key from a pre-migration provider, a legacy DomainKeys policy record that predates modern email authentication entirely, and an autodiscovery record pointing to a hosting platform the organization no longer uses. All were cleaned up.
The organization’s DMARC policy was set to “none,” meaning receiving servers were told to report failures but take no action. The previous DMARC monitoring subscription had expired months earlier, leaving the organization blind to what was happening with its email authentication.
DMARC Forensics: Reading the Raw Reports
With no active monitoring tool in place, I couldn’t simply pull up a dashboard. Instead, I downloaded and manually analyzed dozens of DMARC aggregate reports.
The analysis covered reports spanning multiple weeks and identified 172 unique IP addresses claiming to send email as the organization’s domain. I traced each one, and the results confirmed that virtually all observed sending activity came from known, legitimate services: the CRM-integrated email platform, the legacy email platform, the organization’s hosted email provider, and relay infrastructure from their productivity suite; no unauthorized senders were detected. The two suspicious IPs I’d removed from the SPF record weren’t present in the report data either, confirming they were stale entries — not active threats.
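The manual triage described above can be sketched in a few lines. This is a minimal illustration, assuming the standard DMARC aggregate-report XML layout; the IP addresses, the `AUTHORIZED_IPS` set, and the helper name are all hypothetical, and a real audit would match source IPs against each provider's published ranges rather than exact addresses.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical set of known-good sending infrastructure for illustration.
AUTHORIZED_IPS = {"203.0.113.10", "198.51.100.25"}

def summarize_dmarc_report(xml_text: str) -> dict:
    """Tally message volume per source IP from one DMARC aggregate report,
    flagging any IP that is not in the authorized set."""
    root = ET.fromstring(xml_text)
    volume = Counter()
    for record in root.iter("record"):
        ip = record.findtext("row/source_ip")
        count = int(record.findtext("row/count", default="0"))
        volume[ip] += count
    unknown = sorted(ip for ip in volume if ip not in AUTHORIZED_IPS)
    return {"volume": dict(volume), "unknown_senders": unknown}

# Minimal synthetic report (real reports also carry policy and
# authentication-result sections, omitted here).
sample = """<feedback>
  <record><row><source_ip>203.0.113.10</source_ip><count>1200</count></row></record>
  <record><row><source_ip>192.0.2.99</source_ip><count>3</count></row></record>
</feedback>"""

print(summarize_dmarc_report(sample))
```

Running this across every downloaded report and merging the counters reproduces the core of the analysis: a per-IP volume table that can be traced sender by sender.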
This was painstaking work. Manual XML analysis means reviewing records one by one rather than relying on the automated correlation and dashboards that dedicated DMARC monitoring tools provide, but it gave me confidence in the answer: the authentication infrastructure was not the problem.
Blocklist Check
I checked the domain and all associated infrastructure against 61+ blacklists and everything came back clean. The deliverability problems were not caused by traditional DNS-based blocklisting — they were entirely at the mailbox provider level, driven by complaint rates and engagement signals. There were no delisting requests to file, no blacklist appeals to make.
Phase 1 established the foundation. The DNS infrastructure was clean. Authentication was passing. No one was impersonating the domain. Whatever was killing this organization’s deliverability, it wasn’t technical. It was behavioral.
Part Two: Seeing What Gmail Sees
With the infrastructure confirmed clean, I turned to the question of how Gmail was actually evaluating the organization’s email. I registered the domain with Google Postmaster Tools, which gave us access to the same reputation data Gmail’s algorithms use to decide what goes to the inbox and what goes to spam.
The picture was worse than anyone expected.
A Reputation Stuck in Second Gear
Google rates sender domains on four tiers: High, Medium, Low, and Bad. At High, your email reliably reaches the inbox. At Medium, Gmail delivers to recipients who regularly engage with your mail but starts filtering messages to spam for anyone who hasn’t recently interacted. At Low and Bad, most email goes straight to spam regardless of who it’s addressed to.
The organization’s domain had been rated Medium every single day for at least 120 days. Not fluctuating — stuck…four months of flatline. This meant that even emails to their most loyal supporters were being evaluated with suspicion, and emails to anyone who hadn’t recently engaged were likely going straight to spam. The donors who said they weren’t seeing emails? Gmail was almost certainly burying them.
The damage wasn’t limited to bulk campaigns. One active supporter (someone deeply involved in the organization’s ministry) had stopped receiving all email from the organization for over a year. Not just newsletters or fundraising appeals but private, one-on-one replies from staff. The organization could receive his emails just fine; their replies simply never arrived. He had two separate email addresses, both affected. Staff had no idea their direct correspondence was vanishing until he mentioned it. This wasn’t a case of someone ignoring a newsletter, but rather Gmail quietly deciding that the organization’s domain wasn’t trustworthy enough to deliver even personal messages.
That’s what Medium reputation looks like in practice.
Spam Complaint Rates: 46 Times the Limit
Google’s acceptable spam complaint rate is 0.3% — three complaints per 1,000 recipients. The organization’s actual numbers were staggering:
On one day in mid-November 2025, the spam complaint rate hit 13.7% — forty-six times Google’s threshold. A week earlier, it had been 9.2%. Through December, spikes of 4–7% were routine. Even the “quiet” days in January were hitting 0.7%, still more than double the acceptable limit.
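The arithmetic behind those multiples is simple enough to show directly. A small sketch, using Google's published 0.3% threshold and the rates reported above:

```python
# Spam complaint rate = complaints / delivered messages.
# Google's published acceptable threshold is 0.3% (0.003).
GOOGLE_THRESHOLD = 0.003

def times_over_limit(rate: float) -> float:
    """How many multiples of Google's 0.3% limit a complaint rate represents."""
    return rate / GOOGLE_THRESHOLD

print(round(times_over_limit(0.137)))  # the mid-November spike
print(round(times_over_limit(0.007), 1))  # a "quiet" January day
```

Even the quiet days were more than double the limit; the spike days were over forty times it.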
The pattern was unmistakable: every spike corresponded to a full-list blast from the legacy email platform — 15,000 recipients, no engagement filtering, two to four times per week. During the worst months, weekly send volume from that platform alone exceeded 60,000 emails.
One bright spot: February 2026 was dramatically cleaner. When the large blasts slowed, spam rates dropped to near zero for almost the entire month. This was the clearest evidence that the domain’s reputation wasn’t permanently damaged…it was being actively damaged by specific sends to specific people.
The Proof in the Data
I pulled the complete campaign history from both platforms: 261 campaigns from the legacy platform and 378 from the newer CRM-integrated platform. In total, 639 campaigns and over two million messages. The analysis confirmed what Postmaster Tools was showing.
The legacy platform was responsible for 1,927,867 emails in the past year and carried a weighted average open rate of 9.8% across all campaigns. More than nine out of ten emails sent through this platform went completely unopened. The 127 campaigns that went to more than 10,000 recipients accounted for 91% of all sending volume, and not a single one of those full-list blasts exceeded a 25% open rate.
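A note on method: the 9.8% figure is a volume-weighted average, so one enormous low-performing blast cannot be averaged away by many small sends. A minimal sketch of the computation, with illustrative numbers rather than the actual campaign data:

```python
def weighted_open_rate(campaigns: list[dict]) -> float:
    """Volume-weighted open rate: total opens over total delivered,
    rather than a simple mean of per-campaign rates."""
    delivered = sum(c["delivered"] for c in campaigns)
    opens = sum(c["delivered"] * c["open_rate"] for c in campaigns)
    return opens / delivered

# Illustrative: one full-list blast dominates a small targeted send.
campaigns = [
    {"delivered": 14000, "open_rate": 0.08},  # full-list blast
    {"delivered": 1000,  "open_rate": 0.45},  # targeted trip send
]
print(round(weighted_open_rate(campaigns), 3))
```

A simple mean of those two rates would report 26.5%; the weighted figure of roughly 10.5% is the honest one, because it reflects what actually happened to each delivered message.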
Meanwhile, the newer platform told a different story. Its trip communication system (small, targeted sends to people who had signed up for specific trips) was averaging a 46% open rate across 139 campaigns, with some trip-specific sends hitting 70–100%. The platform also had 712 carefully built segments, almost all trip-specific, and none of its 198 scheduled campaigns targeted the full contact list.
The infrastructure for responsible email sending already existed; it just wasn’t being applied to the organization’s largest campaigns.
The core issue wasn’t the content of the emails. It was who they were being sent to. The engaged supporters who opened, read, and clicked were being penalized because the same emails were also going to thousands of contacts who had long since stopped paying attention.
Part Three: The First Fix and the March Spike
Armed with the data, the first recommendation was straightforward: stop sending to people who aren’t reading. The path forward was engagement-based segmentation: building defined audience segments that would filter every send to contacts who had demonstrated recent activity. But before that work was finished, the organization needed to send a newsletter.
The First Test
On February 27th, the organization had a newsletter ready to go that included a time-sensitive trip fundraising campaign. The plan was to send it to the full list through the legacy platform, as usual. I recommended filtering instead: send it to contacts who had opened in the last 90 days rather than blasting all 15,000. The actual send went out to roughly 9,200 recipients using a six-month open window on the legacy platform, a looser filter than I’d suggested, but still cutting nearly 6,000 of the least active contacts from the list. It wasn’t a formal segment, just the quickest improvement available for a send that couldn’t wait.
The Proof of Concept
A few days later, on March 3rd, I ran a more controlled test. I had built two base segments in the CRM-integrated platform using email opens as the engagement signal with a 90-day lookback window. Base A was for informational sends: newsletters, event invitations, trip menus, podcasts. Base B was for donation solicitations, with override rules that kept donors and past trip travelers in the audience even if they hadn’t opened recently, on the reasoning that someone who has given money or traveled on a trip has demonstrated commitment that goes beyond email behavior.
The organization’s flagship biweekly newsletter — which had been averaging an 11.5% open rate on the legacy platform — was sent through the newer platform to the approximately 3,900 contacts in the Base A segment. The audience was smaller than the February 27th send because it reflected only the newer platform’s own 90-day engagement data, not the legacy platform’s broader six-month window.
The final results: over 60% open rate, zero spam complaints, a 0.5% bounce rate, seven unsubscribes. The same newsletter with the same content, sent to a fraction of the audience, and open rates increased more than fivefold.
The segmentation strategy was working exactly as expected. Then, a week later, I checked Postmaster Tools and the picture changed.
The Spike That Shouldn’t Have Been There
On March 10th, I pulled up Google Postmaster Tools and found a spam complaint rate of 5.3% recorded on March 2nd, the day before the proof of concept.
Google Postmaster records complaints on the day the recipient reports spam, not the day the email was sent. A send on a weekday typically generates complaints over the next two to five days as recipients work through their inboxes. Tracing back from March 2nd, the most likely trigger was a send on February 27th: roughly 9,200 recipients, filtered to contacts who had opened at least one email in the last six months.
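Tracing a complaint spike back to its originating send is just a lag-window filter over the send calendar. A small sketch of that reasoning, using the dates from this investigation and an assumed zero-to-five-day complaint lag (the function name is illustrative):

```python
from datetime import date, timedelta

def candidate_sends(report_day: date, send_dates: list[date],
                    min_lag: int = 0, max_lag: int = 5) -> list[date]:
    """Return the sends whose complaints could plausibly land on report_day,
    assuming complaints arrive zero to five days after delivery."""
    return [d for d in send_dates
            if timedelta(days=min_lag) <= report_day - d <= timedelta(days=max_lag)]

sends = [date(2026, 2, 20), date(2026, 2, 27), date(2026, 3, 3)]
print(candidate_sends(date(2026, 3, 2), sends))
```

For the March 2nd spike, only the February 27th send falls inside the window — the March 3rd proof of concept hadn’t happened yet, and February 20th is too far back.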
That send had already cut nearly 6,000 of the least active contacts from the list, but it still produced a spam rate almost eighteen times Google’s threshold.
The Counterintuitive Finding
Here’s where the data got strange. The February 27th filtered send actually produced a higher spam rate than the full 15,000-recipient blasts that had preceded it.
The explanation has to do with how Gmail makes delivery decisions. Gmail doesn’t apply a blanket verdict to every email from a domain…it evaluates each recipient individually, based on that person’s own engagement history with the sender. On the full-list blasts, the extra 6,000 contacts who hadn’t registered an open in months had little to no engagement history. Gmail was quietly filtering many of them to spam before they ever saw the message. They couldn’t report spam on something they never received in their inbox, and they weren’t counted in Postmaster’s complaint math.
When the organization sent only to contacts who had “opened in the last six months,” they were sending to people Gmail had more individual reason to trust, so a higher percentage of emails reached the inbox. But many of those “opens” were false positives. The recipients appeared engaged in the data but weren’t actually reading. The filtered send put the email in front of more disinterested people than the full-list blast had, and more of them reported it.
Segmenting the list had made the spam rate worse — not because segmentation was wrong, but because the engagement signal it relied on was lying.
The answer forced a deeper examination of something I’d been treating as a solved problem: what “engaged” actually means.
Part Four: The Ghost in the Open Rate
To understand why the segmented list was still generating spam complaints, I had to confront a problem that’s been quietly undermining email marketing data for years.
Apple’s Privacy Curtain
In 2021, Apple introduced Mail Privacy Protection in the Mail app on iPhones, iPads, and Macs. The feature automatically pre-loads email tracking pixels for every message delivered to an Apple Mail user, regardless of whether they actually read the email. From the sender’s perspective, this makes it look like the recipient opened the message. The open gets logged, the engagement metric ticks up, and the contact stays in the “active” segment.
The implications for the organization’s segments were immediate. A contact using Apple Mail who hadn’t genuinely read a single email in two years would still register as “opened in the last 90 days” on every single delivery, because Apple’s system fires the tracking pixel automatically. That contact would remain in Base A and Base B and continue receiving campaigns…even though they might be the very person clicking “Report Spam” in Gmail.
Wait. Gmail?
The Blind Spot
This is where the investigation took its most important turn.
Most major inbox providers (Yahoo, Outlook, AOL, and others) operate what are called feedback loops. When a recipient marks an email as spam in Yahoo Mail, for example, Yahoo sends a report back to the sending platform identifying the specific contact who complained. The platform then automatically suppresses that contact, removing them from future sends. It’s a self-correcting safety net: the moment someone flags your mail as spam, they stop receiving it.
Gmail doesn’t participate in feedback loops.
When a Gmail user marks an email as spam, Google doesn’t tell the sender who it was. The only signal is the aggregate spam complaint rate visible in Postmaster Tools, the number that was spiking to 5.3% and higher. The organization could see that Gmail users were complaining, but there was no way to identify or suppress the individual complainers.
This created a critical asymmetry hiding inside the mailing list:
Non-Gmail contacts were protected by the feedback loop. Even if some of their “opens” were phantom Apple Mail activity, the system was self-correcting. If a phantom engager reported spam, the feedback loop caught them and they were automatically removed; the risk resolved itself.
Gmail contacts had no such protection. Apple Mail phantom opens kept them classified as “engaged” in the segments. There was no feedback loop to catch complainers. Unengaged Gmail users who appeared active based on opens could keep reporting spam indefinitely, and the organization had no mechanism to identify or stop them.
And Gmail contacts made up 61% of the entire mailing list.
The spam spikes weren’t coming from a failure of segmentation, but from a blind spot in how engagement was being measured — concentrated in the one provider where the consequences were invisible and unrecoverable.
The key insight: the fix didn’t require applying uniformly strict criteria across the entire list. The risk was concentrated at Gmail, where there was no feedback loop protection. For non-Gmail providers, opens were a workable engagement signal because the feedback loop provided the safety net. This opened the door to something I hadn’t seen widely discussed in deliverability literature: provider-aware segmentation.
Part Five: Two Lists, Two Standards
The concept was simple in principle and somewhat unusual in practice: instead of applying the same engagement criteria to every contact, the segments would apply different criteria depending on the recipient’s inbox provider.
Gmail Contacts: Trust Clicks, Not Opens
For Gmail contacts, opens were unreliable and unprotectable. The only engagement signal that couldn’t be faked by Apple Mail Privacy Protection was a click…a deliberate action that requires a human being to interact with the email content.
Gmail contacts would qualify for the active segments only if they had clicked any link in any campaign in the last 12 months. At two to four campaigns per month, that meant 24–48 opportunities over a year to click a single link — a very generous test. But if a Gmail user hadn’t clicked anything in a full year of emails, there was no reliable way to confirm they wanted the mail and no feedback loop to catch them if they didn’t.
Non-Gmail Contacts: Opens Are Fine
For non-Gmail contacts the original 90-day open window remained appropriate. Even if some opens were phantom Apple Mail activity, the feedback loop acted as a safety net because any contact who reported spam would be automatically identified and suppressed before the next send. The feedback loop did the work that click-based criteria needed to do for Gmail.
The Hygiene Layer
Underneath both base segments, I built a hygiene layer that determined whether a contact was eligible for email at all.
For Gmail, this created three tiers. Active contacts (clicked in the last 12 months) were eligible for all campaigns. Re-Engagement contacts (showing opens in the last 90 days but no clicks in 12 months — the phantom open zone) would need to be confirmed through a re-engagement sequence before returning to the regular segments. Dormant contacts (no opens, no clicks) would be suppressed immediately, with no re-engagement attempt. Without a feedback loop, even a carefully throttled re-engagement email to a truly disengaged Gmail user risks generating a complaint that can’t be traced.
For non-Gmail, it was simpler: two tiers. Active contacts (opened in the last 90 days) were eligible for campaigns. Dormant contacts (no opens in 90 days) would go through a re-engagement sequence, and because the feedback loop makes this safe, the sequence could be longer and more persistent than anything advisable for Gmail contacts.
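The tier logic above reduces to a small decision function. This is a minimal sketch, assuming the contact record already exposes the two engagement flags described (the field and function names are illustrative, not the platform’s API):

```python
from dataclasses import dataclass

@dataclass
class Contact:
    email: str
    clicked_within_12mo: bool
    opened_within_90d: bool

GMAIL_DOMAINS = {"gmail.com", "googlemail.com"}

def classify(contact: Contact) -> str:
    """Provider-aware hygiene tiers: Gmail contacts are judged on clicks,
    because opens can be Apple Mail phantoms and there is no feedback loop
    to catch complainers; other providers are judged on opens, with the
    feedback loop as the safety net."""
    domain = contact.email.rsplit("@", 1)[-1].lower()
    if domain in GMAIL_DOMAINS:
        if contact.clicked_within_12mo:
            return "active"
        if contact.opened_within_90d:
            return "re-engagement"   # the phantom-open zone
        return "suppressed"          # dormant: no re-engagement attempt
    return "active" if contact.opened_within_90d else "re-engagement"

print(classify(Contact("supporter@gmail.com", False, True)))
```

Note the asymmetry in the final branch: a dormant non-Gmail contact still gets a re-engagement attempt, because the feedback loop makes that attempt safe.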
Bridging the Data Gap
There was one more complication. Because the organization was mid-migration between platforms, engagement history was split across both systems. A contact might have been clicking links in legacy platform campaigns that the newer platform had no record of. Using only the newer platform’s click data for Gmail contacts would incorrectly classify legitimately active contacts as unengaged.
To solve this, I ran a filtered export from the legacy platform of all subscribed contacts who had clicked any campaign in the last 12 months, tagged them, and imported them into the newer platform. This ensured the Gmail segments captured engagement from both platforms. As the organization sent exclusively through the new platform going forward, this imported data would age out naturally.
Part Six: The Cross-Platform Forensics
Building the segments required a level of cross-platform data analysis that went significantly beyond what either platform’s built-in tools could provide.
Deduplicating 15,000 Contacts Across Two Platforms
The legacy platform held over 15,000 subscribed contacts. The newer platform held roughly 13,500. Using email address as the join key, I performed a contact-level cross-reference that revealed the two lists were largely the same people — about 86% overlap. The combined deduplicated mailable list totaled just under 15,700 unique addresses.
Of those, roughly 8,900 had opened at least one email on either platform in the last six months. The remaining 6,800 — nearly 43% of the combined list — had zero opens across both platforms in that same six-month window. That was the dead weight, generating disengagement signals on every send.
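The cross-reference itself is a join on normalized email address, with engagement treated as “opened on either platform.” A minimal sketch of that merge logic, with toy data standing in for the real exports:

```python
def merge_lists(legacy: dict[str, bool], newer: dict[str, bool]) -> dict[str, bool]:
    """Join two platform exports on lowercased email address. A contact
    counts as a recent opener if either platform recorded an open
    inside the lookback window."""
    merged: dict[str, bool] = {}
    for source in (legacy, newer):
        for email, opened in source.items():
            key = email.strip().lower()
            merged[key] = merged.get(key, False) or opened
    return merged

# Toy exports: mixed-case duplicates across platforms.
legacy = {"A@example.org": True, "b@example.org": False}
newer  = {"a@example.org": False, "c@example.org": True}
merged = merge_lists(legacy, newer)
print(len(merged), sum(merged.values()))
```

Case normalization matters here: without it, `A@example.org` and `a@example.org` would double-count, inflating both the unique total and the dead-weight estimate.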
587 Campaigns, One Pattern
The full analysis spanned 587 campaigns from the legacy platform sent between March 2024 and March 2026. One finding shaped the entire segmentation strategy: the main send list had grown from roughly 10,700 to roughly 15,400 contacts over two years, but engagement hadn’t grown with it. On the flagship newsletter, open rates held steady around 31% when the list was at 11,000. Once the list crossed 13,000 in early 2025, open rates started dropping, hitting 21% by late 2025. The same content, the same sending patterns; the only difference was more unengaged contacts diluting the results.
The data also revealed a stark and consistent divide between the engaged and unengaged portions of the audience. The organization frequently sent the same campaign as two versions: one to donors and past trip travelers (typically 1,000–1,600 recipients) and one to everyone else (typically 12,000–14,000 recipients). On the flagship retreat invitation series, donors opened at 14–30% while non-donors opened at 5–7%. A holiday newsletter: donors at 24%, non-donors at 13%. Trip fundraising letters: donors at 16%, non-donors at 6%. A consistent two-to-three times engagement gap.
And the abuse complaint data matched perfectly. Campaigns sent to donors and travelers consistently showed 0.00% abuse. The same campaigns sent to non-donors and non-travelers generated nearly all of the abuse complaints.
The routine use of resends compounded the damage. The organization regularly resent campaigns to recipients who hadn’t opened the original, which by definition targeted the least engaged contacts. Every resend in the data showed open rates 30–50% lower than the original. The worst performers were almost exclusively resends: one trip outreach resend achieved a 3.0% open rate on 11,562 recipients. At that rate, 97% of recipients were ignoring the email, sending a concentrated negative signal to Gmail on every delivery.
Part Seven: The Re-Engagement Blueprint
Suppressing unengaged contacts doesn’t mean abandoning them. Many of these people originally opted in; they were former donors, event attendees, website visitors who had signed up years ago, and it makes sense to give them a chance to confirm whether they still want to hear from the organization.
I designed a re-engagement campaign that accounts for the same asymmetry that drives the segmentation strategy. The campaign was mapped out and ready to deploy.
Gmail: Two Emails and Done
Gmail contacts in the Re-Engagement tier will receive a two-email sequence. The first email leads with the organization’s recent impact (concrete accomplishments, stories, funded projects, and the like) with a single clear call-to-action linking to a fuller impact report on the website. The goal is to remind recipients why they subscribed and generate a trackable click. Seven to ten days later, a second email offers a preference center where recipients can adjust their email frequency rather than face a binary subscribe-or-unsubscribe choice.
That’s it, two emails. If a Gmail contact doesn’t click either one, they get suppressed. Without a feedback loop, a longer sequence risks compounding the problem.
Non-Gmail: Three Emails with a Safety Net
Non-Gmail contacts get one additional email: a “final notice” on day 21–25 explaining plainly that the organization is cleaning up its mailing list, with a single “Keep me subscribed” button. The feedback loop makes this safe. If anyone complains during the sequence, they’ll be automatically identified and removed before the next send.
Throttled, Monitored, Careful
Re-engagement sends will be capped at 500 per day total, with Gmail batches limited to 200 per day during the morning window. After each batch, Postmaster Tools will be monitored. If the spam rate exceeds 0.5% on any given day, Gmail sends pause for 48 hours before resuming at a reduced volume.
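The throttling rules above amount to a small daily planner with a circuit breaker. A sketch under the caps and thresholds stated in this section (500/day total, 200/day Gmail, pause Gmail for 48 hours if the spam rate tops 0.5%); the function name and queue representation are illustrative:

```python
def plan_day(gmail_queue: int, other_queue: int,
             yesterday_spam_rate: float, paused_days: int) -> dict:
    """Plan one day of re-engagement sends: 500/day total, at most 200
    to Gmail, and a 48-hour Gmail pause whenever Postmaster's reported
    spam rate exceeds 0.5%."""
    if yesterday_spam_rate > 0.005:
        paused_days = 2                       # trip the 48-hour breaker
    gmail_batch = 0 if paused_days > 0 else min(gmail_queue, 200)
    other_batch = min(other_queue, 500 - gmail_batch)
    return {"gmail": gmail_batch, "other": other_batch,
            "paused_days": max(paused_days - 1, 0)}

print(plan_day(1000, 4000, 0.002, 0))  # normal day
print(plan_day(1000, 4000, 0.007, 0))  # spike day: Gmail pauses
```

On a normal day the Gmail batch fills first and non-Gmail takes the remaining daily budget; on a spike day the whole 500-send budget shifts to the feedback-loop-protected providers while Gmail cools off.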
The non-Gmail track will launch first, deliberately. These contacts have feedback loop coverage, making them the safer population to start with. Gmail re-engagement will launch two weeks later, after the non-Gmail track has established a baseline and confirmed that the domain’s reputation isn’t taking hits from the process.
Industry data for nonprofit re-engagement campaigns targeting borderline contacts typically shows a 5–15% recovery rate. The contacts who don’t respond get suppressed, and that’s a normal, healthy outcome that protects deliverability for the contacts who do respond.
Suppression Isn’t Forever
Contacts who re-engage through non-email channels can be returned to the active list at any time. I also recommended an annual reconnect effort: a short direct mail piece or targeted social media ad aimed at suppressed contacts, giving them a pathway back through a channel other than the email they stopped engaging with.
Part Eight: What Changed
The results came from the segmentation work alone — the provider-aware segments, the engagement-based audience filtering, and the discipline of sending only to contacts who had demonstrated they wanted the email. The re-engagement campaigns haven’t launched yet. Everything that follows is the result of simply changing who receives the emails.
The Numbers
Domain reputation reached High for the first time ever. After being stuck at Medium for at least 120 consecutive days — meaning Gmail was actively filtering the organization’s email to spam for less-engaged recipients — the domain climbed to Google’s highest trust tier. At High reputation, Gmail delivers reliably to the inbox for all recipients, not just those who recently engaged. In over four months of monitoring, the organization had never reached this tier. Now it has.
Open rates doubled and tripled. The flagship newsletter went from an 11.5% average on the legacy platform to over 60% on the proof-of-concept send…and successive campaigns sustained open rates two to three times their historical averages.
Fundraising results improved. The organization reported better-than-average results on their most recent donation campaign, sent to the refined segments. Fewer emails sent, more money raised. The math of engaged audiences is not complicated: when every recipient is someone who actually reads the email, more of them act on it.
Spam complaint rates dropped from catastrophic to negligible. The 5–14% spikes that had been hammering the domain’s reputation for months disappeared. The domain now operates well within Google’s 0.3% threshold, with comfortable margin.
The sending list went from roughly 15,000 contacts to roughly 7,500–8,000. The organization sends half as many emails and gets dramatically better results on every metric that matters — opens, clicks, donations, and the reputation score that determines whether any of those emails reach the inbox at all.
And the re-engagement campaigns haven’t even started yet.
Lessons for Organizations That Depend on Email
A bigger list is not a better list. This is the single hardest thing for organizations to accept, especially nonprofits where every contact represents a potential donor. But the data is unambiguous: the organization’s list grew from 10,700 to 15,400 over two years, and the only thing that grew with it was the damage to deliverability. The engaged core was always roughly the same size. Everything added on top was dilution.
Apple Mail Privacy Protection has broken open rates as an engagement signal — but only for some providers. The phantom open problem is well known. What’s less discussed is that it matters far more for Gmail contacts than for anyone else, because Gmail is the only major provider that doesn’t participate in feedback loops. For Yahoo, Outlook, and AOL contacts, the feedback loop catches phantom engagers who complain. For Gmail, there’s no safety net. If your list is majority Gmail (and most lists are) this distinction should be driving your segmentation strategy.
Provider-aware segmentation isn’t standard practice, but it should be. The idea of applying different engagement criteria to different inbox providers isn’t widely discussed in email marketing. Most segmentation advice treats the mailing list as a single population, but the risk isn’t uniform. Gmail contacts without feedback loop protection need stricter criteria than Yahoo contacts with it. One-size-fits-all segmentation leaves a blind spot exactly where the consequences are highest.
Resends to non-openers are uniquely destructive. Every resend in the data performed 30–50% worse than the original. This makes intuitive sense — you’re explicitly targeting people who already ignored the first email — but the cumulative reputation impact is severe. At 3–4% open rates, a resend is essentially a concentrated signal to Gmail that you send unwanted mail.
Suppression isn’t permanent, and it shouldn’t feel like giving up. Contacts who re-engage through non-email channels can be returned to the active list at any time. The goal isn’t to lose supporters but rather to communicate with them through the channels where they’re actually paying attention.
The CRM sync can be the silent saboteur. The organization’s CRM was automatically pushing every new contact into the email marketing platform with no engagement filter. This meant every new database entry (regardless of whether they’d ever opened an email) immediately started receiving campaigns. The list grew steadily and engagement rates dropped just as steadily. If your CRM syncs to your email platform, check whether engagement is part of the criteria. If it isn’t, your list is growing in a way that actively undermines your deliverability.
Conclusion: The Inbox Is Earned
There’s a tempting narrative in email marketing that more reach equals more impact: send to everyone, cast the widest net, because you never know who might open this one.
The data from this engagement tells the opposite story. For over a year, this organization sent nearly two million emails annually to a list where three-quarters of recipients never engaged. The result wasn’t broader reach; it was Gmail progressively deciding that the entire domain sent unwanted mail, which made it harder to reach even the supporters who wanted every email.
The fix was counterintuitive: send fewer emails to fewer people. Cut the list in half. Apply different standards to different providers based on the actual risk each one carries. Accept that a contact who hasn’t clicked a link in a year probably isn’t going to click the next one, and that continuing to email them isn’t persistence — it’s a reputation tax paid by everyone else on the list.
The organization now sends to roughly half the contacts it used to, its domain reputation is the highest it’s ever been, open rates have doubled and tripled, fundraising results have improved, and the supporters who depend on those emails to stay connected with the work they care about are finally, reliably, seeing them in their inbox.
The inbox isn’t a right…it’s earned. And the way you earn it is by only showing up when you’re wanted.
Identifying details in this case study have been modified to protect client confidentiality. Technical findings, metrics, and methodologies are presented accurately.