Measuring Email Deliverability

Let me start by saying I’m no mathematician, nor am I a statistician, nor do I play one on TV. So occasionally, I need to look up the definitions of mean and median (cheat sheet: they’re the arithmetic average and the middle point in a series, respectively). I don’t think I’m alone in occasionally mixing up these closely related (but nonetheless distinct) concepts.

Similarly, email deliverability, acceptance rate, seed list testing, and panel data are different ways of measuring delivery of email to the inbox. These metrics are related to one another, but they don’t mean the same thing. So, what exactly do they mean? And how can email marketers use these different approaches to evaluate the success of their campaigns?

Deliverability is the broadest of these terms. Deliverability is a fundamental metric for email marketers and other senders, because there’s no chance for a recipient to open, read, and respond to an offer if the email never arrives. Full stop.

Moreover, deliverability seems straightforward. If I send 1,000 emails, and my server’s log files say 900 of those were accepted by receiving systems, then my deliverability is 90%. Simple, right? Yes… but no. What this figure really represents is the message acceptance rate. That’s because all the server knows is that the receiving system took the message, but not what was done with it. Did it go to the inbox? The spam folder? The sender really doesn’t know, because in both cases, the SMTP transaction is logged as a successful “250 OK”—SMTP itself doesn’t differentiate spam from legitimate email. That ambiguity is why server-side measures of message acceptance are just a starting point.
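
To make the arithmetic concrete, here’s a minimal Python sketch of computing an acceptance rate from delivery logs. The log entries and field names are invented for illustration; a real MTA’s logs will look different, but the calculation is the same.

```python
# Minimal sketch: computing message acceptance rate from delivery logs.
# The log format here is hypothetical; substitute your MTA's actual fields.

log_entries = [
    {"recipient": "a@example.com", "smtp_code": 250},  # accepted... but inbox or spam? We can't tell.
    {"recipient": "b@example.com", "smtp_code": 250},
    {"recipient": "c@example.com", "smtp_code": 550},  # rejected (hard bounce)
]

# Count 2xx responses as accepted deliveries.
accepted = sum(1 for e in log_entries if 200 <= e["smtp_code"] < 300)
acceptance_rate = accepted / len(log_entries)

print(f"Acceptance rate: {acceptance_rate:.1%}")  # 66.7% here; 900/1,000 = 90% in the example above

# Note: a 250 OK only means the receiving server took the message.
# It says nothing about whether it landed in the inbox or the spam folder.
```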

Acceptance rates don’t say anything directly about inbox placement. And inbox placement is what we really care about, for the simple reason that the inbox is overwhelmingly the most likely place a recipient will actually read our email. It’s safe to say that recipients who take the time to hunt through the unsavory contents of a spam folder for a legitimate marketing offer are few and far between.

Addressing the question of inbox placement is why companies like 250OK, Return Path, and IBM Email Optimization offer seed list testing services. Here’s how it works: a sender includes special “seed” (test) email addresses at various ISPs among the recipients of their campaign. The seed list service providers then monitor those accounts with tools that determine where the email landed in each seed account: the inbox, the spam folder, or perhaps nowhere at all. Because seed lists employ a known (and relatively small) set of addresses, they can give a definitive answer about a particular campaign’s performance to those specific addresses. However, the information learned from seed lists should be considered directional at best; they can’t give you a comprehensive assessment of performance.
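
For illustration, here’s a rough Python sketch of how seed list results might be tallied. The seed addresses and placement verdicts are hypothetical; in practice, the verdicts come from the seed provider’s monitoring tools, not from your own code.

```python
# Rough sketch: tallying seed list placement results for one campaign.
# Addresses and verdicts are invented; a real provider reports these via its own tools.
from collections import Counter

seed_addresses = [
    "seed1@gmail-seed.example",    # hypothetical seed accounts at various ISPs
    "seed2@yahoo-seed.example",
    "seed3@outlook-seed.example",
]

# Placement verdict per seed address, as a provider might report it.
placements = {
    "seed1@gmail-seed.example": "inbox",
    "seed2@yahoo-seed.example": "spam",
    "seed3@outlook-seed.example": "missing",  # never arrived
}

tally = Counter(placements[addr] for addr in seed_addresses)
inbox_rate = tally["inbox"] / len(seed_addresses)
print(f"Seed inbox placement: {inbox_rate:.0%} ({dict(tally)})")

# Directional only: a handful of seeds says little about thousands of real recipients.
```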

That’s where one more way to measure deliverability performance really becomes important: panel data. While seed lists use definitive results for specific email messages at a small number of addresses to extrapolate overall campaign performance, panels take something of the reverse approach. Panel providers like eDataSource monitor millions of real-world recipient inboxes (the owners of said mailboxes have agreed to participate in the research, by the way!) and aggregate data about message characteristics and performance over time. Thus, while seed lists are good leading indicators of the efficacy of a particular campaign, panel data is best for assessing broad slices of real-world performance. This includes overall message volume by sender, the behavior of senders in responding to bounces and feedback loops, and aggregate inbox placement across campaigns and time. Panel data is a powerful way to take in the big picture and to compare senders.
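
As a toy illustration of the difference in approach, here’s a Python sketch that aggregates panel-style records by sender over time. The records and field names are invented; real panel providers work with millions of mailboxes and far richer data.

```python
# Toy illustration: panel-style aggregation of many real mailboxes,
# rolled up by sender and week. Records and fields are invented for the example.
from collections import defaultdict

panel_records = [
    {"sender": "brand-a.example", "week": "2020-W01", "folder": "inbox"},
    {"sender": "brand-a.example", "week": "2020-W01", "folder": "spam"},
    {"sender": "brand-a.example", "week": "2020-W02", "folder": "inbox"},
    {"sender": "brand-b.example", "week": "2020-W01", "folder": "inbox"},
]

# Accumulate inbox counts and totals per (sender, week).
totals = defaultdict(lambda: {"inbox": 0, "total": 0})
for r in panel_records:
    key = (r["sender"], r["week"])
    totals[key]["total"] += 1
    totals[key]["inbox"] += r["folder"] == "inbox"

# Longitudinal view: aggregate inbox placement per sender, per week.
for (sender, week), t in sorted(totals.items()):
    print(f"{sender} {week}: {t['inbox']}/{t['total']} inbox "
          f"({t['inbox'] / t['total']:.0%})")
```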

In short: email deliverability is a broad concept. It can be measured in multiple ways, including reporting message acceptance rates, performance to seed lists, and aggregate behavior as measured by panel data. All three methods of testing and measuring deliverability can be useful. All three also have their limitations.

When I consulted for email senders, I advised my clients to use a mixture of the three. Acceptance rates are the bluntest measure of whether systems are technically working, but they don’t say much of anything about message performance per se. Seed lists, on the other hand, provide a quantifiable way to see how a particular campaign is performing (and how specific tweaks to content, templates, etc., affect performance), but they don’t give much of a big-picture view. And panel data analysis gives you good insight into how you’re doing relative to your industry and provides longitudinal benchmarks across campaigns over time.

(In fact, we use a combination of all of these methods to monitor different aspects of our own email deliverability performance at SparkPost. Empirical evidence is something we’re proud to stand behind when we talk about SparkPost’s inbox placement rate.)


However you slice it, measuring, testing, benchmarking, and tracking inbox performance are critical to the success of every sender. Getting a message to the inbox is just the beginning of your customer conversation—but it’s the only way it can get started.
