Delivery and deliverability debunked

I've been fielding a lot of questions lately about delivery. Sometimes the people asking really mean deliverability. I take a deep breath, keeping in mind that most of the uninitiated still think the two are the same thing, harmonize my chakras, calmly correct them, and help them ask the right questions. Let's start this brief diatribe (I promise to be as succinct as possible) with a few definitions:

Delivery: the act of delivering something. In this case, it's the act of delivering email.

Deliverability: the act of getting email specifically into the inbox, as opposed to landing in the spam folder or being blocked outright for various violations or offenses.


Delivery metrics are generally a precursor to deliverability metrics. Delivery metrics tell marketers how much email was sent and how much arrived. These numbers help marketers understand the cost of their mailings from a CPM standpoint, particularly those who have outsourced their email marketing to a third party.

When we try to get at metrics beyond the raw number of messages sent, and we ascertain where those messages went -- and which ISPs blocked the sending IP -- we're in the realm of deliverability. Deliverability measurements are disputed and don't always jibe. Some mailers will measure deliverability with a formula similar to one of these:
 
Method one: deliverability = (total sent - total soft bounces) / total sent
Method two: deliverability = (total sent - (total soft bounces + total hard bounces)) / total sent
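
To make the difference between the two methods concrete, here's a minimal sketch in Python of both calculations. The counts are made-up placeholders for illustration, not real campaign numbers:

    # Hypothetical campaign counts -- placeholders, not real data
    total_sent = 100_000
    soft_bounces = 2_500   # transient failures (mailbox full, server busy, etc.)
    hard_bounces = 1_200   # permanent failures (dead or invalid addresses)

    # Method one: only soft bounces count against deliverability
    method_one = (total_sent - soft_bounces) / total_sent

    # Method two: both soft and hard bounces count against deliverability
    method_two = (total_sent - (soft_bounces + hard_bounces)) / total_sent

    print(f"Method one: {method_one:.1%}")   # 97.5%
    print(f"Method two: {method_two:.1%}")   # 96.3%

The same campaign looks more than a point "better" under method one, which is exactly why it matters how your provider does the math.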

Right away you can see that deliverability, at this surface level of analysis, is not getting at the inbox versus the spam folder. Email that lands in the spam folder is mail that has been delivered; it simply didn't get to the place you wanted. From the standpoint of a machine (the mail server), an accepted message looks exactly the same whether it's headed for the inbox, the spam folder, or a special folder a consumer has set up.

We're looking strictly at the acceptance rate of email. Above, we have two ways of measuring or ascertaining a deliverability rate. In the first method, we're not counting hard bounces against deliverability. The logic I've heard for this is that hard bounces are messages that shouldn't have been sent in the first place. In the second method, we have a more transparent metric that counts all forms of non-delivered email: both soft bounces and hard bounces.

To those who use the first method, I have to say this: If hard bounces are messages that shouldn't have been sent, or were sent by accident, then why send them at all? There's no such thing as a mailing with no hard bounces -- every campaign will contain some. Users change email addresses, close down accounts, or get banned for abuse by an ISP, and all of these events produce a hard bounce code. The reality here is that bounce codes are a bit of a Wild West. Every ISP does it a bit differently, and there's no true consensus on what should be a hard or a soft bounce code -- or, in the parlance of the RFCs, a "transient" versus a "permanent" failure code.
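
As a rough illustration of that transient-versus-permanent split, many systems start from the SMTP reply code: 4xx responses are treated as soft (transient) bounces and 5xx responses as hard (permanent) bounces. The sketch below shows that baseline mapping; as noted above, real ISPs bend these rules, so treat it as a simplification rather than a universal standard:

    def classify_bounce(smtp_code: int) -> str:
        """Baseline mapping of SMTP reply codes to bounce types.

        4xx codes are transient ("soft") failures that are usually retried;
        5xx codes are permanent ("hard") failures. Production systems also
        parse enhanced status codes and the text of the response, because
        ISPs don't apply these codes consistently.
        """
        if 400 <= smtp_code < 500:
            return "soft bounce (transient)"
        if 500 <= smtp_code < 600:
            return "hard bounce (permanent)"
        return "accepted or unknown"

    print(classify_bounce(421))  # soft bounce (transient)
    print(classify_bounce(550))  # hard bounce (permanent)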

So what's a hapless marketer to do when confronted with so much variation and uncertainty in how marketing metrics are measured?

Ask your mail provider how it measures deliverability. Most email service providers use one of the two formulas above and will substantiate those measurements with specific hard- and soft-bounce rules. If the mailer is doing it right, an address should be suppressed after at least one verified hard bounce. If that's not the case, I suggest having a chat with your email provider: it could be doing your reputation more harm than good by constantly retrying verified dead addresses.
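
The underlying rule is simple even if providers implement it differently. Here's a minimal, hypothetical sketch of hard-bounce suppression; real ESPs layer retry schedules and bounce-code parsing on top of something like this:

    # Simplified, hypothetical suppression logic -- not any particular ESP's implementation
    suppression_list = set()

    def record_bounce(address: str, is_hard: bool) -> None:
        """Suppress an address as soon as a verified hard bounce comes back."""
        if is_hard:
            suppression_list.add(address)

    def should_send(address: str) -> bool:
        """Never retry an address that has already hard bounced."""
        return address not in suppression_list

    record_bounce("dead.mailbox@example.com", is_hard=True)
    print(should_send("dead.mailbox@example.com"))  # False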

Measure your own deliverability and get at underlying issues by employing a deliverability tracking service or mechanism. Seed tracking is a powerful tool in the fight against poor deliverability. Through seed tracking, marketers can determine whether the content of their messages will harm or help their overall deliverability, track the efficacy of in-flight campaigns, determine whether their IPs are suffering from systemic reputation problems, or diagnose deliverability issues as they arise and quickly turn them around to ensure ongoing normal operations.
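
Mechanically, seed tracking boils down to mailing a known panel of addresses and tallying where each copy lands. A toy version of that tally might look like this; the seed addresses and results below are invented for illustration:

    # Hypothetical seed panel results: where each seed saw the message land
    seed_results = {
        "seed01@panel-aol.example":   "inbox",
        "seed02@panel-gmail.example": "inbox",
        "seed03@panel-yahoo.example": "spam",
        "seed04@panel-msn.example":   "inbox",
        "seed05@panel-gmail.example": "missing",  # never arrived at all
    }

    placements = list(seed_results.values())
    inbox_rate = placements.count("inbox") / len(placements)
    spam_rate = placements.count("spam") / len(placements)

    print(f"Inbox placement: {inbox_rate:.0%}")  # 60%
    print(f"Spam placement:  {spam_rate:.0%}")   # 20%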

Set a baseline for your deliverability, and work to achieve small incremental successes. If you have access to deliverability tools and determine that your deliverability is above 98 percent, you won't realistically push it much higher. Seriously, the industry average is about 83 percent. If anyone promises you 100 percent inbox deliverability, run, don't walk, and never look back. If your email is below the industry benchmark, there's definitely room for improvement. Learn this mantra, and repeat it in meetings when someone says you're not sending enough email: Omitting the worst-performing 1 percent of your list can raise your deliverability by 5 percent, largely because repeatedly mailing dead and disengaged addresses drags your sender reputation down with the ISPs. What's 1 percent of deliverability worth to your company? If you don't have an ROI calculator, get one and put real dollars behind your deliverability.

Educate your company on what deliverability is. It's not a stopgap measure for lost revenue. It is the ongoing practice of differentiating your email program as best of breed and helping create brand stickiness through an understanding of how email marketing should look and smell in today's world. Some may think of deliverability as crisis management, but that's because they don't understand how smoothly an email program can run with a disciplined approach.

Yes, it's possible to measure everything, and everyone seems to have a slightly different take on how things get measured and what those measurements mean. At the end of the day, you have to decide what is truly meaningful for your business; I would venture to say that more data is more meaningful than less. Look at everything, and start to analyze things through non-traditional dimensions. For example, what's my deliverability relative to users who open my emails on mobile devices? Am I seeing higher deliverability there? Should I send them mobile-optimized emails? Questions such as these will help tease out important details about the nature of your company's deliverability and help frame the metrics around your delivery.
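
As a sketch of what that kind of slicing can look like, here's a small pandas example that groups per-recipient delivery results by device segment. The column names and values are assumptions made up for illustration, not a standard report format:

    import pandas as pd

    # Hypothetical per-recipient log: device segment and whether the message was accepted
    log = pd.DataFrame({
        "device":    ["mobile", "mobile", "mobile", "desktop", "desktop", "desktop"],
        "delivered": [True, True, False, True, True, True],
    })

    # Deliverability rate by device segment
    print(log.groupby("device")["delivered"].mean())
    # desktop    1.000000
    # mobile     0.666667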

Len Shneyder is the marketing manager at IBM's Enterprise Marketing Management (EMM) Group.


Comments

Mark Smith, May 12, 2012 at 7:08 AM

The two formulas miss a major factor: not being in compliance with Sender Policy Framework (SPF). At a client firm, we discovered that emails sent one by one, each in response to a prospect's request for information, were not being delivered. When this happened, there were no bounces or other error messages to let us know. I'm not talking about email delivered as spam. Rather, this was email that was thrown away without any notification. We had to call the recipients to discover the non-deliveries, and then kept calling to track the problem.

The email address sending the quotations the recipients had requested was not being used for any mass emails. In fact, there were no email addresses at the firm used for mass emails.

When we started tracking, these failures increased from 4 to 9 to over 11 percent.

I had to figure out what the client's hosting provider could not: an SPF-compliant TXT record had to be published in the right place in the domain's DNS.

Perhaps everyone reading here is way past this sort of problem. Perhaps. But since this incident, I have found deliverability problems among some of my client's customers, also because their hosting providers were unaware of SPF.
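
For anyone who wants a quick way to check whether a domain publishes SPF at all, a short script like the one below (using the dnspython library; "example.com" is just a placeholder for your own sending domain) will pull the TXT records and filter for an SPF policy:

    import dns.resolver  # pip install dnspython

    def spf_records(domain: str) -> list[str]:
        """Return any TXT records on the domain that declare an SPF policy."""
        try:
            answers = dns.resolver.resolve(domain, "TXT")
        except dns.resolver.NoAnswer:
            return []
        records = [b"".join(r.strings).decode() for r in answers]
        return [txt for txt in records if txt.lower().startswith("v=spf1")]

    # "example.com" is a placeholder -- substitute the domain you actually send from
    print(spf_records("example.com") or "No SPF record published")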