When comparing email service providers (ESPs), a common approach is to compare open rates between them. However, open rates are not an accurate way to decide which ESP is best for you. Instead, we recommend choosing the ESP with better tooling and reporting features, because those make it easier to improve your deliverability over time.
Why we don't recommend comparing ESP open rates
Open rates are not an ideal way to measure performance between ESPs because it's almost impossible to control all variables to get meaningful results. Some of these variables are listed below.
Lists and segments
When comparing open rates between lists or segments, there is no guarantee that those lists or segments contain subscribers who are equally likely to open. And if you send to the same list or segment from both ESPs, you've eliminated that variable but introduced others.
An example of this is subscriber fatigue. Many recipients don't open every email they receive. If they opened your email yesterday and today's email is similar or identical, they may see no reason to open it. If the emails are different, you've introduced the subject line and content as new variables that may have influenced the test results.
Day and time of send
Similarly, the day and time an email is sent can make a difference in open rates. Many articles discuss the best day and time to send an email to maximize open rates. If you are using the same recipients to test metrics between ESPs, you should not send two identical emails back-to-back from different ESPs. Doing so introduces new variables: the sends are staggered, and they may land on different days and at different times of day.
Recipient-level email filters
Recipient-level email filters are an unknown variable that can influence open rates.
Let's take, for example, ESP A and ESP B. If a recipient sets filters that put your emails from ESP A in a spot where they're sure to open and read them, but that filter doesn't apply to emails from ESP B, ESP A will appear to perform better. This is because your recipients don't know you're switching ESPs, and your email may land in an unexpected folder.
Mailbox provider reputation
Reputation at a given mailbox provider is another unknown variable that can influence open rates.
For example, if you have been sending for a long time on ESP A, your emails from ESP A are a recognizable pattern to providers like Gmail, Microsoft, and Verizon.
When you start sending emails from ESP B, mailbox providers now have to decide whether to bounce or filter your message. To them, your emails from ESP B are from an unknown sender. They need to identify if these emails are coming from you or a spammer impersonating you.
IP reputation
IP reputation also comes into play with open rates, and differences in IP reputation can be temporary.
If you are on a brand new account with your ESP, it is common practice to be placed in an IP pool with other new senders. ESPs do this because they need to protect their IP reputation and understand the quality of your email program better.
This means that you're typically not sending out of your ESP's best IP addresses until you send some emails and they know that they can trust you. Differences in IP reputation between ESP A and ESP B might not reflect the actual IP reputation you will experience once there is enough data about your sending practices to assign you to your final IP pool.
Authentication and differences in measuring open rates
Authentication can also be a factor if you have authenticated your email at ESP A but not at ESP B. It is also possible that ESPs don't measure open rates the same way. For example, if ESP A measures open rate as Opened / (Sent - Bounced) and ESP B measures it as Opened / Sent, that will further confuse the comparison.
Some ESPs show you a default open rate of "total opens," whereas other ESPs show you a default open rate of "unique opens." Make sure the number you're comparing between ESPs is actually being calculated the same way.
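To make the difference concrete, here is a small sketch with made-up numbers showing how the same campaign can produce very different headline open rates depending on which formula and which open count an ESP uses (all figures below are hypothetical):

```python
# Hypothetical send stats for one campaign, as two ESPs might report it.
sent = 10_000
bounced = 400
unique_opens = 2_400   # distinct recipients who opened
total_opens = 3_600    # includes repeat opens by the same recipients

# ESP A's formula: unique opens divided by delivered (sent minus bounced)
esp_a_rate = unique_opens / (sent - bounced)

# ESP B's formula: total opens divided by everything sent
esp_b_rate = total_opens / sent

print(f"ESP A open rate: {esp_a_rate:.1%}")  # 25.0%
print(f"ESP B open rate: {esp_b_rate:.1%}")  # 36.0%
```

The underlying engagement is identical, yet ESP B's dashboard would show a rate 11 points higher, purely because of how the metric is defined. Before comparing numbers across ESPs, confirm both the denominator (sent vs. delivered) and the numerator (unique vs. total opens).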
Recommendations for comparing ESPs
We recommend seed testing to get an unbiased snapshot of your email performance. This can be performed with vendors such as GlockApps, Return Path, and others.
However, it's important not to take seed test results as a universal truth. You may not have built up enough reputation on a particular ESP to achieve better results. More sends on that platform could correct that issue.
The best test is what you can achieve with a particular vendor over time. For example, suppose ESP A performs better today according to your testing but offers little else, while ESP B has better tooling and reporting features. We recommend going with ESP B, because its tooling makes it easier to improve your deliverability until it is as good as or better than what you saw on ESP A. And because ESP B has better reporting, it's easier to understand how to have more meaningful engagement with your customers, even if that means temporarily lower open rates at first.
To read more about this topic, visit this post.