Rules for A/B Testing Email
When businesses A/B test a web page, they run two versions at the same time to see which one performs better. Similarly, when we A/B test email, we send two versions of an email to subsets of our subscriber base to see which performs better, then send the winning version to the remainder of our list.
A/B testing email can be one of the most profitable and insightful things a company can do to optimize conversions. To do it well, make sure you:
Do this
- Pick your variables (decide what you’re testing)
- Pick what you want to measure (opens, clicks, open rate, deliverability rate, etc.)
- Determine which recipients to send to
- Select a time frame to run your test
- Pick a winner
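The steps above can be sketched in code. This is a minimal illustration, not SparkPost's API: the subscriber list, the 20% test fraction, and the `run_ab_test` helper are all hypothetical.

```python
import random

def run_ab_test(subscribers, test_fraction=0.2):
    """Split off a test group, divide it between variants A and B,
    and return the holdout that will receive the winning email."""
    recipients = list(subscribers)
    random.shuffle(recipients)  # randomize so the split is unbiased
    test_size = int(len(recipients) * test_fraction)
    test_group, holdout = recipients[:test_size], recipients[test_size:]
    half = test_size // 2
    group_a, group_b = test_group[:half], test_group[half:]
    return group_a, group_b, holdout

# Hypothetical subscriber list of 1,000 addresses
subs = [f"user{i}@example.com" for i in range(1000)]
a, b, rest = run_ab_test(subs)
print(len(a), len(b), len(rest))  # 100 100 800
```

After the test window closes, you would compare the chosen metric between `group_a` and `group_b`, then send the winner to `holdout`.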
But your success depends on avoiding these common mistakes when A/B testing email:
Don’t make these mistakes
- Testing the wrong thing
If you have a low email open rate and want to A/B test to increase it, then changing the layout, headlines, or imagery within the email is not going to help. Changing the subject line or the time of day you send, however, will. Make sure you are testing the right thing.
- Measuring multiple items
Measure one thing per test. This means if you want to test subject lines, then only swap out the subject line but keep everything else the same — including the layout, number of images, button placements, etc. If you want to test your layout, then keep the subject line and content the same, but change the layout. The more changed elements you introduce, the harder it will be to measure.
- Unlimited calls to action with different goals
Try to limit your calls to action to the same goal. If you have multiple calls to action in one email going toward the same goal (same landing page, sign-up, purchase, etc.), that's fine. But when you mix CTAs or try to introduce multiple CTAs, it clouds the results. For example, if you ask readers to make a purchase, read a blog post, and subscribe to your newsletter, did they skip the "buy" action you were hoping for because you mixed CTAs? The point of A/B testing your email is to make sure you can accurately measure the results. When you mix variables, the statistical analysis becomes that much harder.
- Constantly altering the "From Name"
Figure out what your relationship is to your subscriber. Are they more familiar with your company name, a thought leader in your company such as the CEO, a mascot, or a product name? People are more likely to open emails from names or brands they recognize. If you constantly change the "from name", you may start to see a decline in your email open rate.
- Sample size is too small
You don't want to draw your conclusion from a sample size that is too small. You can A/B test on any sample size greater than 100, but a good rule of thumb is to test 20-30% of your subscriber list so that your confidence level is high.
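One way to check whether a winner is statistically meaningful, rather than noise from a small sample, is a two-proportion z-test on the open rates. This is a standard statistical sketch, not a SparkPost feature; the counts below are made up for illustration.

```python
import math

def open_rate_significance(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test comparing open rates of variants A and B.
    Returns (z, two_sided_p); p < 0.05 is a common significance bar."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: variant A opened 220/1000, variant B 180/1000
z, p = open_rate_significance(220, 1000, 180, 1000)
print(round(z, 2), round(p, 3))
```

With larger samples the standard error shrinks, which is why testing only a handful of recipients rarely yields a significant result.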
- Testing different content pieces
Email is not a good place to test how content performs. One piece of content may perform better simply because it was written better, or because its headline was more appealing to that particular audience at that particular time.
SparkPost © 2018 All Rights Reserved