How does A/B testing work?
Let’s say you want to see which of two subject lines is more effective at getting your audience to open an email. Here’s an example of how that test could be set up:
- Email with subject line version A: Sent to 10% of the overall audience
- Email with subject line version B: Sent to 10% of the overall audience
- Time passes to allow the audience to engage with the email, and then a winner is declared
- Email with winning version of subject line: Sent to remaining 80% of audience
Note: If you need your email to go out to everyone at the same time, you can split your test 50/50 so that half of your audience gets version A and half gets version B. Then, you can see which performs better and apply what you’ve learned to future sends.
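If you work in code, here is a minimal sketch of the 10%/10%/80% split described above. The function name, the fixed seed, and the example addresses are all illustrative and not tied to any particular email platform.

```python
import random

def split_audience(recipients, test_fraction=0.10, seed=42):
    """Illustrative 10% / 10% / 80% split for an A/B subject line test."""
    shuffled = recipients[:]                 # copy so the original list stays untouched
    random.Random(seed).shuffle(shuffled)    # fixed seed only to make the example reproducible
    n_test = int(len(shuffled) * test_fraction)
    group_a = shuffled[:n_test]              # 10%: receives subject line A
    group_b = shuffled[n_test:2 * n_test]    # 10%: receives subject line B
    holdout = shuffled[2 * n_test:]          # 80%: receives the winning subject line later
    return group_a, group_b, holdout

# Made-up audience of 1,000 addresses
audience = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b, holdout = split_audience(audience)
print(len(group_a), len(group_b), len(holdout))  # 100 100 800
```

For the 50/50 variant mentioned in the note, the same idea applies with a test fraction of 0.50 and no holdout group.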
How does A/B testing determine a winner?
During setup, you’ll choose whether you want the winner to be determined by the version with the higher open rate or the higher click-through rate. Depending on your email and what you’re testing, one measure may be more helpful for determining a winner than the other.
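To illustrate why the choice of metric matters, the short sketch below computes both rates for invented results; in this made-up example, version A wins on opens while version B wins on clicks, so the metric you pick decides the winner. All numbers are hypothetical.

```python
def open_rate(unique_opens, delivered):
    """Share of delivered messages that were opened at least once."""
    return unique_opens / delivered

def click_through_rate(unique_clicks, delivered):
    """Share of delivered messages that received at least one click."""
    return unique_clicks / delivered

# Invented results for a test where each version went to 1,000 recipients
print(open_rate(230, 1000), open_rate(210, 1000))                  # A: 0.23, B: 0.21 -> A wins on opens
print(click_through_rate(42, 1000), click_through_rate(55, 1000))  # A: 0.042, B: 0.055 -> B wins on clicks
```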
In addition to looking at the winner, it’s important to look at the actual numbers behind each version’s results, for a couple of reasons:
- If there is a tie, A/B testing will default to declaring version A the winner.
- Even if one version won, the difference may not be statistically significant. You can check statistical significance with online calculators by following these steps (a code sketch of the same calculation appears after this list):
- Select “Test Evaluation”
- In “Visitors” put the number of messages sent for version A in “Visitors A” and the number of messages sent for version B in “Visitors B”
- In “Conversions” put the number of unique opens or clicks (whichever you’re using to declare a winner) for version A in “Conversions A” and the number of unique opens or clicks for version B in “Conversions B”
- In “Settings” select “Two-sided”; we recommend a confidence level of 95%
- Apply the changes and check whether your result is statistically significant
- If your difference is not significant, you can continue testing over time to see whether you reach statistical significance, or you can decide that even a small difference is enough to act on.
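If you’d rather check significance yourself than use an online calculator, the sketch below runs the same kind of two-sided test on two conversion rates (a standard two-proportion z-test), using the same inputs as the steps above: messages sent as “Visitors” and unique opens or clicks as “Conversions.” The sample numbers are made up.

```python
from math import sqrt, erf

def two_proportion_z_test(conversions_a, sent_a, conversions_b, sent_b):
    """Two-sided z-test for the difference between two conversion rates."""
    rate_a = conversions_a / sent_a
    rate_b = conversions_b / sent_b
    pooled = (conversions_a + conversions_b) / (sent_a + sent_b)      # pooled rate under "no difference"
    std_err = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b)) # standard error of the difference
    z = (rate_a - rate_b) / std_err
    # Two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up test: 1,000 sends per version, 230 vs. 210 unique opens
p_value = two_proportion_z_test(230, 1000, 210, 1000)
print(round(p_value, 3))  # ~0.28
print("significant at 95%" if p_value < 0.05 else "not significant at 95%")
```

A p-value below 0.05 corresponds to clearing the 95% confidence level recommended above; in this made-up example the 23% vs. 21% open rates do not clear that bar.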
Note: We recommend doing multiple tests before drawing conclusions. A/B testing should be approached as an ongoing way to measure effectiveness, not as a “one and done” test.