Marketers need to know exactly how effective their actions are and how they affect their bottom line. When they try to evaluate those actions, however, they tend to attribute success (or the lack of it) to exogenous factors, such as the seasonal revenue spike around Black Friday.
Experiments offer a simple yet very effective way to test your messages along multiple variables at the same time and find the highest-converting message.
In this article, we’ll give you
- an overview of what Experiments are,
- inspiration for what you may want to test with Experiments,
- a detailed guide on how to run Experiments,
- a friendly note on what you should pay special attention to.
What are Experiments?
Simply put, you can now run controlled experiments to find out
- what message types work best,
- if your campaigns perform better than no campaigns at all,
- what segments have the highest conversion rates,
- what combination of messages brings the best results.
Experiments can include any existing campaign on the same domain, whether active or inactive. You can pick any campaigns and test them against each other.
Inspiration for what you may want to test
1. A/B test message variants
If you want to test different design variants, CTAs, colors, copy or images in your messages, you can do that with Variant A/B testing within a campaign without running a full Experiment.
However, if you want to test different templates or message types against each other, you're in the right place. Experiments allow you to test
- gamification templates against standard list builders,
- lead magnet list builders against discount offers,
- teaser vs. no-teaser campaigns,
- and much more!
2. A/B testing segments & audiences
- test different geolocations against each other, e.g. UK vs. US
- test different Klaviyo segments & lists, e.g. new subscribers vs. potential purchasers
- test audiences from different traffic sources, e.g. organic vs. paid traffic
- test segments with different cart values, e.g. low vs. high cart value
3. A/B testing the overall effectiveness of messages
Test messages against a control group to find out whether your campaigns have an impact on your revenue at all.
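OptiMonk reports impressions, conversions, and conversion rates for each group. If you want to judge whether the campaign group truly outperformed the control group (rather than just looking better by chance), you can run a standard two-proportion z-test on those numbers yourself. Here is a minimal sketch; the function name and the conversion counts below are made up for illustration and are not part of OptiMonk:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare the conversion rate of a campaign group vs. a control group.

    conv_a / n_a: conversions and visitors in the campaign group
    conv_b / n_b: conversions and visitors in the control group
    Returns (difference in rates, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a - p_b, p_value

# Hypothetical numbers: 120/2000 conversions with the campaign, 80/2000 without
diff, p = two_proportion_z_test(conv_a=120, n_a=2000, conv_b=80, n_b=2000)
print(f"uplift: {diff:.3f}, p-value: {p:.4f}")
```

A small p-value (commonly below 0.05) suggests the uplift over the control group is unlikely to be random noise; with small traffic volumes, let the experiment run longer before drawing conclusions.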
4. A/B testing complete sets/groups of campaigns
Test entire experiences to find the best combination of messages.
- Discount Code vs.(Discount Code + Discount Code Reminder)
- Discount Code vs. (Discount Code + Post-Purchase Feedback Popup)
- Discount Code vs. (Discount Code + Free Shipping)
- Free Shipping Bar vs. (Free Shipping Bar + Discount Code)
- Welcome Back Recommender vs. (Welcome Back Recommender + Discount Code)
How to run Experiments
1. First, log in to your OptiMonk account at https://app.optimonk.com/login/en
2. Select the Experiments icon from the left navigation bar.
3. Here, you’ll see all your previously created experiments - if you have any - listed by their names and domains. If you haven’t run an experiment before, don’t be surprised to find this page empty. To launch your (first) experiment, click New Experiment in the top right corner.
4. First, select which domain you’d like to run the experiment on.
⚠️ Keep in mind that an experiment can only run on a single domain, i.e., you cannot test two campaigns running on two different domains within the same experiment.
5. Name your experiment to help you identify it later.
💡 Here’s a hint: include the domain and the subject of your A/B test in the name for easy identification. For example: US vs. UK segment | List builder | Sissora.com
6. Add campaigns to display to different visitor segments by clicking on Add campaign and browsing your existing campaigns.
⚠️ If you add an active campaign to an experiment, you’ll receive a small notification that this campaign will be deactivated and reactivated once you start the experiment. The reason for this is that a campaign cannot run independently of an experiment.
⚠️ You can create up to 5 visitor groups.
7. Once you’ve selected which campaigns you’d like to test against each other, determine the traffic share between them. You may want to display both campaigns to your visitors in equal ratios, or you may want to show Campaign A to most of them and test Campaign B with only a small traffic share. This option lets you run tests with lower risk.
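Conceptually, a traffic share works like a weighted coin flip for each incoming visitor. The sketch below is a hypothetical illustration of how a 90/10 split behaves (the group names and function are made up, not OptiMonk internals):

```python
import random

def assign_group(weights, rng=random):
    """Assign a visitor to a campaign group according to traffic shares.

    weights: dict mapping group name -> share (shares should sum to 1.0),
             e.g. {"Campaign A": 0.9, "Campaign B": 0.1} for a low-risk test.
    """
    r = rng.random()
    cumulative = 0.0
    for group, share in weights.items():
        cumulative += share
        if r < cumulative:
            return group
    return group  # guard against floating-point rounding

# Simulate 10,000 visitors under a 90/10 traffic share
random.seed(42)
counts = {"Campaign A": 0, "Campaign B": 0}
for _ in range(10_000):
    counts[assign_group({"Campaign A": 0.9, "Campaign B": 0.1})] += 1
print(counts)
```

Note the trade-off: a small share for Campaign B limits the risk of a losing variant, but it also means Campaign B collects data more slowly, so the experiment needs to run longer to reach a reliable conclusion.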
8. To launch your experiment, hit Start. Note that experiments don’t end automatically: once launched, you’ll have to end it manually.
9. You can end your experiment at any time by clicking End.
10. Once your experiment has been completed, you can view its results on the Experiments page.
⚠️ Keep in mind that the stats here only reflect the results achieved within the timeframe of the experiment. Further data can be measured in Google Analytics.
Things to keep in mind regarding Experiments
- If a campaign is part of an experiment, it will be indicated with a small icon on the campaign list page.
- Once a campaign is added to an experiment, you cannot inactivate it from the campaign list.
- Experiments can be run only once and cannot be restarted.
- A given campaign can only be part of one experiment at a time. Once that experiment has been completed, the campaign can be freely added to a new experiment.
- Users can only add campaigns to experiments that run on the same domain, i.e., the campaigns’ domain and the experiment’s domain must match.
- Results of each experiment are displayed on the experiment’s page. The statistics shown here refer only to the results achieved during the time window of the experiment. All-time performance of campaigns is displayed under Campaigns or Campaign Analytics.
- Detailed data on experiments, such as average order value, number of orders placed, and revenue attributed to a campaign, can be tracked in Google Analytics. OptiMonk shows data on impressions, conversions, and conversion rates exclusively.
That's it! If you have any further questions or need help, just let us know at firstname.lastname@example.org, and we'll be happy to assist you :)