A/B testing is a critically important tool for improving long-term growth and sustainable revenue for an app. It helps you understand how your users convert and provides insight into a few key questions:
What attracts new users.
How users behave within your app.
What keeps resonating with them so they stay engaged.
So how can you implement this important optimization technique? Read on to learn proven tips and strategies for A/B testing to help accelerate growth and retention.
What is A/B Testing?
A/B testing allows you to divide an audience into two (or more) groups, change a variable in the experience served to one group, and observe how the change affects that test group compared with the control.
Armed with that data, you can make informed decisions to improve your users’ experience, acquire new users, and better monetize your app.
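To make the mechanics concrete, here is a minimal Python sketch (not from the original article) of one common way to split an audience: hashing each user ID into a stable bucket. The function name, experiment name, and user ID are all hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a test group.

    Hashing the user ID together with the experiment name means each
    user always sees the same variant, and different experiments split
    the audience independently of one another.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Hypothetical usage: route a user into a CTA-color experiment
print(assign_variant("user-12345", "cta_button_color"))  # "A" or "B", stable across sessions
```

Because the assignment is deterministic, you don’t need to store each user’s group; re-hashing the same inputs reproduces it.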
Different Types of A/B Testing
There are two primary places A/B testing occurs:
Marketing campaigns, which include ASO (App Store Optimization) and user acquisition strategy.
In-app, where you test UX/UI, onboarding, and other elements while monitoring metrics like session time, retention, and engagement, plus any other app-specific behaviors you want to track.
In-App Testing – Within your app itself, you can use A/B testing to evaluate a variety of elements:
Design – Is your CTA (Call-to-Action) button placed in the best location? Does it get clicked more on the left or the right side? Do different colors attract more clicks?
Player Engagement – Does showing competitive leaderboards increase player engagement? A/B testing can give you a direct answer.
Monetization – Try different banners or test different in-app purchases to find out which receive the most clicks.
Retention Rate – According to these reports on iOS and Android app retention, average day-30 retention rates are just 4.13% and 2.6%, respectively. Churn is all too common, but as grim as those numbers look, they also show that a small share of apps are getting retention right.
After someone has downloaded your app, you can test elements like these to help figure out what keeps your users coming back.
How long should I test?
Generally, one to four weeks is enough time to gather meaningful data, with four weeks as a practical upper limit. Why four weeks? Because A/B testing is not a strictly controlled environment, and sudden, unexpected factors (outages, viral trends, and so on) can skew your data. Allow enough time to collect data, but not so much that you risk polluting it with unexpected external factors.
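As a rough illustration (the traffic and sample-size numbers below are hypothetical), you can sanity-check whether your test fits in that window by dividing the users you need by the users you expect each day:

```python
# Hypothetical inputs: adjust to your own app's traffic and test design.
users_needed_per_variant = 5_000   # from a sample-size calculation
num_variants = 2
daily_eligible_users = 2_000       # users entering the test each day

days_needed = users_needed_per_variant * num_variants / daily_eligible_users
print(f"~{days_needed:.0f} days")  # ~5 days here; cap the run at about 4 weeks
```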
How many users should I test?
According to Nielsen Norman Group, statistical significance is the probability that an observed result could have occurred randomly, without an underlying cause. This number should be smaller than 5% for a finding to be considered significant. For example, if you test two button colors, A and B, track the clicks (conversion rate) for each, and find that button B’s conversion rate is significantly higher, you can be 95% confident that button B will also convert better across your whole user base.
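To see where that 95% figure comes from, here is a minimal Python sketch (not from the original article) of a two-sided, two-proportion z-test on conversion counts; the function name and example numbers are hypothetical:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a / conv_b: number of conversions in each group
    n_a / n_b: number of users exposed to each variant
    Returns the p-value: the probability of seeing a difference this
    large by chance if the two variants actually perform the same.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical example: button A converts 200/5000, button B converts 260/5000
p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"p-value: {p:.4f}")  # significant if below 0.05
```

A p-value below 0.05 corresponds to the 95% confidence threshold described above; if your test hasn’t reached it, the usual fix is more users, not more variants.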
Define what you want to test. You can test and analyze nearly anything, but it’s a good idea to start with a hypothesis.
Test one change at a time. While you can make multiple changes in a single experiment (called multivariate testing), it’s best to start with a single change. This makes it easier to zero in on exactly what is improving (or harming) your results, as the sketch below illustrates.
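As an illustration of the single-change principle (all names here are hypothetical), an experiment definition can pin down the hypothesis, the one variable being changed, and the metric that will judge it:

```python
# Hypothetical experiment definition: one hypothesis, one changed variable,
# one success metric. Everything else stays identical between variants.
experiment = {
    "name": "cta_button_color",
    "hypothesis": "A green CTA button gets more clicks than a blue one",
    "variants": {
        "A": {"cta_color": "blue"},   # control
        "B": {"cta_color": "green"},  # the single change under test
    },
    "metric": "cta_click_rate",
}
```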
This article first appeared on AppLovin