A/B tests are crucial tools for product managers: they provide the strongest proof that a change really moves the metrics, and they help us continually learn about our users. In some cases, an A/B test should be used to confirm that we are not making the user experience worse, for example, when introducing a new interface that some users might receive poorly.

A/B testing is the core of data-driven decision-making, but the tests sometimes do not show any clear results. In this article, we’ll discuss what we do in such situations.

When do we need to run an A/B test?

We usually conduct A/B tests in the following situations:

●      We need to check which of two (or more) user experiences performs better. These could be user interfaces, different feature implementations, certain conditions for users (such as pricing), specific wordings, notifications, or marketing strategies.

●      We need to measure whether a new feature brings value to users. This is useful not only when we need to make sure an improvement really works but also when we need to decide which directions to pursue.

●      We need to back up our decision with data. A/B test results can be a compelling tool to prove to sceptical stakeholders that the change really is valuable.

We should not run an A/B test when:

●      We don’t have enough users. If your user base is small, it doesn't make any sense to run an A/B test because you will not have enough data to draw any reasonable conclusions. For a new product, you should invest in valuable features, not A/B tests.

●      The decision is not important. Remember that A/B testing is more expensive in terms of development than simply shipping the change. So, if the change is small, there is no need to occupy engineers', data analysts', and other team members' time with an A/B test. Don't run an A/B test to choose between red and blue as a button colour.

●      The change is inevitable. For example, when you need to alter your product to comply with some legal requirements or the change is necessary according to the company’s strategy.
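To make the "not enough users" point concrete, you can estimate the required sample size before committing to a test. Below is a minimal sketch using the standard normal approximation for comparing two conversion rates; the baseline rate and the lift in the example are purely illustrative:

```python
import math

def sample_size_per_group(p_base, mde):
    """Approximate users needed per variant to detect an absolute
    lift of `mde` over a baseline conversion rate `p_base`.

    Uses the normal approximation for a two-proportion test with
    z-values hard-coded for a two-sided alpha of 0.05 and 80% power.
    """
    z_alpha = 1.96  # two-sided significance level 0.05
    z_beta = 0.84   # statistical power 0.80
    p_new = p_base + mde
    # Sum of the per-group Bernoulli variances
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# e.g. detecting a 1-percentage-point lift on a 5% baseline
n = sample_size_per_group(0.05, 0.01)
```

With a 5% baseline conversion and a 1-percentage-point minimum detectable lift, this estimate calls for roughly 8,000 users per variant. If your product cannot supply that traffic within a reasonable timeframe, the test is unlikely to pay off.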

What should we do if we see contradictory results?

Sometimes, when analyzing an A/B test, we see that some metrics improve and some worsen. In this scenario, you must decide which metrics are more critical. When improving the funnel, the rule of thumb is to check the change in the last step. Also, the priority of the metrics can be defined by the company’s strategy and the stage of your company.

Generally, small companies try to grow their user base and optimize the funnel, while more mature companies concentrate on monetization metrics. When weighing up an A/B test, it's better to measure not the immediate effect right after the test is completed but the long-term impact, expressed in LTV or pLTV (predicted lifetime value).

Contradictory results can also appear when different audience segments respond differently. You can roll out the change only for the particular segment that showed an improvement, or analyze how to improve the user experience for all segments.

In some cases, the collected data looks unexpected because it contains errors. These can come from bugs in the product's code or in data collection and processing, mistakes in metric calculations, changes in the system during the A/B test, or running too many A/B tests simultaneously. Check every step of data collection once again to eliminate any technical errors.

Occasionally, our expectations are very different from users’ behavior. In such situations, the numbers can look odd and may confuse us. You should do a deeper analysis of users’ perceptions of the change in order to better understand your customers. User interviews, UX tests, and other user research tools can help you.

What should we do if we don’t see any results?

Sometimes, we run an A/B test and do not see any change in metrics. What should we do in such a case? 

●      Deeper analysis: You can narrow the audience (discussed in my previous article) to detect the differences. You can also dive deeper into user reactions to the change to find some hints about how they embraced the change.

●      Check the p-value: We usually call a change statistically significant only when the p-value is less than or equal to 0.05. But sometimes, when a decision has to be made, you can relax that threshold slightly.

●      Keep the A/B test running for a longer timeframe: The longer the test runs, the more likely you are to detect a change. Use one of the online calculators to decide whether it's worth investing more time in your A/B test. Keep in mind, however, that in our fast-moving industry time is crucial, and it's sometimes better to avoid analysis paralysis and simply make a decision.

●      Consider different metrics: We usually check the last step in the funnel, retention, or revenue in an A/B test, but these metrics are often hard to move. Check whether more actionable metrics (such as registration or activation) have changed.

●      Do nothing: Sometimes, no result is a result, and no change can prove your hypothesis was wrong. You can decide whether to accept or reject the change, taking into consideration other aspects, such as qualitative data, company strategy, technical constraints, or even your professional intuition.
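As a companion to the "check the p-value" point above, here is a minimal sketch of the pooled two-proportion z-test commonly used for conversion metrics; all the counts in the example are made-up illustrative numbers:

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool both groups to estimate the shared conversion rate
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Tail probability of |z| under the standard normal, both sides
    return math.erfc(abs(z) / math.sqrt(2))

# Control: 500 conversions out of 10,000; variant: 560 out of 10,000
p = two_proportion_p_value(500, 10_000, 560, 10_000)
```

Here the observed lift (5.0% vs 5.6%) yields a p-value just above the conventional 0.05 cutoff — exactly the borderline situation discussed above, where you may choose to relax the threshold, extend the test, or fall back on other signals.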

Conclusion

A/B testing is an excellent tool for product managers, but it's not always an easy way to answer a difficult question. Sometimes, A/B tests do not show any results, and that's OK. If no clear results emerge, dive deeper into the collected data, adjust your statistical significance level, extend the testing period and audience, or re-evaluate your metrics.

Sometimes, the absence of a statistically significant change in metrics also brings valuable insights. Ultimately, blending quantitative data with qualitative insights, company strategy, and professional judgment will lead to more informed and effective decision-making, which is crucial for successful product management.

