
In practice: experimenting with A/B testing

Posted: Tue Dec 17, 2024 4:18 am
by arzina566
One big winner
The conclusion of your experiment suddenly changes from 'only a negligible difference' to 'the difference is not down to chance, so we are on the right track'. In other words: instead of coming up with something new, you can build on the insights you have just gathered with follow-up experiments.

Also very important to know: most experiments flop! That is not a problem at all, because you learn from every experiment and you carry on with the winning ones. Amazon founder Jeff Bezos puts it like this: "One big winner pays more than enough for all the losing experiments." That is essential to understand, and especially to make clear in the boardroom, where experimentation is too often wrongly labeled as unsuccessful and abandoned.

For reference:

At Booking.com, less than 10% of experiments generate positive results.
At Google and Bing, 10% to 20% do.
Microsoft has 30% winners and 30% losers.
A/B testing tool VWO sees 13.4% winning tests among its customers (2018).

A practical and proven method of experimenting is A/B testing. In an A/B test, you show two different versions of, for example, your website or your newsletter to different groups of test subjects, changing one element at a time that you expect to produce a difference in behavior.

The advantage of this type of test is that the test subjects do not know that they are taking part in a test, which gives you objective data. It is also a way to increase conversion step by step, because you keep making small changes and keep building on previous results.
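To make this concrete, here is a minimal sketch of how you could evaluate the outcome of such a test with a two-proportion z-test in Python. The visitor and conversion counts are invented example numbers, and the 5% significance threshold is a common convention rather than a fixed rule.

```python
# Minimal sketch: evaluating an A/B test with a two-proportion z-test.
# The visitor and conversion counts are invented example numbers.
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 532]     # conversions for variant A and variant B
visitors = [10_000, 10_000]  # visitors who saw variant A and variant B

# Null hypothesis: both variants convert at the same rate.
z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

rate_a = conversions[0] / visitors[0]
rate_b = conversions[1] / visitors[1]
print(f"Conversion A: {rate_a:.2%}, conversion B: {rate_b:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# A common convention (not a law): treat the result as significant when
# p < 0.05, i.e. the difference is unlikely to be chance alone.
if p_value < 0.05:
    print("The difference is unlikely to be due to chance.")
else:
    print("No significant difference detected; keep experimenting.")
```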

A/B testing calculator
An indispensable gem for testing that I would like to share with you is the AB+ Test Calculator from CXL. This calculator can be used in two ways when doing A/B testing:


1. Pre-test analysis
In this mode you can use the tool to determine the parameters for your test:

What is the minimum detectable effect I can test for?
This is very useful to check before you start. If, for example, it comes out at 20%, you know in advance that your change needs to improve conversion by at least 20% to produce a significant difference. That sounds quite ambitious, so in such a case you can consider testing larger changes or merging different tests. (A rough version of this kind of pre-test calculation is sketched after these questions.)
How long do I need to test to reliably measure the effect?
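For anyone who prefers to check these numbers outside the calculator, below is a minimal sketch of such a pre-test analysis in Python using statsmodels. The baseline conversion rate, minimum detectable effect, traffic figures, significance level and power are all invented assumptions for illustration; the same power formula can also be solved the other way around to find the minimum detectable effect for a fixed test duration.

```python
# Rough sketch of a pre-test power analysis: given a baseline conversion
# rate, a minimum detectable effect (MDE) and your daily traffic, estimate
# how many visitors you need per variant and how long the test must run.
# All numbers are invented assumptions for illustration.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05      # current conversion rate: 5%
mde = 0.20                # minimum detectable effect: a 20% relative lift
target_rate = baseline_rate * (1 + mde)  # 6% if the change works

alpha = 0.05              # significance level
power = 0.80              # probability of detecting the lift if it is real
visitors_per_day = 2_000  # total daily traffic, split over two variants

# Cohen's h expresses the gap between the two conversion rates as an effect size.
effect_size = proportion_effectsize(target_rate, baseline_rate)

# Required sample size per variant for a two-sided test.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power, alternative="two-sided"
)

days_needed = (2 * n_per_variant) / visitors_per_day
print(f"Visitors needed per variant: {n_per_variant:,.0f}")
print(f"Estimated test duration: {days_needed:.0f} days")
```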