
False positives

The result of a statistical test that wrongly indicates an effect is significant when in fact it is not (a Type I error). In CRO and A/B testing, a false positive can lead to validating an ineffective variation, which may result in counter-productive decisions (e.g., rolling out a version that brings no real gain).
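To make this concrete, here is a minimal simulation sketch (not part of the original entry): it runs repeated A/A tests, where both "variants" draw from the same distribution, so any significant result is by definition a false positive. The sample sizes, distribution, and use of a two-sample t-test are illustrative assumptions. At a 5% significance level, roughly 5% of such tests still come out significant.

```python
# Minimal sketch: estimate the false positive rate of repeated A/A tests.
# Both variants are sampled from the SAME distribution, so every
# "significant" result is a false positive. Expect a rate near alpha.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha = 0.05        # significance level (1 - confidence level)
n_tests = 10_000    # number of simulated A/A experiments
n_users = 1_000     # sample size per variant (illustrative)

false_positives = 0
for _ in range(n_tests):
    a = rng.normal(loc=0.10, scale=0.30, size=n_users)  # same distribution
    b = rng.normal(loc=0.10, scale=0.30, size=n_users)  # for both variants
    _, p_value = ttest_ind(a, b)
    if p_value < alpha:
        false_positives += 1

print(f"False positive rate: {false_positives / n_tests:.3f}")  # ~0.05
```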

False positives are tied to the confidence level chosen for a test (often 95%): there is always a residual probability (here, 5%) of declaring significant a difference that is due to chance alone. The problem becomes all the more critical when many tests are run simultaneously without controlling the overall false positive rate (e.g., via the Bonferroni correction or false discovery rate control).
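The sketch below illustrates this multiple-testing effect under assumed, illustrative parameters: with 20 independent A/A comparisons each tested at alpha = 0.05, the chance of at least one false positive is about 1 - (1 - 0.05)^20 ≈ 64%, while the Bonferroni correction (testing each comparison at alpha/20) keeps that family-wise rate near 5%.

```python
# Sketch of the multiple-testing problem: with m independent tests at
# alpha = 0.05, the chance of at least one false positive grows quickly.
# Bonferroni tests each comparison at alpha/m, which caps the
# family-wise error rate at roughly alpha.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
alpha, m, n_sims, n = 0.05, 20, 1_000, 500  # illustrative values

def any_false_positive(threshold):
    """Run m A/A tests; return True if any p-value falls below threshold."""
    for _ in range(m):
        a = rng.normal(size=n)
        b = rng.normal(size=n)
        if ttest_ind(a, b).pvalue < threshold:
            return True
    return False

uncorrected = np.mean([any_false_positive(alpha) for _ in range(n_sims)])
bonferroni = np.mean([any_false_positive(alpha / m) for _ in range(n_sims)])

print(f"P(>=1 false positive), uncorrected: {uncorrected:.2f}")  # ~0.64
print(f"P(>=1 false positive), Bonferroni:  {bonferroni:.2f}")   # ~0.05
```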

False positives are the opposite of false negatives (Type II errors), which miss an effect that is actually present. Striking the right balance between these two types of error is essential for reliable experimentation and a sound CRO strategy.
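As one hedged illustration of that balance, the sketch below uses a standard power calculation for a two-sample t-test (via statsmodels): at a fixed sample size and an assumed effect size, tightening alpha reduces false positives but lowers statistical power, i.e. raises the false negative rate. The effect size and sample size are assumptions chosen for illustration only.

```python
# Sketch of the Type I / Type II trade-off: a stricter alpha means fewer
# false positives, but at fixed sample size and effect size it lowers
# power and therefore raises the false negative rate (1 - power).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.1       # assumed small effect (Cohen's d), illustrative
n_per_variant = 1_000   # assumed sample size per variant, illustrative

for alpha in (0.10, 0.05, 0.01):
    power = analysis.power(effect_size=effect_size,
                           nobs1=n_per_variant,
                           alpha=alpha)
    print(f"alpha={alpha:.2f} -> power={power:.2f}, "
          f"false negative rate={1 - power:.2f}")
```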
