A/A Test
A type of test in which two groups receive exactly the same experience (same page, same content, same conditions) in order to verify the reliability of the testing system. In a CRO (Conversion Rate Optimization) context, an A/A test ensures that:
- the distribution of traffic between the variants is evenly balanced (see the sketch after this list),
- conversion tracking works correctly,
- there is no technical or behavioral bias in the testing tool,
- the differences observed between variants in real-world conditions are not caused by chance or by a misconfigured setup.
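The first point can be checked with a simple sample ratio test: compare the visitor counts recorded for each variant against the expected 50/50 split. Below is a minimal sketch using a chi-square goodness-of-fit test; the visitor counts are hypothetical and the 1% threshold is an illustrative choice, not a fixed rule.

```python
from scipy.stats import chisquare

# Hypothetical visitor counts reported by the testing tool for an A/A test
visitors = [10_240, 10_180]         # variant A1, variant A2
expected = [sum(visitors) / 2] * 2  # a 50/50 split is expected

stat, p_value = chisquare(f_obs=visitors, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")

# A very small p-value suggests the allocation is not truly random,
# i.e. a sample ratio mismatch (SRM) in the split logic.
if p_value < 0.01:
    print("Possible sample ratio mismatch: investigate the traffic split.")
```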
This type of test is often used before launching a critical A/B test, especially when:
- a new testing tool has been installed (e.g. Dynamic Yield, AB Tasty, Optimizely),
- a new segmentation or targeting logic is in place,
- server-side variations are used and must be validated.
💡 Good to know: It's normal for small statistical differences to appear in an A/A test, but none should be significant. If one variant outperforms the other with a high level of confidence, this may indicate a bias in the configuration, a tracking bug or a randomization problem.
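To illustrate that last point, the conversion rates of the two identical variants can be compared with a two-proportion z-test, which should come back non-significant. This is only a sketch: the conversion and visitor figures are hypothetical, and the 95% confidence threshold is a common but arbitrary convention.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical A/A results: (conversions, visitors) for each identical variant
conv_a, n_a = 312, 10_240
conv_b, n_b = 298, 10_180

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

z = (p_a - p_b) / se
p_value = 2 * (1 - norm.cdf(abs(z)))                  # two-sided test

print(f"z = {z:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Significant difference in an A/A test: check tracking and randomization.")
else:
    print("No significant difference: the test setup looks healthy.")
```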