What is A/A testing?

A/A testing compares two identical versions of a page. It is typically done to verify that the tool used to run the experiment is statistically accurate. If the A/A test is conducted successfully, the tool should report no significant difference in conversions between the control and the variation.

Why Would You Want to Test Identical Pages?

In some circumstances, you may choose to monitor on-page conversions on the page where the A/A test is running in order to establish a baseline conversion rate before launching an A/B or multivariate test. In most other circumstances, the A/A test is used to verify the A/B testing software's correctness. Check whether the software reports a statistically significant difference between the variation and the control (>95 percent statistical significance). If the software indicates that a statistically significant difference exists, there is an issue, and you should verify that the program is properly installed on your website or mobile application.

Consider the Following when Conducting A/A Testing:

When conducting an A/A test, it's critical to keep in mind that a conversion rate discrepancy between identical test and control pages is always a possibility. This is not necessarily a negative reflection on the A/B testing platform, because testing always involves an element of chance. Bear in mind, during any A/B test, that the statistical significance of your data is a likelihood, not a certainty: even a 95% significance level implies a one-in-twenty possibility that the observed results are due to random chance alone. In most circumstances, your A/A test should conclude that the difference in conversion rates between the control and the variation is statistically inconclusive, because the fundamental fact is that there is no difference to find.
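The one-in-twenty point above can be demonstrated with a simulation. The sketch below (a minimal illustration, not any vendor's actual methodology) repeatedly runs a simulated A/A test in which both arms share the same true conversion rate, applies a standard two-proportion z-test, and counts how often the test falsely reports significance at the 95% level. The function names, the 5% true conversion rate, and the sample sizes are illustrative choices, not values from the article.

```python
import math
import random

def z_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z statistic using the pooled conversion rate."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    return (conv_a / n_a - conv_b / n_b) / se

def run_aa_test(true_rate, visitors_per_arm, rng):
    """Simulate one A/A test: both arms draw from the SAME conversion rate."""
    conv_a = sum(rng.random() < true_rate for _ in range(visitors_per_arm))
    conv_b = sum(rng.random() < true_rate for _ in range(visitors_per_arm))
    return abs(z_two_proportions(conv_a, visitors_per_arm,
                                 conv_b, visitors_per_arm))

rng = random.Random(42)
trials = 2000
# |z| > 1.96 corresponds to p < 0.05, i.e. "significant at the 95% level".
false_positives = sum(run_aa_test(0.05, 1000, rng) > 1.96
                      for _ in range(trials))
print(f"False positive rate: {false_positives / trials:.1%}")
```

Running this should show a false positive rate hovering near 5%: even with truly identical pages, roughly one A/A test in twenty crosses the 95% significance threshold purely by chance, which is exactly why a single "significant" A/A result is not proof that the testing tool is broken.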