A/A testing is a way to validate the statistical calculations of A/B testing software. It is run before your A/B tests to confirm that experiments will be statistically fair. To run an A/A test, start an experiment in which the original and the variant are identical pages. The expected result is inconclusive: since the pages are the same, there should be no statistically significant difference between them. If the experiment does report a significant difference, investigate to make sure the software is integrated correctly.
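As a rough illustration, here is a minimal Python sketch of an A/A test, assuming hypothetical traffic numbers, a hypothetical 3% true conversion rate, and a two-proportion z-test (one common way such tools compare conversion rates; your software may use a different method):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical A/A test: both "pages" are identical, so visitors in
# both groups convert at the same true rate (3% here, an assumption).
true_rate = 0.03
visitors_per_group = 10_000

conversions_a = rng.binomial(visitors_per_group, true_rate)
conversions_b = rng.binomial(visitors_per_group, true_rate)

# Two-proportion z-test: is the observed difference in conversion
# rates larger than chance alone would explain?
p_pool = (conversions_a + conversions_b) / (2 * visitors_per_group)
se = np.sqrt(p_pool * (1 - p_pool) * 2 / visitors_per_group)
z = (conversions_b - conversions_a) / visitors_per_group / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"A: {conversions_a / visitors_per_group:.2%}  "
      f"B: {conversions_b / visitors_per_group:.2%}  p = {p_value:.3f}")

# With identical pages, p should usually land well above 0.05. A
# significant result (beyond the expected ~5% false-positive rate at
# 95% confidence) suggests a problem with the setup or the tool's math.
```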
For example, in one of our A/A tests, the variant showed a revenue conversion increase of 0.25% over the baseline, a negligible difference.
This is exactly what we expect from an A/A test: because the original and the variant are the same page, any difference in conversions between them is due to chance. Slight growth percentages like this are not meaningful, and that is reflected in a low confidence percentage.
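To make this concrete, the sketch below plugs hypothetical counts corresponding to a 0.25% relative lift into the same kind of two-proportion z-test; the numbers are invented for illustration, and the resulting confidence level falls far below the usual 95% threshold:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: a 0.25% relative lift on a 2% baseline rate.
conversions = [2000, 2005]       # original, variant
visitors = [100_000, 100_000]

z_stat, p_value = proportions_ztest(conversions, visitors)
confidence = 1 - p_value

print(f"lift: {(2005 / 2000 - 1):.2%}  p = {p_value:.3f}  "
      f"confidence: {confidence:.1%}")

# A tiny difference like this yields a confidence level nowhere near
# 95%, so the test is correctly reported as inconclusive.
```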