When companies run A/B tests, experimenters report how one version changed a particular metric compared to the other version. They also report a statistic called a p-value: the probability of seeing a difference at least as large as the one they observed if the two versions actually performed identically, i.e., if the gap were due to chance alone.[99] Usually, if p < 0.05 (a gap that big would show up less than 5% of the time by chance), they treat the change as meaningful, or “statistically significant.”[100] Otherwise, they can’t rule out that their results were just dumb luck.
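To make this concrete, here is a minimal sketch of the arithmetic behind such a comparison, using a standard two-proportion z-test in Python. The conversion counts, visitor totals, and version names are all made up for illustration; real experiments often use more sophisticated tests.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical A/B test results: conversions out of visitors for each version.
conv_a, n_a = 120, 2400   # version A: 5.00% conversion rate
conv_b, n_b = 160, 2400   # version B: 6.67% conversion rate

p_a, p_b = conv_a / n_a, conv_b / n_b

# Pooled rate and standard error under the null hypothesis
# that the two versions perform identically.
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))

# Two-sided test: how surprising is a gap this large if nothing changed?
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))

print(f"lift: {p_b - p_a:+.2%}, z = {z:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant at the 5% level.")
else:
    print("Not significant: the gap could plausibly be noise.")
```

With these numbers the p-value comes out around 0.014, so the experimenters would call the difference statistically significant; shrink the sample sizes and the same observed lift would no longer clear the 0.05 bar.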