In the 1960s physicists at the University of California, Berkeley saw evidence of new, unexpected particles popping up in data from their bubble chamber experiments.

But before throwing a party, the scientists did another experiment. They repeated their analysis, but instead of using the real data from the bubble chamber, they used fake data generated by a computer program, which assumed there were no new particles.

The scientists performed a statistical analysis on both sets of data, printed the histograms, pinned them to the wall of the physics lounge, and asked visitors to identify which plots showed the new particles and which plots were fakes.

No one could tell the difference. The fake plots had just as many impressive deviations from the theoretical predictions as the real plots.

Eventually, the scientists determined that some of the unexpected bumps in the real data were the fingerprints of new composite particles. The bumps in the fake data, meanwhile, were nothing more than random statistical fluctuations.

So how do scientists differentiate between random statistical fluctuations and real discoveries?
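One standard answer is the same trick the Berkeley physicists used: simulate many fake experiments under the assumption that no new particle exists, and ask how often chance alone produces a bump as impressive as the one in the real data. The sketch below is a simplified illustration of that idea, not the actual Berkeley analysis; the flat background, bin count, and event counts are all invented for the example. It shows the "look-elsewhere effect": a bump that would be rare in any single histogram bin turns up surprisingly often when you scan many bins across many experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EVENTS = 10_000  # events per fake experiment (assumed for illustration)
N_BINS = 50        # histogram bins to scan for bumps
N_TRIALS = 1_000   # number of fake "no new particle" datasets

def max_bump_sigma(counts, expected):
    """Largest per-bin excess over the expectation, in Gaussian sigmas."""
    return np.max((counts - expected) / np.sqrt(expected))

# Flat background: the null hypothesis says events spread evenly.
expected = N_EVENTS / N_BINS

# Generate fake datasets under the null hypothesis and record the
# most impressive bump each one produces purely by chance.
max_sigmas = []
for _ in range(N_TRIALS):
    events = rng.uniform(0.0, 1.0, N_EVENTS)
    counts, _ = np.histogram(events, bins=N_BINS, range=(0.0, 1.0))
    max_sigmas.append(max_bump_sigma(counts, expected))

max_sigmas = np.array(max_sigmas)

# A 3-sigma excess in one predetermined bin has a chance probability of
# roughly 0.1%, but scanning 50 bins inflates how often one appears.
frac_3sigma = float(np.mean(max_sigmas >= 3.0))
print(f"Fraction of fake experiments with a >=3 sigma bump: {frac_3sigma:.2%}")
```

Comparing the real bump's significance against this simulated distribution gives a "global" probability that accounts for having looked everywhere, which is why particle physics demands far stricter thresholds (conventionally five sigma) before declaring a discovery.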
