On a sunny spring day one of us (Dixon) entered the London Underground at the Mile End station on his way to Heathrow Airport. Eyeing a stranger, one of more than three million daily passengers on the Tube, he idly wondered: What is the probability the stranger would emerge at, say, Wimbledon? How could you ever figure that out, given that the person could take any number of routes? As he thought about it, he realized that the question was similar to the knotty problems that face particle physicists who seek to make predictions for particle collisions in modern experiments.
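The Tube question already contains the key idea: to find the chance of ending up somewhere, you must add up the contributions of every possible route. As a minimal sketch of that "sum over routes" reasoning, the toy example below uses a made-up mini-network (not the real Tube map) and assumes a traveler who picks each connecting line with equal probability and gives up after a fixed number of hops; the station names and choice rule are illustrative assumptions, not anything from the article.

```python
# Toy "sum over all routes" calculation inspired by the Tube anecdote.
# The network, the equal-choice rule, and the hop limit are all
# illustrative assumptions made for this sketch.

from fractions import Fraction

# Hypothetical mini-network (not the real Tube map).
links = {
    "MileEnd":   ["Bank", "Stratford"],
    "Bank":      ["Waterloo", "MileEnd"],
    "Stratford": ["Bank"],
    "Waterloo":  ["Wimbledon", "Bank"],
    "Wimbledon": [],  # journey ends here
}

def arrival_probability(start, target, max_hops):
    """Sum the probabilities of every route from start that reaches target
    within max_hops hops, weighting each hop by 1/(number of choices)."""
    total = Fraction(0)

    def walk(station, prob, hops):
        nonlocal total
        if station == target:
            total += prob          # one complete route contributes here
            return
        if hops == max_hops or not links[station]:
            return                 # route abandoned or dead end
        choice_weight = Fraction(1, len(links[station]))
        for nxt in links[station]:
            walk(nxt, prob * choice_weight, hops + 1)

    walk(start, Fraction(1), 0)
    return total

print(arrival_probability("MileEnd", "Wimbledon", max_hops=6))
```

Even on this tiny network the number of routes grows quickly with the hop limit, which hints at why the analogous particle-physics calculations become so demanding.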

The Large Hadron Collider (LHC) at CERN near Geneva, the premier discovery machine of our age, smashes together protons traveling at nearly the speed of light to study the debris from their collisions. Building the collider and its detectors pushed technology to its limits. Interpreting what the detectors see is an equally great, if less visible, challenge. At first glance, that seems rather strange. The Standard Model of elementary particles is well established, and theorists routinely apply it to predict the outcomes of experiments. To do so, we rely on a calculational technique developed more than 60 years ago by the renowned physicist Richard Feynman. Every particle physicist learns Feynman’s technique in graduate school. Every book and magazine article about particle physics for the public is based on Feynman’s concepts.
