I am standing in front of a gigantic touch screen in a garagelike laboratory at Google’s facility in Goleta, Calif., using my finger to move little squares containing symbols—an X, a Y, an H and other, more arcane glyphs—across the display. The squares represent functions that can be performed on a quantum bit—a qubit—inside a large, silvery cylinder nearby. Of the myriad functions on offer, some cause the bit to flip from 1 to 0 (or from 0 to 1); one makes it rotate around an axis.

Another square on the display reveals the state of the qubit, represented by what looks like a lollipop moving around inside a sphere, its stick anchored in the center. As it moves, numbers beside it oscillate between 1.0000 and 0.0000. This is one of the strengths of qubits: they do not have to be the all-or-nothing 1 or 0 of binary bits but can occupy states in between. This quality of “superposition” allows each qubit to perform more than one calculation at a time, speeding up computation in a manner that seems almost miraculous. Although the final readout from a qubit is always a 1 or a 0, the intermediate states it passes through along the way can make the same calculation difficult or impossible for a classical computer to reproduce. To the uninitiated, this process may appear a bit like magic—a wave of the hands, a tap of a touch screen and, presto, a rabbit is pulled from a quantum hat. Google has invited me here—along with a select group of other journalists—to pull back the curtain on this wizardry, to prove it is not magical at all.
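The picture above—a state “lollipop” inside a sphere, nudged by gates such as X and H—can be sketched in a few lines of linear algebra. What follows is a toy illustration in Python with NumPy, not Google’s actual control software; only the gate names X and H come from the description above, and everything else is an assumption for illustration.

```python
import numpy as np

# A qubit's state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Reading the qubit out yields 0 with
# probability |alpha|^2 and 1 with probability |beta|^2.
ket0 = np.array([1, 0], dtype=complex)  # the definite-0 state

# Two of the gates mentioned above, written as 2x2 unitary matrices.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)                 # bit flip: 0 <-> 1
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # creates superposition

def readout_probabilities(state):
    """Return (P of reading 0, P of reading 1) for a qubit state vector."""
    return np.abs(state) ** 2

flipped = X @ ket0      # X turns a definite 0 into a definite 1
superposed = H @ ket0   # H puts the qubit "between" 0 and 1
```

Here `readout_probabilities(flipped)` is (0, 1)—a certain 1—while `readout_probabilities(superposed)` is (0.5, 0.5): the qubit occupies both values at once, yet any single readout still collapses to a plain 1 or 0, exactly as described above.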
