In today’s data-hungry world, photons beat electrons in their ability to carry information from place to place. A single optical fiber can simultaneously carry signals at several wavelengths, and those signals can propagate for long distances with far lower losses than are suffered by electrical signals in a wire. The result: more data transmitted in less space and with less energy.
Could light be used not just for data transmission but also for data processing? Many researchers think it’s possible. Photonic computers would differ greatly from conventional electronic ones: Instead of passing binary digits among networks of transistors, they would perform computations by manipulating the continuous-valued amplitudes of tiny beams of light. That approach would be hard to adapt into a general computing platform, but it could be well suited to some specialized applications, including the neural networks used in many artificial intelligence (AI) models. With the large and growing energy demand of AI applications (see Physics Today, April 2024, page 28), the promise of compact, low-power optical computers holds strong appeal.
Unfortunately, one of light’s big advantages for data transmission—the fact that photons don’t interact with one another—is an enormous disadvantage for computing. It’s easy to compute linear functions of optical signals, such as superposing two beams to add them together or passing a beam through a filter to multiply it by a constant number. Nonlinear functions, such as multiplying several signals together or raising them to a power, are much harder, but they’re necessary for neural-network computing. Although some materials have optical responses that are nonlinear functions of the light intensity, exploiting those responses requires impractically high optical power.
But nonlinear computation can be done with low-power light and linear optics, as has now been shown by three groups: Clara Wanjura and Florian Marquardt of the Max Planck Institute for the Science of Light in Erlangen, Germany; Demetri Psaltis, Christophe Moser, and colleagues at the Swiss Federal Institute of Technology in Lausanne; and Sylvain Gigan of the École Normale Supérieure in Paris, Hui Cao of Yale University, and colleagues. Although the groups’ approaches differ, their common innovation is in how the input data (such as an image to be analyzed) are encoded: not in the light signals themselves but in the system parameters, such as the positions of an array of tiny mirrors. Computing a nonlinear function of the data could then be as easy as bouncing a light beam off the mirror array twice.
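The essence of that trick can be sketched in a few lines of numerical code. In this illustrative toy model (not any group’s actual implementation), the data value x is encoded in the parameters of a linear-optical element—its transfer matrix depends linearly on x—while the probe beam stays fixed. One pass through the element is linear in x, but two passes produce a term proportional to x squared: a nonlinear function of the data, computed with purely linear optics. The matrices A and B and the probe beam here are arbitrary stand-ins for the system’s fixed and data-dependent parts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed and data-dependent parts of the optical element (arbitrary
# stand-ins for, say, a configuration of tiny mirrors).
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
E_in = rng.standard_normal(4)  # fixed probe beam

def transfer_matrix(x):
    """Linear-optical element whose parameters encode the data x."""
    return A + x * B  # depends linearly on the data

def double_pass(x):
    """Send the probe beam through the same element twice."""
    M = transfer_matrix(x)
    return M @ (M @ E_in)  # output ~ A^2 + x(AB + BA) + x^2 B^2

# A second finite difference isolates the quadratic (nonlinear) term:
# for a single pass it would vanish; for the double pass it doesn't.
y0, y1, y2 = double_pass(0.0), double_pass(1.0), double_pass(2.0)
curvature = y2 - 2 * y1 + y0
print(np.allclose(curvature, 0))  # False: output is nonlinear in x
```

Each pass is an ordinary linear-optics operation, yet because the data enter through the matrix rather than the beam, repeating the pass compounds the data’s effect nonlinearly.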