Artificial intelligence (AI) tasks have become increasingly abundant and complex, fueled by large-scale datasets. With Moore's law plateauing and Dennard scaling at an end, energy consumption has become a major barrier to wider deployment of today's heavy electronic deep neural models, especially in terminal/edge systems.

The community is seeking next-generation computing modalities to break through the physical constraints of electronics-based implementations of artificial neural networks (ANNs).

Photonic computing has emerged as a promising avenue for overcoming the inherent limitations of electronics, improving energy efficiency, processing speed and computational throughput by orders of magnitude.

Such extraordinary properties have been exploited to construct application-specific optical architectures for solving fundamental mathematical and signal-processing problems, with performance far beyond that of existing electronic processors.

Unfortunately, existing optical neural networks (ONNs) suffer from "catastrophic forgetting" and still struggle even with simple, single tasks. The main reason is that they inherit a widespread problem of conventional computing systems: training on a new task interferes with formerly learned knowledge, so the expertise gained from previously learned tasks is rapidly forgotten when the network is trained on something new.
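To see what catastrophic forgetting looks like in practice, here is a minimal, illustrative sketch (not taken from the paper): a tiny linear model is trained with plain gradient descent on task A, then on task B, and its task-A error is measured before and after. All data, dimensions and learning rates are arbitrary assumptions chosen purely for demonstration.

```python
# Minimal, self-contained illustration of catastrophic forgetting.
# All task data, sizes and hyperparameters are arbitrary assumptions,
# not values from the L2ONN paper.
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    X = rng.normal(size=(200, 4))
    y = X @ true_w
    return X, y

def train(w, X, y, lr=0.05, steps=200):
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)   # least-squares gradient
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Two tasks with different underlying mappings.
Xa, ya = make_task(np.array([1.0, -2.0, 0.5, 3.0]))
Xb, yb = make_task(np.array([-1.5, 0.3, 2.0, -0.7]))

w = np.zeros(4)
w = train(w, Xa, ya)
print("task A error after learning A:", mse(w, Xa, ya))   # close to zero

w = train(w, Xb, yb)                                       # continue on task B
print("task A error after learning B:", mse(w, Xa, ya))   # large: task A is forgotten
```

Because the single set of weights is simply overwritten while fitting task B, performance on task A collapses even though nothing about task A itself has changed.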

Such an approach fails to fully exploit the intrinsic sparsity and parallelism of wave optics for photonic computing, which ultimately results in poor network capacity and scalability for multi-task learning.
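One common way sparsity can be turned into multi-task capacity, in electronic ANNs and in principle in photonic ones, is to allocate a disjoint sparse subset of a shared weight set to each task and freeze it afterwards, so later tasks cannot overwrite it. The sketch below only illustrates that generic mask-based idea; it is not the mechanism described in the L2ONN paper, and all names, sizes and sparsity levels are assumptions.

```python
# Illustrative sketch of sparsity-based multi-task capacity:
# each task trains only a disjoint, randomly chosen sparse subset of a
# shared weight matrix, so earlier tasks' weights are never overwritten.
# Generic mask scheme for illustration only, not the L2ONN mechanism.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, density = 16, 8, 0.25           # assumed sizes / sparsity level

W = np.zeros((n_in, n_out))                  # shared weights
free = np.ones_like(W, dtype=bool)           # positions not yet claimed by any task

def allocate_mask(free, density, rng):
    """Claim a random sparse subset of the still-free weight positions."""
    mask = np.zeros(free.shape, dtype=bool)
    idx = np.flatnonzero(free)
    take = rng.choice(idx, size=int(density * free.size), replace=False)
    mask.flat[take] = True
    return mask

def train_task(W, mask, X, Y, lr=0.1, steps=300):
    """Gradient descent on a linear map, updating only the masked weights."""
    for _ in range(steps):
        grad = X.T @ (X @ W - Y) / len(X)
        W = W - lr * (grad * mask)            # frozen weights receive no update
    return W

for task in range(3):                         # three sequential tasks
    X = rng.normal(size=(128, n_in))
    Y = rng.normal(size=(128, n_out))         # stand-in task targets
    m = allocate_mask(free, density, rng)
    free &= ~m                                # reserve these positions for this task
    snapshot = W.copy()
    W = train_task(W, m, X, Y)
    # weights outside this task's mask are untouched, so old tasks are preserved
    assert np.allclose(W[~m], snapshot[~m])
    print(f"task {task}: trained {m.sum()} of {m.size} weights")
```

Because each new task only touches its own sparse slice of the shared weights, earlier tasks' parameters remain intact, which is the basic intuition behind using sparsity to grow multi-task capacity.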

In a recent paper published in Light: Science & Applications, a team of scientists led by Professor Lu Fang from Sigma Laboratory, Department of Electronic Engineering, Tsinghua University, Beijing, China, has developed L2ONN, a reconfigurable photonic computing architecture for lifelong learning.
