When making an image with light waves or sound waves, you can’t capture details smaller than the wavelength unless you use some clever tricks. These tricks often require great expense and computational power, but a new technique using machine learning is much simpler and better suited to situations where modifying the object to be imaged is not possible [1]. The method captures details as small as 1/30 of the wavelength employed and should be useful in areas such as biomedical imaging and nondestructive materials testing.
The conventional limitation on resolution for a given wavelength is called the diffraction limit, and it arises because of an important difference between the so-called near-field and far-field waves either emitted by or reflected from an object. The far-field waves, which travel long distances, contain information about features larger than a wavelength. The near-field waves carry information on finer scales but die away before traveling very far and therefore don’t reach lenses that might gather the waves to produce an image.
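The exponential decay of the near-field (evanescent) waves can be made concrete with a short numerical sketch. This is not code from the paper, just a standard textbook calculation: a plane-wave component with transverse spatial frequency kx propagates along z as exp(i·kz·z) with kz = √(k0² − kx²), so components encoding features finer than a wavelength (kx > k0) have imaginary kz and decay instead of propagating.

```python
import math

wavelength = 1.0               # work in units of the wavelength
k0 = 2 * math.pi / wavelength  # free-space wavenumber

def amplitude_after(feature_size, distance):
    """Relative amplitude of the wave component that encodes a feature
    of the given size, after traveling the given distance from the object."""
    kx = 2 * math.pi / feature_size
    if kx <= k0:
        # Feature larger than a wavelength: the component propagates freely.
        return 1.0
    # Feature smaller than a wavelength: kz is imaginary, and the
    # component decays as exp(-kappa * z) -- an evanescent wave.
    kappa = math.sqrt(kx**2 - k0**2)
    return math.exp(-kappa * distance)

# A feature 1/30 of a wavelength across is effectively erased after
# traveling just one wavelength away from the object, which is why a
# distant lens cannot record it.
print(amplitude_after(wavelength / 30, wavelength))  # effectively zero
print(amplitude_after(2 * wavelength, wavelength))   # survives unattenuated
```

Running this shows the component carrying λ/30-scale detail falls by dozens of orders of magnitude within a single wavelength of travel, while components describing features larger than a wavelength reach the far field undiminished.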
To overcome this problem, researchers in the past have placed some secondary objects near the object to be imaged. These secondary objects have the correct size and shape to serve as resonant cavities that amplify the near-field waves, re-radiating the information on the object’s finer details in the form of nondecaying waves. However, the creation of a final image from this re-radiated information has required sophisticated technology and intensive processing, and such methods acquire images only slowly. Other methods require less processing but need radiating elements to be placed inside the object to be imaged [2] and so don’t allow for a noninvasive technique that would be useful for medical imaging. Romain Fleury and Bakhtiyar Orazbayev of the Swiss Federal Institute of Technology in Lausanne (EPFL) have now developed an imaging system that uses external, secondary resonators but needs far less computing power than previous methods.