Researchers from UCLA and the United States Army Research Laboratory have laid out a new approach to enhance artificial intelligence-powered computer vision technologies by adding physics-based awareness to data-driven techniques.
Published in Nature Machine Intelligence, the study offers an overview of a hybrid methodology designed to improve how AI-based machines sense, interact with and respond to their environments in real time, as when autonomous vehicles move and maneuver, or when robots use the improved technology to carry out precision actions.
Computer vision allows AIs to see and make sense of their surroundings by decoding data and inferring properties of the physical world from images. Although those images are formed through the physics of light and mechanics, traditional computer vision techniques have relied predominantly on data-driven machine learning to achieve performance. On a separate track, physics-based research has explored the physical principles underlying many computer vision challenges.
It has been a challenge to incorporate an understanding of physics, the laws that govern mass, motion and more, into the development of neural networks. These AIs, modeled after the human brain, use billions of nodes to crunch massive image data sets until they gain an understanding of what they "see." But there are now a few promising lines of research that seek to add elements of physics awareness to already robust data-driven networks.
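One generic way such hybrid approaches can work, sketched below purely as an illustration and not as the paper's actual method, is to add a penalty for violating a known physical law to a network's ordinary data-fitting loss. The function name, the `weight` trade-off factor and the toy numbers are all assumptions chosen for the example:

```python
def hybrid_loss(pred, target, physics_residual, weight=0.1):
    """Combine a data-fitting loss with a physics-consistency penalty.

    pred, target: model predictions and observed labels.
    physics_residual: how far each prediction violates a known physical
        constraint (e.g. a conservation law); zero means fully consistent.
    weight: hypothetical factor trading off the two terms.
    """
    # Purely data-driven term: mean squared error against the labels.
    data_loss = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    # Physics-awareness term: penalize violations of the physical law.
    physics_loss = sum(r ** 2 for r in physics_residual) / len(physics_residual)
    return data_loss + weight * physics_loss

# Toy usage: predictions that fit the data perfectly but break the
# physical constraint at one point still incur a nonzero loss.
pred = [1.0, 2.0, 3.0]
target = [1.0, 2.0, 3.0]
residual = [0.0, 0.5, 0.0]
print(hybrid_loss(pred, target, residual))
```

During training, minimizing this combined loss pushes the network toward outputs that both match the data and obey the physics, which is the basic intuition behind physics-informed learning.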