In the rapidly evolving world of large-scale computing, it was only a matter of time before a game-changing achievement shook up the field of 3D visualization.

Adobe Research and the Australian National University (ANU) have announced the first artificial intelligence capable of generating a 3D model from a single 2D image.

In a development that could transform 3D model creation, researchers say their new algorithm, trained on massive collections of images, can generate such 3D models in a matter of seconds.

Yicong Hong, an Adobe intern and former graduate student in the College of Engineering, Computing, and Cybernetics at ANU, said their large reconstruction model (LRM) is built on a highly scalable neural network with 500 million parameters, trained on roughly one million objects drawn from datasets of images, 3D shapes, and videos.

"This combination of a high-capacity model and large-scale training data empowers our model to be highly generalizable and produce high-quality 3D reconstructions from various testing inputs," Hong, the lead author of a report on the project, said.

"To the best of our knowledge, [our] LRM is the first large-scale 3D reconstruction model."
