A team of researchers at San Francisco-based OpenAI has announced the development of a machine-learning system that can create 3D images from text much more quickly than other systems. The group has published a paper describing the new system, called Point-E, on the arXiv preprint server.

Over the past year, several groups have announced products or systems that can generate a 3D-modeled image based on a text prompt, e.g., "a blue chair on a red floor" or "a young boy wearing a green hat and riding a purple bicycle." Such systems generally have two parts: the first reads the text and tries to make sense of it; the second, trained on large collections of images gathered from the internet, renders the desired image.
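According to the paper, Point-E follows a similar two-stage design: a text-conditioned model first produces a synthetic 2D view of the object, and a second model then turns that image into a 3D point cloud. The sketch below is only a rough illustration of that data flow; the classes and functions in it are hypothetical placeholders, not OpenAI's actual Point-E code.

```python
# Rough sketch of the two-stage text-to-3D data flow described above.
# All names here (text_to_image, image_to_point_cloud, PointCloud) are
# illustrative stand-ins, not the real Point-E API.

from dataclasses import dataclass
import numpy as np


@dataclass
class PointCloud:
    """A set of 3D points with per-point RGB colors."""
    coords: np.ndarray  # shape (N, 3)
    colors: np.ndarray  # shape (N, 3), values in [0, 1]


def text_to_image(prompt: str, size: int = 64) -> np.ndarray:
    """Stage 1 (placeholder): a text-conditioned image generator would go here.
    Returns a dummy RGB image so the sketch runs end to end."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.random((size, size, 3))


def image_to_point_cloud(image: np.ndarray, num_points: int = 1024) -> PointCloud:
    """Stage 2 (placeholder): an image-conditioned point-cloud generator would go here.
    Returns random points colored by sampling pixels of the input image."""
    rng = np.random.default_rng(0)
    coords = rng.normal(size=(num_points, 3))
    ys = rng.integers(0, image.shape[0], num_points)
    xs = rng.integers(0, image.shape[1], num_points)
    return PointCloud(coords=coords, colors=image[ys, xs])


if __name__ == "__main__":
    prompt = "a blue chair on a red floor"
    image = text_to_image(prompt)        # stage 1: text -> synthetic view
    cloud = image_to_point_cloud(image)  # stage 2: view -> 3D point cloud
    print(f"'{prompt}' -> point cloud with {len(cloud.coords)} points")
```

Keeping the 3D output as a point cloud rather than a full mesh is part of what makes the approach fast, at the cost of some visual fidelity.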

Because of the complexity of the task, these systems can take anywhere from hours to days to return a model. In this new effort, the researchers built a similar system that returns results within minutes, though they readily acknowledge that the results "fall short of the state-of-the-art in terms of sample quality."
