Generative artificial intelligence (AI) has notoriously struggled to create consistent images, often getting details like fingers and facial symmetry wrong. Moreover, these models can fail outright when prompted to generate images at different sizes and resolutions.
Rice University computer scientists' new method of generating images with pre-trained diffusion models could help correct such issues. Diffusion models are a class of generative AI models that "learn" by adding layer after layer of random noise to the images they are trained on; they then generate new images by reversing the process and removing the added noise.
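To make the noising-and-denoising idea concrete, here is a minimal sketch of the standard forward "noising" step used to train diffusion models. It uses generic textbook notation (a linear beta schedule, cumulative alpha products) rather than anything from the ElasticDiffusion paper itself, and the array sizes are arbitrary placeholders:

```python
# Illustrative sketch of the forward diffusion process (not ElasticDiffusion
# itself): a clean image is progressively blended with Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)

# A stand-in for a training image: an 8x8 grayscale array in [0, 1].
x0 = rng.random((8, 8))

# Linear noise schedule over T steps (a common textbook choice).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def add_noise(x0, t, rng):
    """Forward process at step t:

        x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps

    By the final step, x_t is almost pure noise.
    """
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

slightly_noisy = add_noise(x0, t=50, rng=rng)     # image still mostly visible
mostly_noise = add_noise(x0, t=T - 1, rng=rng)    # essentially random noise

# Training teaches a neural network to predict eps from x_t; generation
# then runs the process in reverse, removing predicted noise step by step.
```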
Moayed Haji Ali, a Rice University computer science doctoral student, described the new approach, called ElasticDiffusion, in a peer-reviewed paper presented at the Institute of Electrical and Electronics Engineers (IEEE) 2024 Conference on Computer Vision and Pattern Recognition (CVPR) in Seattle.