This June, in the political battle leading up to the 2024 US presidential primaries, a series of images was released showing Donald Trump embracing one of his former medical advisers, Anthony Fauci. In a few of the shots, Trump is captured awkwardly kissing the face of Fauci, a health official reviled by some US conservatives for promoting masking and vaccines during the COVID-19 pandemic.

“It was obvious” that they were fakes, says Hany Farid, a computer scientist at the University of California, Berkeley, and one of many specialists who examined the pictures. On close inspection of three of the photos, Trump’s hair is strangely blurred, the text in the background is nonsensical, the arms and hands are unnaturally placed and the details of Trump’s visible ear are not right. All are hallmarks, for now, of generative artificial intelligence (AI), whose output is often called synthetic media.

Such deepfake images and videos, made by text-to-image generators powered by ‘deep learning’ AI, are now rife. Although fraudsters have long used deception to make a profit, sway opinions or start a war, the speed and ease with which huge volumes of viscerally convincing fakes can now be created and spread, paired with a lack of public awareness, make them a growing threat. “People are not used to generative technology. It’s not like it evolved gradually; it was like ‘boom’, all of a sudden it’s here. So, you don’t have that level of scepticism that you would need,” says Cynthia Rudin, an AI computer scientist at Duke University in Durham, North Carolina.
