Think that AI will help put a stop to fake news? The US military isn’t so sure.
The Department of Defense is funding a project that will try to determine whether the increasingly real-looking fake video and audio generated by artificial intelligence might soon be impossible to distinguish from the real thing—even for another AI system.
This summer, under a project funded by the Defense Advanced Research Projects Agency (DARPA), the world’s leading digital forensics experts will gather for an AI fakery contest. They will compete to produce the most convincing AI-generated fake video, imagery, and audio, and they will also try to develop tools that can catch these counterfeits automatically.
The contest will include so-called “deepfakes,” videos in which one person’s face is stitched onto another person’s body. Rather predictably, the technology has already been used to generate a number of counterfeit celebrity porn videos. But the method could also be used to create a clip of a politician saying or doing something outrageous.
DARPA’s technologists are especially concerned about a relatively new AI technique that could make such fakery almost impossible to spot automatically. Using what are known as generative adversarial networks, or GANs, it is possible to generate stunningly realistic artificial imagery. A GAN pits a generator, which produces fakes, against a discriminator, which tries to tell fakes from real examples. Because the generator is trained specifically to defeat a detector, any automated detection method risks becoming just the next thing a GAN learns to beat.
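To make that adversarial setup concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is purely illustrative: the “real” data is a toy one-dimensional Gaussian standing in for genuine media, and the network sizes, learning rates, and other hyperparameters are assumptions for the example, not anything drawn from DARPA’s program.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator whose
# only job is to separate real samples from fakes. Toy 1-D data; all
# hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise fed to the generator (assumed)

# Generator: random noise -> fake sample
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: sample -> estimated probability that it is real
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples from N(3, 0.5), standing in for genuine media
    real = 3.0 + 0.5 * torch.randn(64, 1)
    fake = G(torch.randn(64, latent_dim))

    # Train the discriminator: label real samples 1, fakes 0
    opt_D.zero_grad()
    loss_D = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_D.backward()
    opt_D.step()

    # Train the generator: push the discriminator to call fakes real
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(64, 1))
    loss_G.backward()
    opt_G.step()
```

The discriminator here is, in miniature, exactly an automated fake detector, and the generator’s loss is defined as fooling it. That circularity is the source of DARPA’s worry: improvements in detection can, in principle, be folded straight back into training better generators.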