Public opinion shifts, skewed election results, mass confusion, ethnic violence, war. All of these events could easily be triggered by deepfakes: realistic-seeming but falsified audio and video made with AI techniques. Leaders in government and industry, and the public at large, are justifiably alarmed. Fueled by advances in AI and spread through the tentacles of social media, deepfakes may prove to be among the most destabilizing forces humankind has faced in generations.
It will soon be impossible to tell by the naked eye or ear whether a video or audio clip is authentic. While propaganda is nothing new, the visceral immediacy of voice and image gives deepfakes unprecedented impact and authority; as a result, both governments and industry are scrambling to develop ways to reliably detect them. Silicon Valley startup Amber, for example, is working on ways to detect even the most sophisticated altered video. You can imagine a day when we can verify the authenticity and provenance of a video by way of a digital watermark.
Developing deepfake detection technology is important, but it's only part of the solution. It is the human factor, weaknesses in our psychology rather than the fakes' technical sophistication, that makes deepfakes so effective. New research hints at how foundational the problem is.