There’s a popular sci-fi cliché that one day artificial intelligence goes rogue and kills every human, wiping out the species. Could this truly happen? In real-world surveys, AI researchers say they see human extinction as a plausible outcome of AI development. In 2023 hundreds of those researchers signed a statement that read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Okay, guys.
Pandemics and nuclear war are real, tangible concerns, more so than AI doom, at least to me, a scientist at the RAND Corporation. We do all kinds of research on national security issues and are perhaps best known for our role in developing strategies to prevent nuclear catastrophe during the Cold War. RAND takes big threats to humanity seriously, so, skeptical of AI’s potential to cause human extinction, I proposed a project to research whether it could.
My team’s hypothesis was this: No scenario can be described in which AI conclusively poses an extinction threat to humanity. In other words, our starting hypothesis was that humans were too adaptable, too plentiful and too dispersed across the planet for AI to wipe us out with any tools hypothetically at its disposal. If we could prove this hypothesis wrong, it would mean that AI might pose a real extinction threat to humanity.