Philosopher Nick Bostrom believes it's entirely possible that artificial intelligence (AI) could lead to the extinction of Homo sapiens. In his 2014 bestseller Superintelligence: Paths, Dangers, Strategies, Bostrom paints a dark scenario in which researchers create a machine capable of steadily improving itself. At some point, it learns to make money from online transactions and begins purchasing goods and services in the real world. Using mail-ordered DNA, it builds simple nanosystems that in turn create more complex systems, giving it ever more power to shape the world.

Now suppose the AI suspects that humans might interfere with its plans, writes Bostrom, who's at the University of Oxford in the United Kingdom. It could decide to build tiny weapons and distribute them around the world covertly. "At a pre-set time, nanofactories producing nerve gas or target-seeking mosquito-like robots might then burgeon forth simultaneously from every square meter of the globe."

For Bostrom and a number of other scientists and philosophers, such scenarios are more than science fiction. They're studying which technological advances pose "existential risks" that could wipe out humanity or at least end civilization as we know it—and what could be done to stop them. "Think of what we're trying to do as providing a scientific red team for the things that could threaten our species," says philosopher Huw Price, who heads the Centre for the Study of Existential Risk (CSER) here at the University of Cambridge.