Does artificial intelligence represent an “existential threat” to humanity? Some very smart people think so: Elon Musk, Stephen Hawking, Bill Gates, Sam Altman, and particularly Oxford professor Nick Bostrom, who wrote the book Superintelligence that got Musk so tweetably exercised. Technologists are supposed to be rationalists, yet Musk waxed supernatural about the threat of renegade AI. “With artificial intelligence we are summoning the demon,” he told an audience at MIT last October.

Demons are not quite what Bostrom has in mind. He is thinking more about risks and probabilities. In a 2002 paper he defined an existential risk as “one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” From his perspective, such risks are worth mitigating even when their probability is very low, because the potential costs are so enormous. The causes of such a catastrophe could be deliberate or accidental, and could come from many realms, including biotechnology and nanotechnology as well as artificial intelligence.
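To see why that cost-benefit logic works, consider a minimal expected-value sketch. The numbers below are purely illustrative assumptions of mine, not figures from Bostrom’s paper: even a one-in-a-million chance, multiplied by an astronomically large loss, produces an expected loss that dwarfs most ordinary risks.

```latex
% Illustrative numbers only; not taken from Bostrom (2002).
% Expected loss = probability of catastrophe times the value at stake.
\[
\mathbb{E}[\text{loss}] = p \cdot C
\]
% Suppose p = 10^{-6} per century and C = 10^{16} potential future lives:
\[
\mathbb{E}[\text{loss}] = 10^{-6} \times 10^{16} = 10^{10}\ \text{lives}
\]
% An expected loss of ten billion lives, even at a one-in-a-million
% probability, is why Bostrom argues such risks merit mitigation.
```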
