“A group of industry leaders warned on Tuesday that the artificial intelligence technology they were building might one day pose an existential threat to humanity and should be considered a societal risk on a par with pandemics and nuclear wars.” —the Times.
Dear Fellow-Humans,
We write this letter as humbly as a collection of overeducated and overcompensated executives can, in the hope that you will hear our cries and do something about A.I. before it’s too late. Humanity is in grave danger of becoming extinct, and we—the world’s most prominent A.I. researchers and executives who got us into this mess—are (wait for it) writing a letter. That’s right. We are writing this very letter in order to: 1) show you how serious we are about writing letters, and 2) demonstrate how skilled we are at letter-writing. Also, this letter serves as a warning that the human race will definitely be wiped out because of the humanlike A.I. systems that the undersigned are responsible for developing and releasing into the world. But, again, rather than do anything about this existential threat to human life, we wrote a letter. (This one.)