War is a grave responsibility for world leaders, but what if the decision to start a conflict were transferred, albeit unintentionally, to a non-human domain?
In an unremarkable room somewhere in an unidentified city, a machine has cracked the protein-folding problem.
Within seconds it has emailed sets of DNA strings to several laboratories that offer DNA synthesis, peptide sequencing and FedEx delivery. The superintelligent AI soon persuades a gullible human to mix the resulting vials in a specified environment. The proteins then form a primitive 'wet' nanosystem, able to receive instructions from a speaker attached to its beaker. This nanosystem builds ever more advanced versions of itself until it finally masters molecular nanotechnology. Not long after, billions of microscopic self-replicating nanobots silently fill the world, patiently awaiting their master's instruction to emerge and destroy humanity.
It sounds like the plot of a science-fiction story, but it might not be as far-fetched as it seems. In fact, this AI-takeover scenario wasn't concocted by a novelist or fantasist but put forward in a 2008 scientific paper by AI researcher Eliezer Yudkowsky. Other prominent thinkers, including Stephen Hawking, Elon Musk and Sam Harris, have expressed concerns about where unchecked AI research could lead. Musk went so far as to say it would be a far likelier cause of World War Three than North Korea.