Elon Musk’s new plan to go all-in on self-driving vehicles puts a lot of faith in the artificial intelligence needed to ensure his Teslas can read and react to different driving situations in real time. AI is doing some impressive things: last week, for example, DeepMind, the Google-owned maker of the AlphaGo program, reported that its software had learned to navigate the intricate London Underground like a native. Even the White House has jumped on the bandwagon, releasing a report days ago to help prepare the U.S. for a future when machines can think like humans.
But AI has a long way to go before people can or should worry about turning the world over to machines, says Oren Etzioni, a computer scientist who has spent the past few decades studying and trying to solve fundamental problems in AI. Etzioni is currently the chief executive officer of the Allen Institute for Artificial Intelligence (AI2), an organization that Microsoft co-founder Paul Allen formed in 2014 to focus on AI’s potential benefits—and to counter messages perpetuated by Hollywood and even other researchers that AI could menace the human race.
AI2’s own projects may not be very flashy (they include Semantic Scholar, an AI-based search engine for academic research), but they do address areas of AI, such as reasoning, that could move the technology beyond what Etzioni calls “narrow savants that can do one thing super well.”
Scientific American spoke with Etzioni at a recent AI conference in New York City, where he voiced his concerns about companies overselling the technology’s current capabilities, in particular a machine-learning technique known as deep learning. Deep learning feeds large data sets through layered artificial neural networks, software loosely modeled on the brain’s web of neurons, to teach computers to solve specific problems on their own, such as recognizing patterns or identifying a particular object in a photograph. Etzioni also offered his thoughts on why a 10-year-old is smarter than Google DeepMind’s AlphaGo program and on the need to eventually develop artificially intelligent “guardian” programs that can keep other AI programs from becoming dangerous.
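To make the idea concrete, the sketch below shows deep learning’s core loop in miniature: a tiny neural network, written in Python with NumPy, repeatedly adjusts its internal weights while passing a toy data set through its layers until it recognizes a simple pattern (the XOR function). The network size, learning rate, and data here are illustrative assumptions only; systems such as AlphaGo use vastly larger networks, data sets, and specialized hardware.

```python
# Minimal sketch of the deep-learning loop described above:
# a tiny two-layer neural network learns the XOR pattern.
import numpy as np

rng = np.random.default_rng(0)

# Toy "data set": the four XOR input patterns and their labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: the "network," loosely inspired by neurons.
W1 = rng.normal(size=(2, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 1))   # hidden layer -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: run the data through the network.
    h = sigmoid(X @ W1)        # hidden-layer activations
    out = sigmoid(h @ W2)      # the network's current guesses

    # Backward pass: nudge every weight to shrink the error.
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

After training, the network’s outputs approach the correct labels. The same adjust-the-weights-to-shrink-the-error loop, scaled up enormously, underlies the systems that identify objects in photographs or play Go.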
[An edited transcript of the interview follows.]