David Bau is very familiar with the idea that computer systems are becoming so complicated it’s hard to keep track of how they operate. “I spent 20 years as a software engineer, working on really complex systems. And there’s always this problem,” says Bau, a computer scientist at Northeastern University in Boston, Massachusetts.
But with conventional software, someone with inside knowledge can usually deduce what’s going on, Bau says. If a website’s ranking drops in a Google search, for example, someone at Google — where Bau worked for a dozen years — will have a good idea why. “Here’s what really terrifies me” about the current breed of artificial intelligence (AI), he says: “there is no such understanding”, even among the people building it.
The latest wave of AI relies heavily on machine learning, in which software identifies patterns in data on its own, without being given any predetermined rules for how to organize or classify the information. These patterns can be inscrutable to humans. The most advanced machine-learning systems use neural networks: software inspired by the architecture of the brain. They simulate layers of neurons, which transform information as it passes from layer to layer. As in human brains, these networks strengthen and weaken neural connections as they learn, but it’s hard to see why certain connections are affected. As a result, researchers often describe these AI systems as ‘black boxes’, the inner workings of which are a mystery.
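To make that mechanism concrete, here is a minimal sketch of a two-layer neural network with a hand-rolled learning step, so the “strengthening and weakening” of connections is visible as arithmetic. Every detail (the layer sizes, the tanh activation, the learning rate, the variable names) is an illustrative assumption for this example, not a description of any system discussed above.

```python
# Toy two-layer neural network: information passes through layers of
# simulated neurons, and each learning step nudges the connection
# weights up or down. Purely illustrative, not a production system.
import numpy as np

rng = np.random.default_rng(0)

# Connection weights between layers, initialized randomly.
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 1))   # hidden layer -> output

def forward(x):
    """Pass information layer by layer, as the text describes."""
    hidden = np.tanh(x @ W1)   # first layer transforms the input
    output = hidden @ W2       # second layer produces a prediction
    return hidden, output

def train_step(x, target, lr=0.1):
    """One update: compare prediction with a target, then strengthen
    or weaken each connection to reduce the error (gradient descent)."""
    global W1, W2
    hidden, output = forward(x)
    error = output - target
    grad_W2 = hidden.T @ error                       # which way to nudge W2
    grad_hidden = (error @ W2.T) * (1 - hidden**2)   # tanh derivative
    grad_W1 = x.T @ grad_hidden                      # which way to nudge W1
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return float((error**2).mean())

x = rng.normal(size=(1, 4))    # one made-up input
target = np.array([[1.0]])     # one made-up target
for step in range(100):
    loss = train_step(x, target)
print(f"final squared error: {loss:.6f}")
```

Even in this toy, the trained values in W1 and W2 carry no human-readable explanation of why they ended up where they did; scaled to billions of connections, that opacity is the black-box problem the researchers describe.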