Imagine that your neighbor calls to ask a favor: Could you please feed their pet rabbit some carrot slices? Easy enough, you’d think. You can imagine their kitchen, even if you’ve never been there — carrots in a fridge, a drawer holding various knives. It’s abstract knowledge: You don’t know what your neighbor’s carrots and knives look like exactly, but you won’t take a spoon to a cucumber.

Artificial intelligence programs can’t compete. What seems to you like an easy task is a huge undertaking for current algorithms.

An AI-trained robot can find a specified knife and carrot hiding in a familiar kitchen, but in a different kitchen it will lack the abstract skills to succeed. “They don’t generalize to new environments,” said Victor Zhong, a graduate student in computer science at the University of Washington. The machine fails because there’s simply too much to learn, and too vast a space to explore.

The problem is that these robots — and AI agents in general — don’t have a foundation of concepts to build on. They don’t know what a knife or a carrot really is, much less how to open a drawer, pick out the right knife and cut slices. This limitation stems in part from the fact that many advanced AI systems are trained with a method called reinforcement learning, which is essentially self-education through trial and error. AI agents trained with reinforcement learning can execute the job they were trained to do very well, in the environment they were trained to do it in. But change the job or the environment, and these systems will often fail.
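
The “trial and error” in reinforcement learning can be made concrete with a toy example. The sketch below is a generic illustration, not code from the researchers mentioned here; the grid size, reward and hyperparameters are invented for demonstration. A tabular Q-learning agent learns to reach a goal cell in one small grid “kitchen,” and the table of values it builds is tied to that specific layout, which is why such an agent flounders when the layout changes.

```python
# A minimal sketch of reinforcement learning as trial and error.
# Hypothetical toy example: the grid, rewards and hyperparameters are invented
# for illustration and are not taken from the work described in the article.
import random

SIZE = 4                      # a tiny 4x4 grid "kitchen"
GOAL = (3, 3)                 # cell where the carrot sits
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]
ACTION_NAMES = ["up", "down", "left", "right"]

def step(state, action):
    """Apply an action, clip to the grid, and return (next_state, reward, done)."""
    row = min(max(state[0] + action[0], 0), SIZE - 1)
    col = min(max(state[1] + action[1], 0), SIZE - 1)
    next_state = (row, col)
    done = next_state == GOAL
    return next_state, (1.0 if done else 0.0), done

# Q[state][i] is the agent's running estimate of long-term reward for action i.
Q = {(r, c): [0.0] * len(ACTIONS) for r in range(SIZE) for c in range(SIZE)}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # Trial and error: mostly exploit what has worked, occasionally explore.
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: (Q[state][i], random.random()))
        next_state, reward, done = step(state, ACTIONS[a])
        # Nudge the value estimate toward the observed outcome (Q-learning update).
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

# The table encodes a policy for this grid only; move the goal or change the
# layout and the learned values no longer apply, which is the brittleness
# described above.
best = max(range(len(ACTIONS)), key=lambda i: Q[(0, 0)][i])
print("Best first move from (0, 0):", ACTION_NAMES[best])
```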

To get around this limitation, computer scientists have begun to teach machines important concepts before setting them loose. It’s like reading a manual before using new software: You could try to explore without it, but you’ll learn far faster with it. “Humans learn through a combination of both doing and reading,” said Karthik Narasimhan, a computer scientist at Princeton University. “We want machines to do the same.”
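
One way to picture “doing plus reading” is an agent whose decisions depend on a short text manual as well as on what it currently observes. The toy sketch below is purely illustrative; the vocabulary, manual text and encodings are invented, and the learning step is omitted. It shows only the basic idea: the text and the observation are encoded separately and concatenated into a single input, so the same learner could, in principle, be pointed at a new task by handing it a new manual.

```python
# Hypothetical illustration of conditioning an agent on a text "manual"
# in addition to its observation. Vocabulary and manual are invented.
VOCAB = ["knife", "spoon", "carrot", "cucumber", "drawer", "fridge", "slice"]

def encode_manual(text):
    """Bag-of-words vector over a tiny fixed vocabulary (toy text encoding)."""
    words = text.lower().split()
    return [float(words.count(w)) for w in VOCAB]

def encode_state(objects_visible):
    """Binary vector marking which known objects the agent currently sees."""
    return [float(obj in objects_visible) for obj in VOCAB]

manual = "take a knife from the drawer and slice the carrot"
state = {"drawer", "fridge"}

# The policy's input combines "what I read" with "what I see"; swapping in a
# different manual changes the behavior without retraining from scratch.
policy_input = encode_manual(manual) + encode_state(state)
print(policy_input)
```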
