Modern robots know how to sense their environment and respond to language, but what they don't know is often more important than what they do know. Teaching robots to ask for help is key to making them safer and more efficient.

Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don't know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. Telling a robot to pick up a bowl from a table that holds only one bowl is a fairly clear command. But telling a robot to pick up a bowl when there are five bowls on the table generates a much higher degree of uncertainty—and triggers the robot to ask for clarification.
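In rough terms, the idea is that the language model scores each candidate interpretation of an instruction, and if more than one remains plausible, the robot asks rather than guesses. The Python sketch below illustrates this under stated assumptions: the names `choose_or_ask` and `option_probabilities` are hypothetical, and the fixed threshold stands in for the calibrated cutoff the researchers derive more rigorously (the paper uses conformal prediction).

```python
# Minimal sketch of "ask for help when uncertain," assuming a hypothetical
# option_probabilities callable that returns an LLM's confidence in each
# candidate interpretation of an instruction.

def choose_or_ask(instruction, candidate_actions, option_probabilities, threshold=0.8):
    """Return a single action if the model is confident, else a clarification request.

    Keeps every candidate whose score clears `threshold` (a stand-in for the
    calibrated cutoff in the paper). Exactly one survivor -> act; otherwise
    -> ask the human.
    """
    scores = option_probabilities(instruction, candidate_actions)
    prediction_set = [a for a, p in zip(candidate_actions, scores) if p >= threshold]

    if len(prediction_set) == 1:
        return ("act", prediction_set[0])
    # Ambiguous (or no confident) interpretation: hand the options back to the human.
    return ("ask", f"Which did you mean: {', '.join(prediction_set) or 'please rephrase'}?")


# Hypothetical usage: five bowls on the table make "pick up the bowl" ambiguous.
def dummy_scores(instruction, options):
    # Stand-in for real LLM token probabilities; spreads confidence evenly.
    return [1.0 / len(options)] * len(options)

print(choose_or_ask("pick up the bowl",
                    ["pick up the red bowl", "pick up the blue bowl"],
                    dummy_scores, threshold=0.4))
# -> ('ask', 'Which did you mean: pick up the red bowl, pick up the blue bowl?')
```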

The paper, "Robots That Ask for Help: Uncertainty Alignment for Large Language Model Planners," was presented Nov. 8 at the Conference on Robot Learning.
