Helping Robots Solve Ambiguous Requests

Juan David Hernandez-Vega's work on helping robots interpret human requests


If you are a robot, interpreting a human request can be tricky. Juan David Hernandez-Vega, a postdoctoral research associate in Rice CS, has proposed an approach that lets a robot resolve ambiguous requests.

Hernandez-Vega’s paper, “Lazy Evaluation of Goal Specifications Guided by Motion Planning,” was presented at the IEEE International Conference on Robotics and Automation (ICRA) in Montreal. His co-authors are Lydia Kavraki, the Noah Harding Professor of Computer Science and professor of bioengineering at Rice, and Mark Moll, senior research scientist.

“Robotics is a multi-disciplinary area. Computer science contributes algorithms for motion planning, mapping, localization and more. These are integrated with control and mechanics to have a functional robotics system,” Hernandez-Vega said.

The natural language processing community has been working for years on tools and systems that interpret human commands, including cloud-based voice services such as Alexa and Siri. In robotic systems, however, a command can easily admit multiple or ambiguous interpretations.

“In many cases, those requests can be clarified using natural language or gestures but there are a few scenarios in which even the use of natural language or gestures is not enough to clarify these commands,” he said.

The paper presents a motion-planning algorithm that lazily finds a feasible path to any of the valid groundings, or interpretations, of a task.

“For example, if I tell the robot to get one cup and put it on the table, it needs to know what ‘cup’ means and which cups are available. The robot needs to know what the table and the cup are. Grounding refers to the translation of this command into a unique interpretation,” Hernandez-Vega added.

“Juan’s work is an important first step towards a new approach to commanding robots that is focused on providing an intuitive way for non-roboticists to specify tasks without having to worry how exactly a robot should do it,” Moll said.

“Whenever there are many ways for a robot to achieve a certain goal, the robot can automatically compute a plan that is feasible and satisfies the high-level specification. It is often difficult for people to assess the feasibility of a robot action (for example, whether an object on a cluttered shelf can be picked up by a robot), so it makes perfect sense for a robot to solve this problem instead,” he said.

The main contribution of the paper, Hernandez-Vega said, is the delayed, or lazy, variable grounding. Instead of trying to settle on one unique grounding up front, the algorithm provides the motion planner with all the available options, so the planner can select one that is feasible for the robot.
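The lazy-grounding idea can be sketched in a few lines. This is an illustrative toy, not the paper’s actual algorithm: the helper names (`candidate_groundings`, `lazy_ground_and_plan`) and the mock planner are assumptions introduced here, standing in for a real motion planner such as one built on OMPL.

```python
def candidate_groundings(command, world):
    """Enumerate every object in the world matching the requested category."""
    return [obj for obj in world if obj["category"] == command["object"]]

def lazy_ground_and_plan(command, world, plan_motion):
    """Rather than committing to one grounding before planning,
    hand the planner candidates one at a time and keep the first
    grounding for which a feasible path exists."""
    for grounding in candidate_groundings(command, world):
        path = plan_motion(grounding)
        if path is not None:      # planner found a feasible path
            return grounding, path
    return None, None             # no grounding is reachable

# Toy world: two cups; the mock planner can only reach one of them.
world = [
    {"name": "red_cup", "category": "cup", "reachable": False},
    {"name": "blue_cup", "category": "cup", "reachable": True},
]

def mock_planner(obj):
    # Stand-in for a real motion planner: succeeds only for reachable objects.
    return ["approach", "grasp"] if obj["reachable"] else None

grounding, path = lazy_ground_and_plan({"object": "cup"}, world, mock_planner)
print(grounding["name"])  # blue_cup
```

The planner, not the language front end, decides which cup to pick: the ambiguity in “get one cup” is resolved by feasibility rather than by forcing the human to disambiguate.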

“What we could have is a collaborative robot that will help us and share our tasks. In this scenario, we will share the workspace. We want the robot to understand high-level commands without having to provide a lot of low-level commands,” Hernandez-Vega said.

“Dr. Hernandez-Vega’s work can critically help in cases where a task specification is ambiguous or in cases where multiple outcomes are equally valid. Off-loading work to algorithms is not only desirable for a human working with the robot, but also necessary to avoid cognitive overload,” Kavraki said.

Cintia Listenbee, Communications and Marketing Specialist in Computer Science