Talking to Robots: New Algorithm Aids Human-Machine Interaction

Explainable AI tool, AI Teacher, offers personalized training for medicine and disaster response workers

Rice CS PhD student Peizhu "Pam" Qian and a team of researchers from the Human-Centered AI and Robotics (HCAIR) Group have developed an algorithm called Personalized Policy Summarization (PPS) that could be a breakthrough in explainable AI (XAI). Their new PPS algorithm can help humans and machines better understand one another, a critically important aspect of using robots in medicine and disaster response. 

The research team also includes Qian's advisor, Rice CS Assistant Professor Vaibhav Unhelkar, alongside fellow graduate student Harrison Huang.

Qian presented the paper detailing their research, "PPS: Personalized Policy Summarization for Explaining Sequential Behavior of Autonomous Agents," at the AAAI/ACM Conference on AI, Ethics, and Society last fall. 

The PPS Algorithm: Users as Students and Explainable AI

Qian and her colleagues' work in XAI is exactly what it sounds like: demystified AI that is explainable to the everyday person. “Un-blackboxing AI,” as she phrases it. And few things help demystify a process like teaching it to someone. 

The PPS algorithm, which is version 3.0 of an ongoing project called AI Teacher, borrows ideas from pedagogy and cognitive science to improve humans’ understanding of robots. 

Users are first asked a series of questions to gauge their knowledge of the robot. A firefighter, for example, might be asked what she believes a robot designed to aid in rescue operations is capable of. The PPS algorithm uses that information and creates a set of instructions tailored to the individual, much like one-on-one tutoring.

“The idea is that every user probably comes in with different prior knowledge and different assumptions about the robot. That’s why…we ask the user a couple of questions to see what assumptions those users have about the robot. Then, comparing those assumptions with the robot’s actual policy is how we generate those personalized instructions.”
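The compare-and-correct loop Qian describes can be sketched in a few lines. This is an illustrative sketch only, not the actual PPS implementation; the function names, the state/action representation, and the rescue-robot examples are assumptions made for the sake of the example.

```python
# Illustrative sketch (not the actual PPS implementation): personalize an
# explanation by comparing the user's assumed robot policy with the real one.

def elicit_user_model(questions, answers):
    """Build the user's assumed state -> action mapping from quiz answers."""
    return {q["state"]: a for q, a in zip(questions, answers)}

def personalized_summary(robot_policy, user_model):
    """Explain only the states where the user's assumption is wrong."""
    corrections = []
    for state, action in robot_policy.items():
        assumed = user_model.get(state)
        if assumed != action:
            corrections.append(
                f"In state '{state}', the robot does '{action}', "
                f"not '{assumed}'."
            )
    return corrections

# Hypothetical rescue-robot scenario
robot_policy = {"smoke detected": "map exits", "victim found": "alert team"}
questions = [{"state": "smoke detected"}, {"state": "victim found"}]
answers = ["retreat", "alert team"]  # the user's guesses

user_model = elicit_user_model(questions, answers)
for line in personalized_summary(robot_policy, user_model):
    print(line)
```

Because the user already guessed "victim found" correctly, only the mistaken assumption about "smoke detected" is explained, which is the tutoring-style economy the article describes: teach only what this user is missing.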

Qian explained that two pedagogical ideas are central to the way PPS works: Theory of Mind and Knowledge Inference. The first describes our tendency to personify things like robots and animals, assigning them motivation where it might not exist; we assume these things have a “mind” similar to ours guiding their actions. 

The second, Knowledge Inference, is a family of algorithms that computationally track and evaluate a student’s understanding of concepts. The system asks users questions and then determines how well they understand a given concept from their answers. If they can execute and explain the concept well, they score higher. 
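One common member of that algorithm family is a Bayesian-style mastery update. The sketch below is a hedged illustration loosely in the spirit of Bayesian Knowledge Tracing, not code from the paper; the parameter names and values (learn, slip, and guess probabilities) are assumptions chosen for the example.

```python
# Illustrative knowledge-inference sketch (assumed parameters, not from the
# paper): estimate the probability a user has mastered a concept, updating
# after each observed answer.

def update_mastery(p_known, correct, p_learn=0.2, p_slip=0.1, p_guess=0.25):
    """Bayesian update of mastery probability after one answer.

    p_slip:  chance a knowing user still answers wrong.
    p_guess: chance a non-knowing user answers right anyway.
    p_learn: chance of learning the concept between questions.
    """
    if correct:
        evidence = p_known * (1 - p_slip)
        denom = evidence + (1 - p_known) * p_guess
    else:
        evidence = p_known * p_slip
        denom = evidence + (1 - p_known) * (1 - p_guess)
    posterior = evidence / denom
    # Account for learning that happens between questions.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior estimate of mastery before any questions
for answer_correct in [True, True, False, True]:
    p = update_mastery(p, answer_correct)
print(f"estimated mastery: {p:.2f}")
```

The estimate rises after correct answers and falls after incorrect ones, giving the system the per-concept score the article describes.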

“When we think about human-machine interaction, the human part comes first, right? So it’s very important that we include human factors and understand how humans perceive these machines they interact with,” said Qian.

Previous attempts at explainable AI had plenty of expert knowledge to impart to the user. Where they fell short was personalization—a one-size-fits-all approach reminiscent of conventional classrooms. PPS is more like a personal tutor.

Robots, XAI, and What’s Ahead

Qian hopes this technology can be expanded to include not only robotics but worker training in a variety of fields. 

“I really hope that these algorithms and these systems can help workers learn new skills, and just provide people with more resources and more environments to learn,” said Qian. “We’re living in an environment where workers both need to constantly maintain their old skills and also upskill…but we don’t have the training opportunities for everyone.”

Qian’s AI nursing tutor project, currently being deployed at Houston Methodist Hospital, is one such example. It uses PPS to explain concepts to nursing students and assess their competency. The robotic tutor can also simulate real-world distractions so those students are better prepared for an actual hospital environment.

“It’s important to train users beforehand because today’s robots are not perfect,” said Qian. “They can make mistakes, they can receive noisy sensor data.”

And it appears to be working. A human study run by Qian’s group, outlined in the paper, showed that PPS was more effective than prior methods at helping people understand the robots they were working with—including previous algorithms she’d written herself.

The end goal of using PPS is to help those working with robots, like first responders and medical professionals, know what these robots are capable of and how to use them before they step into a disaster scenario or the exam room.

John Bogna, contributing writer