A recent study succeeded in endowing a robot with a rudimentary form of empathy, enabling the robot to predict its robotic partner’s goals and actions.

A team of engineers from Columbia University’s School of Engineering and Applied Science designed a robot that can successfully predict the actions and goals of a partner robot based on a few frames of video. The engineers argue that the robot and its AI display a primitive form of empathy, and that future research in this area could help give robots a “theory of mind”.

The recent study was led by Professor Hod Lipson of Columbia Engineering’s Creative Machines Lab. The research conducted by Lipson and his team is just one part of a broader academic endeavor to enable robots to understand and predict the goals of other robots, and potentially humans. This prediction must be done entirely through the analysis of data collected by sensors, primarily visual data.

Robot Observing Robot
The team of researchers constructed a robot to operate within a playpen approximately 6 square feet in area. The robot was programmed to look for green circles and move toward them, but not all of the green circles in the playpen were visible to it. Some of the target circles were easy to see from the robot’s starting position, while others were hidden behind a large cardboard box.

A second robot was programmed to observe the first, watching it in the pen for approximately two hours. After observing its partner robot, the observing bot was able to predict both the goal and the path of its partner the majority of the time. In fact, it correctly predicted the path taken by the other bot with 98% accuracy, even though it was unaware of the exploring bot’s inability to see behind the box.
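To make the setup concrete, here is a minimal toy sketch in Python. It is not the team’s system (which trained a deep network on roughly two hours of raw video); the playpen geometry, circle positions, occlusion check, and heading-extrapolation heuristic are all invented for illustration. A simulated actor heads for the nearest green circle it can see, while an observer guesses the actor’s goal from only its first few observed positions, without knowing which circles the actor can actually see.

```python
import math

# Invented playpen layout: three "green circles"; one sits inside a
# cardboard-box region the actor cannot see into.
GOALS = [(1.0, 5.0), (5.0, 1.0), (3.0, 3.0)]
START = (0.0, 0.0)
STEP = 0.2

def visible(goal, box=((2.0, 2.0), (4.0, 4.0))):
    """Crude occlusion check: a circle inside the box region is hidden
    from the actor. (A stand-in for real line-of-sight occlusion.)"""
    (x1, y1), (x2, y2) = box
    gx, gy = goal
    return not (x1 <= gx <= x2 and y1 <= gy <= y2)

def actor_trajectory():
    """The actor heads in a straight line for the nearest *visible* circle."""
    targets = [g for g in GOALS if visible(g)]
    goal = min(targets, key=lambda g: math.dist(START, g))
    pos, path = START, [START]
    while math.dist(pos, goal) > STEP:
        dx, dy = goal[0] - pos[0], goal[1] - pos[1]
        d = math.hypot(dx, dy)
        pos = (pos[0] + STEP * dx / d, pos[1] + STEP * dy / d)
        path.append(pos)
    return path, goal

def observer_guess(frames):
    """The observer extrapolates the actor's heading from a few early
    frames and picks the goal best aligned with it. Note it considers
    ALL goals: it has no idea which circles the actor can see."""
    (x0, y0), (x1, y1) = frames[0], frames[-1]
    heading = math.atan2(y1 - y0, x1 - x0)
    def misalignment(goal):
        to_goal = math.atan2(goal[1] - y0, goal[0] - x0)
        diff = to_goal - heading
        return abs(math.atan2(math.sin(diff), math.cos(diff)))
    return min(GOALS, key=misalignment)

path, true_goal = actor_trajectory()
guess = observer_guess(path[:5])  # "a few frames of video"
print(f"true goal: {true_goal}, observer guessed: {guess}, "
      f"correct: {guess == true_goal}")
```

Even this crude heuristic captures the core idea: the observer reasons only from the actor’s observed motion, with no privileged access to the actor’s internal state or field of view.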

One of the lead authors on the study, Boyuan Chen, explained via ScienceDaily that the results of the study demonstrate the ability of robots to interpret the world from another robot’s perspective.

“The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy,” explained Chen.

The research team expected that the observer robot would be able to predict the short-term actions of the exploring robot. What they found was that the observer could also accurately predict longer-term actions based on just a few frames of video.

“Theory of Mind”
Admittedly, the behaviors exhibited by the exploring robot are simpler than many actions carried out by humans, and predicting the goals and behaviors of humans is still quite a ways off. However, the researchers argue that predicting the actions of a human and predicting the actions of a robot both depend on the same thing: a “Theory of Mind.” Psychological research suggests that humans begin to develop a theory of mind around the age of three, and that a theory of mind is necessary for cooperation, empathy, and deception. The research team hopes that further research into the technology driving the interactions between their robots will help scientists develop even more sophisticated robots.

As mentioned, while empathy is typically regarded as a positive trait that enables cooperation, it is also required for more negative behaviors like deception. To successfully deceive someone, you need to understand their desires and intentions. This raises ethical questions: once robots can deceive humans, what is to prevent bad actors from using them to manipulate and extort people?

While the observing robot was trained exclusively on image data, Lipson believes that, in principle, a similar predictive system could be designed based on human language, noting that people often imagine things in their mind’s eye, thinking visually.

The efforts of the Columbia research team are part of a larger push to endow AI with a theory of mind and empathy. Euan Matthews, director of AI and innovation at ContactEngine, recently argued that in order for AIs to become more empathic, they will need to be able to consider multiple intentions, not just one. Humans frequently hold multiple, sometimes conflicting, desires and feelings about a topic, and AIs will need to become more flexible when dealing with human intentionality.

Originally published at UNITE AI