The “waggle dance” is performed by honeybees to alert one another to the location of nectar-rich flowers. Inspired by this behavior, researchers have devised a way for robots to communicate visually, described in a study published in Frontiers in Robotics and AI. The new technique could be most valuable when robots are needed but network communications are unreliable, such as in a disaster zone or in space.
Honeybees excel at nonverbal communication: a bee wiggles its backside while parading through the hive to tell other honeybees about the location of food. The direction of this “waggle dance” indicates the direction of the food with respect to the hive and the sun, while the duration of the dance indicates how far away it is, a simple method of communicating geographical coordinates.
Typically, robots communicate using digital networks, but these can become unreliable during an emergency or in remote locations, which can hamper human communication as well.
To address this, the researchers designed a visual communication system in which camera-equipped robots use computer-vision algorithms to interpret what they see. The system allows a human to communicate with a “messenger robot,” which in turn supervises and instructs a handling robot that performs the task.
The human can communicate with the messenger robot using gestures, such as a raised hand with a closed fist, which is recognized by the robot’s camera and skeletal-tracking algorithms. Once the human has shown the messenger robot where the package is, the messenger robot conveys this information to the handling robot.
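The paper’s recognition code is not reproduced here, but the idea of spotting a “raised hand” gesture from skeletal-tracking output can be sketched roughly as follows. This is a minimal illustration that assumes an upstream pose tracker has already produced 2D joint coordinates; the joint names, coordinate convention, and threshold are hypothetical, not details from the study.

```python
# Minimal sketch: classify a "raised hand" gesture from 2D skeletal keypoints.
# Assumes an upstream pose tracker has already produced pixel coordinates;
# the joint names and margin below are illustrative, not from the paper.

def is_raised_hand(keypoints: dict[str, tuple[float, float]],
                   margin: float = 20.0) -> bool:
    """Return True if the right wrist is clearly above the right shoulder.

    `keypoints` maps joint names to (x, y) image coordinates, with y
    increasing downward as in most image conventions.
    """
    try:
        wrist_y = keypoints["right_wrist"][1]
        shoulder_y = keypoints["right_shoulder"][1]
    except KeyError:
        return False  # joints not detected in this frame
    return wrist_y < shoulder_y - margin


# Example frame: the wrist sits well above the shoulder, so the gesture is recognized.
frame = {"right_shoulder": (320.0, 240.0), "right_wrist": (330.0, 150.0)}
print(is_raised_hand(frame))  # True
```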
To pass on the instructions, the messenger robot positions itself in front of the handling robot and traces a specific shape on the ground. The orientation of the shape indicates the direction of travel, while the length of time it takes to trace indicates the distance. When the researchers put this to the test, the robots interpreted the gestures correctly 90 and 93.3 percent of the time.
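The encoding itself is simple enough to illustrate. Below is a rough sketch of how a direction and distance could be recovered from an observed trace, assuming the handling robot sees the start and end points of the shape and times how long the tracing took; the straight-line trace and the speed constant are assumptions made for illustration, not details from the paper.

```python
import math

# Assumed conversion: each second of tracing corresponds to this many meters
# of travel (an illustrative constant, not a figure from the study).
METERS_PER_SECOND_OF_TRACE = 0.5


def decode_waggle_trace(start: tuple[float, float],
                        end: tuple[float, float],
                        trace_duration_s: float) -> tuple[float, float]:
    """Decode a traced ground shape into (heading_deg, distance_m).

    The orientation of the trace (start -> end) gives the direction of travel;
    the time spent tracing encodes how far to go, like a bee's waggle dance.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    heading_deg = math.degrees(math.atan2(dy, dx)) % 360.0
    distance_m = trace_duration_s * METERS_PER_SECOND_OF_TRACE
    return heading_deg, distance_m


# Example: a diagonal trace that took 8 seconds to draw decodes to a
# 45-degree heading and a 4-meter travel distance.
print(decode_waggle_trace((0.0, 0.0), (1.0, 1.0), 8.0))  # (45.0, 4.0)
```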
“This technique could be useful in places where communication network coverage is insufficient and intermittent, such as robot search-and-rescue operations in disaster zones or in robots that undertake space walks,” said study senior author Professor Abhra Roy Chowdhury.
“This method depends on robot vision through a simple camera, and therefore it is compatible with robots of various sizes and configurations and is scalable,” added study first author Kaustubh Joshi.
The research is published in the journal Frontiers in Robotics and AI.
—
By Katherine Bucko, Earth.com Staff Writer