Step aside, Deep Blue. The IBM machine was the first computer to beat a reigning world chess champion; now MIT engineers have developed a robot that plays Jenga using tactile learning.
While many robots can be trained to learn from visual cues, programming a robot to learn through physical touch is much harder.
A new report published in the journal Science Robotics details how MIT engineers equipped the robot with an external camera and a force-sensing wrist cuff that together help it learn the best way to carry out a task.
In this case, the task is deciding whether to remove a particular block from a Jenga tower or move on to a different block.
Teaching the robot to play Jenga was an excellent way to incorporate tactile learning into its programming, as mastering Jenga requires probing different blocks to make sure that removing them won’t cause the tower to fall.
For the game, 54 blocks are stacked in 18 layers of three. Each player must remove a block and place it at the top of the tower; the first player to remove a block that collapses the tower loses.
The robot was trained to test the blocks carefully, pushing against individual blocks to gauge the tower’s stability and weigh the outcome of removing each one. It relied on both visual cues and physical touch to carry out these tests.
“Unlike in more purely cognitive tasks or games such as chess or Go, playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces. It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks,” said Alberto Rodriguez, one of the robot’s developers.
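To illustrate that probe-before-committing behavior, here is a minimal sketch of such a decision loop. Everything in it, including the sensor API, the force and tilt thresholds, and the simulated robot, is an assumption made for the example, not the team’s actual control code.

```python
import random

FORCE_LIMIT = 2.0   # newtons; assumed threshold for a "loose" block
TILT_LIMIT = 0.5    # degrees; assumed visual-stability threshold

class SimRobot:
    """Stand-in for the real hardware so the loop below can run."""
    def gentle_push(self, block, distance_mm):
        # Simulate sensor readings produced by a small exploratory push.
        self._force = random.uniform(0.0, 5.0)
        self._tilt = random.uniform(0.0, 1.0)
    def read_wrist_force(self):
        return self._force          # tactile feedback (wrist cuff)
    def camera_estimate_tilt(self):
        return self._tilt           # visual feedback (external camera)
    def extract(self, block):
        print(f"extracting block {block}")
    def retreat(self, block):
        print(f"leaving block {block} in place")

def try_block(robot, block):
    """Nudge a block, read tactile and visual feedback, then decide."""
    robot.gentle_push(block, distance_mm=1.0)   # small exploratory push
    if (robot.read_wrist_force() < FORCE_LIMIT
            and robot.camera_estimate_tilt() < TILT_LIMIT):
        robot.extract(block)    # block moves freely: pull it out
        return True
    robot.retreat(block)        # block is load-bearing: move on
    return False

for b in range(3):
    try_block(SimRobot(), b)
```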
To train the robot, the researchers needed a machine learning method more efficient than programming the robot to handle every possible scenario for every possible block move. Instead, they trained it on 300 rounds of removing different blocks from a Jenga tower.
With each attempt, a computer recorded the force the robot exerted and whether or not the attempt was successful. These measurements were then clustered together based on similar outcomes.
“The robot builds clusters and then learns models for each of these clusters, instead of learning a model that captures absolutely everything that could happen,” said Nima Fazeli, the lead author of the paper.
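To make that cluster-then-model idea concrete, here is a rough Python sketch using scikit-learn. The features (push force and block displacement), the number of clusters, and the toy success rule are all illustrative assumptions; the published system is considerably more sophisticated.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: for each push attempt, record the force
# applied and the block's displacement, plus whether extraction succeeded.
forces = rng.uniform(0.0, 5.0, size=(300, 1))          # newtons (assumed)
displacements = rng.uniform(0.0, 10.0, size=(300, 1))  # millimeters (assumed)
X = np.hstack([forces, displacements])
# Toy labeling rule: low-resistance blocks tend to come out cleanly.
y = (forces.ravel() < 2.0).astype(int)

# Step 1: group attempts with similar force/motion signatures.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Step 2: fit a simple success model inside each cluster, rather than
# one global model of everything that could happen.
models = {}
for c in range(kmeans.n_clusters):
    mask = kmeans.labels_ == c
    if np.unique(y[mask]).size < 2:
        # Cluster is all-success or all-failure: remember the constant outcome.
        models[c] = int(y[mask][0])
    else:
        models[c] = LogisticRegression().fit(X[mask], y[mask])

def predict_success(force, displacement):
    """Route a new measurement to its cluster's model."""
    x = np.array([[force, displacement]])
    c = int(kmeans.predict(x)[0])
    m = models[c]
    return m if isinstance(m, int) else int(m.predict(x)[0])

print(predict_success(1.0, 3.0))  # likely 1: extraction predicted to succeed
```

Splitting the data this way means each per-cluster model only has to explain one kind of block behavior, such as freely moving versus load-bearing blocks, which is plausibly part of what makes learning from just 300 attempts feasible.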
Testing their clustering approach against other machine learning algorithms gave the researchers a better picture of how the robot would learn in the real world.
The team also pitted the robot against human players and found that it could mostly keep up with them.
However, it will be some time before the robot is ready to compete against humans the way Deep Blue did.
The researchers say their robot and clustering method have many potential applications in other industries.
“In a cell phone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision,” said Rodriguez. “Learning models for those actions is prime real-estate for this kind of technology.”
—
By Kay Vandette, Earth.com Staff Writer