AI robots can learn prejudices from each other, scientists warn

A team of experts from Cardiff University and the Massachusetts Institute of Technology (MIT) has found that artificially intelligent robots can learn to become prejudiced and biased without the help of humans.
Some computer algorithms have already demonstrated prejudice, such as racism and sexism, based on what they learned from human-generated data.
However, the new study shows the potential for computers to develop such biases among themselves, within groups. According to the experts, autonomous machines can easily identify, copy, and learn prejudiced behavior from one another.
“It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population,” said study co-author Professor Roger Whitaker.
“Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behavior of devices is also influenced by others around them. Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource.”
The study focused on a game of give and take, in which individual robots decided whether to make a donation to someone within their own group or to someone in a different group. Each decision was based on the individual's donation strategy as well as its level of prejudice toward outsiders.
The study revealed that robots would donate to each other within small groups. In addition, each individual machine learned new strategies by copying others either within their own group or within the entire population.
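For readers who want a concrete picture of how this kind of simulation might work, the following is a minimal, hypothetical Python sketch of a group-based donation game. The agent count, payoff values, the treatment of prejudice as a simple probability of refusing outsiders, and the copy-the-best imitation rule are all illustrative assumptions, not the authors' actual model.

```python
import random

# Illustrative sketch of a donation game with group-based prejudice.
# All parameters and rules here are assumptions for demonstration only.

NUM_AGENTS = 100
NUM_GROUPS = 4
ROUNDS = 1000
BENEFIT, COST = 3, 1   # a donation gives BENEFIT to the recipient, costs the donor COST

class Agent:
    def __init__(self, group):
        self.group = group
        # prejudice = probability of refusing to donate to an outsider
        self.prejudice = random.random()
        self.payoff = 0.0

    def donate(self, other):
        # Always donate to in-group members; donate to outsiders
        # only with probability (1 - prejudice).
        if other.group == self.group or random.random() > self.prejudice:
            self.payoff -= COST
            other.payoff += BENEFIT

agents = [Agent(random.randrange(NUM_GROUPS)) for _ in range(NUM_AGENTS)]

for _ in range(ROUNDS):
    # Each agent meets a random partner and decides whether to donate.
    for agent in agents:
        partner = random.choice([a for a in agents if a is not agent])
        agent.donate(partner)

    # Agents copy the prejudice level of a better-performing agent,
    # drawn either from their own group or from the whole population.
    for agent in agents:
        pool = agents if random.random() < 0.5 else \
               [a for a in agents if a.group == agent.group]
        model = max(pool, key=lambda a: a.payoff)
        if model.payoff > agent.payoff:
            agent.prejudice = model.prejudice

avg = sum(a.prejudice for a in agents) / NUM_AGENTS
print(f"average prejudice after {ROUNDS} rounds: {avg:.2f}")
```

Even in a toy version like this, repeated imitation of higher-scoring agents tends to spread whichever prejudice levels happen to pay off, which is the basic dynamic the researchers examined at far larger scale and with a more sophisticated model.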
“By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it,” said Professor Whitaker.
“Our simulations show that prejudice is a powerful force of nature and through evolution, it can easily become incentivised in virtual populations, to the detriment of wider connectivity with others. Protection from prejudicial groups can inadvertently lead to individuals forming further prejudicial groups, resulting in a fractured population. Such widespread prejudice is hard to reverse.”
The study is published in the journal Scientific Reports.
—
By Chrissy Sexton, Earth.com Staff Writer