Humans put too much trust in AI, even for life-or-death decisions
09-06-2024

With the rapid advancement of artificial intelligence (AI), society is increasingly relying on these systems to guide important decisions.

A recent study indicates that humans may be placing an unhealthy level of trust in AI, even when the outcomes could be life-altering and the advice is unreliable.

This raises vital questions about how much we are prepared to let AI influence our decisions.

The potential for overtrust

The study, conducted by Professor Colin Holbrook from UC Merced’s Department of Cognitive and Information Sciences, found that in simulated life-or-death decisions, about two-thirds of participants allowed a robot to overrule their judgment, despite being warned that the artificial intelligence was flawed.

In fact, the AI’s advice was randomly generated, yet participants still changed their decisions based on the robot’s input.

“As a society, with AI accelerating so quickly, we need to be concerned about the potential for overtrust,” said Professor Holbrook.

A growing body of literature suggests that people tend to overtrust artificial intelligence systems, even when the consequences of making an error could be grave.

AI's influence across different contexts

Interestingly, the type of robot used in the scenario influenced participants' trust only slightly. Whether the artificial intelligence system was a full-sized humanoid android or a box-like, inhuman machine, subjects altered their decisions at similar rates.

The study revealed that when the AI agreed with participants' initial choices, they stuck with their decisions almost every time. When the AI disagreed, however, participants changed their decisions two-thirds of the time.

Broader implications of overtrust in AI

Before the simulation, which asked them to decide whether to carry out simulated drone strikes, participants were shown images of innocent civilians alongside the devastation such strikes can cause, reinforcing the gravity of their decisions.

The earnestness with which participants approached the simulation made their willingness to defer to the artificial intelligence system all the more troubling.

Holbrook emphasized that this overtrust is not limited to military applications but could have broader implications in other high-stakes situations, such as law enforcement or emergency medical responses. “Our project was about high-risk decisions made under uncertainty when the AI is unreliable,” he said.

Trust in AI for high-stakes decisions

In follow-up interviews, participants expressed a genuine desire to make the right choice and avoid harming innocents, yet their overtrust in the AI remained.

“We see AI doing extraordinary things, and we think that because it’s amazing in one domain, it will be amazing in another. We can’t assume that. These are still devices with limited abilities,” said Holbrook.

The study also highlights concerns over the ethical limitations of AI. “The ‘intelligence’ part may not include ethical values or true awareness of the world,” noted Holbrook. “We must be careful every time we hand AI another key to running our lives.”

The findings contribute to ongoing debates about artificial intelligence’s role in society and the dangers of overreliance on machines that, while advanced, lack the human capacity for ethical judgment.

“We should have a healthy skepticism about AI, especially in life-or-death decisions,” said Holbrook.

Implications for AI in everyday life

While the study primarily focused on life-or-death scenarios, its implications extend far beyond military or emergency situations.

The findings raise important questions about how AI is integrated into various aspects of our daily lives. From healthcare and finance to personal security and even consumer choices, people are increasingly interacting with AI systems.

Overtrust in AI could lead to dangerous consequences in more mundane settings as well. For instance, a doctor relying too heavily on AI diagnostics might overlook critical symptoms, or a consumer might make important financial decisions based on flawed AI-generated advice.

Ultimately, as AI becomes more sophisticated and integrated into decision-making processes, there’s a growing need for public education about its limitations.

People need to be made aware that AI, no matter how advanced, is not infallible. While AI can enhance decision-making processes, we must maintain a healthy skepticism and retain our judgment in critical situations.

The study’s findings serve as a cautionary reminder that, in any context, humans must be prepared to question AI advice rather than trust it blindly.

The study is published in the journal Scientific Reports.
