Truth, deception, and the future of AI lie detection machines
06-29-2024

There is an old saying that honesty is the best policy, but in reality, people lie quite a bit. Despite this, we often refrain from calling out liars due to societal norms and a preference for politeness. However, a new player is entering the lie detection game and may dramatically change these norms – Artificial Intelligence (AI).

The game-changing role of AI in lie detection was highlighted in a recent study published in the journal iScience. The research found that people are far more willing to accuse others of lying when an AI system makes the accusation first.

Is society ready for AI lie detection?

This insight into the societal implications of AI-based lie detection could have significant repercussions as policy makers explore and implement such technologies.

The study’s senior author is Nils Köbis, a behavioral scientist at the University of Duisburg-Essen in Germany.

“Our society has strong, well-established norms about accusations of lying. It would take a lot of courage and evidence for one to openly accuse others of lying,” Köbis explains.

“But our study shows that AI could become an excuse for people to conveniently hide behind, so that they can avoid being held responsible for the consequences of accusations.”

Implications of truth-default theory

Much of human interaction is guided by truth-default theory, which asserts that we intuitively assume what we hear is true.

Consequently, our inherent trust in others makes us terrible lie detectors. In fact, prior research has shown people are no better at detecting lies than pure chance.

With AI coming into the picture, Köbis and his team were keen to understand how its presence might alter existing social norms and behaviors related to accusations.

Testing AI lie detection

To put this to the test, they enlisted 986 individuals to each write one truthful and one false account of their plans for the following weekend.

The team used this data to train an AI model that correctly distinguished between truth and lies 66% of the time, a success rate far higher than the average human’s.
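For readers curious what training such a classifier involves, here is a minimal illustrative sketch in Python. The article does not describe the study’s actual model, so the TF-IDF-plus-logistic-regression pipeline and the toy example sentences below are assumptions made for illustration, not the researchers’ method.

```python
# Illustrative sketch only: a simple true/false text classifier.
# The study's actual model is not described in this article, so this
# pipeline (TF-IDF features + logistic regression) is an assumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in data: paired truthful/false weekend plans.
texts = [
    "I am visiting my sister and helping her move apartments.",  # truthful
    "I am flying to Paris for a spontaneous art festival.",      # false
    "I plan to do laundry and catch up on grading.",             # truthful
    "I will be test-driving a yacht with an old friend.",        # false
]
labels = [1, 0, 1, 0]  # 1 = truthful, 0 = false

# Hold out half the accounts for evaluation; stratify keeps both
# classes represented in the training and test splits.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels
)

# Turn each account into TF-IDF features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# Score the model on accounts it never saw during training.
print("held-out accuracy:", model.score(X_test, y_test))
```

On a real dataset of hundreds of paired accounts, a held-out accuracy like the study’s reported 66% would be measured the same way: by scoring the model on accounts it never saw during training.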

Building on this, researchers brought in over 2,000 judges tasked with reading these accounts and determining their veracity. They divided these individuals into four groups – “baseline,” “forced,” “blocked,” and “choice.”

The ‘baseline’ group had no AI assistance, while the ‘forced’ group always received an AI prediction before making their judgment.

The ‘blocked’ and ‘choice’ groups could both request an AI prediction, but requests from the ‘blocked’ group were always denied, while those from the ‘choice’ group were granted.

What the team learned

The outcomes were revealing. Working without AI assistance, participants in the ‘baseline’ group correctly identified truthful and false accounts only 46% of the time.

Just 19% of this group accused the accounts of being false, despite knowing that half of the statements were indeed lies.

In comparison, over a third of participants in the ‘forced’ group, who always received an AI prediction, accused the statements of being lies, a rate significantly higher than in the ‘baseline’ and ‘blocked’ groups, which received no AI predictions.

Notably, when the AI judged a statement to be true, only 13% of participants went against it and called the statement a lie.

When the AI labeled a statement as false, however, over 40% of participants echoed the accusation. And a staggering 84% of those who received an AI prediction adopted its verdict, calling out lies whenever the AI flagged a statement as false.

Public perception of AI lie detection

Despite the algorithm’s superior accuracy, people seemed hesitant to embrace AI for lie detection. In the ‘blocked’ and ‘choice’ groups, only a third of participants requested an AI prediction.

This reluctance to use AI might be due to overconfidence in our own lie detection abilities, despite evidence to the contrary.

Yet AI is known to make mistakes and to reinforce biases. In light of this, Köbis urges caution about relying on the technology for sensitive matters, such as screening asylum seekers at borders.

“There’s such a big hype around AI, and many people believe these algorithms are really, really potent and even objective. I’m really worried that this would make people over-rely on it, even when it doesn’t work that well,” Köbis cautions.

Ethical dilemma: AI, bias, and over-reliance

In summary, AI is changing how we trust each other. Köbis’s study shows that people are quick to believe AI when it says someone is lying.

This matters because it could change how we interact with friends, family, and authority figures. We rarely call others liars, but with AI backing the accusation, we may do so far more often.

AI lie detection could cause problems in our relationships and in society at large. We need to be careful about using it, especially for important decisions.

Leaders should think hard about when and how to use this technology. We all need to remember that AI can make mistakes. It’s important to use our own judgment and not just rely on what AI tells us.

As we use more AI in our lives, we must find ways to keep trust and understanding between people. This research helps us see the challenges we face as AI becomes more common.

The full study was published in the journal iScience.
