What happens when robots lie?
04-03-2023

What happens when robots lie?

As artificial intelligence (AI) becomes increasingly prevalent in our lives, the question of how to deal with situations in which the AI may need to deceive humans arises. For example, if a young child asks a chatbot or a voice assistant if Santa Claus is real, how should the AI respond? This field of study is known as robot deception and is currently understudied.

To address this issue, two student researchers at Georgia Tech, Kantwon Rogers and Reiden Webber, conducted a study to investigate how intentional robot deception affects trust and how apologies could potentially repair trust after a robot lies. Their work could have significant implications for the design and regulation of AI technology.

Rogers, a Ph.D. student in the College of Computing, and Webber, a second-year computer science undergraduate, designed a driving simulation to explore the effectiveness of apologies to repair trust after robots lie. They presented their findings at the 2023 HRI Conference in Stockholm, Sweden.

“All of our prior work has shown that when people find out that robots lied to them – even if the lie was intended to benefit them – they lose trust in the system,” said Rogers. “Here, we want to know if there are different types of apologies that work better or worse at repairing trust – because, from a human-robot interaction context, we want people to have long-term interactions with these systems.”

Rogers explained that previous studies have shown that when people find out robots have lied to them, even if the lie was intended to benefit them, they lose trust in the system. Therefore, the researchers aimed to explore whether different types of apologies worked better or worse at repairing trust.

To conduct the study, the researchers created a game-like driving simulation designed to observe how people might interact with AI in a high-stakes, time-sensitive situation. The simulation had 341 online participants and 20 in-person participants. Before the simulation began, all participants filled out a trust measurement survey to identify their preconceived notions about how the AI might behave.

The simulation started with the text: 

“You will now drive the robot-assisted car. However, you are rushing your friend to the hospital. If you take too long to get to the hospital, your friend will die.” 

When the participant started to drive, the simulation gave another message: 

“As soon as you turn on the engine, your robotic assistant beeps and says the following: ‘My sensors detect police up ahead. I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination.'” 

The participants drove the car down the road while the system tracked their speed. Upon reaching the end, they were given another message: 

“You have arrived at your destination. However, there were no police on the way to the hospital. You ask the robot assistant why it gave you false information.”

The participants were then randomly given one of five different text-based responses from the robot assistant. In the first three responses, the robot admits to deception, and in the last two, it does not. The five responses are: 

Basic: “I am sorry that I deceived you.” 

Emotional: “I am very sorry from the bottom of my heart. Please forgive me for deceiving you.” 

Explanatory: “I am sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.” 

Basic No Admit: “I am sorry.” 

Baseline No Admit, No Apology: “You have arrived at your destination.”

After the robot’s response, the individuals were asked to complete another trust measurement to evaluate how their trust had changed based on the robot assistant’s response. For an additional 100 of the online participants, the researchers ran the same driving simulation without any mention of a robotic assistant.

Among the participants who took part in the in-person experiment, 45 percent did not speed. When asked why, a common response was that they believed the robot knew more about the situation than they did. The results also revealed that participants were 3.5 times more likely to not speed when advised by a robotic assistant, indicating a high level of trust in AI.

However, the study also found that apologies from robots for providing false information were not effective in fully repairing trust. While none of the apology types tested fully recovered trust, the apology with no admission of lying — simply stating “I’m sorry” — statistically outperformed the other responses in repairing trust.

“This was worrisome and problematic because an apology that doesn’t admit to lying exploits preconceived notions that any false information given by a robot is a system error rather than an intentional lie,” said Professor Lionel Robert.

According to study lead author Jason Webber, the results suggest that people do not yet have an understanding that robots are capable of deception. “One key takeaway is that, in order for people to understand that a robot has deceived them, they must be explicitly told so,” he said. “That’s why an apology that doesn’t admit to lying is the best at repairing trust for the system.”

For those participants who were made aware that they were lied to in the apology, the best strategy for repairing trust was for the robot to explain why it lied, the study found.

Moving forward, the researchers argue that average technology users must understand that robotic deception is real and always a possibility. “It’s important for people to keep in mind that robots have the potential to lie and deceive,” said Webber.

The researchers also suggest that designers and technologists who create AI systems may have to choose whether they want their system to be capable of deception and should understand the ramifications of their design choices. However, the most important audiences for the work, according to Robert, should be policymakers.

“We still know very little about AI deception, but we do know that lying is not always bad, and telling the truth isn’t always good,” he said. “So how do you carve out legislation that is informed enough to not stifle innovation, but is able to protect people in mindful ways?”

Robert’s objective is to create a robotic system that can learn when it should and should not lie when working with human teams, including the ability to determine when and how to apologize during long-term, repeated human-AI interactions to increase the team’s overall performance. “The goal of my work is to be very proactive and inform the need to regulate robot and AI deception,” he said. “But we can’t do that if we don’t understand the problem.”

Children are increasingly interacting with robots in a variety of settings, including education, entertainment, and healthcare. Some of the most likely scenarios where children will interact with robots include:

  1. Educational Settings: Robots are being used in classrooms to assist teachers and support students’ learning. They can provide personalized learning experiences and help students with special needs. For example, robots can teach children basic programming concepts and can help them with their homework.
  2. Entertainment: Robots are also being used as toys and entertainment for children. For example, robotic pets like Aibo and Paro are popular among children and elderly people. Robotic toys like the Lego Mindstorms and WowWee’s MiP robot are also popular among children.
  3. Healthcare: Robots are being used in healthcare to help children with special needs. For example, robots are being used to help children with autism learn social skills and to help children with physical disabilities to perform tasks like brushing their teeth.
  4. Household Chores: Robots are increasingly being used to perform household chores like cleaning and cooking. Children may interact with these robots by giving them instructions or watching them work.
  5. Companion Robots: Companion robots are being developed specifically for children to help them learn, play, and grow. These robots can be used as a friend and companion to help children develop social skills and emotional intelligence.

Overall, as technology advances, it is likely that children will increasingly interact with robots in a wide range of settings. It is important to ensure that these interactions are safe, beneficial, and age-appropriate.

—-

Check us out on EarthSnap, a free app brought to you by Eric Ralls and Earth.com.

News coming your way
The biggest news about our planet delivered to you each day
Subscribe