Humans can feel emotions for robots, especially those in distress
11-01-2024

We wouldn’t think twice about feeling sorry for a child who dropped her ice cream or a dog whimpering at the door. But what about a machine? More specifically, a robot? A recent study ventures into this less-explored territory: humans showing empathy for a robot in distress.

Feeling sorry for robots

Marieke Wieringa, a researcher at Radboud University, discovered something fascinating.

We humans, it appears, can be rather empathetic toward our robotic counterparts. Our hearts can bleed for them, metaphorically speaking, of course.

All it seems to take, according to her study, is pitiful sounds, sorrowful digital eyes, or a trembling mechanical arm.

Robots in distress

Now, we all know robots don’t feel pain. They’re not alive. They’re just machines composed of wires and metal, computer code and calculations.

Yet Wieringa’s study throws a wrench into that tidy logic. It suggests that, given the right circumstances, we can be persuaded to believe a robot is in distress.

“If a robot can pretend to experience emotional distress, people feel guiltier when they mistreat the robot,” Wieringa explains. That’s right: pretend distress, induced guilt.

Empathy experiments

Wieringa and her team set up several experiments to examine our responses to robot mistreatment. This involved a rather uncomfortable task for the participants.

“Some participants watched videos of robots that were either mistreated or treated well. Sometimes, we asked participants to give the robots a shake themselves,” Wieringa noted.

“We tried out all variations: sometimes the robot didn’t respond, sometimes it did — with pitiful sounds and other responses that we associate with pain.”

The results? Participants were more reluctant to shake a robot that reacted with signs of distress, a response that effectively tugged at their heartstrings.

“If we asked the participants to shake a robot that showed no emotion, then they didn’t seem to have any difficulty with it at all,” Wieringa added.

Distressing robot noises

In another experiment, the participants faced a choice: they could either complete a tedious task or shake the robot. The catch was that the longer they shook the robot, the less time they had to spend on the boring task.

Most preferred to shake a silent robot, but once the robot started making distressing noises, the participants opted for the mundane task instead.

Tamagotchi effect

Remember the Tamagotchi? The pocket-sized electronic pet that took the world by storm in the late ’90s?

Wieringa warns of a similar phenomenon.

“People were obsessed with Tamagotchis for a while: virtual pets that successfully triggered emotions. But what if a company made a new Tamagotchi that you had to pay to feed as a pet?” Wieringa asked.

“That’s why I am calling for governmental regulations that establish when it’s appropriate for chatbots, robots and other variants to be able to express emotions.”

The concern is that businesses might exploit our susceptibility to emotional manipulation, creating products designed to gain not just our attention but our sympathy and cash too.

Dilemmas of digital empathy

The findings of Wieringa’s study open a Pandora’s box of ethical questions.

If people can feel genuine empathy towards distressed robots, should robots be programmed with features that trigger such emotions? This raises concerns about consent and manipulation in human-robot interactions.

Is it ethical to design machines that can elicit empathy to influence human behavior or decision-making? As robotics and artificial intelligence technology continue to advance, these questions become increasingly relevant.

The responsibility of ensuring ethical design practices falls not only on developers but on society as a whole, prompting rigorous debates on the moral boundaries of technological innovation.

Human-robot relationships

As we move toward a future where robots become more integrated into everyday life, understanding the nuances of our emotional connections with them becomes essential.

Will robots become companions, akin to pets or even friends? Or will they remain tools, devoid of any genuine bond?

The psychological implications of forming attachments to machines could redefine social dynamics. Researchers like Wieringa aim to illuminate these implications, helping navigate the complexities of these emerging relationships.

This awareness is crucial as industries explore robotics for healthcare, education, and personal assistance, where empathy can play a pivotal role in enhancing human experiences.

Humans vs. distressed robots

While organizations that attempt to exploit our emotions warrant caution, Wieringa and her team also assert that a complete ban on robotic emotions isn’t the answer.

There’s a silver lining to all this. She envisions “therapeutic robots” that could help humans process emotional challenges.

Furthermore, her study showed that most participants viewed a robot displaying distress as a positive signal against violent behavior.

However, she emphasizes the need to safeguard people who may be overly susceptible to such “fake” emotions.

“We like to think we are very logical, rational beings that don’t fall for anything, but at the end of the day, we are also led by our emotions. And that’s just as well, otherwise we’d be robots ourselves,” Wieringa concludes.

So, are we heading toward a future where we might feel sorry for our toaster for burning our bread? Down a path where we’d apologize to our self-driving car for giving it wrong directions?

It may seem far-fetched, but then again, who’d have thought we could feel sorry for a robot being shaken?
