07-19-2024

Eye reflections: The key to detecting deepfakes

Welcome to an era where deepfakes – synthetic images created using artificial intelligence (AI) – are becoming increasingly easy to produce.

These AI-generated creations, with their potential to heavily distort reality and spread deception, are a major concern.

But what if we could expose these deceptive images simply by analyzing the reflections in a person’s eyes?

This is the central focus of research that was recently presented at the Royal Astronomical Society’s National Astronomy Meeting.

Human eyes betray deepfakes

The study argues that human eyes can betray deepfakes, and the technique to expose them aligns fascinatingly with the methods astronomers employ to decode images of galaxies.

The crux of this research, led by MSc student Adejumoke Owolabi from the University of Hull, lies in analyzing the reflections in an individual’s eyes.

According to the study, if the reflections in both eyes are identical, the image is likely real. If the reflections do not match, the discrepancy is a strong indicator of a deepfake.

Kevin Pimbblet is a professor of astrophysics and the director of the Centre of Excellence for Data Science, Artificial Intelligence, and Modeling at the University of Hull.

“The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person,” explained Professor Pimbblet.

Dark matter of deepfakes

To dig deeper into this matter, the researchers scrutinized the reflections of light on the eyeballs in both real and AI-generated images.

For their analysis, the team adopted techniques typically used in astronomy to quantify these reflections and check whether they were consistent between the left and right eyes.

Interestingly, the researchers noticed that fake images often failed to maintain consistent reflections between the two eyes. Real images, on the other hand, generally exhibited identical reflections in both eyes.

“To measure the shapes of galaxies, we analyze whether they’re centrally compact, whether they’re symmetric, and how smooth they are,” said Professor Pimbblet.

“We analyze the light distribution. We detect the reflections in an automated way and run their morphological features through the CAS [concentration, asymmetry, smoothness] and Gini indices to compare similarity between left and right eyeballs. The findings show that deepfakes have some differences between the pair.”
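The study’s actual pipeline is not reproduced here, but the asymmetry measure Professor Pimbblet mentions has a well-known form in galaxy morphology: rotate an image 180 degrees and measure how much it differs from itself. As an illustrative sketch only, a CAS-style asymmetry score for a cropped eye patch might look like this in Python (`eyes_disagree` and its `threshold` are hypothetical names for this example, not taken from the paper):

```python
import numpy as np

def asymmetry(patch):
    """CAS-style rotational asymmetry: rotate the patch by 180
    degrees and take the normalized absolute difference.
    Returns 0 for a perfectly symmetric patch."""
    rotated = np.rot90(patch, 2)
    return np.abs(patch - rotated).sum() / np.abs(patch).sum()

def eyes_disagree(left, right, threshold=0.1):
    """Flag a face as suspicious when the two eye reflections
    produce very different asymmetry scores. The threshold is
    a hypothetical tuning parameter."""
    return abs(asymmetry(left) - asymmetry(right)) > threshold
```

In this sketch, a real photograph, where both eyes reflect the same light sources, should yield similar scores for the left and right patches, while a deepfake may not.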

The science behind the light

The Gini coefficient, a measurement typically used to assess light distribution in the image of a galaxy, was also utilized in this deepfake detection study.

Images of galaxies are arranged in ascending order of flux, and the results are compared against what would be expected from a perfectly even flux distribution.
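This flux-ordering description matches the Gini coefficient as commonly defined in galaxy morphology work. A minimal sketch in Python, assuming each eyeball has been cropped to a grayscale pixel array (the formulation below follows the standard astronomy version of the statistic, not the study’s own code):

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of a flux distribution, in the form used
    for galaxy morphology: sort pixel fluxes in ascending order
    and measure how far they depart from perfect equality.
    Returns 0 for perfectly even flux, and values near 1 when
    the flux is concentrated in a few pixels."""
    flux = np.sort(np.abs(np.asarray(pixels, dtype=float)).ravel())
    n = flux.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * flux) / (flux.mean() * n * (n - 1))
```

A perfectly uniform patch scores 0, while a patch whose flux sits in a single pixel scores 1; comparing `gini(left_eye)` with `gini(right_eye)` then gives a simple consistency check of the kind the article describes.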

However, the CAS parameters, originally developed by astronomers to measure the light distribution of galaxies and determine their morphology, were found to be ineffective as a predictor of fake eyes.

Professor Pimbblet cautions that the method is not foolproof, and that it is not a silver bullet for detecting fake images.

“There are false positives and false negatives; it’s not going to get everything. But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”

Growing threat of deepfakes

Deepfakes are not just a technological curiosity; they pose a significant threat to society.

These AI-generated fake videos and images can be used to spread misinformation, create fraudulent identities, and even manipulate public opinion.

As the technology behind these forgeries becomes more sophisticated, it becomes increasingly challenging to distinguish between real and fake content. This has serious implications for trust in media, security, and personal privacy.

Governments and organizations worldwide are beginning to recognize the potential dangers. Efforts are being made to develop more sophisticated deepfake detection tools and to establish legal frameworks to address the misuse of this technology.

However, the battle against these convincing fakes is ongoing, and as detection methods improve, so too do the techniques used to create them.

The combination of astronomical techniques and AI highlights a multidisciplinary approach to solving the problem, underscoring the need for innovative and collaborative solutions.

—–


Check us out on EarthSnap, a free app brought to you by Eric Ralls and Earth.com.
