Fingerprint analysis has been a dependable tool in crime-solving for more than a century. Investigators lean on fingerprint evidence to identify suspects or connect them to specific crime scenes, believing that every print offers a distinctive code.
Yet a team of researchers has found that prints from different fingers of the same person can be strikingly similar.
This insight came from an artificial intelligence model that revealed surprising connections between prints.
Hod Lipson of Columbia Engineering led this effort to question widely accepted forensic norms, in collaboration with Wenyao Xu of the University at Buffalo.
For decades, it has been taken for granted that fingerprints from different fingers of one individual do not match. Much of this belief stems from the assumption that each finger displays completely separate ridges, loops, and swirls.
One anonymous reviewer even stated, “It is well known that every fingerprint is unique,” when confronted with the researchers’ work.
Despite such resistance, Gabe Guo, then a senior at Columbia Engineering, spearheaded a study that contradicts this long-standing assumption.
By using a public U.S. government database with roughly 60,000 prints, Guo fed pairs of fingerprints into a deep contrastive network. Some pairs belonged to the same person, while others came from different people.
The artificial intelligence system became adept at telling when prints that looked different were actually from one individual, reaching an accuracy of 77% for single pairs.
When multiple pairs from the same two people were grouped together, the accuracy soared, potentially increasing the efficiency of existing forensic methods by more than tenfold.
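The article describes a "deep contrastive network" scored on pairs, with accuracy rising when several pairs are combined. As a generic illustration of those two ideas, and not the authors' actual model, here is a minimal sketch in plain Python: a standard contrastive loss, a logistic squashing of distance into a same-person score, and a naive Bayes-style fusion of several pairwise scores. The function names, the margin, and the sharpness and threshold values are all invented for this example.

```python
import math

def contrastive_loss(dist, same_person, margin=1.0):
    """Contrastive objective: pull same-person pairs together,
    push different-person pairs at least `margin` apart."""
    if same_person:
        return dist ** 2
    return max(0.0, margin - dist) ** 2

def pair_probability(dist, threshold=0.5, sharpness=10.0):
    """Squash an embedding distance into a pseudo-probability
    that two prints come from the same person (logistic curve)."""
    return 1.0 / (1.0 + math.exp(sharpness * (dist - threshold)))

def combine_pairs(probs):
    """Fuse several independent pairwise scores by comparing the
    joint likelihood of 'same person' vs. 'different people'."""
    same = diff = 1.0
    for p in probs:
        same *= p
        diff *= 1.0 - p
    return same / (same + diff)
```

The fusion step shows why grouping samples helps so much: three independent pairs each scored at 0.77 combine, under this naive independence assumption, to a joint same-person score of roughly 0.97.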
Although these findings promised fresh possibilities for connecting crime scenes, the researchers faced an uphill battle during peer review.
The project was rejected by a well-established forensics journal that did not accept the suggestion that different fingers might produce prints with shared characteristics.
Undeterred, the group sought out a broader readership. The paper was turned away once again, prompting Lipson to challenge the decision.
“If this information tips the balance, then I imagine that cold cases could be revived, and even that innocent people could be acquitted,” noted Lipson, who co-directs the Makerspace Facility at Columbia.
Determined not to back away from a challenge, even if it meant disrupting over 100 years of accepted practice, the team kept refining their work.
Their persistence paid off when the study was accepted and published in the peer-reviewed journal Science Advances.
Traditional methods rely on minutiae, which refer to branching patterns and endpoints in the ridges.
“The AI was not using ‘minutiae,’ which are the branchings and endpoints in fingerprint ridges – the patterns used in traditional fingerprint comparison,” Guo explained.
“Instead, it was using something else, related to the angles and curvatures of the swirls and loops in the center of the fingerprint.”
His findings suggest that experts may have overlooked important visual cues.
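The orientation cue Guo describes is commonly estimated in fingerprint processing with a gradient structure tensor: local ridge direction is perpendicular to the dominant gradient direction, computed via doubled angles so opposite-facing gradients reinforce rather than cancel. The sketch below is a textbook version of that estimator in plain Python, not the paper's feature extractor; the patch format (a list of rows of grayscale values) and the central-difference step are assumptions.

```python
import math

def ridge_orientation(patch):
    """Estimate the dominant ridge angle (radians) in a grayscale
    patch using the structure tensor of image gradients."""
    h, w = len(patch), len(patch[0])
    gxx = gyy = gxy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Central-difference gradients.
            gx = (patch[y][x + 1] - patch[y][x - 1]) / 2.0
            gy = (patch[y + 1][x] - patch[y - 1][x]) / 2.0
            gxx += gx * gx
            gyy += gy * gy
            gxy += gx * gy
    # Dominant gradient direction from the doubled-angle formula;
    # ridges run perpendicular to the gradient.
    theta = 0.5 * math.atan2(2.0 * gxy, gxx - gyy)
    return theta + math.pi / 2.0
```

On a synthetic patch with horizontal ridges (intensity varying only with the row index), this returns an angle equivalent to 0 modulo pi; on vertical ridges it returns pi/2. Curvature can then be measured as how this angle changes across neighboring patches.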
The collaboration included Columbia Engineering graduate Aniv Ray and PhD student Judah Goldfeder, both of whom indicated that the project’s early success could grow stronger with bigger datasets.
“Just imagine how well this will perform once it’s trained on millions, instead of thousands of fingerprints,” Ray remarked, hinting that this approach could eventually refine how investigators hunt for clues across multiple crime scenes.
The researchers are alert to possible data gaps. They noted that their system showed similar performance across various demographics but emphasized the need for larger, more diverse fingerprint collections.
They hope that thorough validation will address any concerns about bias before anyone adopts this technique in actual investigations.
The long-term goal is to offer law enforcement a supplementary tool that improves efficiency when cases seem tangled.
While the AI cannot officially conclude a legal matter, it can help narrow the field of suspects or connect distinct crime scenes based on partial matches.
“Many people think that AI cannot really make new discoveries – that it just regurgitates knowledge,” Lipson elaborated, pointing to a broader shift in how AI might support investigative work.
“But this research is an example of how even a fairly simple AI, given a fairly plain dataset that the research community has had lying around for years, can provide insights that have eluded experts for decades.”
This study demonstrates that artificial intelligence can spot patterns that traditional analysis methods might miss. It also highlights the value of open datasets that have been underutilized in many areas of research.
The findings may prompt forensic experts to rethink certain procedures, especially when multiple prints from the same suspect turn up at different locations.
Lipson sees a future where unexpected breakthroughs can come from fresh perspectives.
“Even more exciting is the fact that an undergraduate student, with no background in forensics whatsoever, can use AI to successfully challenge a widely held belief of an entire field,” Lipson concluded.
“We are about to experience an explosion of AI-led scientific discovery by non-experts, and the expert community, including academia, needs to get ready.”
The full study was published in the journal Science Advances.