08-13-2024

Do AI models pose an existential threat to humans?

ChatGPT and other large language models (LLMs) lack the ability to learn independently or acquire new skills on their own, indicating that AI models do not pose an existential threat to humanity, according to new research conducted by the University of Bath and the Technical University of Darmstadt in Germany.

This study was published as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) – the leading international conference in natural language processing.

Controllable, predictable and safe models

The researchers found that while large language models can superficially follow instructions and demonstrate proficiency in language, they are unable to master new skills without explicit instruction. As a result, these models remain inherently controllable, predictable, and safe.

According to the experts, even as LLMs are trained on ever-larger datasets, they can continue to be deployed without significant safety concerns, although the potential for misuse of the technology remains.

AI models lack complex reasoning abilities 

As these models grow, they are likely to produce more sophisticated language and become better at responding to detailed prompts, but they are highly unlikely to develop complex reasoning abilities.

“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” said Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the new study on the ‘emergent abilities’ of LLMs.

Emergent abilities of large language models 

The research team, led by Professor Iryna Gurevych at the Technical University of Darmstadt in Germany, conducted experiments to evaluate the ability of LLMs to perform tasks they had not encountered before – so-called emergent abilities.

For example, LLMs can answer questions about social situations without having been specifically trained or programmed to do so. 

While previous research suggested that this capability stemmed from the models “knowing” about social situations, the researchers demonstrated that it is actually a result of LLMs using a known ability called “in-context learning” (ICL), where they complete tasks based on a few examples provided to them.
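To make the idea concrete, the sketch below contrasts a plain request with one that includes a few worked examples in the prompt. This is a minimal illustration of in-context learning for a generic text-completion LLM; the politeness task and wording are assumed for the example and are not taken from the study.

```python
# Minimal sketch of in-context learning (ICL), assuming a generic
# text-completion LLM. The task is illustrative, not from the study.

# Zero-shot: the model gets only an instruction.
zero_shot_prompt = (
    "Decide whether this remark is polite or impolite: "
    "'Could you possibly close the door?'"
)

# Few-shot (ICL): the same request, preceded by a few worked examples.
# The model is never retrained; it infers the task from the examples
# included in the prompt itself.
few_shot_prompt = """Decide whether each remark is polite or impolite.

Remark: "Shut the door."
Label: impolite

Remark: "Would you mind closing the door, please?"
Label: polite

Remark: "Could you possibly close the door?"
Label:"""

# Either string would be sent to an LLM API of your choice; the few-shot
# version typically yields more reliable answers, because the examples
# show the model the expected format and behaviour.
```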

Capabilities and limitations of AI models

Through thousands of experiments, the team showed that a combination of LLMs’ ability to follow instructions, in-context learning (ICL), memory, and linguistic proficiency explains both their capabilities and their limitations.

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning,” Madabushi said.

“This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”

Concerns over AI threats 

According to Madabushi, concerns over the existential threat posed by LLMs are not restricted to non-experts and have been expressed by some of the top AI researchers across the world.

However, these fears appear to be unfounded, as the researchers’ experiments clearly showed the absence of emergent complex reasoning abilities in LLMs.

Madabushi noted that while it’s important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats.

“Importantly, what this means for end users is that relying on LLMs to interpret and perform complex tasks which require complex reasoning without explicit instruction is likely to be a mistake,” said Madabushi. 

“Instead, users are likely to benefit from explicitly specifying what they require models to do and providing examples where possible for all but the simplest of tasks.”
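As a rough illustration of that advice, the sketch below contrasts a vague request with one that spells out the task, the output format, and a worked example; the note-extraction task is hypothetical and not drawn from the study.

```python
# Illustrative sketch of the advice above: state the task explicitly and
# include an example, rather than relying on the model to infer what is
# wanted. The task and wording are hypothetical, not from the study.

# Vague request: the model must guess both the task and the output format.
vague_prompt = "Sort out these meeting notes: 'Met Dana 3pm Tue re budget.'"

# Explicit request: the task, output format, and a worked example are given.
explicit_prompt = """Extract the person, day, time, and topic from each note.
Answer in the form: person | day | time | topic.

Note: "Call with Ravi on Friday at 10am about hiring."
Answer: Ravi | Friday | 10am | hiring

Note: "Met Dana 3pm Tue re budget."
Answer:"""
```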

“Our results do not mean that AI is not a threat at all,” said Gurevych. “Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all.”

Professor Gurevych concluded that future research should focus on other risks posed by the models, such as their potential to be used to generate fake news.

The study is also available on the arXiv preprint server.
