As artificial intelligence (AI) becomes increasingly integrated into journalism, newsrooms face the dual challenge of effectively using the technology while transparently disclosing its involvement to readers.
New research from the University of Kansas (KU) reveals that readers often view AI’s role in news production negatively, even when they don’t fully understand its specific contributions. That perception alone can make the news seem less credible.
The studies, led by Alyssa Appelman and Steve Bien-Aimé of the William Allen White School of Journalism and Mass Communications at KU, explore how readers interpret AI involvement in news articles and how those interpretations shape perceptions of credibility.
Appelman and Bien-Aimé, along with their collaborators Haiyan Jia of Lehigh University and Mu Wu of California State University, conducted an experiment to investigate how different AI-related bylines influence readers.
Participants were randomly assigned one of five bylines on an article about the safety of artificial sweetener aspartame. These bylines ranged from “written by staff writer” to “written by artificial intelligence,” with variations indicating collaboration or assistance from AI.
The researchers found that readers interpreted these bylines in diverse ways. Even when the byline simply stated “written by staff writer,” many readers assumed AI had played a role in the article’s creation due to the absence of a named human author.
Participants used their prior knowledge to make sense of AI’s potential contributions, often overestimating its involvement.
“People have a lot of different ideas on what AI can mean, and when we are not clear on what it did, people will fill in the gaps on what they thought it did,” Appelman explained.
Regardless of their interpretation, participants consistently rated news articles as less credible when they believed artificial intelligence was involved. This effect persisted even when the byline explicitly indicated human contribution alongside AI assistance.
Readers appeared to prioritize the perceived extent of human involvement in evaluating the article’s trustworthiness.
“The big thing was not between whether it was AI or human: It was how much work they thought the human did,” Bien-Aimé noted.
The findings underscore the importance of clear, precise disclosure about AI’s role in news production.
While transparency is crucial, simply stating that AI was used may not be enough to allay reader concerns: if readers perceive AI as having contributed more than a human, their trust in the news can diminish.
Recent controversies, such as allegations that Sports Illustrated published AI-generated articles while presenting them as human-written, have underscored the risks of insufficient disclosure.
The research also suggests that readers may be more accepting of AI in contexts where it has not traditionally replaced human roles. For instance, algorithmic recommendations on platforms like YouTube are often perceived as helpful rather than intrusive.
However, in fields like journalism, where human expertise is traditionally valued, the introduction of AI can create skepticism about the quality and authenticity of the work.
“Part of our research framework has always been assessing if readers know what journalists do,” Bien-Aimé said. “And we want to continue to better understand how people view the work of journalists.”
Appelman and Bien-Aimé’s findings point to a gap in readers’ understanding of journalistic practices: readers often interpret disclosures about AI involvement, corrections, ethics training, and even bylines differently than journalists intend.
To bridge this gap, the researchers emphasize the need for journalists and educators to better communicate the specifics of how AI is used in news production.
“This shows we need to be clear. We think journalists have a lot of assumptions that we make in our field that consumers know what we do. They often do not,” Bien-Aimé said.
Both studies call for further investigation into how readers perceive AI’s role in journalism and how these perceptions influence trust in the media. By understanding these dynamics, journalists can refine their practices to maintain credibility while leveraging AI’s potential.
As AI continues to shape the future of journalism, the field must navigate the balance between technological innovation and maintaining public trust.
Transparency, clear communication, and ethical practices will be essential to ensuring that AI serves as a tool to enhance rather than undermine the credibility of the news.
The findings are published in Communication Reports and Computers in Human Behavior: Artificial Humans.