As artificial intelligence becomes increasingly involved in journalism, journalists and editors are wondering not only how to use the technology, but also how to disclose its use to readers. A new study from the University of Kansas found that when readers believe AI is somehow involved in news production, they have less confidence in the credibility of the information, even if they don’t fully understand what the AI contributed.
The results show that readers notice the use of AI in news creation and tend to perceive it negatively. But understanding what the technology contributed to a story, and how, can be complicated, and disclosing it to readers in a way they understand is a problem that needs to be addressed clearly, researchers say.
“The growing use of AI in journalism is an issue we know journalists and educators are talking about, but we were interested in how readers perceive it. So we wanted to learn more about media perceptions and their influence, or what people think about AI-generated news,” said Alyssa Appelman, associate professor at the William Allen White School of Journalism and Mass Communications and co-author of two studies on the topic.
Appelman and Steve Bien-Aimé, assistant professor at the William Allen White School of Journalism and Mass Communications, helped conduct an experiment in which they showed readers a news story about the artificial sweetener aspartame and its safety for human consumption. Readers were randomly assigned one of five bylines: written by staff writer, written by staff writer with artificial intelligence tool, written by staff writer with artificial intelligence assistance, written by staff writer with artificial intelligence collaboration, and written by artificial intelligence. The article itself was identical in all cases.
The results were published in two research articles. Both were written by Appelman and Bien-Aimé of KU, along with Haiyan Jia of Lehigh University and Mu Wu of California State University, Los Angeles.
One article focused on how readers made sense of AI bylines. After reading the article, readers were surveyed about the meaning of the specific byline they received and whether they agreed with several statements intended to measure their media literacy and their attitudes toward AI.
The results showed that regardless of the byline they received, participants had broad interpretations of what the technology did. The majority said they felt humans were the main contributors, while some said they thought AI could have been used as a research aid or to write a first draft that a human then edited.
The results showed that participants understood what AI technology can do and that it is guided by humans using prompts. However, the different byline conditions left plenty of room for people to interpret how AI might have contributed specifically to the article they were reading.
When AI’s contribution was mentioned in the byline, it negatively affected readers’ perceptions of the credibility of the source and the author. Even with the byline “written by staff writer,” readers interpreted it to mean the story was at least partially written by AI, as no human name was attached to it.
Readers used sensemaking as a technique to interpret the AI’s contributions, the authors wrote. The tactic is a way of using previously learned information to make sense of situations they may not be familiar with.
“People have a lot of different ideas about what AI can mean, and when we’re not clear about what it did, people will fill in the gaps about what they thought it did,” Appelman said.
The results showed that no matter how much readers thought AI had contributed to the story, any perceived AI involvement negatively affected their judgment of the information’s credibility.
The results were published in the journal Communication Reports.
A second research paper explored how perceptions of humanness mediate the relationship between AI’s perceived contribution and credibility judgments. It found that acknowledging AI improved perceptions of transparency, and that making readers feel humans contributed to the news improved trustworthiness.
Participants indicated what percentage of the article’s creation they thought involved AI, regardless of the byline condition they received. The higher the percentage they gave, the lower their credibility judgment. Even those who read “written by staff writer” said they thought AI was involved to some extent.
“The most important thing wasn’t whether it was the AI or the human: it was how much work they thought the human was doing,” Bien-Aimé said. “It shows that we need to be clear. We think journalists make many assumptions in our field that consumers know what we’re doing. That’s often not the case.”
The results suggest that people place greater credibility on human contributions in fields such as journalism that have traditionally been carried out by humans. When that work is handed over to technology such as AI, it can hurt perceptions of credibility, whereas this may not be the case for tasks that are not traditionally human, such as YouTube suggesting videos to a person based on their previous viewing, the authors said.
Although it can be considered positive that readers tend to perceive information written by humans as more credible, journalists and educators must also understand that they need to be clear when disclosing how, or whether, they use AI. Transparency is good practice, as demonstrated by a scandal earlier this year in which Sports Illustrated allegedly published AI-generated articles presented as being written by people. However, the researchers say, simply stating that AI was used may not be clear enough for people to understand what it did, and if readers believe it contributed more than a human did, that could negatively influence perceptions of credibility.
The findings on perceived authorship and humanness were published in the journal Computers in Human Behavior: Artificial Humans.
Both journal articles indicate that future research should continue to explore how readers perceive AI’s contributions to journalism, the authors say, and they suggest that journalism as a field could benefit from improving how it discloses such practices. Appelman and Bien-Aimé study readers’ understanding of various journalistic practices and have found that readers often do not interpret disclosures such as corrections, bylines, ethics training or the use of AI in the way journalists intended.
“Part of our research framework has always been to assess whether readers know what journalists do,” Bien-Aimé said. “And we want to continue to better understand how people perceive the work of journalists.”
More information:
Steve Bien-Aimé et al, Who wrote it? News readers’ sensemaking of AI/human bylines, Communication Reports (2024). DOI: 10.1080/08934215.2024.2424553
Haiyan Jia et al, News bylines and perceived AI authorship: Effects on source and message credibility, Computers in Human Behavior: Artificial Humans (2024). DOI: 10.1016/j.chbah.2024.100093