
Credit: Unsplash / CC0 public domain
“Luigi Mangione shoots himself,” read the BBC News headline.
Except Mangione, the man accused of murdering UnitedHealthcare CEO Brian Thompson, had done no such thing. Nor had the BBC reported that he had. Yet that was the headline Apple Intelligence displayed to its users as part of a notification summary.
It was one of several high-profile errors by the artificial intelligence-powered software that led the technology giant to suspend Apple Intelligence's notification summary feature for the news and entertainment categories.
Anees Baqir says the inadvertent spread of misinformation by an AI source of this kind “posed a significant risk by eroding public trust.”
The assistant professor of data science at Northeastern University in London, who researches online misinformation, says errors like those made by Apple Intelligence were likely to “create confusion” and could lead consumers to doubt media brands they had previously trusted.
“Imagine what this could do to people if misinformation-related content comes from a very high-profile news source that is usually considered a reliable source of information,” Baqir says. “That could be really dangerous, in my opinion.”
The Apple Intelligence episode sparked a wider debate in the U.K. over whether publicly available generative AI software is capable of accurately summarizing and understanding news articles.
BBC News CEO Deborah Turness said that while AI brings “endless opportunities,” the companies developing the tools are “currently playing with fire.”
There are reasons why generative AI tools like Apple Intelligence may not always get things right when it comes to the news, explains Mariana Macedo, a Northeastern data scientist.
When developing a generative AI, the “processes are not deterministic, so they have a certain stochasticity,” explains the London-based assistant professor, meaning there can be an element of chance in the output.
“Things can be written in a way that you cannot predict,” she explains. “It is like when you think about a child.

“The child knows more or less what is right or wrong, but the child does not know everything. The child does not have all the experience or knowledge to react and create new actions in a perfect way. It is the same with AI and algorithms.”
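That stochasticity is easy to demonstrate. The minimal sketch below, in plain Python with NumPy, mimics how a language model samples its next word from a probability distribution; the candidate words and scores are invented for illustration and come from no real model, let alone Apple's. Identical input can yield different output on every run:

```python
import numpy as np

# Hypothetical next-word scores (logits); the words and numbers are
# invented for illustration and are not drawn from any real model.
logits = {"reported": 2.0, "claimed": 1.6, "confirmed": 0.3}

def sample_next_word(logits, temperature=1.0, rng=None):
    """Draw one word from a softmax over the logits.

    temperature controls randomness: values above 1 flatten the
    distribution, making low-scoring (and possibly wrong) words
    more likely to be picked.
    """
    rng = rng or np.random.default_rng()
    words = list(logits)
    scores = np.array([logits[w] for w in words]) / temperature
    probs = np.exp(scores - scores.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(words, p=probs)

# Same input every time, yet the printed word can differ run to run:
# the sampling step is stochastic, not deterministic.
for _ in range(5):
    print(sample_next_word(logits, temperature=1.2))
```

Because the final step is a random draw rather than a fixed rule, a summarizer built this way cannot be guaranteed to produce the same, or even a correct, wording twice.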
Macedo says the problem with AI and the news is that news is mostly about things that have just happened; there is little or no past context to help the software understand the reporting it is asked to summarize.
“When you talk about news, you are talking about new things,” the researcher says. “You are not talking about things that we have known for a long time.

“AI is very good at the things that are well established in society. AI does not know what to do when it comes to contradictory or new things. So whenever the AI is not trained with enough information, it is going to get things wrong even more.”
To ensure accuracy, Macedo argues that developers need to “find a way of automatically checking that information” before it is published.
Letting AIs learn from news articles during training could also make them “more likely to improve” in accuracy, Macedo says.
The BBC currently blocks developers from using its content to train generative AI models. But other British news organizations have moved toward collaboration, with a partnership agreement between the Financial Times and OpenAI that lets ChatGPT users see selected attributed summaries, quotes and links.
Baqir suggests that collaboration between tech companies, media organizations and communications regulators may be the best way to tackle the problem of AI-generated news misinformation.
“I think they all need to come together,” he says. “Only then can we find a way to help mitigate these impacts. There cannot be just one solution.”
Provided by
Northeastern University
This story is republished courtesy of Northeastern Global News news.northeastern.edu.
Citation: Apple missteps highlight the risks of AI-automated headlines, researcher says (2025, March 24) retrieved 21 April 2025 from https://techxplore.com/News/2025-03-apple-missteps-highlight-ai-automated.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.