New York (CNN) — Apple is temporarily pulling its newly introduced artificial intelligence feature that summarizes news notifications after it repeatedly sent users error-filled headlines, sparking backlash from news organizations and press freedom groups.
The rare reversal from the iPhone maker on its heavily marketed Apple Intelligence feature comes after the technology produced misleading or altogether false summaries of news headlines that appear almost identical to regular push notifications.
On Thursday, Apple deployed a beta software update to developers that disables the AI feature for news and entertainment headlines. The update is slated to roll out later to all users, and the company plans to re-enable the feature in a future release once it has improved the technology.
As part of the update, the company said the Apple Intelligence summaries, which users must opt into, will more explicitly emphasize that the information has been produced by AI, signaling that it may sometimes produce inaccurate results.
Last month, the BBC complained to Apple about the technology, urging the company to scrap the feature after it created a false headline stating that Luigi Mangione, who is charged with murder in the death of the UnitedHealthcare CEO, had shot himself. On another occasion, three New York Times articles were summarized in a single push notification that falsely stated Israeli Prime Minister Benjamin Netanyahu had been arrested.
A BBC spokesperson told CNN in December it “is critical that Apple urgently addresses these issues as the accuracy of our news is essential in maintaining trust. These AI summarisations by Apple do not reflect — and in some cases completely contradict — the original BBC content.”
On Wednesday, the AI-powered feature once again incorrectly summarized a Washington Post notification, falsely stating: “Pete Hegseth fired; Trump tariffs impact inflation; Pam Bondi and Marco Rubio confirmed.” None of those claims were true.
“This is my periodic rant that Apple Intelligence is so bad that today it got every fact wrong in its AI summary of Washington Post news alerts,” the newspaper’s tech columnist Geoffrey Fowler wrote. “It’s wildly irresponsible that Apple doesn’t turn off summaries for news apps until it gets a bit better at this AI thing.”
Press freedom groups have also highlighted the dangers the summaries pose to consumers seeking reliable information. Reporters Without Borders called the feature “a danger to the public’s right to reliable information on current affairs,” and the National Union of Journalists, one of the largest journalist unions worldwide, emphasized that “the public must not be placed in a position of second-guessing the accuracy of news they receive.” Both groups called for the AI-powered summaries to be removed.
Apple is hardly the first developer to contend with AI fabricating information; popular models like ChatGPT often produce confident “hallucinations.”
Large language models, the technology behind AI tools, are trained to respond to prompts with “a plausible sounding answer,” Suresh Venkatasubramanian, a professor at Brown University who helped co-author the White House’s Blueprint for an AI Bill of Rights, previously told CNN.
“So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” Venkatasubramanian said. “There is no knowledge of truth there.”
Two years after ChatGPT’s launch, AI hallucinations remain as prevalent as ever. A July 2024 study from Cornell, the University of Washington, and the University of Waterloo found that top AI models still can’t be fully trusted given their proclivity for inventing information.
The-CNN-Wire
™ & © 2025 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.