Leading AI assistants misrepresent news content in almost half of their responses, according to research released Wednesday by the European Broadcasting Union (EBU) and the BBC. The international study analyzed 3,000 responses to news-related questions from prominent artificial intelligence assistants, assessing accuracy, sourcing, and the ability to distinguish opinion from fact across 14 languages. The assistants examined included OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity.
The study found that 45% of the AI responses analyzed contained at least one significant issue, and 81% exhibited some form of problem. According to the Reuters Institute’s Digital News Report 2025, approximately 7% of online news consumers, and 15% of those under 25, use AI assistants to get their news.
In response, Google’s Gemini states on its website that it welcomes feedback so it can improve. OpenAI and Microsoft have acknowledged hallucinations as a known issue and say they are working to address it. Perplexity’s website claims a 93.9% accuracy rate on factuality.
On sourcing, the study found that one-third of AI assistant responses contained serious errors, such as missing, misleading, or incorrect attribution. Gemini fared worst: 72% of its responses had significant sourcing problems, compared with under 25% for the other assistants. Across all assistants studied, 20% of responses contained accuracy issues, including outdated information.
Examples cited in the study included Gemini inaccurately reporting changes to a vaping law and ChatGPT identifying Pope Francis as the current Pope months after his death. The research, which involved 22 public-service media organizations from various countries, urged AI companies to improve how their assistants handle news queries and to be more accountable, in order to maintain public trust and democratic participation.
The EBU report stressed that AI assistants must adopt accountability practices similar to those of news organizations, which identify and correct errors promptly. Such accountability is crucial for sustaining public trust as AI assistants increasingly replace traditional search engines as a gateway to news.
