AI search engines give incorrect answers at an alarming 60% rate, study says



Even when these AI search tools cited sources, they often directed users to syndicated versions of content on platforms like Yahoo News rather than original publisher sites. This occurred even in cases where publishers had formal licensing agreements with AI companies.

URL fabrication emerged as another significant problem. More than half of citations from Google’s Gemini and Grok 3 led users to fabricated or broken URLs resulting in error pages. Of 200 citations tested from Grok 3, 154 resulted in broken links.

These issues put publishers in a difficult position. Blocking AI crawlers can mean losing attribution entirely, while permitting them allows widespread reuse without driving traffic back to publishers' own websites.

A graph from CJR showing that blocking crawlers doesn't mean that AI search providers honor the request. Credit: CJR

Mark Howard, chief operating officer at Time magazine, expressed concern to CJR about ensuring transparency and control over how Time’s content appears via AI-generated searches. Despite these issues, Howard sees room for improvement in future iterations, stating, “Today is the worst that the product will ever be,” citing substantial investments and engineering efforts aimed at improving these tools.

However, Howard also did some user shaming, suggesting it’s the user’s fault if they aren’t skeptical of free AI tools’ accuracy: “If anybody as a consumer is right now believing that any of these free products are going to be 100 percent accurate, then shame on them.”

OpenAI and Microsoft provided statements to CJR acknowledging receipt of the findings but did not directly address the specific issues. OpenAI noted its promise to support publishers by driving traffic through summaries, quotes, clear links, and attribution. Microsoft stated that it adheres to the Robots Exclusion Protocol and publisher directives.
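For context on what honoring the Robots Exclusion Protocol involves: a compliant crawler checks a site's robots.txt before fetching pages. A minimal sketch using Python's standard `urllib.robotparser`, where the user-agent token `ExampleAIBot` and the robots.txt contents are hypothetical, not taken from any real publisher or AI company:

```python
# Sketch of a crawler honoring the Robots Exclusion Protocol via the
# standard library. "ExampleAIBot" and this robots.txt are hypothetical.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler consults the parser before fetching any URL:
blocked = parser.can_fetch("ExampleAIBot", "https://publisher.example/article")
allowed = parser.can_fetch("SomeOtherBot", "https://publisher.example/article")
print(blocked)  # False: this agent is disallowed site-wide
print(allowed)  # True: other agents fall under the permissive wildcard rule
```

The CJR finding is that this check is advisory: nothing technically prevents a crawler from ignoring the file, which is why blocking alone does not guarantee compliance.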

The latest report builds on previous findings published by the Tow Center in November 2024, which identified similar accuracy problems in how ChatGPT handled news-related content. For more detail on the fairly exhaustive report, check out Columbia Journalism Review’s website.
