
Breitbart
A new study from Columbia Journalism Review’s Tow Center for Digital Journalism has uncovered serious accuracy issues with generative AI models used for news searches. According to the study, AI search engines have a startling error rate of 60 percent when queried about the news.
Ars Technica reports that the research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news sources. This is particularly concerning given that roughly 1 in 4 Americans now use AI models as alternatives to traditional search engines, according to the report by researchers Klaudia Jaźwińska and Aisvarya Chandrasekar.
Error rates varied significantly among the platforms tested. Perplexity provided incorrect information in 37 percent of queries, while ChatGPT Search was wrong 67 percent of the time. Elon Musk’s Grok 3 had the highest error rate at 94 percent. For the study, researchers fed direct excerpts from real news articles to the AI models and asked each one to identify the headline, original publisher, publication date, and URL. In total, 1,600 queries were run across the eight generative search tools.
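For readers curious how a per-tool error rate like the ones above gets tallied, here is a minimal sketch. The tool names match the article, but the per-query outcomes below are invented placeholders, not the study's actual data (the real study ran 200 queries per tool, 1,600 in total).

```python
def error_rate(results):
    """results: list of booleans, True where the tool answered a query incorrectly.
    Returns the percentage of incorrect answers."""
    return 100.0 * sum(results) / len(results)

# Invented example: five placeholder queries per tool, for illustration only.
sample_results = {
    "Perplexity": [False, False, True, False, True],
    "ChatGPT Search": [True, True, False, True, False],
    "Grok 3": [True, True, True, True, True],
}

for tool, outcomes in sample_results.items():
    print(f"{tool}: {error_rate(outcomes):.0f}% incorrect")
```

Each graded query is simply marked right or wrong against the known source article, and the error rate is the fraction marked wrong.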
So they’re still more accurate than legacy media sources.
Artificial is someone’s idea of what they think it should be.
Has nothing to do with being correct.
Programs don’t lie, but liars program. Best bet is to argue with it; I got one to admit that its Climate Change reply could be political bullshit.
Garbage in, garbage out. This is NOT a computer thinking on its own…just more biased liberal programming.
It’s no AI.
Fancy/clever algorithm.
Doesn’t learn, doesn’t think. Just a fancy interface for searching massive volumes of text…similar to a search engine but more chatty.
It’s like the nosy “Karen” next door. Completely ignorant but full of information.
MrLiberty
AND IT’S NOT TRUE AI. It’s all bullshit. I’ve been saying this from the beginning.
These have been pushed as the prevalent AIs, right?
Meaning they’re the new mainstreamed AIs.
Unfortunately quantum computers may actually produce true AI…
Skynet may not be too far off.
The more mainstream these become, the more lawsuits there will be. To the point where you will have to ‘agree’ to a disclaimer holding the user fully responsible for the dissemination of anything the AI tool spits out. No different than the ‘230 Rule’ that covers/protects web providers’ asses now.