AI knows it all, but what happens when it makes it up?
I remember research analysts being the most frustrated group back in November 2022 when ChatGPT exploded onto the tech scene. They were being asked to experiment with and use AI in their workflows, but it didn't take long for them to hit a major stumbling block. After all, would you risk your career and credibility over a new technology fad?
While content creators like myself, data scientists, and engineers were thriving with AI adoption, we could only empathize with our research analyst peers as we partnered with them to find new ways to make OpenAI, Gemini, Langchain, and Perplexity cater to their requirements. Everyone tried building trust in AI as we put on our researcher hats.
But soon, the consensus was that AI hallucinations were a problem for knowledge workers, whether you were a researcher, content creator, developer, or business leader.
Fast forward to 2025, and despite all the advancements in AI, hallucinations haven't disappeared. While companies like Anthropic, OpenAI, and NVIDIA are pushing the boundaries of AI reasoning models, the ghost of hallucinations still lingers. Our latest G2 LinkedIn poll shows that nearly 75% of professionals have experienced AI hallucinations, with over half (52%) saying they've encountered them multiple times.
These new developments may promise smarter, faster, and more reliable AI, but the question remains: are they strong enough to keep hallucinations at bay?
Let's take a closer look at the latest AI LLM updates shaping the industry:
A timeline of key AI LLM model updates in 2025
- February 24, 2025: Anthropic launched Claude 3.7 Sonnet, the world's first hybrid reasoning AI model, with enhanced and extended output limits
- February 27, 2025: OpenAI unveiled GPT-4.5 Orion, integrating various technologies into a unified model for streamlined AI applications
- March 18, 2025: NVIDIA announced the open Llama Nemotron family of models with reasoning capabilities to empower enterprises
- March 20, 2025: At GTC 2025, NVIDIA launched NVIDIA Dynamo, open-source software designed to accelerate and scale AI reasoning models in AI factories
Hallucinations, the "Answer Economy", and real-world challenges
As AI models evolve with new capabilities, the way we interact with information is also transforming. We're witnessing the rise of a mega-trend that our very own Tim Sanders calls the "Answer Economy." People are transitioning from search-based research to an answer-driven style of learning, buying, and working.
But there's a catch. AI chatbots seem to deliver instant, confident responses, even when they're wrong. And despite accuracy concerns, these AI-generated answers are influencing decisions across industries. This shift poses a critical question: are we too quick to accept AI's responses as truth, especially when the stakes are high? How strong is our trust in AI?
While AI chatbots are shaking up search and AI companies are leaping toward agentic AI, how strong are their roots when hallucinations haunt them? AI hallucinations can be as trivial as Gemini telling people to eat rocks and put glue on pizza, or as big as fabricated claims like the ones below.
AI hallucinations: a timeline of legal challenges
- January 6, 2025: An AI expert's testimony was challenged in court for relying on AI-hallucinated citations in a deepfake-related lawsuit, raising concerns about the credibility of AI-generated evidence
- February 11, 2025: Lawyers in Wyoming faced potential sanctions for using AI-generated fictitious citations in a lawsuit against Walmart, highlighting the risks of relying on hallucinated data in legal filings
- March 20, 2025: OpenAI faced a privacy complaint in Europe after ChatGPT falsely accused a Norwegian individual of murder, raising concerns about reputational damage and GDPR violations
There were several other notable AI hallucination mishaps in 2024, involving brands like Air Canada, Zillow, Microsoft, Groq, and McDonald's.
So, are AI chatbots making life easier, or just adding another layer of complexity for businesses? We combed through G2 reviews to uncover what's working, what's not, and where the hallucinations hit hardest.

The G2 take
A quick comparison of ChatGPT, Gemini, Claude, and Perplexity shows ChatGPT as the leader at a glance, with an 8.7/10 score. However, a closer look reveals that Gemini leads in terms of reliability, by a slim margin.
Source: G2.com
While ChatGPT is better at learning from user interactions to reduce errors and understand context, Perplexity and Gemini beat it on content accuracy with an 8.5 score.
Source: G2.com
Nearly 35% of reviews highlight the accuracy gap
These AI chatbots are being used in small businesses, SMEs, and enterprises by all kinds of professionals: research analysts, marketing leaders, software engineers, tutors, and more. A deep dive into G2 review data reveals a glaring trend: inaccuracy remains a shared concern across the board.
We can't help but notice that, right off the bat, an average of ~34.98% of reviews raise concerns about inaccuracy, context understanding, and outdated information.
Source: Original G2 Data
Users aren't shy about flagging their frustrations. Across hundreds of reviews, accuracy concerns topped the list of cons:
- ChatGPT: 101 mentions of inaccuracy, with outdated information adding to the frustration
- Gemini: 33 instances of inaccurate responses, compounded by 26 complaints about context understanding
- Claude: Fewer reports, but with seven accuracy issues and five concerns about recognition
- Perplexity: While boasting quick insights, it wasn't immune; users pointed out seven limitations related to AI accuracy
While China's DeepSeek has turned heads and wreaked stock market havoc thanks to its speed and cost-saving go-to-market (GTM) product, it doesn't have a distinct (and dare we say legal enough) presence in the USA, owing to valid concerns over safety and potential data siphoning. Speculation around its reliability outweighs the allure of affordability.
Our VP of Insights, Tim Sanders, called it out for its hallucination rate in a recent interview.
"DeepSeek's R1 has an 83% hallucination rate for research and writing, which is much higher than the 10% hallucination rate of other AI platforms."
Tim Sanders
VP of Research Insights at G2
Gemini: the ironic productivity booster for research analysts
We noted that several research analysts use Gemini. Some particularly favor its research mode and use it for academic and market research.
"Daily use, particularly in love with research mode. Gemini's speed enhances the browsing experience overall, especially for those who use the internet for intensive research and work tasks or who multitask."
Elmoury T.
Research Analyst
But here's the twist: research analysts aren't raving about Gemini for its research reliability. Instead, it's the seamless connectivity to Google's suite of tools and the customizable user experience that steal the spotlight. Productivity boosts, streamlined workflows, and smoother task management? Absolutely. Trusting it for rigorous research? Not so much.
While Gemini's research mode aggregates information from the web, accuracy and fact-checking aren't making the headlines. Memory management issues and sluggish performance also keep it from being a true research powerhouse.
Source: G2.com Reviews
ChatGPT: power player with precision pitfalls
From code generation to market research, ChatGPT has become a daily go-to for professionals to brainstorm, generate content quickly, and answer complex questions. Yet accuracy concerns persist.
Geopolitical topics and nuanced research often lead to misleading results. Context understanding is solid, but misinformation and hallucinations still plague users.
User reviews praise ChatGPT's polished tone and contextual understanding, but this confidence often masks the occasional hallucination. Users highlighted its tendency to offer plausible-sounding but inaccurate information, especially in complex or nuanced scenarios like geopolitics. It's a textbook case of "sounding smart but not always being right."
Paid account users are impressed with its new multimodal inputs, voice interactions, and memory retention, but also highlight its limitations in data analysis, image creation, and overall accuracy.
Overall, paid users find the product costly compared to the free alternatives available on the market, owing to ChatGPT's server downtime and accuracy issues.
Source: G2.com Reviews
Source: G2.com Reviews
G2 reviews also surfaced how users go back and forth with ChatGPT to get their desired results. At times, users ran out of allotted tokens quickly, leaving their queries unsatisfied.
Source: G2.com Reviews
But for some users, the benefits far outweigh the pitfalls. For instance, in industries where speed and efficiency are critical, ChatGPT is proving to be a game-changer.
G2 Icon use case
Peter Gill, a G2 Icon and freight broker, has embraced AI for industry-specific research. He uses ChatGPT to analyze regional produce trends across the U.S., identifying where seasonal peaks create opportunities for his trucking services. By cutting his weekly research time by up to 80%, AI has become a critical tool in optimizing his business strategy.
"Traditionally, my weekly research could take me over an hour of manual work, scouring news and reports. ChatGPT has slashed this process to just 10-15 minutes. That's time I can now invest in other critical areas of my business."
Peter Gill
G2 Icon and Freight Broker
Peter believes AI's benefits extend far beyond the logistics sector, proving it to be a powerful ally in today's data-driven world.
Perplexity: speed meets smarts, with a side of stumbles
Perplexity's external web search capability and rapid updates have earned it a solid fanbase among researchers. Users praise its ability to provide comprehensive, context-aware insights. The frequent integration of the latest AI models ensures it stays a step ahead.
But it's not all sunshine and summaries. Users flagged issues with data export, making it harder to translate insights into actionable reports. Minor UX improvements could also significantly elevate its user experience.
Michael N., a G2 reviewer and head of customer intelligence, stated that Perplexity Pro has transformed how he builds knowledge.
Source: G2.com Reviews
"Easiest way of conducting tiny and complex research with proper prompting."
Vitaliy V.
G2 Icon and Product Marketing Manager
Business leaders and CMOs like Andrea L. are using different AI chatbots to supplement, complement, or complete their research.
Source: G2.com Reviews
G2 Icon use case
Luca Piccinotti, a G2 Icon and CTO at Studio Piccinotti, uses AI to navigate complex market dynamics. His team uses AI to process vast amounts of data from surveys, social media, and customer feedback for sentiment analysis, helping them gauge public opinion and spot emerging trends. AI also streamlines their survey workflows by automating question generation, data collection, and analysis, making their research more efficient.
To translate insights into actionable strategies, Luca relies on predictive analytics to forecast consumer behavior, monitor competitors, and personalize marketing campaigns. His preferred AI tools? Perplexity for research and ChatGPT for managing and refining the data.
"Perplexity is our trusted partner for research purposes, while we use ChatGPT for managing the obtained data. We also use additional tools and wrappers, APIs, local models, etc. But the unbeatable ones are Perplexity and ChatGPT at this moment."
Luca Piccinotti
G2 Icon and CTO at Studio Piccinotti
Claude: a fairly honest, human-like, data-deficient counterpart
Claude's conversational tone and contextual understanding shine through in reviews. Users appreciate its willingness to admit when it doesn't know something rather than hallucinating a response. That level of transparency builds trust.
Still, limited training data and capability gaps compared to competitors like ChatGPT remain areas for improvement. And while its strengths lie in conversational accuracy, its structured data analysis is still a work in progress.
Unlike most AI chatbots that confidently provide incorrect answers, Claude earns user appreciation for its transparency when it doesn't know something. This "honesty over hallucination" approach is a unique selling point, making it a preferred choice for users who value reliable feedback over speculative responses.
Source: G2.com Reviews
However, users also expressed frustrations around Claude's professional mode, citing its usage limits and lack of customer service.
Source: G2.com Reviews
Verdict: AI for research, yay or nay?
It's a cautious yay, which is still better than the classic "it depends".
AI chatbots are undeniably valuable research tools, especially for speeding up information gathering and summarizing. But they're not flawless.
Four key takeaways
Hallucinations, accuracy issues, and inconsistent reliability remain challenges.
- Gemini can be your productivity sidekick, just not your research fact-checker, if you're a research analyst who values integration and productivity over pinpoint accuracy.
- ChatGPT is a productivity booster for quick research tasks, but fact-checking remains a must, even if you're paying a bomb for the paid subscription.
- Perplexity is a reliable knowledge companion for researchers who value speed and cutting-edge AI.
- Claude is the choice for those seeking honest, human-like responses, but don't expect it to crunch complex datasets.
My tried-and-tested prompting hacks to avoid AI hallucinations
- Prompt structure = be precise + give context + specify the desired outcome + warn it about what its output shouldn't contain + share an example if possible
- Use a prompt that calls on the AI's chain-of-thought reasoning to check accuracy and identify hallucinations. Ask the AI chatbot: "Break down the steps you followed to produce this output. Also, can you explain your rationale for doing so?"
- Use templatization and follow organization-wide guidelines on using AI chatbots and LLMs for work
- Humans in the loop remain critical, especially in high-stakes environments like legal research, market research, medical research, financial research, etc.
- Always verify and cross-check sources. We know life gets busy, but a quick check is always cheaper than a lawsuit!
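To make the first two hacks concrete, the prompt-structure formula and the chain-of-thought check can be folded into a small reusable helper. This is a minimal sketch, not an official template; the `build_prompt` function and its field names are our own illustration:

```python
def build_prompt(task, context, outcome, avoid=None, example=None):
    """Assemble a hallucination-resistant prompt: be precise, give context,
    specify the desired outcome, warn about what to avoid, and share an
    example if possible."""
    parts = [
        f"Task: {task}",
        f"Context: {context}",
        f"Desired outcome: {outcome}",
    ]
    if avoid:
        parts.append(f"Do NOT include: {avoid}")
    if example:
        parts.append(f"Example of the expected output: {example}")
    # Ask the model to expose its reasoning, so hallucinations are easier to spot.
    parts.append(
        "Break down the steps you followed to produce this output. "
        "Also, can you explain your rationale for doing so?"
    )
    return "\n".join(parts)


# Hypothetical usage: a templatized research query a freight broker might send.
prompt = build_prompt(
    task="Summarize Q1 produce-shipping trends in the U.S. Midwest",
    context="Audience: freight brokers planning weekly capacity",
    outcome="Five bullet points, each citing a verifiable source",
    avoid="Unsourced statistics or speculative claims",
)
```

Templates like this also make organization-wide guidelines enforceable: everyone fills in the same fields instead of improvising free-form prompts.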
Hallucinate less, verify more: avoid the AI tunnel vision trap
Expect AI models to double down on accuracy and transparency. Advances in multimodal AI and retrieval-augmented generation (RAG) could reduce hallucinations. Perplexity, OpenAI, Google, and Anthropic now have their own AI search capabilities, which may plug into real-time user data to sharpen the accuracy and relevance of outputs.
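For readers curious how RAG curbs hallucinations in principle, here is a deliberately tiny sketch: retrieve supporting passages first, then constrain the answer to stay inside them. The corpus, keyword-overlap scoring, and function names below are illustrative stand-ins; production systems use vector embeddings and a real LLM call:

```python
# Stand-in corpus drawn from the 2025 model timeline above.
CORPUS = [
    "Claude 3.7 Sonnet launched on February 24, 2025 as a hybrid reasoning model.",
    "NVIDIA Dynamo is open-source software for scaling AI reasoning models.",
    "GPT-4.5 Orion integrates various technologies into a unified model.",
]

def retrieve(query, corpus, k=2):
    """Rank passages by naive keyword overlap with the query (a toy scorer)."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return scored[:k]

def grounded_prompt(query, corpus):
    """Build a prompt that instructs the model to answer only from sources."""
    sources = retrieve(query, corpus)
    context = "\n".join(f"- {p}" for p in sources)
    return (
        "Answer using ONLY these sources; say 'unknown' if they are "
        f"insufficient:\n{context}\nQuestion: {query}"
    )
```

The key design choice is the "say 'unknown'" escape hatch: giving the model an explicit alternative to guessing is what turns retrieval into an actual hedge against hallucination.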
Even though newer models like DeepSeek R1 are being built at one-tenth the cost of leading competitors, their trustworthiness will determine their fate in the global market.
In the end, AI chatbots and LLMs are your research sidekick, not your fact-checker. Use them wisely, question relentlessly, and let the data, not the chatbot, lead the way.
Enjoyed this deep-dive analysis? Subscribe to the G2 Tea newsletter today for the hottest takes in your inbox.
Edited by Supanna Das