    45% of AI-generated news is wrong, new study warns — here’s what happened when I tested it myself

    By admin · October 23, 2025 · 5 min read

    AI is more deeply embedded in our daily lives than ever before. It’s blending seamlessly into how we work, search and stay informed. But a new study from the European Broadcasting Union (EBU) issues a stark warning: 45% of AI-generated news responses contain serious errors, and 81% have at least one issue, ranging from outdated information and misleading phrasing to missing or fabricated sources.

    We’ve previously reported that ChatGPT is wrong about 25% of the time. But this new data is even more alarming, especially as tools like ChatGPT Atlas and Google’s AI Overviews are becoming the default way many of us check the news. It’s a reminder that while the convenience is real, so is the risk.

    The study: AI assistants fail the accuracy test

    (Image credit: Shutterstock)

    The EBU study tested more than 3,000 AI-generated responses across 14 languages. It included some of the most popular AI assistants, such as ChatGPT, Google Gemini, Microsoft Copilot, Claude, and Perplexity.


    Here’s what the researchers found:

    • 45% of responses had at least one significant error.
    • 81% had some form of issue — from outdated info to vague sourcing.
    • 31% were flagged for sourcing problems — including fake, missing, or incorrectly cited references.
    • 20% contained major factual inaccuracies, such as misreporting current events or misattributing quotes.

    While the study didn’t publicly rank each assistant, internal figures reportedly show that Gemini in particular struggled with sourcing, while ChatGPT and Claude were inconsistent depending on the version used.

    Why this matters more than you think

    (Image credit: Tom’s Guide/Shutterstock)

    AI assistants are increasingly the go-to for quick answers, especially among younger users. According to the Reuters Institute, 15% of Gen Z users already rely on chatbots for news. And with AI now embedded in everything from browsers to smart glasses, misinformation can reach users instantly, and they are none the wiser.

    Worse, many of these assistants don’t surface sources clearly or distinguish fact from opinion, creating a false sense of confidence. When an AI confidently summarizes a breaking news story but omits the publication, timestamp, or opposing view, users may unknowingly absorb half-truths or outdated information.


    I tested top AI assistants with a real news query — here’s what happened

    (Image credit: Shutterstock)

    To see this in action, I asked ChatGPT, Claude and Gemini the same question:
    “What’s the latest on the US debt ceiling deal?”

    In this test, the best answer came from Claude. It correctly identified the timeframe of the “latest” major deal as July 2025 and accurately placed it in the context of the previous suspension (the Fiscal Responsibility Act of 2023). It also correctly stated that the debt ceiling was reinstated in January 2025 and that the deal was needed to avoid a potential default in August 2025, giving a clear and accurate timeline.

    Claude also delivered the core information (what happened, when and why it was important) in a direct, easy-to-follow paragraph without unnecessary fluff or speculative future scenarios.


    ChatGPT’s biggest flaw was its citation of news articles from the future (“Today”, “Apr 23, 2025”, “Mar 23, 2025”). This severely undermines its credibility. While some of the background information is useful, presenting fictional recent headlines is misleading.

    And while the response was well structured, with checkmarks and sections, it buried the actual “latest deal,” generalizing about worries and future outlooks rather than answering the core of the question.

    Gemini correctly identified the July 2025 deal and provided solid context. However, it ended by introducing a completely separate issue (the government shutdown) without clearly explaining any connection to the debt ceiling deal.

    How to protect yourself when using AI for news

    If you’re going to use AI to stay informed, rephrase your prompts. Instead of asking, “What’s happening in the world?”, try approaches like these:

    • Ask for sources up front. Add: “Give me links to recent, credible news outlets.”
    • Time-stamp your query. Ask: “As of today, October 23rd, what’s the latest on X?”
    • Cross-check. Run the same question in two or three assistants and note any discrepancies; a short sketch of this follows the list.
    • Don’t stop at the summary. If something sounds surprising, ask for the full article or open it in your browser.
    • Don’t treat chatbots as authorities. Use them to surface headlines, but verify facts yourself.
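
    To make the time-stamp, sourcing and cross-check tips concrete, here is a minimal Python sketch that sends the same date-stamped news question to two assistants through their official SDKs and prints the answers side by side for manual comparison. This is an illustration under assumptions, not a verified workflow: the model names and prompt wording are placeholders, and it assumes the openai and anthropic packages are installed with API keys available in the OPENAI_API_KEY and ANTHROPIC_API_KEY environment variables.

    # cross_check_news.py: a sketch for comparing two assistants on one news query.
    from datetime import date

    import anthropic
    from openai import OpenAI

    QUESTION = "What's the latest on the US debt ceiling deal?"  # example query from this article

    def build_prompt(question: str) -> str:
        # Time-stamp the query and ask for sources up front, per the tips above.
        today = date.today().isoformat()
        return (
            f"As of today, {today}: {question} "
            "Give me links to recent, credible news outlets and the date of each source."
        )

    def ask_chatgpt(prompt: str) -> str:
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumed model name; use whichever model you actually query
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def ask_claude(prompt: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        msg = client.messages.create(
            model="claude-3-5-sonnet-latest",  # assumed model name; substitute as needed
            max_tokens=600,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

    if __name__ == "__main__":
        prompt = build_prompt(QUESTION)
        for name, ask in (("ChatGPT", ask_chatgpt), ("Claude", ask_claude)):
            print(f"\n=== {name} ===\n{ask(prompt)}")

    Reading the two answers next to each other makes the discrepancies worth verifying by hand, such as differing dates, figures or missing sources, easy to spot before you repeat them elsewhere.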

    Final thoughts

    The EBU report warns that this isn’t just a user problem; it’s a public trust problem, too. If millions of people consume flawed or biased summaries daily, it could distort public discourse and undermine trusted news outlets.

    Meanwhile, publishers face a double blow: traffic is lost to AI chat interfaces, while their original reporting may be misrepresented or stripped completely.

    What’s needed now is greater transparency, stronger sourcing systems, and smarter user behavior.

    Until chatbots can consistently cite, clarify and update their sources in real time, treat each response with caution. And when it comes to breaking news, the safest prompt might still be: “Take me to the original article.”
