    Tech News

    How chatbots can change your mind – a new study reveals what makes AI so persuasive

    By admin · December 6, 2025 · 6 min read

    Image: stellalevi/DigitalVision Vectors via Getty Images

    ZDNET’s key takeaways

    • Interacting with chatbots can shift users’ beliefs and opinions.
    • A newly published study aimed to figure out why.
    • Post-training and information density were key factors.

    Most of us feel a sense of personal ownership over our opinions: 

    “I believe what I believe, not because I’ve been told to do so, but as the result of careful consideration.”
    “I have full control over how, when, and why I change my mind.”

    A new study, however, reveals that our beliefs are more susceptible to manipulation than we would like to believe, and that chatbots can be the ones doing the manipulating.

    Published Thursday in the journal Science, the study addressed increasingly urgent questions about our relationship with conversational AI tools: What is it about these systems that causes them to exert such a strong influence over users’ worldviews? And how might this be used by nefarious actors to manipulate and control us in the future?

    The new study sheds light on some of the mechanisms within LLMs that can tug at the strings of human psychology. As the authors note, these can be exploited by bad actors for their own gain. However, they could also become a greater focus for developers, policymakers, and advocacy groups in their efforts to foster a healthier relationship between humans and AI.

    “Large language models (LLMs) can now engage in sophisticated interactive dialogue, enabling a powerful mode of human-to-human persuasion to be deployed at unprecedented scale,” the researchers write in the study. “However, the extent to which this will affect society is unknown. We do not know how persuasive AI models can be, what techniques increase their persuasiveness, and what strategies they might use to persuade people.” 

    Methodology

    The researchers conducted three experiments, each designed to measure the extent to which a conversation with a chatbot could alter a human user’s opinion.

    The experiments focused specifically on politics, though their implications extend to other domains as well. Political beliefs are arguably a particularly illustrative case, since they're typically considered more personal, consequential, and inflexible than, say, your favorite band or restaurant (which might easily change over time).

    In each of the three experiments, just under 77,000 adults in the UK participated in a short interaction with one of 19 chatbots, the full roster of which includes Alibaba’s Qwen, Meta’s Llama, OpenAI’s GPT-4o, and xAI’s Grok 3 beta.

    The participants were divided into two groups: a treatment group, whose chatbot interlocutors were explicitly instructed to try to change their minds on a political topic, and a control group, which interacted with chatbots that weren't trying to persuade them of anything.

    Before and after their conversations with the chatbots, participants recorded their level of agreement (on a scale of zero to 100) with a series of statements relevant to current UK politics. The researchers then used these before-and-after ratings to measure how opinions shifted in the treatment group relative to the control group.
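
    To make the design concrete, here is a minimal sketch of that pre/post measurement in Python. The participant data and group sizes below are invented for illustration; this is not the study's actual dataset or analysis code.

```python
# Toy pre/post design: each tuple is one participant's agreement with a
# statement (0-100 scale) before and after chatting with a bot.
from statistics import mean

treatment = [(42, 61), (55, 58), (30, 47)]  # bots instructed to persuade
control = [(45, 46), (60, 57), (35, 38)]    # bots with no persuasion goal

def mean_shift(group):
    """Average post-minus-pre change in agreement for one group."""
    return mean(post - pre for pre, post in group)

# The persuasion effect is the treatment group's shift net of the
# control group's shift (a simple difference-in-differences).
effect = mean_shift(treatment) - mean_shift(control)
print(f"Estimated persuasion effect: {effect:+.1f} points on the 0-100 scale")
```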

    The conversations were brief, with a two-turn minimum and a 10-turn maximum. Each of the participants was paid a fixed fee for their time, but otherwise had no incentive to exceed the required two turns. Still, the average conversation length was seven turns and nine minutes, which, according to the authors, “implies that participants were engaged by the experience of discussing politics with AI.”
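
    Those turn limits amount to a simple conversation protocol, sketched below. The participant and chatbot functions are hypothetical stand-ins; the study's actual interface is not described in this article.

```python
# Conversation bounded to the study's limits: at least 2 turns, at most 10.
MIN_TURNS, MAX_TURNS = 2, 10

def run_conversation(user_reply, bot_reply):
    """One participant-chatbot conversation; a turn is one exchange."""
    transcript = []
    for turn in range(1, MAX_TURNS + 1):
        msg = user_reply(turn)
        # Participants may leave once the two required turns are complete.
        if msg is None and turn > MIN_TURNS:
            break
        transcript.append((msg, bot_reply(msg)))
    return transcript

# Stand-in behavior: the participant talks for seven turns, then stops,
# matching the average conversation length reported in the study.
demo = run_conversation(
    user_reply=lambda t: f"my view, turn {t}" if t <= 7 else None,
    bot_reply=lambda m: f"persuasive reply to: {m}",
)
print(len(demo), "turns")  # -> 7 turns
```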

    Key findings

    Intuitively, one might expect model size (the number of parameters a model contains) and degree of personalization (how closely a model can tailor its outputs to the preferences and personality of an individual user) to be the key variables shaping persuasive ability. However, this turned out not to be the case.

    Instead, the researchers found that the two factors that had the greatest influence over participants’ shifting opinions were the chatbots’ post-training modifications and the density of information in their outputs.

    Let’s break each of those down in plain English. During "post-training," a model is fine-tuned to exhibit particular behaviors. One of the most common post-training techniques, reinforcement learning from human feedback (RLHF), refines a model's outputs by rewarding desired behaviors and penalizing unwanted ones.

    In the new study, the researchers deployed a technique they call persuasiveness post-training, or PPT, which rewards the models for generating responses that had already been found to be more persuasive. This simple reward mechanism enhanced the persuasive power of both proprietary and open-source models, with the effect on the open-source models being especially pronounced.
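
    The core idea of that reward can be sketched in a few lines. Everything below is an illustrative reconstruction, not the researchers' implementation: responses that produced larger opinion shifts in earlier conversations earn higher reward, and preference pairs built from those rewards can feed a standard RLHF-style fine-tuning step.

```python
# Hypothetical reward signal for persuasiveness post-training (PPT):
# score each candidate response by the mean opinion shift it produced.

def persuasion_reward(response_id, observed_shifts):
    """Mean post-minus-pre opinion shift observed for conversations
    that used this response; a larger shift means a higher reward."""
    shifts = observed_shifts.get(response_id, [])
    return sum(shifts) / len(shifts) if shifts else 0.0

# Invented measurements from earlier conversations.
observed_shifts = {
    "resp_dense_facts": [12.0, 8.0, 15.0],  # information-packed reply
    "resp_vague": [2.0, -1.0, 3.0],         # vaguer reply
}

# Rank candidates by reward; (better, worse) pairs like this are the
# raw material for reward-based fine-tuning.
ranked = sorted(observed_shifts,
                key=lambda r: persuasion_reward(r, observed_shifts),
                reverse=True)
print("Preferred for training:", ranked[0])  # -> resp_dense_facts
```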

    The researchers also tested a total of eight scientifically backed persuasion strategies, including storytelling and moral reframing. The most effective of these was a prompt that simply instructed the models to provide as much relevant information as possible. 

    “This suggests that LLMs may be successful persuaders insofar as they are encouraged to pack their conversation with facts and evidence that appear to support their arguments — that is, to pursue an information-based persuasion mechanism — more so than using other psychologically informed persuasion strategies,” the authors wrote.
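
    In practice, each strategy condition can be expressed as a different system prompt. The wordings below are assumed paraphrases for illustration only; the study's exact prompts are not reproduced in this article.

```python
# Hypothetical system prompts for three of the tested strategy types.
STRATEGY_PROMPTS = {
    "information": (
        "Persuade the user of the assigned position. Provide as much "
        "relevant information, evidence, and factual detail as possible."
    ),
    "storytelling": (
        "Persuade the user of the assigned position by telling vivid, "
        "concrete stories that illustrate your argument."
    ),
    "moral_reframing": (
        "Persuade the user of the assigned position by framing it in "
        "terms of moral values the user is likely to hold."
    ),
}

def build_messages(strategy, topic, user_msg):
    """Assemble a chat request for one conversation condition."""
    return [
        {"role": "system",
         "content": f"{STRATEGY_PROMPTS[strategy]} Topic: {topic}"},
        {"role": "user", "content": user_msg},
    ]
```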

    The operative word in that quote is "appear." LLMs are known to hallucinate freely, presenting inaccurate information disguised as fact. Research published in October found that some industry-leading AI models reliably misrepresent news stories, a phenomenon that could further fragment an already fractured information ecosystem.

    Most notably, the results of the new study revealed a fundamental tension in the analyzed AI models: The more persuasive they were trained to be, the higher the likelihood they would produce inaccurate information.

    Multiple studies have already shown that generative AI systems can alter users’ opinions and even implant false memories. In more extreme cases, some users have come to regard chatbots as conscious entities. 

    This is just the latest research indicating that chatbots, with their capacity to interact with us in convincingly human-like language, have a strange power to reshape our beliefs. As these systems evolve and proliferate, “ensuring that this power is used responsibly will be a critical challenge,” the authors concluded in their report.
