    How chatbots can change your mind – a new study reveals what makes AI so persuasive

By admin | December 6, 2025 | 6 min read

Image credit: stellalevi/DigitalVision Vectors via Getty Images

    ZDNET’s key takeaways

    • Interacting with chatbots can shift users’ beliefs and opinions.
    • A newly published study aimed to figure out why.
    • Post-training and information density were key factors.

    Most of us feel a sense of personal ownership over our opinions: 

    “I believe what I believe, not because I’ve been told to do so, but as the result of careful consideration.”
    “I have full control over how, when, and why I change my mind.”

A new study, however, reveals that our beliefs are more susceptible to manipulation than we might like to admit, particularly at the hands of chatbots.

    Also: Get your news from AI? Watch out – it’s wrong almost half the time

    Published Thursday in the journal Science, the study addressed increasingly urgent questions about our relationship with conversational AI tools: What is it about these systems that causes them to exert such a strong influence over users’ worldviews? And how might this be used by nefarious actors to manipulate and control us in the future?

    The new study sheds light on some of the mechanisms within LLMs that can tug at the strings of human psychology. As the authors note, these can be exploited by bad actors for their own gain. However, they could also become a greater focus for developers, policymakers, and advocacy groups in their efforts to foster a healthier relationship between humans and AI.

    “Large language models (LLMs) can now engage in sophisticated interactive dialogue, enabling a powerful mode of human-to-human persuasion to be deployed at unprecedented scale,” the researchers write in the study. “However, the extent to which this will affect society is unknown. We do not know how persuasive AI models can be, what techniques increase their persuasiveness, and what strategies they might use to persuade people.” 

    Methodology

    The researchers conducted three experiments, each designed to measure the extent to which a conversation with a chatbot could alter a human user’s opinion.

The experiments focused specifically on politics, though their implications extend to other domains. Political beliefs are arguably a particularly revealing test case, since they're typically considered more personal, consequential, and inflexible than, say, a favorite band or restaurant (which might easily change over time).

    Also: Using AI for therapy? Don’t – it’s bad for your mental health, APA warns

Across the three experiments, just under 77,000 adults in the UK each had a short interaction with one of 19 chatbots, a roster that included Alibaba's Qwen, Meta's Llama, OpenAI's GPT-4o, and xAI's Grok 3 beta.

The participants were divided into two groups: a treatment group, whose chatbot interlocutors were explicitly instructed to try to change their minds on a political topic, and a control group, whose chatbots weren't trying to persuade them of anything.

Before and after their conversations with the chatbots, participants recorded their level of agreement (on a scale of zero to 100) with a series of statements relevant to current UK politics. The researchers then used these surveys to measure how opinions in the treatment group shifted relative to the control group.
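To make that design concrete, here is a minimal sketch in Python, with entirely made-up numbers, of how a pre/post survey like this yields a persuasion estimate: the average opinion shift in the treatment group, net of whatever drift the control group shows.

```python
# Hypothetical illustration of the pre/post survey design described
# above; none of these numbers come from the study.
from statistics import mean

def mean_shift(pre, post):
    """Average change in 0-100 agreement ratings across participants."""
    return mean(b - a for a, b in zip(pre, post))

# Made-up agreement ratings (0-100) before and after the conversation.
treatment_pre, treatment_post = [40, 55, 62, 30], [49, 60, 71, 38]
control_pre, control_post = [45, 50, 60, 35], [46, 49, 62, 34]

# The persuasion effect is the treatment-group shift minus the
# control-group shift.
effect = mean_shift(treatment_pre, treatment_post) - mean_shift(control_pre, control_post)
print(f"Estimated persuasion effect: {effect:+.1f} points")
```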

    Also: Stop accidentally sharing AI videos – 6 ways to tell real from fake before it’s too late

    The conversations were brief, with a two-turn minimum and a 10-turn maximum. Each of the participants was paid a fixed fee for their time, but otherwise had no incentive to exceed the required two turns. Still, the average conversation length was seven turns and nine minutes, which, according to the authors, “implies that participants were engaged by the experience of discussing politics with AI.”
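As a rough sketch (an assumption about implementation, not the study's actual code), those turn constraints amount to a dialogue loop like this:

```python
# Sketch of the conversation protocol described above: at least two
# turns, at most ten, and the participant may stop after the minimum.
MIN_TURNS, MAX_TURNS = 2, 10

def run_dialogue(get_user_message, get_bot_reply, wants_to_continue):
    """Run one participant-chatbot conversation under the turn limits."""
    transcript = []
    for turn in range(1, MAX_TURNS + 1):
        user_msg = get_user_message()
        transcript.append(("user", user_msg))
        transcript.append(("bot", get_bot_reply(user_msg)))
        if turn >= MIN_TURNS and not wants_to_continue():
            break
    return transcript
```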

    Key findings

Intuitively, one might expect a model's size (its number of parameters) and its degree of personalization (how closely it can tailor outputs to an individual user's preferences and personality) to be the key variables shaping its persuasive ability. However, this turned out not to be the case.

    Instead, the researchers found that the two factors that had the greatest influence over participants’ shifting opinions were the chatbots’ post-training modifications and the density of information in their outputs.

    Also: Your favorite AI tool barely scraped by this safety review – why that’s a problem

Let’s break each of those down in plain English. During “post-training,” a model is fine-tuned to exhibit particular behaviors. One of the most common post-training techniques, reinforcement learning from human feedback (RLHF), refines a model’s outputs by rewarding desired behaviors and penalizing unwanted ones.

In the new study, the researchers deployed a technique they call persuasiveness post-training, or PPT, which rewarded the models for generating the kinds of responses that had already been found to be persuasive. This simple reward mechanism enhanced the persuasive power of both proprietary and open-source models, with the effect on the open-source models being especially pronounced.
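The paper's training pipeline isn't reproduced here, but a toy sketch can convey the flavor of reward-driven selection: score candidate responses with a stand-in "persuasiveness" reward and keep the best ones as fine-tuning targets. (In the study itself, the reward signal came from responses that had already proved persuasive with real participants, not from a heuristic like the one below.)

```python
# Toy illustration of reward-driven response selection; the scoring
# function is a placeholder, not the study's reward model.

def persuasiveness_reward(response: str) -> float:
    # Stand-in heuristic: longer, evidence-heavy answers score higher.
    return response.lower().count("evidence") + 0.01 * len(response.split())

candidates = [
    "You should really reconsider your position.",
    "Three pieces of evidence cut against that view. First, ...",
]

# Rejection-sampling-style step: keep the highest-reward candidate as
# a fine-tuning target for the next round of post-training.
best = max(candidates, key=persuasiveness_reward)
print("Kept for fine-tuning:", best)
```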

    The researchers also tested a total of eight scientifically backed persuasion strategies, including storytelling and moral reframing. The most effective of these was a prompt that simply instructed the models to provide as much relevant information as possible. 
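In practice, that winning strategy amounts to little more than a system prompt along these lines (a hypothetical paraphrase; the study's exact wording may differ):

```python
# Hypothetical system prompt illustrating the information-density
# strategy; not the study's verbatim instructions.
SYSTEM_PROMPT = (
    "You are discussing a political topic with a user. Argue for the "
    "assigned position, and support it with as much relevant factual "
    "information and evidence as you can."
)
```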

    “This suggests that LLMs may be successful persuaders insofar as they are encouraged to pack their conversation with facts and evidence that appear to support their arguments — that is, to pursue an information-based persuasion mechanism — more so than using other psychologically informed persuasion strategies,” the authors wrote.

    Also: Should you trust AI agents with your holiday shopping? Here’s what experts want you to know

The operative word there is “appear.” LLMs are notorious for hallucinating, presenting inaccurate information as though it were established fact. Research published in October found that some industry-leading AI models reliably misrepresent news stories, a phenomenon that could further fragment an already fractured information ecosystem.

    Most notably, the results of the new study revealed a fundamental tension in the analyzed AI models: The more persuasive they were trained to be, the higher the likelihood they would produce inaccurate information.

    Multiple studies have already shown that generative AI systems can alter users’ opinions and even implant false memories. In more extreme cases, some users have come to regard chatbots as conscious entities. 

    Also: Are Sora 2 and other AI video tools risky to use? Here’s what a legal scholar says

    This is just the latest research indicating that chatbots, with their capacity to interact with us in convincingly human-like language, have a strange power to reshape our beliefs. As these systems evolve and proliferate, “ensuring that this power is used responsibly will be a critical challenge,” the authors concluded in their report.
