Must Have Gadgets –

    Gadget Reviews

    What Would it Take to Convince a Neuroscientist That an AI is Conscious?

By admin | November 11, 2025 | 8 min read

    Large language models like ChatGPT are designed to be eerily human-like. The more you engage with these AIs, the easier it is to convince yourself that they’re conscious, just like you.

    But are you really conscious? I’m sure it feels like you are, but how do you actually know? What does it mean to be conscious, anyway? Neuroscientists have been working to answer these questions for decades, and still haven’t developed a single, universally accepted definition—let alone a way to test for it.

    Still, as AI becomes increasingly integrated into everyday life, one has to wonder: Could these bots ever possess the kind of self-awareness that we do? And if so, how would we know?

    For this Giz Asks, we asked neuroscientists what it would take to convince them that an AI is conscious. Each of them highlighted different obstacles that currently stand in the way of proving this hypothesis, underscoring the need for more research into the basis of consciousness itself.

    Megan Peters

    An associate professor of cognitive sciences and logic & philosophy of science at the University of California, Irvine. Peters is also a research fellow at the Canadian Institute for Advanced Research Program in Brain, Mind, & Consciousness, and serves as president and board chair at Neuromatch. Her research investigates how the brain represents and uses uncertainty.

    The hard thing about convincing a neuroscientist (or anybody) that AI is conscious is that there’s no objective test for consciousness, and creating that test might be fundamentally impossible.

    Consciousness is—by definition—internal, personal, and subjective. Nobody can peer inside your head to determine whether you are conscious! Instead, we rely on outwardly-observable signatures, such as behaviors or brain activity, to infer that other humans are conscious because we experience consciousness and assume the same for others who behave like us.

    With AI, we can’t make the same assumption. Instead, we can only build “tests” that strengthen our subjective belief that the AI is conscious. We can look for signatures in the AI’s architecture, internal activity patterns, or behaviors that make us believe “someone is in there,” but remember, beliefs are not facts. Just because you believe the Sun goes around the Earth, that doesn’t mean it’s true!

    Let’s say we build tests to strengthen our belief that AI can be conscious. We would still need to test for the right kind of thing.

    We don’t want to test for intelligence or human-like responses (like the Turing test does), or for whether the AI is a threat (Skynet might be even more frightening if it were a superintelligent but mindless automaton!). And we can’t just ask the AI if it is conscious—how would we interpret its response in either direction? Maybe it’s lying, or simply cleverly parroting statistical patterns in human-created text.

    Recently, I and others have proposed ways to start building these kinds of “belief-raising” tests for AI consciousness. I am also working on how we can decide whether a given human-based ‘test’ can be used in other systems, including AI.

    For example, we can test for consciousness in a hospital patient by measuring brain activity in response to commands like, “Think about playing tennis” or “Think about walking through your house.” But that kind of test isn’t applicable to AI because AI doesn’t have comparable “brain activity patterns.” (Incidentally, using this test with octopuses or chickens is also a no-go, because they don’t understand language and/or might not be able to imagine things!)

    Other “tests” might ask whether an AI exhibits the kind of cognitive computations that researchers think are critically involved in creating consciousness. These might be more relevant for AI, but we still have to identify the right kinds of consciousness-critical cognitive computations.

    Even with all this, we still have to remember that what we’d really like to ask is whether the AI has consciousness, not just that we believe it does. We may not ever reach that kind of conviction, and in the meantime, we need to be careful not to confuse our belief about AI consciousness with a statement of objective fact.

    Anil Seth

    Director of the Sussex Center for Consciousness Science and a professor of cognitive and computational neuroscience at the University of Sussex. Seth is also co-director of the Canadian Institute for Advanced Research (CIFAR) Program on Brain, Mind, and Consciousness. His research seeks to understand the neurobiological basis of consciousness.

    It’s a very difficult question. While we’ve learned a great deal about the neurobiological basis of consciousness over recent decades, there’s still no consensus about the necessary or sufficient conditions for consciousness. So, anyone who claims that “conscious AI” is either definitely possible (or imminent, or here already), or definitely impossible, is overstepping what can reasonably be said.

    This uncertainty is not unusual. Uncertainty is inherent to science, but the level of uncertainty relevant to this question is perhaps unusually high, given the widely diverging opinions about the likelihood and plausibility of real artificial consciousness.

    So, what would convince me that an AI is conscious? Well, I will not be convinced by ever more fluent conversations about consciousness with large language models. I think the tendency to project consciousness into language models is primarily a reflection of our own human psychological biases, rather than a reliable insight into what’s actually going on.

    For me, the key question is: How brain-like does AI have to be to move the needle on our belief that “conscious AI” is possible? Many researchers assume it’s just a matter of getting the “computations” right. In this view, consciousness in real brains is based on “neural computations,” but could equally arise from the same—or sufficiently similar—computations implemented in silicon. (In philosophy, this position—often assumed implicitly—is called “computational functionalism”).

    I am very suspicious of this view. The more you look inside real brains, the less plausible it seems that computations are all that matter. My own view is that detailed biological properties—like metabolism and autopoiesis—may turn out to be necessary (though not sufficient) for consciousness. If this is on the right track, silicon-based conscious AI is off the table, no matter how smart it is.

    For me to be fully convinced, I’d need a clear idea of the sufficient conditions for consciousness, and a clear idea of whether AI satisfies those conditions. They might turn out to be merely computational, but I think that is unlikely. Alternatively, they might turn out to involve other biological properties too, which I think is more likely. Importantly, merely simulating these properties on classical computers would not be enough.

    But being fully convinced is a high bar. Very simply, my strategy goes like this: The more we understand about consciousness in cases where we know it exists, the surer our footing will be elsewhere. And we really, really, should not be trying to create conscious AI anyway.

    Michael Graziano

    A professor and researcher at the Princeton Neuroscience Institute who studies the brain basis of consciousness.

    The question is tricky. If it means: What would convince me that AI has a magical essence of experience emerging from its inner processes? Then nothing would convince me. Such a thing does not exist. Nor do humans have it.

    Almost all work in the modern field of consciousness studies is pseudoscience, predicated on the idea that we must figure out how a magical essence of experience emerges in humans or other agents. Sort of like, “Where is the big hole in the ground, to the west, where the Sun disappears at night?” or, “Who drives the Sun chariot as it moves across the sky?” The question itself is mistaken.

    The human brain has a self-model. The self-model misinforms us that we have a magic essence of experience. Or rather, the self-model is a useful but schematic description of the self. Instead of depicting the reality of 86 billion neurons and their interactions, the model depicts a vague, magic-like essence. Everything we think we know about ourselves—everything, no matter how gut-certain we are that it’s true—depends on models (or bundles of information encoded in the brain) embedded as patterns of activity among neuronal networks.

    Now, if you ask me: What would convince me that AI has the same kind of self-model that humans have, and thus has the same certainty that it is conscious? That’s easier to answer, at least in principle.

    We still don’t know the details of the human self-model, so the comparison is a little difficult at this time. But AI has a useful feature—we can look inside the black box and see what representations, models, bundles of information are being encoded or embedded within its neural networks. That so-called “mech-interp,” or interpretation of the activity patterns inside AI, is in its early days but is becoming rapidly more sophisticated.

    Show me that an AI builds a stable self-model, that the self-model depicts the AI as having a conscious experience, and that the depiction has the same features as the human self-model, and I’ll accept that you have an AI that believes it’s conscious in much the same way that humans believe they are conscious.

    A self-model is everything. It shapes our personalities—our moral, social, and personal understanding of ourselves. You are your self-model. I think it’s probably a good idea, and also probably an inevitability, to give AI a robust self-model.
