    How-To Guides

    Is DeepSeek’s new model the latest blow to proprietary AI?

By admin | December 3, 2025 | 6 Mins Read

Image credit: NurPhoto via Getty Images


    ZDNET’s key takeaways

    • DeepSeek released its V3.2 model on Monday.
    • It aims to keep accessible AI competitive for developers.
    • V3.2 heats up the race between open and proprietary models.

    Chinese AI firm DeepSeek has made yet another splash with the release of V3.2, the latest iteration in its V3 model series.

Launched Monday, the model builds on an experimental V3.2 version announced in October and comes in two variants: "Thinking" and a more powerful "Speciale." DeepSeek said V3.2 pushes the capabilities of open-source AI even further. Like other DeepSeek models, it costs a fraction of what proprietary models do, and the underlying weights can be accessed via Hugging Face.

    Also: I tested DeepSeek’s R1 and V3 coding skills – and we’re not all doomed (yet)

    DeepSeek first made headlines in January with the release of R1, an open-source reasoning AI model that outperformed OpenAI’s o1 on several crucial benchmarks. Considering V3.2’s performance also rivals powerful proprietary models, could this shake up the AI industry once more? 

    What V3.2 can do

    Rumors first began circulating in September that DeepSeek was planning to launch its own, more cost-effective agent to compete with the likes of OpenAI and Google. Now, it seems that the competitor has finally arrived.

    V3.2 is the latest iteration of V3, a model DeepSeek released nearly a year ago that also helped inform R1. According to company data published Monday, V3.2 Speciale outperforms industry-leading proprietary models like OpenAI’s GPT-5 High, Anthropic’s Claude 4.5 Sonnet, and Google’s Gemini 3.0 Pro on some reasoning benchmarks (for what it’s worth, Kimi K2, a free and open-source model from Moonshot, also claimed to rival GPT-5 and Sonnet 4.5 in performance). 

In terms of cost, accessing Gemini 3 via the API costs up to $4.00 per 1 million tokens, while V3.2 Speciale is $0.028 per 1 million tokens. The new model also achieved gold-level performance in the International Math Olympiad (IMO) and the International Olympiad in Informatics, according to the company.
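Taken at face value, those per-token prices imply a very large cost multiple for heavy workloads. A quick back-of-the-envelope sketch, using the prices quoted above (the 500M-token monthly workload is a made-up example, and real API pricing varies by tier and context length):

```python
# Per-1M-token prices quoted in the article (illustrative only).
GEMINI_3_PER_MTOK = 4.00       # USD, upper bound cited for Gemini 3
V32_SPECIALE_PER_MTOK = 0.028  # USD, DeepSeek V3.2 Speciale

def monthly_cost(tokens_millions: float, price_per_mtok: float) -> float:
    """USD cost for a given monthly token volume (in millions)."""
    return tokens_millions * price_per_mtok

# Hypothetical agent workload of 500M tokens per month:
gemini = monthly_cost(500, GEMINI_3_PER_MTOK)
speciale = monthly_cost(500, V32_SPECIALE_PER_MTOK)
print(f"Gemini 3: ${gemini:.2f}/mo, V3.2 Speciale: ${speciale:.2f}/mo")
print(f"Cost multiple: {gemini / speciale:.0f}x")
```

At these list prices the same workload costs roughly two orders of magnitude less on V3.2 Speciale.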

"DeepSeek-V3.2 emerges as a highly cost-efficient alternative in agent scenarios, significantly narrowing the performance gap between open and frontier proprietary models while incurring substantially lower costs," the company wrote in a research paper. While these claims are still debated, the sentiment continues DeepSeek's pattern of reducing costs with each model release, which threatens to undermine the logic behind the exorbitant investments proprietary labs like OpenAI pour into their models.

    The problems

    DeepSeek said it built V3.2 in an effort to help the open-source AI community catch up with some of the technical achievements that have recently been made by companies building closed-source models. According to the company’s paper, the agentic and reasoning capabilities demonstrated by leading proprietary models have “accelerated at a significantly steeper rate” than those of their open-source counterparts.

    Also: Mistral’s latest open-source release bets on smaller models over large ones – here’s why

    As the engineer Charles Kettering once put it, “A problem well-stated is a problem half-solved.” In that spirit, DeepSeek began the development of its new model by attempting to diagnose the reasons behind open-source models’ lagging performance, ultimately breaking it down into three factors.

    First, open-source models have tended to rely on what’s known to AI researchers as “vanilla attention” — a slow and compute-hungry mechanism for reading inputs and generating outputs, which makes them struggle with longer sequences of tokens. They also have a more computationally limited post-training phase, hindering their ability to complete more complex tasks. Unlike proprietary models, they struggle with following long instructions and generalizing across tasks, making them inefficient agents.
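To make the "slow and compute-hungry" point concrete, here is a minimal NumPy sketch of vanilla (dense) scaled dot-product attention. This is a teaching sketch, not any particular model's implementation: the score matrix it builds grows with the square of the sequence length, which is where the long-context cost comes from.

```python
import numpy as np

def vanilla_attention(q, k, v):
    """Dense scaled dot-product attention: every query token is
    scored against every key token, so the score matrix is (n, n)
    and the work grows quadratically with sequence length n."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                    # (n, n) score matrix
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ v

for n in (1_000, 2_000, 4_000):
    # Doubling the context quadruples the score-matrix entries.
    print(f"n={n:>5}: score matrix holds {n * n:>10,} entries")
```

Going from a 1,000-token to a 4,000-token context multiplies the score-matrix work by sixteen, which is why dense attention struggles with long sequences.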

    The solutions

    In response, the company introduced DeepSeek Sparse Attention (DSA), a mechanism that mitigates “critical computation complexity without sacrificing long-context performance,” according to the research paper.

    Also: What is sparsity? DeepSeek AI’s secret, revealed by Apple researchers

With traditional vanilla attention, a model generates its outputs by comparing each token in a query against every other token in its context window, a painstakingly power-hungry process. By analogy, imagine you had to dig through an enormous pile of books scattered on a lawn to find a particular sentence. You could do it, but it would take a lot of time and careful scrutiny of a huge number of pages.

The DSA approach tries to work smarter, not harder. It runs in two phases: first, a "lightning indexer" performs a high-level scan of the tokens in the context to identify the small subset likely to be most relevant to a given query; the model then drills into that subset with its full computational power to find what it's looking for. Rather than starting with a giant pile of books, you now walk into a neatly organized library, head to the relevant section, and perform a much quicker, less stressful search for the passage you've been seeking.
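The two-phase idea can be sketched in a few lines of NumPy. This is illustrative rather than DSA itself: the random projection standing in for the "lightning indexer" is an assumption (the real indexer is a learned component), but the control flow matches the description, with a cheap scan picking a top-k subset and full attention running only over that subset.

```python
import numpy as np

def sparse_attention(q, k, v, index_dim=8, top_k=64, seed=0):
    """Two-phase sparse attention sketch (illustrative, not DSA itself).
    Phase 1: a cheap low-dimensional 'indexer' scores all keys.
    Phase 2: full attention over only the top-k keys per query."""
    rng = np.random.default_rng(seed)
    d = q.shape[-1]
    # Phase 1: project queries and keys to a small dimension and score
    # them cheaply; stands in for the learned "lightning indexer".
    proj = rng.standard_normal((d, index_dim)) / np.sqrt(d)
    idx_scores = (q @ proj) @ (k @ proj).T              # (nq, nk), cheap
    keep = np.argsort(idx_scores, axis=-1)[:, -top_k:]  # top-k keys per query
    # Phase 2: exact attention, restricted to the selected subset.
    out = np.empty_like(q)
    for i in range(q.shape[0]):
        ks, vs = k[keep[i]], v[keep[i]]
        s = ks @ q[i] / np.sqrt(d)
        w = np.exp(s - s.max())
        w /= w.sum()
        out[i] = w @ vs
    return out
```

Per query, the indexer pass works in a reduced dimension and the expensive softmax-attention step touches only `top_k` tokens instead of the whole context, which is where the savings come from.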

    The company then aimed to solve the post-training issue by building “specialist” models to test and refine V3.2’s abilities across writing, general question-answering, mathematics, programming, logical reasoning, agentic tasks, agentic coding, and agentic search. They’re like tutors charged with the task of turning the model from a generalist into a multi-specialist.

    Limitations

    DeepSeek V3.2, according to the research paper, “effectively bridges the gap between computational efficiency and advanced reasoning capabilities” and “[unlocks] new possibilities for robust and generalizable AI agents” through open-source AI.

    Also: Stop saying AI hallucinates – it doesn’t. And the mischaracterization is dangerous

There are a few caveats, however. For one thing, the new model's "world knowledge" (the breadth of practical understanding about the real world that can be inferred from its training data) is much more limited than that of leading proprietary models. It also requires more tokens to generate outputs matching the quality of those from frontier proprietary models, and it struggles with more complex tasks. DeepSeek says it plans to continue bridging the divide between its open-source models and their proprietary counterparts by scaling up compute during pretraining and refining its "post-training recipe."

Even with these limitations, though, the fact that a company, and one based in China no less, has built an open-source model that can compete with the reasoning capabilities of some of the most advanced proprietary models on the market is a huge deal. It adds to growing evidence that the "performance gap" between open-source and closed-source models isn't a fixed and unresolvable fact, but a technical discrepancy that can be bridged through creative approaches to pretraining, attention, and post-training.

Even more importantly, the fact that its underlying weights are almost free for developers to access and build upon could undermine the basic sales pitch thus far deployed by the industry's leading developers of closed-source models: that it's worth paying to access these tools, since they're the best on the market. If open-source models eclipse proprietary ones, it won't make sense for most people to keep paying for the latter.
