
Image credit: Nature — “How China created AI model DeepSeek and shocked the world” (https://www.nature.com/articles/d41586-025-00259-0)
DeepSeek AI did not become a geopolitical headline overnight. When Pinterest’s CTO described his company’s “DeepSeek moment,” it sounded less like ideology and more like economics. Open-source models, including Chinese ones, were reportedly about 30% more accurate for Pinterest’s internal use cases and up to 90% cheaper than leading proprietary US models.
That combination — good enough, fast, and dramatically cheaper — is what shifted the conversation.
This story is less about who has the biggest model, and more about how the economics of AI are shifting. DeepSeek’s rise forces a deeper question: not whether China has built the single smartest system, but whether it has made intelligence cheap enough to change how companies choose their tools. To understand that shift, we need to look at how DeepSeek grew, why its pricing matters, and how the Chinese AI ecosystem supports that model.
From Hedge Fund Lab to Global Model
DeepSeek did not begin as a Silicon Valley unicorn. It emerged from within High-Flyer, a Chinese quantitative hedge fund led by Liang Wenfeng; the fund’s internal AGI research lab was spun out as an independent company in July 2023.
2024 laid the groundwork. DeepSeek released a steady run of models, including the multimodal DeepSeek-VL and the Mixture-of-Experts language model DeepSeek-V2, and in late December 2024 it upgraded its deepseek-chat backend to DeepSeek-V3. These releases quietly built credibility among developers.
Then came January 2025.
On 20 January 2025, DeepSeek released R1, an open-source reasoning model accompanied by a technical paper on arXiv. Within days, commentators described the launch as sending shockwaves through Silicon Valley. R1 demonstrated reasoning performance competitive with far more expensive proprietary systems, at radically lower cost.
By September 2025, R1 had appeared on the cover of Nature, an unusual milestone for a mainstream large language model.
The speed of that ascent matters.
DeepSeek’s Growth in Numbers
Public statistics are compiled from analytics sources rather than audited filings, but the trajectory is striking.
| Period | Metric | Approximate Value | Source |
|---|---|---|---|
| Aug 2024 | Daily active users | ~7,500 | ElectroIQ |
| Jan 2025 | Daily active users | 22.15 million | ElectroIQ |
| Jan 2025 | Monthly active users | 33.7 million | ElectroIQ |
| May 2025 | Monthly active users | ~125 million | SQMagazine |
| 2024 | DeepSeek-VL monthly queries | 470 million | ElectroIQ / SQMagazine |
| 2025 | DeepSeek-VL monthly queries | 980 million | ElectroIQ / SQMagazine |
| May 2025 | Mobile downloads (cumulative) | 57.2 million | ElectroIQ |
Geographically, usage has been concentrated in China and parts of Asia, but with a visible Western footprint. Around 4% of January 2025 monthly active users were reportedly in the United States, alongside users in India, Indonesia, and Europe.
The pattern suggests something important: DeepSeek is not only a domestic Chinese tool. It is being used globally — and increasingly inside Western firms.
Why DeepSeek Is So Cheap — and Why That’s Not an Accident
DeepSeek’s low prices are not the result of a single breakthrough, but of a stack of efficiency decisions layered together.
Instead of retraining an entirely new foundation model from scratch, DeepSeek built R1 by applying large-scale reinforcement learning on top of its existing V3 base model. That dramatically reduces the need for another full pre-training run, which in frontier labs can cost tens of millions of dollars. By focusing the reinforcement process on reasoning-heavy domains like mathematics and coding, tasks with clear right and wrong answers, DeepSeek could automate much of the reward signal rather than relying heavily on expensive human-labelled data.
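To see why verifiable domains cut labelling costs, consider what rule-based reward functions for maths and coding could look like. The sketch below is purely illustrative: the function names, reward values, and the `solve()` convention are assumptions for exposition, not DeepSeek’s published implementation.

```python
import re

def math_reward(model_output: str, reference_answer: str) -> float:
    """Illustrative rule-based reward for maths: extract the final
    \\boxed{...} answer and compare it with the known reference.
    No human labeller is needed to score the output."""
    match = re.search(r"\\boxed\{([^}]*)\}", model_output)
    if match is None:
        return 0.0  # no parseable final answer -> no reward
    return 1.0 if match.group(1).strip() == reference_answer.strip() else 0.0

def code_reward(model_output: str, test_cases: list[tuple[str, str]]) -> float:
    """Illustrative rule-based reward for coding: fraction of unit tests
    passed. Assumes the model's output defines a solve() function; a real
    pipeline would run this in a sandbox rather than a bare exec()."""
    namespace: dict = {}
    try:
        exec(model_output, namespace)
        solve = namespace["solve"]
        passed = sum(1 for arg, expected in test_cases if str(solve(arg)) == expected)
        return passed / len(test_cases)
    except Exception:
        return 0.0

# The checker scores outputs mechanically, so RL can run at scale.
print(math_reward(r"... therefore the answer is \boxed{42}.", "42"))  # 1.0
```

Because the grader is deterministic, millions of rollouts can be scored automatically; the expensive human-preference step shrinks to a much smaller role.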
The optimisation pipeline also uses a variant of reinforcement learning known as Group Relative Policy Optimization (GRPO), which avoids the need for a separate critic network. That reduces computational overhead compared with standard RLHF pipelines. Once a strong reasoning model is trained, DeepSeek distils its behaviour into smaller dense models, allowing much of the capability to be preserved at a fraction of the inference cost.
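The group-relative idea itself is compact enough to show directly. Here is a minimal sketch of GRPO’s advantage computation, simplified: the published objective also includes a clipped importance ratio and a KL penalty, both omitted here.

```python
import numpy as np

def grpo_advantages(group_rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """GRPO replaces a learned critic with a statistical baseline: the mean
    reward of a group of responses sampled for the same prompt. Each
    response's advantage is its reward normalised within its own group."""
    return (group_rewards - group_rewards.mean()) / (group_rewards.std() + eps)

# Example: four sampled answers to one maths prompt, scored by a rule-based
# checker. Correct answers get a positive advantage, wrong ones negative.
rewards = np.array([1.0, 0.0, 0.0, 1.0])
print(grpo_advantages(rewards))  # [ 1. -1. -1.  1.]
```

Because the baseline is just the group mean, no second value network has to be trained or kept in memory, which is where the compute saving over standard RLHF comes from.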
Architecture plays an equally important role. DeepSeek’s V3 model uses a Mixture-of-Experts (MoE) design: of roughly 671 billion total parameters, only about 37 billion, a small subset of specialised “experts,” are activated for each token processed. In effect, the model behaves like a very large network, but only a portion of it runs on each request. Combined with aggressive low-precision (FP8) training and kernel-level optimisation tuned for constrained hardware, DeepSeek squeezes more output per GPU than many brute-force scaling approaches.
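A toy sketch of the routing idea, assuming a simple top-k softmax gate. DeepSeek-V3’s actual router, expert counts, and shared-expert design are considerably more sophisticated; this only shows why parameter count and per-token compute decouple.

```python
import numpy as np

def moe_forward(token: np.ndarray, experts: list, gate_weights: np.ndarray,
                top_k: int = 2) -> np.ndarray:
    """Sketch of MoE routing: a gating layer scores every expert, but only
    the top-k highest-scoring experts actually run. Parameter count scales
    with len(experts); per-token compute scales only with top_k."""
    scores = gate_weights @ token                             # one score per expert
    top = np.argsort(scores)[-top_k:]                         # chosen expert indices
    probs = np.exp(scores[top]) / np.exp(scores[top]).sum()   # softmax over chosen
    # Weighted sum over only the selected experts' outputs.
    return sum(p * experts[i](token) for p, i in zip(probs, top))

# Toy setup: 8 tiny "experts", each a random linear map; only 2 run per token.
rng = np.random.default_rng(0)
d = 16
experts = [(lambda W: (lambda x: W @ x))(rng.normal(size=(d, d))) for _ in range(8)]
gate = rng.normal(size=(8, d))
out = moe_forward(rng.normal(size=d), experts, gate)
print(out.shape)  # (16,)
```

The design trade is explicit here: memory must hold all experts, but the FLOPs spent per token scale only with the activated subset.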
The result is not just cheaper training, but cheaper inference. Independent analyses estimate that DeepSeek’s per-token costs can be dramatically lower than comparable proprietary Western APIs. When multiplied across billions of tokens per day, that difference becomes strategic. For enterprises, intelligence is not just about quality — it is about cost at scale.
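A back-of-envelope comparison makes the scale effect concrete. The token volume and per-million-token prices below are illustrative assumptions chosen only to show the shape of the arithmetic, not quoted rates.

```python
# Back-of-envelope: why per-token price dominates at enterprise scale.
# All figures below are illustrative assumptions, not quoted rates.
TOKENS_PER_DAY = 2_000_000_000   # assume 2B tokens/day for a large workload
PRICES_PER_M = {                 # assumed $ per million tokens
    "proprietary API": 10.00,
    "DeepSeek-class API": 0.50,  # assumed ~20x cheaper
}

for name, price in PRICES_PER_M.items():
    daily = TOKENS_PER_DAY / 1_000_000 * price
    print(f"{name:>18}: ${daily:>8,.0f}/day   ${daily * 365:>12,.0f}/year")
```

At these assumed rates the gap is roughly $6.9 million a year for a single workload, the kind of line item that turns a model choice into a procurement decision.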
DeepSeek AI: Competitive Accuracy at a Fraction of the Cost
Cost alone would not matter if performance collapsed.
But public benchmark discussions and enterprise commentary suggest that DeepSeek’s reasoning and coding performance sits in the same competitive tier as leading Western systems. Pinterest’s CTO described open-source models in this family as not only cheaper but more accurate for specific internal use cases. Surveys cited in analytics summaries report high developer satisfaction with DeepSeek-Coder compared with established tools.
This is the power shift.
If a model is “good enough” for enterprise reasoning, document analysis, coding assistance, or recommendation pipelines — and costs a fraction of proprietary alternatives — procurement logic changes.
AI stops being a prestige purchase and becomes infrastructure.
The Invisible Hand of the Chinese AI Ecosystem
The rise of DeepSeek AI cannot be separated from its ecosystem.
There is no publicly disclosed single cheque from Beijing labelled “DeepSeek funding.” Instead, the support appears structural.
DeepSeek received “national high-tech enterprise” status in late 2023, qualifying it for tax advantages and subsidies. At least 17 Chinese provinces have reportedly issued computing vouchers — worth up to roughly $300,000 per firm — to offset AI compute costs. Meanwhile, China has invested billions of dollars in large-scale computing hubs and AI education initiatives.
The effect is cumulative. Lower compute costs, dense talent pipelines, and provincial subsidy structures reduce the marginal cost of experimentation. DeepSeek’s efficiency is partly engineering discipline — and partly the product of an ecosystem designed to make AI infrastructure cheaper over time.
This is not a single lab sprinting alone. It is a lab running on state-supported rails.
My Take: This Is a Price War Disguised as an Innovation Race
DeepSeek’s rise suggests that the AI race is entering a new phase.
The first wave was about capability: who could demonstrate the most impressive reasoning, multimodal fluency, or coding intelligence. The next wave looks more like a price war.
When API prices fall to cents per million tokens and open-source models approach frontier-level reasoning performance, intelligence becomes easier to commoditise. That does not eliminate the value of premium proprietary systems, but it weakens the assumption that only a handful of Western labs can provide competitive AI at scale.
Once price differences narrow, the competitive focus shifts. Latency, throughput, and reliability become central. The question is no longer only “how smart is your model?” but “how quickly can it respond under heavy load, and how resilient is the infrastructure behind it?”
China’s investments in compute hubs and AI education suggest that it understands this second layer. Infrastructure, not just algorithms, may define the next stage of the race.
There are also harder questions. If Western enterprises quietly adopt foreign open-source models because they are cheaper and “good enough,” what does that mean for AI sovereignty? At what point does cost efficiency collide with governance, security, or data-localisation concerns?
China is not yet the uncontested winner of the AI race. But DeepSeek shows that the contest has already shifted. It is no longer only about who builds the smartest model — it is about who makes intelligence abundant, affordable, and deeply embedded into global systems.
Sources
BBC News – “Is China quietly winning the AI race?”
https://www.bbc.co.uk/news/articles/c86v52gv726o
ElectroIQ – “DeepSeek AI Statistics By Users Demographics, Usage and Facts (2025)”
https://electroiq.com/stats/deepseek-ai-statistics/
SQMagazine – “DeepSeek AI Statistics 2026: Users, Benchmarks & Enterprise Reach”
https://sqmagazine.co.uk/deepseek-ai-statistics/
AllOutSEO – “DeepSeek Statistics 2025: Growth, User & Market Insights”
https://alloutseo.com/deepseek-stats/
The Conversation – “DeepSeek: how China’s embrace of open-source AI caused a geopolitical earthquake”
https://theconversation.com/deepseek-how-chinas-embrace-of-open-source-ai-caused-a-geopolitical-earthquake-249563
Lawfare – “Beyond DeepSeek: How China’s AI Ecosystem Fuels Breakthroughs”
https://www.lawfaremedia.org/article/beyond-deepseek--how-china-s-ai-ecosystem-fuels-breakthroughs
Nature – “How China created AI model DeepSeek and shocked the world”
https://www.nature.com/articles/d41586-025-00259-0
Timeline of DeepSeek
https://timelines.issarice.com/wiki/Timeline_of_DeepSeek
36Kr – “DeepSeek Updates R1 Paper by Over 60 Pages: Is V4 Release Imminent?”
https://eu.36kr.com/en/p/3631908557374473
Georgia State University – “How DeepSeek is Changing the AI Landscape”
https://news.gsu.edu/2025/02/04/how-deepseek-is-changing-the-a-i-landscape/
EU Institute for Security Studies – “China’s DeepSeek model and the pluralisation of AI development”
https://www.iss.europa.eu/publications/briefs/challenging-us-dominance-chinas-deepseek-model-and-pluralisation-ai-development
The AI Insider – “Why DeepSeek Is So Cheap – R1 training pipeline and GRPO”
https://theaiinsider.tech/2025/01/28/why-deepseek-is-so-cheap-a-quick-guide-to-why-r1-costs-so-little-to-build/