Is Chinese AI Open-Source Winning?
If the title above sounds weird, well, it is; indeed, the whole AI-tech industry was shaken last week by the DeepSeek model release.
China’s DeepSeek lab unveiled DeepSeek-R1, a reasoning AI model that rivals OpenAI’s o1 on key reasoning benchmarks.
Why was this so impressive?

DeepSeek stands out in the AI landscape less for raw capability than for the efficiency with which it achieved it.
DeepSeek-V3 is claimed to have been trained on just 2,048 NVIDIA H800 GPUs over roughly two months, for a total of about 2.8 million GPU hours. For context, Meta’s comparable Llama 3.1 405B reportedly required over 30.8 million GPU hours.
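As a quick sanity check on those figures, here is the back-of-the-envelope arithmetic (a minimal sketch using only the numbers quoted above; note that H800s and H100s are not directly comparable chips):

```python
# Back-of-the-envelope check of the reported training budgets.
deepseek_gpus = 2_048            # NVIDIA H800s reported for DeepSeek-V3
deepseek_gpu_hours = 2_800_000   # ~2.8M GPU hours claimed by DeepSeek
llama_gpu_hours = 30_800_000     # ~30.8M GPU hours reported for Llama 3.1 405B

wall_clock_days = deepseek_gpu_hours / deepseek_gpus / 24
print(f"Wall clock: ~{wall_clock_days:.0f} days on {deepseek_gpus} GPUs")  # ~57 days, i.e. about two months
print(f"Compute gap: ~{llama_gpu_hours / deepseek_gpu_hours:.0f}x fewer GPU hours")  # ~11x
```

The arithmetic is at least internally consistent: 2,048 GPUs running for about two months does land near 2.8 million GPU hours, roughly an eleventh of the Llama figure.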
That claim has opened a massive debate in tech: many are impressed and call it an architectural breakthrough, while others doubt that DeepSeek really pulled it off and argue that the company has access to far more GPUs than it discloses.
Whatever the true figures, DeepSeek-R1 has disrupted the AI community by matching OpenAI’s o1 at a reported 3%-5% of the cost.
This open-source model has captivated developers, with 109,000 downloads on Hugging Face so far, and the DeepSeek app’s search feature is drawing comparisons with Google’s Gemini.
Key innovation: DeepSeek largely skipped traditional supervised fine-tuning, relying on reinforcement learning to develop independent reasoning (the initial R1-Zero variant used no supervised fine-tuning at all; the released R1 adds a small “cold-start” stage). This leaner approach reportedly delivered exceptional results with an estimated 50,000 GPUs against the roughly 500,000 attributed to OpenAI, though neither figure is confirmed.
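For readers curious about the mechanics: the RL algorithm DeepSeek describes, GRPO (Group Relative Policy Optimization), scores a group of sampled answers against each other rather than training a separate critic network. Below is a minimal sketch of that group-relative scoring step; the 1/0 correctness reward is a simplification of the rule-based rewards described in the R1 paper.

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages: each sampled answer is scored
    against the mean of its group, so stronger-than-average answers
    get a positive advantage and weaker ones a negative one, with
    no learned value function required."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Toy example: four sampled answers to one math prompt,
# rewarded 1.0 if the final answer is correct, else 0.0.
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # prints approximately [ 1. -1. -1.  1.]
```

These advantages then weight a standard policy-gradient update on the model that generated the answers.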
For enterprises, DeepSeek democratizes AI access, challenging costly proprietary models.
While ethical and ROI concerns remain, DeepSeek is reshaping AI development, sparking a shift toward cost-efficient innovation and transparency.
We’ll leave this discussion for a later issue, but for now, while DeepSeek has been a wake-up call for most AI players, I believe there is a single player that DeepSeek has managed to impress, or if you wish, scare the hell out of: Meta.