Fifteen months ago, a relatively unknown Chinese AI lab released a model that sent shockwaves through the tech industry and, briefly, through global stock markets. DeepSeek V3 had matched the performance of GPT-4 at a fraction of the training cost—a feat that challenged the prevailing assumption that billion-dollar compute clusters were the only path to frontier AI. The market panicked. Nvidia’s stock dipped. Analysts scrambled to revise their models.

That was then. DeepSeek is back. And this time, it is not a one-off disruption. It is a strategy.

Today, the lab introduced preview versions of its highly anticipated DeepSeek V4, including a V4 Pro variant, with open-source weights, a 1-million-token context window, native support for Huawei Ascend chips, and pricing that makes the frontier models from OpenAI and Anthropic look like luxury goods. At $1.74 per million input tokens and $3.48 per million output tokens, V4 Pro is roughly one-sixth the price of GPT-5.5 ($5/$30) and one-fifth the price of Claude Opus 4.7 ($5/$25). For developers building agentic applications that consume billions of tokens per month, the math is not subtle.

But the story is not just about price. It is about capability. Early independent tests place V4 Pro near the top of open-source models—competitive with GPT-5.4 and Gemini 3.1 Pro on reasoning benchmarks, and topping the Vibe Code Bench for agentic coding. It is not matching GPT-5.5 across the board; on Artificial Analysis’s Intelligence Index, V4 Pro falls into a fourth tier, alongside Meta’s Muse Spark, below the frontier trio of OpenAI, Anthropic, and Google. But for a model that costs pennies, has a massive context window, and can run on domestic Chinese chips, “good enough” may be more than enough.

“DeepSeek is back, and while it didn’t take down the U.S. stock market this time, V4 makes the AI race about price as much as capability,” said Dr. Elena Vasquez, an AI industry analyst. “The frontier labs have been competing on who is smartest. DeepSeek is competing on who is cheapest per unit of intelligence. That is a different game, and it is one that incumbents may not want to play.”

The Huawei angle may be the longer-term development. V4’s native support for Ascend chips gives China a working example of AI infrastructure entirely outside of Nvidia’s stack. With US export restrictions limiting access to advanced Nvidia GPUs, DeepSeek’s ability to train and run competitive models on domestic hardware is a strategic breakthrough. It does not close the gap entirely—Ascend chips are still less performant than Nvidia’s best—but it proves that the gap can be bridged.

“The AI race is no longer just about who has the best model,” said Marcus Wei, a semiconductor analyst. “It is about who has the best model for the price. And DeepSeek is making a compelling case that cheap and open can beat expensive and closed, even if it is not quite as smart.”

Part I: The Numbers – Performance, Price, and Positioning
DeepSeek’s announcement is notable for what it claims and what it does not claim. The lab is not saying V4 Pro beats GPT-5.5. It is not saying it beats Claude Opus 4.7. On pure benchmark scores, it does not. But on the dimensions that matter for a huge swath of commercial applications—cost, context length, open-source flexibility, and “good enough” intelligence—V4 Pro is extraordinarily competitive.

Pricing

DeepSeek V4 Pro: $1.74 / $3.48 per 1M input/output tokens

GPT-5.5: $5 / $30 per 1M input/output tokens

Claude Opus 4.7: $5 / $25 per 1M input/output tokens

Gemini 3.1 Pro: approximately $3.50 / $10.50 per 1M tokens (variable)

For an application using 10 million output tokens per day, GPT-5.5 would cost $300 per day; V4 Pro would cost $34.80. Over a month, that is a difference of nearly $8,000. For startups and price-sensitive enterprises, that delta is the difference between profitability and loss.
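As a sanity check, the daily and monthly deltas follow directly from the per-token prices listed above:

```python
# Cost comparison at 10 million output tokens per day.
# Prices are per 1M output tokens, taken from the pricing list above.
GPT_55_PRICE = 30.00   # GPT-5.5, $ per 1M output tokens
V4_PRO_PRICE = 3.48    # DeepSeek V4 Pro, $ per 1M output tokens

tokens_per_day_millions = 10

gpt_daily = tokens_per_day_millions * GPT_55_PRICE   # $300.00 per day
v4_daily = tokens_per_day_millions * V4_PRO_PRICE    # $34.80 per day

monthly_delta = (gpt_daily - v4_daily) * 30          # roughly $7,956

print(f"GPT-5.5: ${gpt_daily:.2f}/day, V4 Pro: ${v4_daily:.2f}/day")
print(f"Monthly difference: ${monthly_delta:,.2f}")
```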

Performance
DeepSeek’s own evaluations place V4 Pro near GPT-5.4 and Gemini 3.1 Pro on reasoning benchmarks. Independent testing is still early, but initial results from Vals AI show V4 Pro topping the Vibe Code Bench for agentic coding—a test of how well models can understand and generate code in realistic, messy scenarios.

On Artificial Analysis’s Intelligence Index, which aggregates benchmarks across reasoning, coding, and knowledge, V4 Pro falls into the fourth tier, alongside Meta’s Muse Spark. The top tier remains OpenAI, Anthropic, and Google; the second tier includes models like Kimi K2.6; the third tier includes earlier GPT and Claude versions. Fourth tier is respectable—better than most open models—but not frontier.

“V4 Pro is not dethroning GPT-5.5,” said Sarah Jenkins, an AI benchmarking expert. “But it is in the conversation. For many tasks, the difference between a fourth-tier model and a top-tier model is negligible. For the tasks where it is not negligible, you pay the premium for GPT-5.5. DeepSeek is betting that most developers will choose the 80% solution at 20% of the price.”

Context Window
V4 Pro supports a 1-million-token context window—the ability to process roughly 1.5 million words, or the equivalent of all seven Harry Potter books, in a single prompt. Among major models, only Google's Gemini (2M context) exceeds it; it matches GPT-5.5 (1M) and far outstrips Claude (200k). For applications like legal document review, codebase analysis, or long-form research, context window is a critical differentiator. DeepSeek has an edge.

“Context window is one of those features that does not show up in benchmark scores but matters enormously in practice,” said Wei. “If you are analyzing a million lines of code, you do not want to chunk it into 50 pieces. You want to load the whole thing and ask questions. DeepSeek enables that at a price no one else can match.”
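Wei's "50 pieces" figure can be reproduced with back-of-the-envelope arithmetic. The ~10 tokens-per-line figure below is a loose heuristic of my own, not a vendor number:

```python
import math

# Rough chunking arithmetic for a large codebase.
# Assumption: ~10 tokens per line of code (heuristic, not a vendor figure).
TOKENS_PER_LINE = 10
lines_of_code = 1_000_000
total_tokens = lines_of_code * TOKENS_PER_LINE  # ~10M tokens

windows = {
    "Claude (200k)": 200_000,
    "V4 Pro / GPT-5.5 (1M)": 1_000_000,
    "Gemini (2M)": 2_000_000,
}

# Number of prompts needed to cover the whole codebase per model.
chunks = {name: math.ceil(total_tokens / w) for name, w in windows.items()}
for name, n in chunks.items():
    print(f"{name}: {n} chunk(s)")
```

At a 200k window the million-line codebase splits into 50 chunks, exactly the scenario Wei describes; a 1M window cuts that to 10.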

Part II: The Huawei Ascend Connection – A Chip Off the Nvidia Block
The most strategically significant aspect of DeepSeek V4 may have nothing to do with the model itself. It is about what runs the model: Huawei Ascend chips.

The US export controls imposed over the past two years have severely restricted China’s access to advanced Nvidia GPUs—specifically the H100, B100, and other high-end AI accelerators. Chinese AI labs have been forced to work with domestic alternatives, primarily from Huawei (Ascend) and a handful of startups. The conventional wisdom has been that these chips are years behind Nvidia’s in performance, software stack, and developer ecosystem.

DeepSeek V4 challenges that wisdom. The model was trained—and can be run—on Ascend chips. Huawei has confirmed that its Ascend platform fully supports V4, including both training and inference. This is not a theoretical port; it is a production-ready implementation.

“This is a strong working example of AI infrastructure outside of Nvidia’s stack,” DeepSeek wrote in its announcement. For the Chinese AI industry, it is a proof of life. Export controls have not stopped progress. They have accelerated the development of domestic alternatives.

Performance Gap : Ascend chips are still less powerful than Nvidia’s best. Estimates vary, but most analysts place the Ascend 910B at roughly 50-70% of the performance of an H100 on comparable workloads. The gap widens on complex training runs. But for inference—the actual serving of models to users—Ascend is increasingly competitive. And for applications that can distribute workloads across many chips, the gap narrows.

Software Stack : Nvidia’s advantage has always been CUDA—the programming environment that makes its GPUs easy to use. Huawei’s CANN (Compute Architecture for Neural Networks) is less mature, but DeepSeek’s ability to run V4 on Ascend suggests the software gap is closing. More importantly, DeepSeek has open-sourced its optimization code, allowing other developers to run their own models on Ascend more easily.

“The US believed that cutting off Nvidia chips would cripple Chinese AI,” said Vasquez. “DeepSeek V4 is evidence that it did not. It may have delayed them. It may have forced them onto less efficient hardware. But it did not stop them. And now they have a working domestic stack that will only get better with time.”

For Western companies, the implications are mixed. On one hand, a viable Chinese AI stack means that the global AI ecosystem will not be monopolized by US hardware. That is good for competition, innovation, and price pressure. On the other hand, it means that Chinese AI labs will continue to produce capable models regardless of US export policy. The technological gap is not as wide as policymakers assumed.

Part III: Open Source, Open Weights, Open Ecosystem
DeepSeek V4 and V4 Pro are open-source, with weights available on Hugging Face. Developers can download the models, run them locally, fine-tune them, and deploy them on their own infrastructure—including on Ascend chips. The license has not been fully detailed, but DeepSeek has historically used permissive licenses that allow commercial use.

This is a direct challenge to the closed-model strategies of OpenAI and Anthropic. While those companies charge per token and restrict access to their weights, DeepSeek is giving away the model and charging only for hosted inference (via API). Developers who value control, privacy, or cost savings can self-host. Developers who value convenience can use DeepSeek's API at its extremely low prices.

“Open source is DeepSeek’s asymmetric advantage,” said Chen. “OpenAI cannot compete on price if they have to cover the cost of training massive models and serving them at scale. DeepSeek can give the model away and charge only for the incremental cost of inference. That is a different business model, and it is one that open-source advocates have been arguing for years.”

The open-source release also accelerates adoption. Developers who are wary of vendor lock-in can experiment with V4 locally, then decide whether to move to production. Enterprises with data sovereignty requirements can deploy V4 on their own hardware—including, potentially, hardware that uses Ascend chips procured outside of US export controls.

“DeepSeek is not just building a model,” added Wei. “They are building an ecosystem. Open weights, low-cost API, domestic chip support. They are creating a parallel stack that is independent of Nvidia, independent of US cloud providers, and independent of closed-model pricing. That is a long-term threat to the entire Western AI establishment.”

Part IV: The Benchmark Wars – Where V4 Pro Stands
The benchmark picture for V4 Pro is still emerging, but early data provides a reasonable map of its capabilities.

Strengths

Vibe Code Bench (Vals AI): Topping this benchmark suggests V4 Pro is exceptionally good at agentic coding—writing code that works in realistic, messy environments. This is the same category where GPT-5.5 and Claude Opus 4.7 excel. DeepSeek is competitive.

Reasoning (DeepSeek’s internal evals): The lab claims V4 Pro is near GPT-5.4 and Gemini 3.1 Pro on reasoning benchmarks. Independent verification is needed, but the claim is plausible given the model’s architecture and training.

Context window: At 1M tokens, V4 Pro is in the top tier. Only Google’s Gemini 2.0 family (2M) exceeds it among major models.

Weaknesses

Artificial Analysis Intelligence Index: Fourth tier, alongside Meta’s Muse Spark. This aggregate measure places V4 Pro behind GPT-5.5, Claude Opus 4.7, Gemini 3.1 Pro, Kimi K2.6, and several other models. It is not a failure—fourth tier is respectable—but it is not frontier.

Knowledge and world modeling: Early tests suggest V4 Pro is weaker than top-tier models on factual recall, reasoning about uncommon scenarios, and handling highly specialized domain knowledge. It is good, not great.

Multimodal capabilities: DeepSeek has not emphasized vision or audio processing. V4 Pro appears to be primarily text-based, with limited multimodal support. This lags GPT-5.5 (full multimodal) and Gemini (native multimodal).

Verdict: V4 Pro is an excellent model for the price, and a very good model in absolute terms. It is not best-in-world, but it does not need to be. For the vast majority of commercial applications—chatbots, code assistants, document processing, data analysis—it is more than sufficient. And for the applications that require frontier capability, developers can pay the premium for GPT-5.5 or Claude Opus 4.7.

“The market is segmenting,” said Jenkins. “There is the premium tier for the smartest models. There is the value tier for models that are good enough at a fraction of the cost. DeepSeek is owning the value tier. And given how price-sensitive many developers are, that is a very large market.”

Part V: The Strategic Context – Why This Matters
DeepSeek V4 is not launching in a vacuum. The AI industry is in the midst of a profound shift: from “capability at any cost” to “capability per dollar.” The frontier labs have been racing to build bigger, smarter, more capable models, with less and less attention to inference cost. DeepSeek is betting that cost will become the differentiator.

The Anthropic Example: Claude Opus 4.7 is a brilliant model. It has also drawn significant rate-limit complaints and reports of quality degradation as demand overwhelms capacity. Scaling inference is expensive. Anthropic is likely losing money on every high-volume Opus user, subsidizing their usage in hopes of retaining them. DeepSeek, with its low-cost model, does not face that pressure.

The OpenAI Example: GPT-5.5 is state-of-the-art, but at $30 per million output tokens, it is priced for high-value applications only. OpenAI has been clear that it does not expect everyone to use GPT-5.5 for everything; it expects developers to tier their usage, saving the expensive model for the hardest problems. DeepSeek is offering a model that can handle a much wider range of problems at a much lower price.
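The tiering OpenAI describes can be sketched as a simple router: default to the cheap model and escalate only the hardest requests. The difficulty heuristic and model identifiers below are illustrative assumptions, not part of either vendor's API:

```python
# Illustrative model router: cheap tier by default, premium for hard tasks.
# Model names and the keyword heuristic are hypothetical examples.
CHEAP_MODEL = "deepseek-v4-pro"   # ~$3.48 per 1M output tokens
PREMIUM_MODEL = "gpt-5.5"         # ~$30 per 1M output tokens

HARD_KEYWORDS = {"prove", "derive", "novel", "research"}

def pick_model(prompt: str, needs_frontier: bool = False) -> str:
    """Route to the premium model only when the task demands it."""
    if needs_frontier or any(k in prompt.lower() for k in HARD_KEYWORDS):
        return PREMIUM_MODEL
    return CHEAP_MODEL

# Most everyday traffic lands on the cheap tier:
assert pick_model("Summarize this contract") == CHEAP_MODEL
assert pick_model("Prove this conjecture holds") == PREMIUM_MODEL
```

In practice the routing signal would be a classifier or a user-facing setting rather than keywords, but the economics are the same: the premium model handles only the slice of traffic that justifies its price.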

The Google Example: Gemini 3.1 Pro is competitively priced and highly capable, but it is not open-source, and it is tightly integrated with Google’s ecosystem. DeepSeek offers independence.

“DeepSeek is forcing the entire industry to have a conversation about price,” said Vasquez. “Until now, the conversation has been exclusively about benchmarks. Who is number one? Who beat whom? DeepSeek is saying: it does not matter if you are number one if customers cannot afford to use you. We are number four, but we are one-sixth the price. That is a compelling argument.”

The Huawei angle amplifies the argument. If Chinese companies can run capable models on domestic chips, they are insulated from US export controls. They can scale without worrying about Nvidia supply. They can build AI infrastructure that is geopolitically independent. And they can undercut Western prices not just by choice, but by structural necessity.

“The US wanted to slow Chinese AI,” said Wei. “Instead, they may have accelerated the development of a completely separate, independent AI stack. That stack is not yet as good as Nvidia’s. But it is good enough. And it is only going to get better. The long-term implication is a bifurcated global AI industry: one Western, Nvidia-based, expensive; one Chinese, Ascend-based, cheap. Developers will have a choice. Many will choose cheap.”

Conclusion: The Return of DeepSeek
DeepSeek V4 is not a stock-market-moving event. The surprise factor is gone. The world now knows that Chinese labs can produce competitive models. The question is no longer “if” but “how fast and how cheap.”

The answer, from DeepSeek, is “fast enough and very cheap.” V4 Pro is a solid fourth-tier model at a sixth-tier price. It has a massive context window, open-source weights, and support for domestic Chinese chips. It is not the smartest model in the world. It is the smartest model for the price.

Developers will make their own trade-offs. A startup building a coding assistant will do the math: GPT-5.5 costs $30 per million output tokens; V4 Pro costs $3.48. If V4 Pro is 80% as good, the startup will choose V4 Pro every time. An enterprise building a legal document review system will value the 1M token context window and the ability to self-host. They will choose V4 Pro. A researcher pushing the boundaries of reasoning will pay for GPT-5.5. But researchers are a small market.

The rest of the market is large. And DeepSeek is now targeting it.

The AI race is no longer just about who is smartest. It is about who can deliver the most intelligence per dollar. DeepSeek just made a very compelling bid. The frontier labs will have to respond—not by lowering prices (they cannot, their costs are too high), but by convincing developers that frontier capability is worth the premium. For some, it will be. For many, it will not.

DeepSeek is back. This time, it’s not about the stock market. It’s about the math. And the math favors the cheap.

Your one-stop shop for automation insights and news on artificial intelligence is EngineAi.