Why the DeepSeek Breakthrough Is Good News for AI Stocks

Ouch. That is perhaps the best one-word description of the recent price action in AI stocks. And it was all spurred by the launch of DeepSeek, China’s own powerful AI model.

According to Ben Reitzes, head of technology research at Melius, DeepSeek achieves better learning and more efficient use of memory, as noted by CBS News. But perhaps the real kicker? It’s also vastly cheaper than ChatGPT. This revelation incited fears on Wall Street that companies won’t need to spend as much money to develop next-gen AI as previously thought. 

As a result, AI chipmaker Nvidia (NVDA) crashed as much as 15%. Vertiv (VRT), a data center equipment supplier, got clobbered, plunging more than 25%. Vistra (VST) and Oklo (OKLO) – two nuclear energy companies hoping to power the AI data center boom – each dropped 20%. 

It was a terrible day for AI stocks. 

But what if I told you that this whole DeepSeek-inspired selloff is actually a fantastic buying opportunity?

Because that’s exactly what we believe. 

DeepSeek: A Potential Paradigm Shift

China’s DeepSeek potentially represents a disruptive paradigm shift in the world of foundational AI models. 

Of course, the model operates a lot like ChatGPT. But as Reflexivity’s Giuseppe Sette put it, models like DeepSeek “activate only the most relevant portions of their model for each query.” That makes DeepSeek more efficient than its incumbent rivals.

But the pièce de résistance? DeepSeek is way cheaper than ChatGPT.

When it comes to AI models, there are two primary costs – training and inference – that is, how much a model costs to develop and how much it costs to run day to day. And reportedly, DeepSeek has significantly lower training and inference costs than incumbent foundational AI models.

Indeed, it reportedly cost about $80 million to train GPT-4. Google’s Gemini Ultra cost nearly $200 million. Across the board, foundational AI models in the U.S. have taken $100 million-plus to train.

But DeepSeek claims it cost less than $6 million to train its own AI model – one that reportedly performs on par with, or perhaps even better than, those big-budget rivals.

Meanwhile, DeepSeek also boasts about 95% lower inference costs than ChatGPT. Its reasoning model – R1 – currently charges about $0.55 per million input tokens. (A token is a small chunk of text – roughly a word or word fragment – that the model processes.) Now, ChatGPT’s consumer product charges a monthly subscription, so it’s not exactly an apples-to-apples comparison. But a breakdown from Bernstein found that the incumbent charges around $15 per million input tokens.

Point being: DeepSeek reportedly has drastically lower training and inference costs than incumbent foundational models. 
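Taking the reported figures above at face value (DeepSeek’s claimed training cost, the ~$100 million U.S. norm, and Bernstein’s pricing estimate – none independently verified), the back-of-the-envelope math looks like this:

```python
# Back-of-the-envelope comparison using the article's cited estimates.
# All figures are reported claims, not verified numbers.

deepseek_training = 6e6        # DeepSeek's claimed training cost (< $6M)
incumbent_training = 100e6     # typical U.S. foundational model ($100M+)

deepseek_input_price = 0.55    # R1, dollars per million input tokens
incumbent_input_price = 15.0   # Bernstein's estimate for the incumbent

training_savings = 1 - deepseek_training / incumbent_training
inference_savings = 1 - deepseek_input_price / incumbent_input_price

print(f"Claimed training cost reduction:  {training_savings:.0%}")   # 94%
print(f"Claimed inference cost reduction: {inference_savings:.0%}")  # 96%
```

Both reductions land in the mid-90% range, which is where the article’s “about 95% lower” framing comes from.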

Understanding the Initial Market Meltdown

Now, why does that matter to the market?

This reported cost breakthrough suggests that companies will spend less money developing new AI models over the next several years. That means less money going into the AI infrastructure buildout, less money for companies supporting that buildout – and lower stock prices for those firms, too. 

A core tenet of the AI-stock bull thesis has been that companies and governments alike will collectively spend hundreds of billions of dollars per year to build out all the infrastructure necessary to support further AI model development. 

That core tenet rested on the critical (and, until now, unchallenged) assumption that AI models require a ton of time, money, resources, and computation power to build. 

DeepSeek challenges that assumption. 

If the firm’s claims are true, U.S. companies could replicate the same tactics and methods used to create DeepSeek – because the model is open-source – to significantly drive down their own training and inference costs. In that world, companies and governments would have to spend much less than previously expected over the next few years to create new AI models.

Of course, that means that the projected AI infrastructure boom may be much smaller than once anticipated. For example, instead of companies and governments collectively spending hundreds of billions per year on the AI infrastructure buildout into 2030, maybe they only spend… say… $100 billion per year. 

That is the big fear driving AI stocks lower. 

DeepSeek News Is Bullish for ‘Appliers’

If AI infrastructure spend over the next few years ends up being an order of magnitude less than previously anticipated, less money will flow into AI infrastructure stocks – chipmakers, data center suppliers, energy providers, and the like – what we call the “AI Builders.”

Note that those AI Builders were at the epicenter of the market’s recent selloff. 

Meanwhile, AI software stocks – or “AI Appliers,” as we like to call them – actually did fine yesterday. Firms like Samsara (IOT), Procore (PCOR), Zscaler (ZS), Intuit (INTU), Monday.com (MNDY), ServiceNow (NOW), AppFolio (APPF), Workday (WDAY), Atlassian (TEAM), and others mostly rose amid the turmoil. After all, if the DeepSeek model breakthrough is duplicable, developers will be able to spend less to build AI models. 

Understated or Overblown?

Now, we think it’s important to note that we mostly disagree with the market’s initial reaction.

As we previewed in our first statement on the matter, we think the market is grossly overreacting to the DeepSeek news. And in fact, we view the big crash in AI stocks as a fantastic buying opportunity for a few reasons. 

For starters, we do not think DeepSeek’s cost claims should be taken at face value. 

This is a Chinese company, and Chinese companies have a long history of overstating facts. While the model is open-source – so its performance can be independently verified – its claims regarding training costs are not rigorously backed or detailed. They also cover only the “official training” run of DeepSeek-V3 and exclude costs associated with prior research and ablation experiments on architectures, algorithms, and data.

Elon Musk and others have similarly cast doubt on DeepSeek’s training costs. It seems that the general consensus among AI leaders is that DeepSeek may be dramatically understating how many Nvidia GPUs were used to train the model. 

Meanwhile, the inference costs are simply what DeepSeek chooses to charge. And for all we know, DeepSeek may not care about profits. OpenAI, on the other hand, is trying to price ChatGPT at a level that makes the company profitable.

Because DeepSeek is a largely stealth Chinese company, we have no idea what its ultimate goal is. Therefore, we do not know whether its business can be profitable while charging users so little.

Implications of the DeepSeek Breakthrough

Additionally, even assuming all of DeepSeek’s cost claims are true, we believe the implications of such a massive efficiency breakthrough are hugely positive for AI stocks.

As far as the AI Builders are concerned, we do not think this means that less money will be spent on the infrastructure buildout. The total volume of money spent on AI infrastructure over the next several years will equal the training cost per model times the number of models built. Theoretically, an efficiency breakthrough does mean lower training costs. But it should also mean more models being built. 

Regardless, AI is still the future. Every company knows that. So, if AI model training costs drop dramatically, do folks really think companies will cut their AI budgets? No – they’ll just start creating more and more AI models. 

That’s the essence of competitive capitalism. 

Say Coca-Cola (KO) and Pepsi (PEP) are both spending the same amount on AI models to predict demand and create new products. If Coca-Cola leverages the DeepSeek efficiency breakthrough to merely bring down costs and run the same AI models… while Pepsi leverages it to simultaneously bring down costs and create dozens more models that accurately predict channel-specific demand… Pepsi could run Coca-Cola out of business. 

We predict that any and all AI model efficiency breakthroughs will simultaneously reduce per-model training costs and boost total volume of models trained, with the two largely offsetting each other, having a net-neutral impact on overall infrastructure spend. 
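The net-neutral argument above is just the identity: total spend equals cost per model times the number of models built. A minimal sketch, using purely hypothetical figures to illustrate the offsetting effect:

```python
# Illustrative only: if per-model training cost falls 20x while the number
# of models trained rises 20x, total infrastructure spend is unchanged.

cost_per_model_before = 100e6      # hypothetical: $100M per model
models_before = 1_000              # hypothetical: 1,000 models trained

# An efficiency breakthrough cuts cost 20x; competition drives volume up 20x.
cost_per_model_after = cost_per_model_before / 20
models_after = models_before * 20

spend_before = cost_per_model_before * models_before
spend_after = cost_per_model_after * models_after

print(spend_before == spend_after)  # True
```

The real-world outcome depends entirely on whether the volume response actually matches the cost decline; the snippet only shows the arithmetic of the net-neutral case.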

In other words, we see this news as net-neutral to AI Builder stocks. 

AI and the Jevons Paradox

For the economists in the room, this is not a new idea; it is called the Jevons Paradox.

Originated by 19th century British economist William Stanley Jevons, the Jevons Paradox states that improvements in a resource’s efficiency tend to increase – rather than decrease – the overall consumption of that resource. That’s because greater efficiency lowers a resource’s cost, which can lead to increased demand.

This happened with coal usage in the 1800s. Improvements in steam engine efficiency reduced coal consumption per unit of output. However, these improvements also made coal-powered technology more economically attractive, leading to broader adoption and ultimately increasing overall coal consumption.

It also happened with the internet. Over time, computers got smaller and cheaper – and access to the internet became much more affordable. That led us into the digital age, wherein computers proliferated. Today, most folks around the world are plugged into the internet nearly 24/7.

And we believe that right now, it is happening with AI. As model training and inference costs are greatly diminished, more and more AI models will emerge, paving the path for artificial intelligence to become a global ubiquity. 

For the AI Appliers, we’re confident this is all great news. 

The Final Word on DeepSeek

Let’s revisit the Coca-Cola and Pepsi example. 

Say both companies leverage the DeepSeek breakthrough to create more AI models with the same level of spend. That means both companies will be getting far more bang for their buck. They’ll become even bigger AI Appliers. And they’ll likely see their revenues soar while costs stay constrained. 

That’s why we see this development as a huge net positive for AI Applier stocks. 

Not to mention, such AI model efficiency breakthroughs suggest that we are closer than anyone thought to creating artificial general intelligence (AGI). If foundational AI models can indeed be built for ~95% less than anticipated, we can theoretically create 20X more models for every dollar spent. Of course, that means we can improve AI model throughput by 20X, potentially advancing AI reasoning 20X faster than expected.
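The 20X figure follows directly from the ~95% number: each dollar stretches 1 / (1 − 0.95) = 20 times further. A one-line check (the 95% input is the article’s reported claim, not a verified cost):

```python
# Illustrative arithmetic only: a ~95% cost reduction implies each dollar
# can train 1 / (1 - 0.95) = 20x as many models as before.
cost_reduction = 0.95                       # the article's ~95% figure
throughput_multiplier = 1 / (1 - cost_reduction)
print(round(throughput_multiplier))         # 20
```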

In other words, we may be a lot closer to AGI than we realized. 

That is a huge positive for all AI stocks – and Applier stocks in particular. 

So… where do we go from here?

We think the investment strategy response here is pretty simple. 

In the medium to long term… stay fully bullish on the AI trade. But tilt more than ever toward Appliers over Builders. While we think AI Builder stocks will rebound and perform well in the coming months and years, AI Applier stocks should do far better.

And in the short term… don’t make any moves just yet. Wait a few days. Let the dust settle. Then, think about buying the dip in some top Applier stocks. 

To help us find some of the best AI stocks to buy on this dip, we’re looking to Elon Musk – the world’s richest man – and his big venture, xAI.

While that startup isn’t yet a publicly traded company, we’ve found a promising ‘backdoor’ way to invest in it today.

Learn more about how to play Musk’s startup right now.

On the date of publication, Luke Lango did not have (either directly or indirectly) any positions in the securities mentioned in this article.

P.S. You can stay up to speed with Luke’s latest market analysis by reading our Daily Notes! Check out the latest issue on your Innovation Investor or Early Stage Investor subscriber site.
