The world has gone mad for AI. Setting aside what the latest AI models are actually good for, it is not surprising the Street is on the hunt for stocks with “AI exposure”. Unfortunately, this turns out to be a fairly short list at the moment, and at the top of that list is Nvidia.
Nvidia has captured essentially all of the market for chips used to train AI models, and is doing fairly well with chips for inference. The company is incredibly well placed strategically. And that is reflected in the stock, which is currently trading at 167x trailing twelve-month earnings and 67x this year’s estimated EPS. Those are some big multiples, big enough to give many investors pause. True, they are unquestionably the leader in the hottest new market out there, and there are no signs of anyone chipping away at that dominance. On the other hand, this is a company which over its 30-year history has had numerous boom/bust swings. Their CEO has done an incredible job in getting them here, combining a deep technical understanding, a keen strategic mind and the eloquence to convince others of his Big Vision. But that eloquence has regularly gotten the Street overexcited about the numbers, often just ahead of a big inventory correction. There are no signs of a downturn out there, but to put it politely, this is a company that sometimes struggles to accurately forecast its end markets and effectively communicate its expectations to the market.
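As a quick sanity check, here is a back-of-the-envelope sketch of what those two multiples imply together. Only the 167x trailing and 67x forward figures come from the text above; the derived growth number is illustrative, not a quoted estimate.

```python
# Back-of-the-envelope: what the two quoted multiples imply together.
# Only the 167x trailing and 67x forward multiples come from the post;
# the growth figure derived below is illustrative.

trailing_pe = 167  # price / trailing-twelve-month EPS
forward_pe = 67    # price / this year's estimated EPS

# The same share price sits in both numerators, so the ratio of the two
# multiples equals the EPS growth the Street is baking into this year.
implied_eps_growth = trailing_pe / forward_pe
print(f"Implied EPS growth this year: ~{implied_eps_growth:.1f}x")  # ~2.5x
```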
So what is Nvidia worth?
A big part of the disconnect right now is that, for the first time, Nvidia’s strong market position is based on its software more than its hardware. For years, the company had to slog it out with AMD for leadership in the GPU speeds-and-feeds rat race. Nvidia ended up winning most of those contests, but they always had some competition out there to let the air out of the balloon. The market for AI is different. Nvidia has been able to hold on to its lead because of its CUDA software. This is not exactly an operating system (OS), but its ubiquity and relative ease of use have ensured that it has become the de facto common software layer for much of the world where AI software meets silicon. AMD has never had anything to rival CUDA, and from what we can tell they are not even trying. And while there are many software libraries out there attempting to displace Nvidia, those are all owned or largely supported by software companies that do not really care enough about the intricacies of GPU firmware to create a true alternative. Maybe a few years of near monopoly will change that, but there does not seem to be anything on the horizon currently.
So if Nvidia’s software is their true competitive advantage, should they be viewed as a software company? This is just mildly outlandish, not totally outlandish, and worth considering. We ran some rough comparables for Nvidia stock in the graph below. Nvidia is already trading at more than double its large-cap semis peers. It is also trading at a hefty premium to large-cap, established software companies like Microsoft, Salesforce and Adobe. The closest multiple group are the new, high-growth software companies like Snowflake and Datadog. That is a lofty peer group. And while the Street expects Nvidia’s earnings to double over the next two years, Snowflake’s earnings are expected to double in a year. If Nvidia traded at Snowflake’s multiple, the stock would be worth ~$600, over double the current price of $291. The fact that we are even looking at a company like Snowflake for this discussion is reason enough to have some serious questions about Nvidia’s valuation.
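For transparency, a minimal sketch of that multiple-swap arithmetic. The $291 price and 67x forward multiple are the figures above; the ~138x peer multiple is backed out from the ~$600 result and stands in for a Snowflake-like multiple, it is not a quoted number.

```python
# Rough relative-multiple math behind the ~$600 figure. The $291 price
# and 67x forward P/E are from the post; the peer multiple below is
# backed out from the ~$600 implied price, not a quoted number.

nvda_price = 291.0
nvda_forward_pe = 67.0

forward_eps = nvda_price / nvda_forward_pe  # ~$4.34 of estimated EPS

peer_forward_pe = 138.0  # hypothetical Snowflake-like multiple
implied_price = forward_eps * peer_forward_pe

print(f"Implied forward EPS: ${forward_eps:.2f}")                # ~$4.34
print(f"Price at {peer_forward_pe:.0f}x: ${implied_price:.0f}")  # ~$599
print(f"vs. current price: {implied_price / nvda_price:.2f}x")   # ~2.06x
```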
A final way to think about Nvidia is to think of other companies which monetize their unique software through the sale of hardware, which leads us to Apple. There is an investment banking MD shaking their head at us for even making the comparison, as the two companies are very different. Nonetheless, conceptually they share that commonality of hardware pricing built on software differentiation. The problem is that even Apple trades at a discount to Nvidia, by almost 40%.
As much as we think Nvidia is executing incredibly well, it is really hard to be comfortable with the current share price.
I feel like most conversations about Nvidia’s valuation ignore the existence of TPUs at Google. It appears that Google, a top-2 AI R&D organization, doesn’t use Nvidia for its internal AI development and is continuing its work on new versions of the TPU that will be even better for ML training. Google Cloud made a $300 million investment in Anthropic, which will also include a lot of TPU usage. Google is more than happy to offer its Cloud customers TPUs.
So if we’re talking inference, there will probably be many players and margins will be low. If we’re talking training, there aren’t that many organizations that are dependent on Nvidia. Maybe Meta and Microsoft until Microsoft builds a competitor, and maybe a few Chinese companies until they set up a local alternative.
All in all I agree with the valuation conclusion, I just think it’s understated. If Nvidia’s AI biz is worth even $500 billion, then Google Cloud should also be valued in the same area. And nobody thinks that’s even close to reasonable.
On your point about TPUs, or just the fact that all the big hyperscalers (aka the major data center semis customers) are rolling their own AI chips, there is definitely some market share going to those products, but it is probably not as big a problem for Nvidia as it is for others.
So even Google, who invented the whole category, still uses a lot of Nvidia, especially for training. The latest-gen TPU has a variant that was designed for that, but my sense is that it is not widely used. On Google Cloud, you can buy TPU instances, but no one does, and so the big AI push they have talked about lately is largely running on Nvidia, and GCP was a big part of Nvidia’s GTC event last month. Finally, I get the impression that Google is reconsidering the use of TPU and moving more of its internal workloads to Nvidia. It doesn’t sound like TPU handles transformers well. Maybe that’s temporary, but if the category leader in homegrown AI accelerators is moving more workloads to Nvidia, I think it is safe to say Nvidia has a pretty solid position for the foreseeable future.
Thanks for the response.
Can you say more about the internal workloads with Nvidia? All of the content I read about Google LLM training, especially papers (see the Chinchilla paper and the recent PaLM 2 paper, https://arxiv.org/pdf/2305.10403.pdf), mentions TPUs. It seems to me that even if TPUs are currently slightly less efficient, Google won’t give up on them and is very motivated to get them working well for transformers. Their TPU chip team is capable and well resourced, and they have the advantage of being able to build specifically for this use case.
What do you think about the inference point? To me it seems like a much less differentiated market for Nvidia. If the LLM market is mostly training, is it really worth more than, say, $10 billion a year in revenue? Who’s going to put in that money? It seems to me that not many companies have the skills and motivation to set up large GPU clusters and train LLMs.
I don’t think Google will abandon TPU, but I do think they are in the process of re-evaluating everything in their AI stack. They are still using both TPU and Nvidia for training, and I don’t think that will change. From what I can tell, they are recognizing that they need to be open to using other, open-source models, which are likely going to run better on Nvidia. And not for nothing, they sell a lot of Nvidia instances in GCP.
I think the real semis battle is going to take place in the inference space, both for edge and cloud. TPU likely works best here for core Google algorithms (e.g. search, indexing, etc.), but other things they add may not work as well on TPU. It is very optimized for their workloads.
More broadly, I think Nvidia is less differentiated for inference, but they are still differentiated. They remain the default solution for most AI workloads today, so for the foreseeable future they hold onto a big share of cloud inference. AMD has barely put a dent in that market. Much of the rest of the cloud inference market looks likely to be some mix of Nvidia and NPU ASICs from internal projects and a handful of start-ups, all struggling for some share here.
Interesting, thanks again for the response. It’s going to be an interesting few years.