A Few GTC Highlights

Nvidia’s developer conference is not over yet, but after a few days of listening to keynotes and presentations and reading all the coverage we can get our hands on, we have a few thoughts on the outlook for Nvidia.

First and foremost, Nvidia is positioning itself as a systems company. At one investor event, CEO Jensen Huang said, “No semiconductor company should be worth $1 trillion… [pause for dramatic effect]… we are not a semiconductor company.” This quip sums up Nvidia, and its CEO’s presentation skills, as effectively as anything else. This is a company that almost from its inception sold not only GPUs but complete graphics cards. That systems approach seems built into their DNA, and it is on full display in their latest product offerings. What is probably most surprising now is Huang’s insistence that Nvidia is perfectly willing to sell its products as complete solutions OR decomposed into individual components. If they wanted to, Nvidia could build an AI walled garden, but they are signaling that they do not intend to do so. How this plays out in practice is a different story, and who knows how their story may change in coming years, but today that is a remarkable degree of strategic flexibility.

One of the most closely watched trends in AI among investors is the nascent competitive dynamic in the market for inference chips. This is going to be a much larger market than today’s training-centered market, hence it is where most of Nvidia’s competitors have set their sights. Of course, Nvidia wants this market too; on their latest earnings call, they noted that 40% of their data center revenue comes from chips used for inference. During Q&A, an investor asked Huang what advantage Nvidia has in inference, and his immediate response was “NVLink.” This is the networking system that connects Nvidia chips inside a server rack and allows memory to be shared across a complete Nvidia solution. We think this is telling. On the one hand, NVLink does have some significant technical advantages, especially around low latency, but many customers have expressed varying degrees of objection to using it. There is no single way to architect a data center; every component brings its own trade-offs. So many customers, especially the hyperscalers, may choose to do without NVLink. The significance here is that if NVLink is really what differentiates Nvidia in inference, that leaves the door open to competition. If Huang had instead responded with a list of factors that give Nvidia the edge, such as their software stack, that might be game over for inference. NVLink on its own instead signals game on. This is a small but critical opening for the competition, as many (or at least some) customers will choose to architect around NVLink and seek performance gains from other vendors. This does not mean that NVLink is irrelevant, or that Nvidia is truly vulnerable; it just means that competitors like AMD have a shot at carving out some toeholds in the market.

Finally, when asked what future AI software will look like, Huang gave a long list of models. His point was that AI software is in its infancy, and he expects it to develop rapidly with new capabilities. His view here is important, as Nvidia sees almost everything going on in this space right now. That being said, if he is correct, it undercuts arguments we have heard recently that the future of AI will all be built on Transformer-based models. The significance is that companies building chips around Transformers will be exposed if the model shifts again in some other direction. The rapid pace of change in AI software is a big part of the reason so few semis start-ups have been able to gain a foothold in the market: the performance gains they held at design time get negated by the time the chip actually ships and the software market has shifted. Of course, take all of this with a grain of salt. Huang has an interest in seeing this market continue to change, as the default choice when software platforms are constantly shifting is to stick with Nvidia GPUs. However, we tend to agree that there is still a lot of evolution in AI software models coming down the line.