AI Semis – the Haves and Have-Nots
Qualcomm has a robust AI offering, and the market for AI inference at the edge is huge. But Qualcomm, and every other company not named Nvidia, will struggle to ride this wave to AI riches.
We provide a basic explanation of why GPUs are the chip of choice for AI workloads, with a look back at the early days of Nvidia CUDA.
The latest advances in AI (GPT, LLMs, transformers, etc.) are like a Nokia phone in the 1990s – everyone could see the appeal, but no one could predict all that it would lead to.
It seems very likely that most AI training will be run on Nvidia GPUs. The software environments for training are too numerous and change too rapidly, which favors the soft, warm familiarity of CUDA over the speculative gains offered by new entrants.
Margin Stacking and the Cost of AI – How much of the value of “AI” will accrue to hardware makers who have to add significant silicon content to make their products stand out?
Heterogeneous Compute – The competitive dynamics of the processor markets were static for years, but the demand for “AI” and the emergence of new customers/competitors means the market for all processors is likely to shift considerably in coming years.
Amazon AI Assemble – Amazon launches its ML accelerator, demonstrating how hard it is to supply the data center industry with chips.
AI accelerators are special-purpose chips whose appeal is economic rather than technical. And it is now likely that China has more standalone AI accelerator companies than the US has, or perhaps ever will have.
In our corner of the tech world there is an endless tempo of 5G ‘excitement’. This is largely manufactured because, as we wrote last March, 5G brings no immediate, […]