Microsoft launched a new AI platform last week, called Azure Percept. (Here is some background from TechCrunch.) This is a new hardware and software platform for building AI solutions, which hits just about all the buzzwords. So let's break it down a bit.
Azure, Microsoft's public cloud platform, wants to promote its AI services. Running AI models requires a lot of compute, and Azure wants to be the cloud that sells it. So Microsoft has released a set of software tools and hardware designs to ease customer adoption and point customers in the direction of Azure.
Will this actually work? AI seems to be in everything nowadays, but that label is a bit misleading. AI is really a set of software tools that perform well at a particular set of skills, skills acquired over a long period of building and training models. Azure Percept is focused on audio and visual applications of AI, meaning image and voice recognition.
If you run a website and want to analyze all of the data you capture there, tailoring ads or images, you can use AI to help your site make better 'decisions'. You already have all that data; you can just sign up for Azure (or AWS or GCP) and use their AI services to do those calculations. But what do you do if you are collecting data from somewhere else – say video cameras or microphones? This is the goal of Azure Percept: to bring AI to companies building video cameras and microphones (and likely other devices in time).
So far so good, but here we start to run into problems. For Percept to work, it needs both electronics (easy enough to obtain) and silicon to run the AI math locally. One approach is to just build the device, capture the data, and send it back to the cloud for analysis. Everyone is happy, except the billing department, which is suddenly faced with a multi-thousand-dollar bill for bandwidth costs. If you have 1,000 cameras deployed, all sending back images of sufficient quality to run against AI models, that adds up to a lot of bandwidth. Better to run the AI locally on the camera and send back only alerts for important events ("an intruder has entered the building") or requests for clarification from more powerful compute ("is this an intruder?").
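To see why bandwidth dominates here, a back-of-envelope sketch in Python helps. The frame size, frame rate, and per-gigabyte price below are our own illustrative assumptions, not Azure's actual numbers:

```python
# Rough bandwidth estimate for streaming camera frames to the cloud.
# All figures are illustrative assumptions, not quoted cloud prices.

CAMERAS = 1_000
FRAME_KB = 200                       # assumed size of one compressed, inference-quality frame
FPS = 1                              # assume one analyzed frame per second, not full video
SECONDS_PER_MONTH = 60 * 60 * 24 * 30
PRICE_PER_GB = 0.08                  # assumed per-GB transfer cost in USD

# Total data shipped per month, converted from KB to GB (decimal units)
monthly_gb = CAMERAS * FRAME_KB * FPS * SECONDS_PER_MONTH / 1_000_000
monthly_cost = monthly_gb * PRICE_PER_GB

print(f"{monthly_gb:,.0f} GB/month -> ${monthly_cost:,.0f}/month")
```

Even at a single modest frame per second, a fleet of a thousand cameras ships hundreds of terabytes a month, which is exactly why the local-inference-plus-alerts design wins: a few kilobytes per event instead of a constant stream of frames.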
And here not only does Percept fall short, but so does the whole AI complex. As much as everyone talks about "AI at the Edge" or AI on the device, the ecosystem to build those devices is not fully developed. Makers of low-end devices (e.g. cameras and microphones) generally lack expertise in AI, or even in building products with enough compute to perform AI math. On the flip side, there are dozens of start-ups that have built some great AI chips, but they face a highly fragmented market. If you run a chip start-up, you cannot bet the future of the company on a design win at one of the big handset companies – there is too much competition, and too many of those customers build their own silicon. So that means going after "edge devices" and trying to win over enough small-scale customers (building cameras and mics) to cobble together a scale business.
For its part, Azure Percept looks to solve the former problem, helping hardware makers build AI-ready systems, but Microsoft is a little vague about the details. From TechCrunch:
Microsoft says it is working with silicon and equipment manufacturers to build an ecosystem of "intelligent edge devices that are certified to run on the Azure Percept platform." (Source: TechCrunch)
Having read a lot of press releases over the years, and written a fair number too, we can say this sort of "coming soon" statement speaks volumes. This is far from the first company to create some sort of hardware/software platform, and these things always get bogged down in the details. This is why Amazon's Alexa seems to be embedded in everything while Apple's HomeKit is hard to find. We are not saying Percept is bound to fail, merely that there is a long way to go between here and a Platform (capital P).
We also think this speaks to a growing problem for this corner of the industry. Large hardware makers and buyers (Apple, Amazon, Samsung, etc.) are now all building their own chips with an AI focus. Whether smaller companies, without that kind of in-house silicon, can still build AI into their products will have a huge impact on the future of consumer electronics.