Margin Stacking and the Cost of AI

Let’s build an AI camera.

We want this to be a fancy camera, you know, with AI. This camera can recognize your cat or your baby and send alerts when they are on camera. So when you are toiling away in your shelter-in-place basement bunker office, you do not miss a moment of cuteness. We want our camera to have all the features, but we also want customers to be able to afford it. How much does all that AI cost?

Let’s do some comparisons. Ubiquiti sells a video camera for $30. Ubiquiti is a good benchmark because they tend to make reliable products at very low prices. By comparison, a Google Nest camera costs $200, and it comes with some good recognition features but not really true AI. It is safe to assume that AI will cost more, but how much?

The honest answer is that no one really knows yet. What does it mean to put AI in something? When most people talk about AI, they are referring to big software models which use a lot of computing power to analyze patterns, and then use that analysis to make choices. Is there something in the frame? Is it a cat? Is it doing something cute? Is the cat in danger and do we need to call 911? To get a camera to do that first requires a lot of work and money spent on computing.

Beyond the software, there is the question of hardware. We know that a basic camera is priced at $30. Factoring in Ubiquiti’s margins, the hardware probably costs $15-$20. But the Nest costs a lot more: it has a better camera (a small expense) and a lot of semis (a much bigger expense). Adding an AI chip to that is going to cost even more. And then we are going to have to make sure the image sensors and all the other chips in the camera can communicate, which means a fairly extensive firmware operation.
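To make the margin stacking concrete, here is a rough sketch of how you back out a hardware cost from a retail price. The channel and manufacturer margin percentages below are illustrative assumptions, not Ubiquiti’s or Google’s actual figures.

```python
# Back out an implied hardware cost from a retail price.
# Margin figures are illustrative assumptions, not any vendor's actuals.

def implied_hardware_cost(retail_price: float,
                          channel_margin: float = 0.20,
                          maker_margin: float = 0.30) -> float:
    """Strip the retailer's cut, then the manufacturer's gross margin,
    to estimate what the hardware itself costs to build."""
    wholesale = retail_price * (1 - channel_margin)
    return wholesale * (1 - maker_margin)

print(round(implied_hardware_cost(30.0), 2))   # the $30 basic camera
print(round(implied_hardware_cost(200.0), 2))  # the $200 Nest
```

Under these assumed margins, the $30 camera implies roughly $17 of hardware, which is in line with the $15-$20 estimate above.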

We are a small company, so how do we afford all of this? On the software side, we are going to be a bit limited. We will have to make use of open source image recognition tools. Some of these are fairly sophisticated, but tailoring them for our purposes will require some serious data science and data engineering talent. And we probably cannot accommodate all the features we want now. Fortunately it is software, so we can update these as our skills and the industry advance. Note that the Nest comes with a subscription plan starting at $6 a month, so we even have a business model that can fund ongoing improvements.

The hardware part though is stickier. None of the big chip companies today really sell standalone AI processors. Intel is probably an exception, but good luck getting their attention. There are a few dozen chip start-ups working on “Edge AI” solutions. They will likely be very responsive to a new customer, but they are also small. So who is going to fund the tens of millions of dollars it will take to bring their chip to production? Venture investors are unlikely to fund a company whose first customer is our tiny camera start-up. It would take a dozen customers like us to make that chip start-up fundable. (And set aside that the last thing we want is 11 competitors.) Assuming we could find a vendor, it is safe to assume that their chip will not be cheap, a combination of their heavy upfront costs to amortize and low volumes. To produce the camera today would likely require a $50 chip. This will come down in time, but it will always be a significant cost adder to the device.
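The arithmetic behind that $50 chip is worth sketching out. Every figure below is an assumption for illustration (the NRE, volumes, marginal cost, and vendor margin are placeholders, not real vendor numbers), but it shows why low volumes kill the economics.

```python
# Illustrative: why a low-volume AI chip ends up expensive.
# All numbers are assumptions for the sketch, not real vendor figures.

def unit_chip_cost(nre_dollars: float, units: int,
                   marginal_cost: float, vendor_margin: float = 0.50) -> float:
    """Amortize upfront design/tape-out costs (NRE) over expected volume,
    add the per-unit manufacturing cost, then apply the vendor's margin."""
    cost = nre_dollars / units + marginal_cost
    return cost / (1 - vendor_margin)

# Tens of millions of NRE spread over one small camera maker's volumes:
print(round(unit_chip_cost(20_000_000, 1_000_000, 5.0), 2))

# The same chip spread across a dozen customers' worth of volume:
print(round(unit_chip_cost(20_000_000, 12_000_000, 5.0), 2))
```

With $20M of NRE over one million units, the chip comes out around $50; spread the same NRE over twelve times the volume and it drops to the low teens, which is the “this will come down in time” dynamic in one line of math.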

And this brings us back to the theme we touched on yesterday – big companies are building a big advantage by owning their own chip design capacity. Amazon is likely building some form of AI processor. They are unlikely to ever sell these to third parties, but if they wanted to launch a line of their own video cameras, they could probably modify this chip. Same goes for Apple and Google (i.e. the owner of Nest). It is possible that we are overstating this as those are all big companies and suffer from problems with internal coordination. On the other hand, they all have massive AI software teams, which gives them a big leg up on the software side of this problem.

The consumer electronics industry goes through cycles of vertical integration. Twenty years ago, the leading handset makers all made their own chips (and base stations). Then Qualcomm came along with a third party solution and broke open the market for companies like Samsung, LG and eventually Oppo/Vivo and Xiaomi. The same thing could possibly happen in the market for AI chips. However, times have changed and the semis industry is much more concentrated than it once was, and so are the customers. This will make it much harder for the ‘Qualcomm’ of AI to break into the space; it is much more likely to just be Qualcomm, or Intel or Nvidia.

Another option is to build our own chip. This would be impossible if we used the latest manufacturing processes at TSMC. If we used a lagging version, say 10nm or 16nm, we could probably get the economics to work. However, this would entail a larger camera, and it would almost certainly have to be powered. Not a great model for a start-up, but it could be possible for a slightly larger company. Companies like Belkin, Arlo and Resideo are not tech giants (Belkin is a fairly independent subsidiary owned by Foxconn) but they all have an interest in building smart home networking devices. Could they build a chip? Again, the upfront manufacturing costs make this challenging. And as far as we know, none of them have internal chip design teams. But maybe that changes. When we explored the costs of internal chip design, we came to the conclusion that internal chip teams are really break-even propositions in terms of dollar costs, but companies pursue them because they deliver strategic advantage. Building video cameras is very far from Amazon’s strategic imperatives, but it is much closer to Belkin’s and Resideo’s. This would not be easy for them, or even likely for that matter, but it is possible, especially if they could re-use the chip across multiple products.

In the end, the question really comes down to what will consumers pay for this? A $500 camera is probably a non-starter, but there is already a market for $200 cameras. Would consumers pay an extra $100 for AI? Ultimately, we suspect the prices of these devices will not go up much more, and instead the value will accrue to the software side of things. $10/month for AI would be easier for all those cute cat lovers to bear. And this is probably to the advantage of the hardware makers who are all on the lookout for recurring revenue streams.
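The subscription math is easy to sanity-check. Assuming a three-year device lifetime (an assumption on our part, and ignoring discounting and churn), the recurring plan comfortably beats a one-time hardware premium:

```python
# Sketch: a one-time $100 hardware premium vs a $10/month AI subscription.
# Three-year lifetime is an assumed figure; no discounting or churn modeled.

def subscription_revenue(monthly_fee: float, lifetime_months: int) -> float:
    """Total (undiscounted) revenue from a recurring plan over the device's life."""
    return monthly_fee * lifetime_months

hardware_premium = 100.0
total = subscription_revenue(10.0, 36)
print(total)                      # revenue over three years
print(total > hardware_premium)  # subscription beats the one-time premium
```

Even under conservative assumptions, $10/month over a typical device lifetime is several times the $100 premium, which is exactly why hardware makers want the recurring stream.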

When it comes down to it, we do not really know how much anyone values AI. That term has become so widely used that it is hard to even define, but there are clearly some interesting possibilities for “AI” applications across hardware categories. Unfortunately, it seems likely that much of the value AI provides will go to the software side of the house. The hardware requirements are significant and will stack up considerably, at least until the AI chip business matures significantly.

Image Credit: Cartoon Network

4 responses to “Margin Stacking and the Cost of AI”

    • Good find. This is CPU based so probably not capable of much in the way of compute capacity. Either very slow or it doesn’t work well in many conditions. Probably something similar in the Nest Camera.
      Did you notice it has both an Arm processor and a RISC-V co-processor?

  1. Watch the video they have announcing the product. They claim it can handle some machine vision and image recognition functions. Though I agree it does not look very powerful.

    Yes, saw it has ARM and RISC-V. An interesting combination.

  2. Lots of “edge AI” start-ups trying to capitalize on this trend.

    Most won’t make it but it will be interesting to see what feature(s) will ultimately have them stand out and survive!
