Update on pCell and Artemis

A couple of weeks ago I wrote three pieces on privately-held Artemis and their pCell technology. (Parts One, Two and Three.) Artemis is proposing something really interesting: a new approach to wireless networking.

In those posts, I cautioned that while this is potentially very powerful, the company has so far not released much detail on how it really works, leaving open questions as to potential limitations. I got several very interesting comments on those posts from people who know radios far better than I do, speculating as to their approach.

I also got an update directly from the company when they reached out through the comments. As I suspected, they are very busy. They are working on providing more detailed information and a technical white paper, but also have to, you know, run a business. I believe they will have more detail soon.

They also answered what I consider the two most pressing technical questions.

First, how mobile is the system? I did not get the full technical rundown, but they claim it is very mobile and can work at high speeds. I have no reason to doubt them, so this is encouraging.

Second, how does the system handle ‘uplink’ traffic, that is, traffic from the mobile device back to the access point? Here, the answer just made me even more fascinated by what they are proposing. I do not want to get too deep into the technical details here, but essentially they reverse the process of the ‘downlink’: all the transmitted signals interfere with each other and are then decoded based on the system’s knowledge of the signals. “Just reversing” something makes it sound simple. It is not simple; it looks very complicated, with even harder math than the downlink problem, but very cool if it works.
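
Artemis has not published its math, but as a rough intuition for what “reversing” the interference might mean, here is a minimal sketch (my own illustration, not their algorithm): several devices transmit at once in the same band, multiple antennas each hear a different superposition, and knowing the channel matrix lets you solve for the individual signals.

```python
# Minimal sketch of the general idea (not Artemis's actual algorithm):
# N devices transmit simultaneously on the same frequency. M antennas each
# receive a different superposition of those signals. If the system knows
# the channel matrix H, it can solve the linear system to recover each
# device's symbol.
import numpy as np

rng = np.random.default_rng(0)
n_devices, n_antennas = 3, 3

# Complex channel gains from each device to each antenna (assumed known
# from prior channel measurements).
H = (rng.normal(size=(n_antennas, n_devices))
     + 1j * rng.normal(size=(n_antennas, n_devices)))

# Each device sends its own QPSK symbol at the same time, in the same band.
symbols = np.array([1 + 1j, -1 + 1j, 1 - 1j]) / np.sqrt(2)

# What the antennas actually receive: everything summed together, plus noise.
noise = 0.01 * (rng.normal(size=n_antennas)
                + 1j * rng.normal(size=n_antennas))
received = H @ symbols + noise

# "Reversing" the interference: a least-squares solve for the original symbols.
recovered, *_ = np.linalg.lstsq(H, received, rcond=None)
print(np.round(recovered, 2))  # approximately the transmitted symbols
```

The hard parts the sketch skips over, and where the real math presumably lives, are estimating that channel matrix continuously and doing it for many moving devices at once.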

I will update this subject as I learn more, but I thought the fact that they responded quickly and openly was worth mentioning. I am looking forward to the details.


Buses, Trains and Wireless Operators – A Corollary to Benedict Evans

Benedict Evans had an interesting post out over the weekend, “Railways and WhatsApp”, in which he makes a good analogy between wireless operators and railways. His point is that the wireless operators are hard to disrupt because: 1) building mobile networks is not easy; and 2) those networks have scale economies which are hard to compete against. He goes on to say that the operators still have many more pricing and marketing levers to pull to defend themselves. My favorite part of the note is when he points out that WhatsApp, and other similar over-the-top apps, are just a form of pricing arbitrage. I appreciate it when people make economic analyses of technology. That’s what I try to do here, hence the name of the blog.

So I like the article, but I would add a corollary. The core point of Evans’ piece is that buses and trains are different technologies. Buses are more flexible, but you cannot link eight of them together and make a train. Mobile networks are big and expensive because they have to be; without that complexity you would not get the efficiency or mobility we have today. He argues further that WhatsApp does not disrupt the operators’ business, it just forces a change in pricing. And smart operators have ways to deal with that (e.g. a data bundle).

I think there is a key point buried in this comparison. The wireless operators’ business is providing local access for mobile devices. The really hard part to replicate is their ownership of spectrum and the energy they spent building base stations and antennae all over the world. Those are problems which cannot be easily solved with smarter software. But beyond that ‘access’, every other service they offer is just a way in which they monetize. In the train analogy, people pay for a ticket to ride, and they may also buy lunch on board, but riders may choose to get that meal from an alternative provider. The wireless carriers have gotten all of their paying passengers to buy meal service, in the form of charges for voice and messaging services. The OTT apps cut into that business, just like a McDonald’s opening a to-go-only outlet in a train station. To torture this analogy a bit further, prior to the iPhone launch the carriers had a lock on all the services on a phone, just as if the train operators forbade people from bringing outside food on the train.

So on the one hand, there is little anyone can do to disrupt the wireless operators’ core service, but the pricing structure will have to adapt. Evans makes a similar argument. I think the hard part for the wireless operators is recognizing that they are no longer in the business of providing voice or messaging or video services. They need to rethink what services they offer and charge for. The airlines are in a similar position, having started to charge for luggage and even, in some proposals, restroom use. The wireless operators may have to consider charging for things which were once part of the bundle. The hard part for the industry is figuring out what those are. This search is not new. I have seen many equipment vendor pitches over the years with a whole laundry list of services for carriers. There are armies of consultants working on these kinds of projects. So far there is no easy answer.

Artemis and the pCell Part 3: Does it Work?

In my last post, I looked at how privately-held Artemis has created a system which they claim can unlock immense amounts of wireless bandwidth. They use DIDO techniques and a new access point, which they call a pCell, to transmit a single signal that can be decoded differently by each device in the cell’s radius.

Heady stuff, if it works. And they certainly have a solid demo video. But I still have a lot of questions as to how they achieve this.

I will fully admit that most of the technical solution will lie beyond my meager technical abilities (said the Chinese History major). Still, I wanted to learn more. And having done more than my fair share of company research, I set out to try to understand pCell.

Which is where I start to run into trouble. The founder of Artemis is a serial entrepreneur. He has a track record of successful companies he has started and sold, or continues to incubate in his own portfolio. More than anything else, I think this gives Artemis a lot of credibility; the benefit of the doubt goes to them. However, the drawback is that he has mastered the art of the pitch. The company has done some very good marketing, but it is almost too slick. I get the sense that the team has been burned by the press before and is managing its image very carefully.

As a result, it turns out that there is very little technical detail available on Artemis or pCell. Their website looks very nice, but has very little in the way of hard data. There is a link to that demo video, and links to all the positive press coverage, but no technical links. A Google search for “pCell Whitepaper” does yield a download from the company. That whitepaper was the basis of my last post, and it is helpful, but is not deeply technical.

There are many reasons why the company does not want to share more information. Most likely, they are still filing or just preparing the key patents. There is also some smart marketing at work. They start with the consumer appeal, the big picture ideas. If they started with the technical details they would likely be swamped quickly by a sea of technical quibbles with their ideas, bogging us down in minutiae. Artemis has the potential to be very disruptive, and those threatened have armies of engineers and marketers who could quickly clog the debate.

That being said, Artemis is making some big claims. They are picking several fights here, and at some point soon they will have to release more data.

I imagine that there are some limitations to this system. They do not appear to be limits of scaling or device capacity. In my last post I wondered how the system handles cell-to-cell handoff. The cellular standards spend a lot of resources handling the administrative side of communications (as opposed to the message itself). Those are serious problems, and I am curious how pCell handles them. And that is just the start of my list of questions.

Another big, related question for me is how mobile this system is. Meaning, how fast can I be traveling with my device? My guess is that pCell depends a lot on knowing the exact position of the receiving device. If I am in a car going 65 mph, that may pose a problem. But it’s just a guess.

I did try one other approach to understanding the system. Their CTO is Antonio Forenza, and a Google Scholar search yielded several interesting papers he published through the IEEE.

Below are links to a few that looked relevant: the first three seem to cover the basic science, followed by two that look like simplifications of the key algorithms.

Benefit of pattern diversity via two-element array of circular patch antennas in indoor clustered MIMO channels

Adaptive MIMO Transmission for Exploiting the Capacity of Spatially Correlated Channels

Adaptive MIMO transmission scheme: exploiting the spatial selectivity of wireless channels

A low complexity algorithm to simulate the spatial covariance matrix for clustered MIMO channel models

Multiplexing/Beamforming Switching for Coded MIMO in Spatially Correlated Channels Based on Closed-Form BER Approximations

Looking at these, I would speculate that pCell takes advantage of the fact that radio waves change as they travel through space. The signal a device receives at Point A is going to look different at Point B, even if the two points are only a few centimeters apart. So the trick is to transmit a signal from the pCell access point which is designed to alter slightly from Point A to Point B, so that the receiving device can make use of that difference to receive a different message depending on location, or multiple devices get different messages from the same transmission. If you look at the abstracts of these papers (and that is all I had access to), there is a lot of discussion about the correlation of ‘channels’ (radio signals) under differing spatial conditions, with the key word being ‘exploiting’ those effects. Then there is the paper on a low-complexity algorithm, which speaks to the fact that this is some complex math and you need simplification to make sure the cloud servers do not get overburdened.
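
To put a number on “a few centimeters apart”, here is a toy calculation (my own illustration, not from Artemis’s materials): under the classic Jakes model of a rich scattering environment, the correlation between the fading seen at two points a distance d apart is J0(2πd/λ).

```python
# Toy illustration (mine, not Artemis's): under the classic Jakes model of
# rich scattering, the correlation between the fading observed at two points
# a distance d apart is J0(2*pi*d / wavelength), where J0 is the zeroth-order
# Bessel function of the first kind.
import numpy as np
from scipy.special import j0

freq_hz = 1.9e9              # a typical cellular band
wavelength = 3e8 / freq_hz   # roughly 15.8 cm

for d_cm in [0.5, 1, 2, 4, 8]:
    corr = j0(2 * np.pi * (d_cm / 100) / wavelength)
    print(f"separation {d_cm:>4} cm -> channel correlation {corr:+.2f}")
```

At cellular frequencies the wavelength is around 15 cm, so under this model the channels decorrelate within a few centimeters, which is exactly the kind of spatial difference the papers above talk about exploiting.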

As I said at the outset, Artemis deserves the benefit of the doubt, but they still have some ways to go to prove credibility, let alone commercialization.

A quick look at King Digital’s Disappointing IPO

I am back in the swing of writing 500 words a day, and since I am a bit behind on that, I wanted to do some extra posts today as there is a lot going on.

A few months back, I took a look at King Digital’s IPO Prospectus filing. King Digital is the maker of hit iPhone game Candy Crush Saga. KING went public today. Their shares opened at $20.50 and closed at $19, well below the IPO price of $22.50.

In that earlier post I compared KING to Zynga. King had much better financials than its older peer: triple the revenue and users, and more than double the profitability. King is now trading at a market cap of $5.9 billion, to Zynga’s $4.0 billion. However, if you compare the trading multiples of the two, Zynga is actually trading at a premium. King is trading at 6.8x 2013 trailing EV/Adj. EBITDA while Zynga is trading at 11.9x. (I have to use trailing multiples as there are no published forecasts for KING yet.)

The market is saying that a dollar of Zynga’s earnings is worth almost twice a dollar of King’s. This despite the fact that the average King user is many times more profitable than the average Zynga user.

The market seems to think that KING’s business is going to decline in coming quarters. Let me quantify that. Using the multiples above, the market is saying that King’s $824 million in adjusted EBITDA in 2013 is likely to fall by almost half to $470 million. At those levels, the two peer companies would trade at the same multiple.
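
Here is the back-of-envelope arithmetic behind that claim, using only the multiples and figures above:

```python
# Back-of-envelope arithmetic: what EBITDA would put KING on Zynga's multiple?
king_ebitda_2013 = 824   # $M, King's 2013 adjusted EBITDA
king_multiple = 6.8      # trailing EV / Adj. EBITDA
zynga_multiple = 11.9

king_ev = king_ebitda_2013 * king_multiple   # implied enterprise value, ~$5.6B
implied_ebitda = king_ev / zynga_multiple    # EBITDA at which KING would trade
                                             # at Zynga's multiple
print(f"King EV: ${king_ev:,.0f}M")
print(f"Implied EBITDA at Zynga's multiple: ${implied_ebitda:,.0f}M")  # ~$471M
```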

There seem to be two big concerns about King. First is that their key game, Candy Crush, seems to be slowing down in user growth. The last quarter saw declines in usage numbers. This is the same red flag that investors saw in Twitter’s last earnings call. (Note: I own 100 shares of Twitter.) Slowing user growth is a concern. However, the second problem investors have with KING is that they are almost entirely dependent on that single game, with Candy Crush contributing 78% of revenue last year. So the investment thesis seems to be that KING is dependent on Candy Crush, and Candy Crush appears to be slowing if not declining. To make matters worse, Candy Crush is not really a game that appeals to the investor demographic, with its blinking lights and super basic game mechanics. So it is easy for investors to assume the worst.

That being said, could the sell-off in KING be overdone? While a $6 billion market cap for a game maker seems like a lot of money, a trading multiple of less than 7x EBITDA is not outrageous. I would argue that the company probably made some smart moves in how they priced their IPO and communicated their model forecast to the Street. This is the right way to do an IPO: underpromise at the IPO and then steadily beat expectations. If the company did that, then we may see some positive earnings surprises when it reports results next month.

Despite the similarities between King and Zynga, the two represent very different investment stories. Zynga went public on the expectation that it could keep growing by adding new game titles and leveraging its user base, moving those users to new games. That did not happen, and the company’s user base quickly shrank as newer games could not reproduce the popularity of older titles. King is very openly dependent on a single title. They have proven adept at building a user base for the game, and arguably at holding on to that user base. People continue to play Candy Crush and spend money there. So the question boils down to whether King can hold on to existing users and keep them spending. I would argue that other, similar games, like Clash of Clans from Supercell, have proven that they can do this. It is getting very hard (and expensive) to launch a new title in mobile app stores. Consequently, the longevity of existing titles is improving somewhat. This speaks to the way in which the App Store model has some serious flaws, but it does provide something of a moat protecting Candy Crush.

Now, I am not recommending you go out and buy the stock. That is not what I do on this site, and the only work I have done on this stock is to read its financial filings and check out its position on the iTunes Store, where Candy Crush remains one of the highest-grossing games. But I do think it merits more work.

Artemis and the pCell Part 2 – How it kinda works

In my last post, I described how wireless networks are starting to bump up against “Shannon’s Law”, essentially running out of ways to cram more data into existing radio spectrum.

Privately-held Artemis has proposed a novel solution to this problem. They call their technology “pCell”, short for Personal Cell. They make use of a technique called “Distributed Input/Distributed Output”, or DIDO, and sometimes it appears that way in press reports. (Side note: Notice the subtle marketing shift there. Dido was the legendary Phoenician queen who founded Carthage, a tragic figure in Roman mythology. Artemis is a Greek goddess, and a highly revered one at that, sister to Apollo.)

In current mobile systems, we take a piece of radio spectrum and divide it up in many clever ways so that each phone has a unique signal to and from the base station. With pCell, each cell in the system sends out a single combined signal to all the devices it is talking to, but each device receives it in such a way that it can extract its own message. A unique signal to each device (the current approach) versus a common signal from which each device takes its own unique interpretation (pCell).

In the press, Artemis describes this as ‘sidestepping’ Shannon’s Law. Shannon’s Law caps how much data a slice of spectrum can carry, so you can only divide a signal into a finite number of useful pieces. Artemis does not divide up the signal; it transmits a signal in such a way that each device can take what it needs from it.

This is phenomenally complicated math, so a key piece of Artemis is that the computation that determines each signal is not handled by the pCell in the field. Instead, all that data from the pCell is sent back to Artemis servers and the math is performed ‘in the cloud’. The company claims this allows the system to scale massively. It also has the benefit of making the pCells themselves relatively cheap to build: they do not need a lot of processors or memory.
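
Artemis has not said what that cloud computation actually is, but a minimal sketch of the general precoding idea (my illustration, not their algorithm) looks something like this: given the channel from each antenna to each device, solve for per-antenna transmit signals so that the interference at each device sums to exactly its own intended symbol.

```python
# Minimal sketch of the general precoding idea (not Artemis's algorithm):
# given the channel matrix H from each antenna to each device, choose
# per-antenna transmit signals x so that H @ x equals the per-device data
# symbols s. Every device hears all the antennas at once, but the
# interference is arranged to sum to exactly its own symbol.
import numpy as np

rng = np.random.default_rng(1)
n = 3   # three devices, three antennas

# Channel from each antenna to each device (assumed known to the cloud).
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# A different symbol intended for each device.
s = np.array([1 + 1j, -1 - 1j, -1 + 1j]) / np.sqrt(2)

# The heavy math done "in the cloud": solve for the transmit vector.
x = np.linalg.solve(H, s)

# What each device receives from the common transmission: its own symbol,
# even though every antenna reaches every device.
print(np.round(H @ x, 6))
```

The real system would have to redo this continuously as devices move and channels change, across many antennas at once, which is presumably why the servers are needed.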

Artemis claims that this approach frees up immense amounts of capacity. And in that video demo they stream a different HD video to ten iPhones simultaneously from a single pCell.

In theory, this system should scale nicely. The pCells have a range somewhere between Wi-Fi access points and cellular base stations (but have the power to reach much further). They are small, and so should be relatively inexpensive to deploy in large numbers. And since the hard work is done in the cloud, adding users is just a matter of spinning up more servers. (A topic for a different day, but the price of that function is dropping rapidly as well.) This seems to be one more example of the rapidly plummeting cost of computing having an impact in the physical world.

All of this is pretty exciting. If it works.

I still have a lot of questions. First, how do transmissions from the phone to the cell work (i.e. the uplink)? Second, how does the math work? Third, how mobile is the system, meaning how fast can you travel between cells? And what happens when lots of devices are moving about? In today’s cellular systems, handling that movement sucks up considerable amounts of capacity.

It sounds like Artemis has worked out these problems, but in my next post, I will look at how they have been parsimonious in their disclosure about the technical side of their system.

A look at Artemis and the pCell: Part 1 – The Basics

One of the more interesting product demos/start-up ideas out there right now is Artemis and their pCell technology. They had slightly uncloaked their idea a year ago, almost by accident, but then last month their founder did a full-blown demo. Here is a link to that demo.

Once you dig into what they are proposing, it gets very interesting very quickly. I want to walk through their technology here, but I also need to do a follow-up post looking at how the company is positioning itself, because their marketing effort leaves open many questions as to how they achieve what they demonstrate.

First, the basics. A common topic for me has been the idea that our wireless networks are getting pretty clogged. It is almost impossible to install a new base station in most regions now, so instead the operators are looking for ways to ‘densify’ their networks, installing a large number of much smaller pieces of equipment that each cover a smaller area.

The root of the problem is that we are running out of ways to increase wireless speeds. Each advance of the wireless standard (i.e. 2G to 3G to 4G) has involved manipulating the radio waves in some new way to cram more data into the same piece of radio spectrum. A good analogy here is TV channels. Once upon a time TVs came with dials, and you had to turn the dial to switch channels. Channel 7 is ABC, Channel 4 is NBC, etc. Imagine if you could use that one channel to broadcast two channels, so that Channel 7A is ABC and Channel 7B is ABC Sports. Do that enough times and you have systems that can handle a lot of traffic; each wireless generation has split that channel many times.

We achieve this through some pretty heavy computation. The people who designed the standards (i.e. the 3GPP) have done some really clever things to achieve the speeds we have today. They do not just manipulate the channel or frequency of the signal; they also manipulate its amplitude, phase, and timing. I am running out of analogies here, but put simply, the standards have found many ways to divide up the spectrum so that each phone connecting to the network gets its own signal, and that signal carries much more data than in the past.
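
As a concrete, simplified illustration of what manipulating amplitude and phase buys you: a 16-QAM modulation scheme maps every four bits to one of sixteen combinations of amplitude and phase, so each transmitted symbol carries four bits. This sketch is mine, a generic textbook example rather than anything specific to a particular cellular standard.

```python
# Toy illustration of "manipulating amplitude and phase": 16-QAM maps every
# 4 bits to one of 16 points in the complex plane, where the real part is
# the in-phase amplitude and the imaginary part the quadrature amplitude.
# Denser constellations carry more bits per symbol, until noise makes
# neighboring points indistinguishable.
import numpy as np
from itertools import product

levels = [-3, -1, 1, 3]
constellation = np.array([i + 1j * q for i, q in product(levels, levels)])

bits_per_symbol = int(np.log2(len(constellation)))
print(f"{len(constellation)} constellation points -> "
      f"{bits_per_symbol} bits per symbol")
# For scale: 2G-era GMSK carried ~1 bit per symbol; LTE goes up to
# 64-QAM, or 6 bits per symbol.
```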

Unfortunately, we are starting to bump up against the laws of physics. As you can imagine, as you divide up a frequency you eventually reach a point of diminishing returns, where each signal gets too small to carry as much data as the undivided original. This limit is known as Shannon’s Law (more formally, the Shannon-Hartley theorem), and I mention it because it features heavily in the pCell demo. Put plainly, Artemis claims that they have found a way to ‘sidestep’ (their word) Shannon’s Law, and the way they achieve this is really clever, if it works.
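
For the curious, the ceiling itself is a one-line formula, and a quick calculation (my own, with made-up but plausible numbers) shows the diminishing returns of splitting a band among more users:

```python
# The ceiling is the Shannon-Hartley theorem:
#   C = B * log2(1 + S/N)
# where C is capacity (bits/s), B is bandwidth (Hz), and S/N is the
# signal-to-noise ratio. Splitting a band among more users does not
# increase the total; each user's share just shrinks.
import numpy as np

bandwidth_hz = 20e6   # a 20 MHz LTE-sized carrier (illustrative)
snr = 100             # 20 dB signal-to-noise ratio (illustrative)

total = bandwidth_hz * np.log2(1 + snr)
print(f"whole band, one user: {total / 1e6:.0f} Mbps")

for n_users in [2, 10, 50]:
    per_user = (bandwidth_hz / n_users) * np.log2(1 + snr)
    print(f"split {n_users:>2} ways: {per_user / 1e6:5.1f} Mbps each")
```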

That’s probably enough for now. In my next post, I will look at what Artemis is proposing to get around this problem.

What is a ‘device’? – A response to Stratechery

A few days ago I read a post on Stratechery. Since half the traffic to this site comes from tweets by its author, Ben Thompson, you are probably familiar with the site. If you are not, go sign up; it’s one of the best strategy sites out there. Ben writes things that I wish I could write.

His post entitled Digital Hub 2.0 takes a look at Apple’s evolving digital media strategy. It is a great analysis and has generated a lot of discussion in the blogosphere. A few ideas at the end of the post resonated with me, topics I have touched on in the past, and rather than e-mail Ben with my thoughts, I decided to publish them here.

Put simply, Thompson’s point is that a putative iWatch from Apple could become the next hub of their cloud-based media streaming efforts. I have no idea if that is what they will do, but it is an interesting argument. However, I think it opens up a very important subject that goes beyond any single OEM. It now seems inevitable that all our digital content will be cloud-based in some way. I think apps will be with us forever, but even today most apps are just fronts for web-based content. In the Stratechery post, the idea is that the ‘iWatch’ serves as the gateway to the content, streaming it to larger screens or connecting to a keyboard and mouse as needed.

Again, I have no idea if this is what Apple plans, but I do think that eventually all devices head this way. There is no reason that our interaction with digital data has to come through a ‘phone’ or a five-inch device made of silicon and glass. My guess is that in the future we will no longer have ‘computers’. Instead, we will have mobile devices that connect to input and output devices wirelessly. You will take your mobile device to work and plug it into a power dock, which will then connect to a larger monitor and keyboard. Take that same device home and it will stream movies to your TV. Take it on a plane and you can stream to a larger, tablet-sized screen. We will completely abstract the input and output (I/O) functions from the hardware we carry around.

Of course, we are not there yet. There are still a lot of issues with this vision, but none of them are technical. From a hardware and silicon standpoint, we could do all of this today. The real trouble will be user interface (UI) issues. The UI for a five-inch screen has to be very different from the UI for a 42″ monitor used for work with a keyboard, or a 70″ Ultra HD LCD screen used for lean-back video watching. This issue will take a few years to sort out.

But once we reach that point, what is that device we are carrying around with us? What is it really providing? We tend to describe these devices as ‘mobile’, but that phrase becomes almost meaningless if all content is web-based. We will not need five-inch screens, except for a small set of uses. We talk a lot about smartPHONES, but the phone part can be handled with a watch-sized device and a Bluetooth earpiece. Really, a smartphone is a computer. And we are pretty close to the point where all that computing power can fit in an even smaller footprint than a smartphone. If we completely decouple computing and I/O, the key piece of hardware we carry with us is not a phone. It is not an app machine either, nor a media consumption or creation device.

The most important device we carry with us will be our Identity Hardware. Smartphones are incredibly personal devices already, because the real value of the device is that it authenticates who we are and what mobile networks we can access. The actual computing power can reside anywhere – in the cloud, in five inches of glass and silicon, in a ring or a pair of eyeglasses. The device just lets the Web know who we are (or who we choose to be on the Web).

That opens up all kinds of doors to new devices and new usage models. We have to solve the UI problem, and we will always have to be close to a power source, but the real value of a mobile device is identity – the rest of it will be abstracted away.

