Coming of age in mobile. Yahoo buys Flurry

I wanted to comment briefly on the news that Yahoo! has acquired Flurry. This is a turning point, or a milestone. A marker of some sort. The End of the Beginning of Mobile, something along those lines. I originally wrote this the day of the announcement, but delayed posting, in part because I wanted to take some time to read Fred Wilson’s post on Flurry’s history.

First the facts

Flurry was one of the first mobile analytics companies to gain scale. They offered a free suite of tools that developers could embed in their iPhone apps to get a better sense of what users were doing with those apps. For commentators on mobile apps, Flurry was a huge help in the early days (circa 2010). They had data on hundreds of thousands of apps at a time when no one else really had any hard facts (at least no one after AdMob got acquired). Later, Flurry added a whole host of other tools, largely around ads and user tracking.

Yahoo! confirmed the deal late in the day, but no terms were disclosed. The big blogs cited valuations from $200 million to $1 billion. RWW literally has the range at $300 million to $1 billion. That is the same as saying no one knows how much Yahoo! paid, because that range spans the gamut from a great exit to a weak one. According to Crunchbase, Flurry had raised $73 million, although I suspect that figure understates the total. Reading between the lines (aka guessing), if Flurry had really sold for $1 billion, or anywhere north of $500 million, someone would be saying so, loudly. Bottom line: this seems like a modest exit for a well-known company.

Truth be told, I have been hearing warnings about Flurry’s results for a couple of years. The first red flag was their hard pivot into the ad space in 2012(?). I regard the Flurry team very highly, so I think this exit says something about the broader app market.


Now the Analysis

Flurry’s early success led many (myself included) to wonder if the mobile app market was going to develop very differently from the shrink-wrapped PC software market that came before it. With mobile app stores providing a largely open distribution channel, the way was open for hundreds of thousands of new developers to enter the market. We are still feeling our way through that change.

The first takeaway I have from the news is that selling tools to developers is still a hard business to be in. This was one of the old truisms of the PC software market. Surely, you would think, all these new developers offered a new market for tools. Turns out that some things never change. The developer tools, or Developer to Developer (no relation to this D2D), market is not big enough to support large companies.

The second conclusion I have is that there is a bubble in app development, and that bubble is stretched pretty thin. I am not calling the whole appconomy a Bubble (capital B), nor am I claiming that the whole mobile space has gotten overly frothy (San Francisco real estate notwithstanding). Nonetheless, Flurry’s sale seems to me an example of what happens when too many venture dollars chase too small a market. Flurry had a lot going for it, but was never able to find a breakthrough business model.

Again, I am referring to that pivot into the ad market. Flurry has more data on mobile app usage than probably any company out there other than Google and Apple. But that data, apparently, is not worth so much. There are too many other companies out there charging little or nothing for smaller subsets of that data. Those companies will probably not exist much longer either, but they were around long enough to make Flurry’s future cloudy enough to merit a sale. This is bad money pushing out good. I am not saying every mobile ad company is bad, just that there are a lot of bad business models getting funded out there, and that makes life hard for the good business models.

So what does this sale mark? I suspect this just proves what we have long suspected – Google, Apple, Facebook and a handful of others are the big winners in the app market. Google is able to take app usage data and throw it into its core search business, Facebook can do something similar with app usage against its social graph data. And Apple likes having a robust, highly competitive app market because it highlights their integrated hardware’s advantages. I think the mobile ad market is solidifying around the majors. This is not necessarily bad for app developers, but probably should be of concern for the hundreds of ad networks out there.

Finally the Lament

I want to close on a personal note. I was a big fan of Flurry. Not as a customer, but as someone deeply involved in the mobile app space. I will be sad to see them swallowed inside the “mobile first” Yahoo!. Covering the mobile app space has not been easy. Finding hard data has been challenging. For a long time, there was only AdMob and their excellent monthly reports. Much of that team ended up at Flurry after the Google acquisition of AdMob. They continued to put out some of the most interesting blog pieces about mobile usage. You know that meme that compares consumer media usage (TV, newspapers, radio, and mobile) to ad spending? The one that showed mobile ad spending is poised to grow hugely? That came from Flurry. China is now a huge app market. You know who called it first? Flurry. I could go on, but you get the idea.

As you go into the dark Purple ether, know that you will be missed.

What Value does Equity Research Provide?

I have been writing lately about the state of equity research. In the comments a few people mentioned alternatives that already exist, and I mentioned a couple in my post. I should highlight Estimize as one of the more interesting ones that escaped my earlier list.

Anyone wanting to start a company that tackles this problem will have to confront the fact that the role of equity research is very unclear, and hence so is its value. In my first note on this subject, I noted that for the investment banks and brokers, equity research is now a marketing function. It helps to show the value of the firms’ underwriting and trading platforms. But that is just one perspective, and it leads to a bigger question: does research actually influence clients’ decisions about those other things?

Remember that there are two ways for financial firms to monetize research. It can drive trading revenue as institutional investors trade on the stocks that the analysts cover. It can also drive companies considering IPOs or other financial transactions to choose one bank over another. This second function is a complicated topic in its own right. I will save it for another day, one of my IPO posts, or my over-the-horizon IPO book. For this post, let’s just focus on the ability of research to drive trading revenues.

I do not have the exact stats, but equity trading is a multi-billion-dollar industry. It is much smaller than it used to be, and shrinking further as commissions and trading spreads plummet, but it is still a big business. The world’s institutional investors still use the large and small brokers to execute trades. Technology is at the point where most funds could probably eliminate the trading commissions they pay, but for some reason they still pay something to their brokers. Some of this value comes from the trading platform itself; the brokers, especially the big ones, have some clear advantages in providing liquidity. But if you ask most investors (and a growing number of traders), there is a clear sense that trading is increasingly a commodity.

So there must be something else, and part of that something else is likely research. Good analysts can provide value. I would lump the areas where they provide value into six categories:

* In-depth knowledge of companies and management. Fund managers often cover hundreds or thousands of stocks. The research analysts cover a few dozen. They have the time to pick up deeper knowledge, and help keep the investors up to date when needed.

* Access to company management. It is hard for companies to get their message out, especially smaller companies. Management teams only have so much time to spend meeting investors. By dedicating a large portion of that time to speaking with research analysts, they can leverage the voice those analysts have.

* New ideas. This one is a bit tentative, but analysts can and do come up with good ideas.

* Legwork. Good research requires a lot of work, chasing down contacts, organizing schedules, building financial models, etc. Investors could do all of this themselves, but having the analysts do it saves time.

* Providers of consensus estimates. Every quarter, all companies’ results are compared to published estimates, and those comparisons drive big swings in stock prices. Those consensus estimates are still collated entirely from the public research analysts. (Estimize is seeking to change this.) Having a vote in this process is hugely influential.

* A public face. This is probably the most important piece. Analysts are public figures. They get interviewed on TV and in the press. Their sales force makes sure their voice gets heard by thousands of investors. This is an important position, but also a tricky one, as all the constituencies involved seek to influence it.

What it all boils down to is that analysts have a public-facing role and the time to do in-depth research. Any business seeking to alter the current model will need to provide a public platform with wide outreach and a way to collate useful data back to interested parties. Fortunately, these look like problems built for the Internet to solve.


The End of Nokia Series 40? I hope not

Among the big tech news items last week was Microsoft’s announcement that it will cut 18,000 employees and restructure the company. The big cuts seem to be coming from the recently-acquired Nokia handset business. The headlines focused on the discontinuation of the Nokia X phone, which ran Android. This is not surprising; it makes little sense for Microsoft to be pushing a Google-based product. But buried in the news was what I consider a much more important announcement: Microsoft will also discontinue Nokia’s Series 40 phones. This has huge implications. I am still digging into this a bit, so I am not 100% clear on what is getting cut, but if true, I think Microsoft may be making a very far-reaching mistake.

So first, a couple of caveats. There is no official confirmation yet about this. Most of the blog reports refer to a post on BGR India, which claims to have a copy of an internal memo from the head of Nokia’s feature phone business. The Verge also confirms this but provides little further detail. I delayed posting this a little, waiting to see if any harder news emerged, but so far nothing. My best guess is that this is true; it fits with the broader strokes in their CEO’s memo.

As I see it, the heart of the problem is Series 40. This is the tried and true operating system (OS) that runs low-end Nokia phones. In my opinion, Series 40 is the root cause of Nokia’s success in the ’90s and ’00s. For a very long time, it was the simplest and most intuitive mobile phone OS on the market, at a time when no one really appreciated the value of a mobile OS. My joke is that the killer app for mobile phones used to be the Snake game that came standard with Series 40, but more accurately it was the high degree of user-friendliness that Series 40 provided. This catapulted Nokia into the leading market share position it held for the first decade of mobile.

That was then, and many people would argue that a pre-2007 (i.e. pre iPhone) feature phone OS is of no use in today’s smartphone world. But there are still 6 billion mobile phones out there, of which about 4.5 billion are feature phones and roughly a third of those run Series 40. We can debate the math a bit, but I do not think it is a stretch to say that there are still at least a billion Series 40 phones out there. That installed base rivals Android, and is far larger than the iOS base.
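For the skeptics, the back-of-the-envelope arithmetic behind that estimate works out roughly like this (the inputs are my ballpark figures from above, not official counts):

```python
# Rough sizing of the Series 40 installed base, using the estimates
# cited above (illustrative only, not official figures).
total_phones = 6.0e9        # ~6 billion mobile phones in use worldwide
feature_phones = 4.5e9      # ~4.5 billion of those are feature phones
series40_share = 1 / 3      # roughly a third of feature phones run Series 40

series40_base = feature_phones * series40_share
print(f"Series 40 installed base: ~{series40_base / 1e9:.1f} billion phones")
```

Even after discounting these estimates heavily, the base comfortably clears a billion devices.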

It is obvious that Series 40 is not a viable platform for future phones. Sticking with Series 40 as long as they did eventually brought down Nokia. But that is not the same thing as saying now is the time to kill the platform. From the memos quoted on the blogosphere, it seems like Microsoft is now cutting all investment in Series 40 with a goal of shutting down all Series 40-based phone platforms within 18 months.

Now maybe we are missing a piece, but this sounds like a “cold turkey”, hard stop to Series 40. I think a smarter plan would be to keep Series 40 going for a few more years, and use that as the transition platform to gradually move users to Windows Phone. This seems to be an abrupt end, where a smoother transition would make more sense.

The risk goes back to that statistic above, the fact that most of the world’s mobile phone users are still on feature phones. They may not own Series 40 devices anymore (having long since moved to Mediatek China OEM devices) but they are still very familiar with Series 40.

My guess is that when you look at Nokia’s numbers, they are still spending an enormous sum on marketing and support for Series 40. From a cost-cutting perspective, the decision makes sense. However, I think it misses a key strategic nuance. The Nokia brand is still worth a considerable amount. Consumers know Nokia and are familiar with Series 40. Having the old OS stick around a little while longer would seem to give Microsoft a big leg up in terms of distribution and brand reach. But the old Nokia has a massive manufacturing and logistics complex, and for a Microsoft that seems to be refocusing on software, ditching all of this must seem a big temptation.

The flaw in this logic is that much of the world is still uncharted territory. Android seems to have the advantage here, but that is not a foregone conclusion. Microsoft should instead look to unload much of the Nokia manufacturing assets but license out Series 40 to Mediatek or some of its customers. Better yet, keep the whole thing in-house and use that as a springboard to eventually bring a few billion mobile users back into the Microsoft fold. Actually, to be precise, most of the people on Series 40 today have probably never owned or used a Microsoft product, so this is a way to reach a whole new audience.

Seen from a Developed World perspective, Series 40 makes no sense. But that is not where the fight is, or at least it is not a fight Microsoft can quickly (or ever) win. Better to keep some of the Nokia supply chain, and Series 40, and use that as a very powerful springboard to expand in developing markets. I recognize that this is still very far from the core of what Microsoft wants to be, but if they already have the assets, better to hold on to them. From the outside, Microsoft appears to be throwing away something that could eventually be very important to them.

Put another way, it makes no sense that Vertu phones will be around longer than Series 40.


Some challenges to fixing financial research

Following on my comments from a few days ago, any start-up that wants to attack the securities research space is going to face a few challenges.

First, there are already so many firms churning out research. The director of research at one of the largest hedge funds once told me he had more than 300 brokers providing his team research. Over the past few years, I have encountered a lot of smaller brokers, and have been amazed at how many there are. Anyone who wants to tap into this market has to recognize that providing more research is not going to be the winning model.

Second, much of the research that is coming out is bad. There, I said it. The truth is, if you ask any analyst, they will tell you the same. Even good analysts do not always put out good research. This is an attention-driven business, and just like any ad-driven website out there, that means analysts have to publish constantly, even when they do not have anything new or useful to say. So I still see analysts publishing daily notes with highlights of yesterday’s news. I do not think many people read them, but I understand why the analysts publish them. Couple this with the fact that, at many smaller firms, analysts are paid based on how much commission their trading desk does in the specific stocks they cover. This is a perverse incentive that drives many analysts to publish loud, attention-getting notes, and a disincentive to publish longer, more thoughtful pieces. Better to be noticed and wrong today than forgotten and right a month from now. When you look at it like this, you realize how similar financial research is to the broader web publishing industry.

A third problem is that there are no easy tools for solving the issue. Web-based media can apply all sorts of algorithms to optimize readership, because the number of views they get is so large that statistical models work. For financial research, the target audience is small by web standards, with a huge array of things they care about. I think of this as a “columns and rows” problem. A typical large web site measures millions of viewers on a dozen metrics – how long they stay on the site, what screen position they view first and most, etc. This is a database with millions of rows but only a few columns. By contrast, investors care about dozens or hundreds of stocks, have different extra requests (for different financial metrics, industry data, etc.) and any number of other subtleties, but there are only so many investors out there. This is a database with a thousand columns and only a hundred rows. My sense is that the data tools out there are not really optimized to solve these kinds of problems, and the sample sizes (i.e. the audience) are so small that statistical modeling does not yield great results.
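A toy sketch makes the shape mismatch concrete (all numbers here are illustrative, chosen only to mirror the “columns and rows” comparison above):

```python
# Web analytics data: millions of observations (rows), a handful of
# metrics (columns). Statistical models thrive on this shape.
web_rows, web_cols = 5_000_000, 12

# Financial research data: only a hundred or so investors (rows), each
# with hundreds of stocks, metrics, and custom requests (columns).
research_rows, research_cols = 100, 1_000

# Crude rule of thumb: you want far more observations than variables.
web_ratio = web_rows / web_cols
research_ratio = research_rows / research_cols

print(f"web analytics: {web_ratio:,.0f} observations per variable")
print(f"financial research: {research_ratio:.1f} observations per variable")
```

With fewer observations than variables – the classic “wide data” problem – most off-the-shelf statistical tooling simply does not have enough samples to produce reliable answers, which is exactly the bind financial research finds itself in.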

Another problem is that research is extremely perishable. If an analyst has some hot new data, it gets disseminated in the market extremely quickly. Investors trade on it within hours if not minutes of its release, and within 24 hours the data is literally and figuratively yesterday’s news, and thus of no value. This is not only true of specific data, but of broader ideas or patterns of analysis. If an analyst figures out some clever new data source, investors find out what it is quickly and move to replicate it.

Finally, the biggest problem is that almost no one pays directly for research. The brokers “give it away” to customers who pay via trading commissions. This does not entirely eliminate subscription models, but it does make them harder. Over the years, I have known many analysts who have tried to hang out their own shingle, but have struggled to find clients willing to pay hard dollars for a subscription. Many of these shops hold out hope for regulatory change. And every few years, we see headlines about some large investor moving to a model where they pay for trading and research separately. For the time being, anyone providing research has to contend with hundreds of competitors whose price is ‘free’.

All this sounds rather bleak, but I think there is a more optimistic conclusion to draw. As I mentioned in the last post, research is just another form of content. There are a lot of media companies struggling with the same problems, and many of the successful business models employed elsewhere can be used for financial research. There are still plenty of technical problems to iron out, but also many potential solutions.

I plan to explore these more in my next post, but let me give one example. The way I see it, financial research is screaming for a freemium model of some sort. Someone should offer tiered research. Longer, less “actionable” notes (notes which do not demand immediate stock-buying decisions) should be free. Clients could then judge the quality of the analyst’s knowledge and pay for the trading ideas, the detailed financial models, and access to the analyst on the phone or in person. There are a few firms working on variations of this, but so far no one has explicitly tried to copy any of the web-based models. My guess is that the next successful business to emerge in this space will not be a research provider, but a clever distributor of other people’s research.

My point is that technology and the web have the potential to make financial research viable again, and we are in very early stages of that happening.

The declining quality of equity research

In my previous post, I lamented the decline in the business model behind traditional equity research. I think equity research can and should play an important part in the financial markets. Unfortunately, the current business model means that analysts are getting paid less and less, and talent is leaving the industry.

This idea has been kicking around in my head for a long time. Two recent events prompted me to finally put it in writing.

The first was the fall of Gowex, a multi-billion-dollar bankruptcy that appears to have been a complete fraud. As Benedict Evans pointed out in a Tweet, it is hard to imagine a company getting this big without equity coverage, and presumably that coverage would have unearthed warning signs much earlier. There are many more examples where limited or just plain weak research coverage let fraudulent companies bilk investors before being uncovered.

The second was the recent debate over the valuation of Uber. First, NYU professor Aswath Damodaran published a piece on FiveThirtyEight arguing that Uber’s last valuation of $17 billion was far overdone. Damodaran uses a good framework for this analysis, and his site has a lot of interesting reading. A few days later, however, Bill Gurley, a partner at Benchmark, posted a response arguing the opposite: that Uber is worth far more and that Damodaran had made some wrong assumptions. Both sides used the same framework for their arguments, one that I would consider a pretty basic starting point for any equity analysis. I think Gurley got the numbers right, but both sides make valid points. The problem is that it is rare to see this kind of analysis in equity markets anymore. And as much as I believe Gurley is making a sound, honest analysis, he is an investor in Uber and thus an interested party. There is a broader problem in the markets when the best analysis is coming from interested parties.

What ails equity research?

Last week, I posted on the future of the IPO market in response to an online interview with Marc Andreessen. I wanted to follow that up with some broader comments about the state of equity research analysis.

In my post, I pointed out that one of the big sticking points for traditional, investment bank-run IPO processes is that the banks usually provide research analysts to follow the company after the IPO. Google can try to propose its own IPO mechanics, but few other companies have that ability. They need the research coverage and cannot rely on their own efforts to keep their story in front of investors.

The problem with this is that the traditional equity research model is essentially broken, and I would argue, is overdue for some web-based disruption.

First, the basics: equity research analysts work for brokers and investment banks. They provide regular written analysis of the companies they cover. The basic premise is that institutional investors, the people who manage our retirement assets, often cover hundreds of companies across a dozen industry sub-sectors. They have to focus on portfolio management and other tasks, so they cannot do in-depth research on every stock in their universe. By contrast, equity analysts cover 20 or 30 stocks. They get paid to do in-depth analysis. They generally speak with management teams regularly, and develop other data sources on the companies and industries they cover. These analysts provide the in-depth coverage that their clients, the institutional investors, cannot do themselves. The investors pay for this research through the commissions they pay for share trading with the brokers who employ those analysts.

Once upon a time, analysts were ‘rock stars’, among the most highly paid investment bank personnel. They achieved this status because, beyond the traditional analysis work they did, they also helped the banks promote IPOs. Banks would pitch their analysts’ status as a reason for having that bank participate in a company’s IPO. Analysts could thus generate two streams of income for the banks – trading brokerage commissions and investment banking revenue.

Over the past decade this model has become entirely unglued.

First, the analysts’ role in IPOs was abused in the dot-com bubble of the 1990s. The abuses of a small number of analysts led to a regulatory overhaul of the whole industry. Analysts are now largely prevented from participating in the selection of IPO underwriters. Research is still a factor in company decisions, but the banks have had to drastically scale back their analysts’ involvement in pitching business. Investment banks can still share IPO revenue with their research departments, but they can no longer directly tie their analysts’ compensation to those revenues. It is murky, confusing, and deliberately inefficient, but the net impact is to greatly reduce analyst compensation.

The second problem is that brokerage revenues have come down sharply at the same time. It is much, much easier to trade shares now, as the process has gotten computerized. Brokerage commissions were once something like ten cents a share, but are now a penny or two, heading towards zero. So while the analysts still have sway in guiding investors’ trading decisions, the amount of dollars available to compensate them has shrunk.

The end result of all this is that analysts generate far less revenue than they used to, and get paid far less. Predictably, this has led to lower quality analysis. (See next post for more on this.)

Compounding all this is the fact that the brokers who employ analysts do not fully grasp what the problem is or how to solve it. Maintaining a research department is expensive, as it requires a large compliance staff to meet all the new regulatory demands. Many of the old analysts are still employed, which also keeps costs up. The old industry model no longer works, but no one is quite clear what the new model should look like. Equity research is now a marketing function with the cost structure of a profit center, without the ability to actually generate revenue directly.

I think it is time to go back to basic principles. At heart, equity research is content. It is the height of folly to think that this is going to be the one form of content that does not get entirely disrupted by the Internet and ‘software eating the world’. Yet most of the institutions involved still treat research the same way they always have. Reports are still published as PDF documents, as if someone is going to print them all out. Research reports are generally hived off behind paywalls. Social media usage is almost zero, and let’s not even discuss Twitter. Readers of this site coming from the Valley and the Internet world would be shocked, or slightly embarrassed, by the state of affairs.

Over the past ten years or so, many companies have sought to offer other models. The last Bubble saw sites like TheStreet.com pop up. More recently, sites like Quantopian have emerged. TheStreet.com has become a mass media site focused on general consumer readers. Quantopian, and some of the other sites, have a more professional user in mind, but focus on only one part of the problem. There is still considerable opportunity to improve equity research. What that looks like will be the subject of a future post.

The IPO is dying, but technology can save it

I just read the Vox.com interview with Marc Andreessen about IPOs and the state of the global financial markets. There are really two parts to the interview: one about IPOs and one a response to Thomas Piketty’s book Capital. I have opinions about the latter topic, but will save those for another day.

However, the former topic strikes a resonant chord with me, so I thought I would put a few thoughts down. The IPO market is something I know a thing or two about. Given the subject of the interview, I thought my response might work as a Tweetstorm (for those of you new to Twitter, that’s a series of related Tweets posted serially). That approach is more interactive, but it also made it harder to convey everything I wanted, hence this post.

I agree with Andreessen on the generally poor state of the IPO market today. Or more to the point, the burdens that going public now places on small private companies. An IPO is an expensive endeavor – lots of lawyers, accountants, and bankers. Much of this burden is imposed by regulators, but a lot of today’s IPO practice is a function of the business models of the investment banks that have grown up in response to those regulations over the past eighty years.

At heart, the IPO process is controlled by the investment banks who underwrite stock market listings. Their position, along with that of the auditors, is somewhat dependent on regulatory bottlenecks that grew out of the 1929 stock market crash and the creation of the SEC.

I would argue that many of these practices are now open to change. The banks are confronting a variety of changes to their business models brought about by new technology. But so far, the IPO process has proven immune to technological change. Google tried using its own IPO auction process, but that ended up as a very slim virtualization layer resting on top of the old underlying architecture. Google could force it on the banks because of its status and success, but it did virtually nothing to change the underlying structure of the industry.

There are several reasons for this, and many of them can be addressed by improved technology. As I have argued in the past, too much of the preparation companies do for IPOs focuses on just getting to the listing. Too often companies forget about what happens the next day, and in the following years. Decisions made in the heat of the IPO process have a long-lasting impact on how a now-public company is perceived by the market.

One of the reasons that the banks still have so much influence on the IPO process is that they still employ legions of research analysts. These analysts provide a very effective way for companies to communicate with the Street. (Yes, within the confines of Reg FD.)

The average small-cap IPO stock starts with at least five research analysts poised to pick up coverage. These analysts help make sure that newly-public companies retain investor interest even after the bell-ringing fades. So Google used its reverse Dutch auction to allocate shares, but the banks still managed that auction. Google and every other company still had to contend with the old system because of the importance of having these analysts on board. Maybe Google could have lived without the coverage, but few other companies can.

The good news is that the research product is something that can be very easily improved by technology. Few companies have tried to do this because there are still a host of regulatory dependencies in place. But I would argue that there is still opportunity for a disruptive start-up to greatly improve on the current industry model. Isn’t that what start-ups all try to achieve? Take a vertically integrated industry and unbundle it with the help of very cheap computing?

I will have more to say on this. Let me know if you would like to learn more, I have a few ideas kicking around my head.
