Showing posts with label network effects. Show all posts

Thursday, January 21, 2010

Summarizing the iPhone transformation

Three years ago this month, Apple announced the iPhone. This was only a few weeks after I’d had lunch with Michael Mace, who I’d met in 2002 when I was trying (unsuccessfully) to get internal permission to study the Palm/Palm Source spinoff.

The day before the iPhone announcement, I suggested to Mike that we send a paper on an unspecified topic to an industry-academic conference called the LA Global Mobility Roundtable (LA GMR). The next week, I emailed Mike to say “I think it would be fun to write something about the iPhone for the LA conference.”

And so began two papers on the iPhone that we wrote and got accepted at the LA GMR and DRUID 2007 conferences, and that were also presented at UCI and Boston University.

Since all of these presentations were before the first iPhone shipped, the audiences correctly noted the speculative nature of the papers. Particularly in Europe, people were skeptical that the iPhone would be a success — or could ever challenge Nokia’s insurmountable lead in smartphones.

By waiting until February 2008 to submit our paper to Telecommunications Policy, we could talk about some early success measures, such as the 4 million iPhones sold in the first 6½ months on the market.

We did two more submissions, and updated the data with each iteration, until we got word on Monday that the paper was accepted. The final corrections to the page proofs went in Thursday. (The final uncorrected draft is on my website, and I’ll link to the official copy when Elsevier posts it.)

The orientation of the paper evolved significantly over this period, thanks to prodding by the reviewers and some additional clarity on our part. (A dry run at Michigan State last spring also helped).

Still, I think we captured some of the overall transformations of the US (and, to a lesser degree, global) cellphone industry due to the iPhone:

  • AT&T went to Apple because it wanted more people to use its data network. To generate data revenues, it set a precedent by requiring a data plan with every phone. Thanks to the iPhone, AT&T now seems to have more than enough data users, and data plans are becoming increasingly common across the entire industry.
  • Competitors copied Apple’s hardware, but none has yet copied its systems competencies. (Google has come the furthest and may still get there, while RIM is serving an entirely different market).
  • By linking to the existing World Wide Web, Apple was able to solve the chicken-and-egg problem of attracting complements before it could demonstrate an installed base.
  • Before the success of the iPhone App Store, the iPhone succeeded because it provided the best approximation of the World Wide Web.
We could support the latter point with this great quote from Steve Jobs a month before the first iPhone shipped:
[Cingular has] spent and are spending a fortune to build these 3G networks, and so far there ain't a lot to do with them. People haven't voted with their pocketbooks to sign up for video on their phones. These phones aren't capable of taking advantage of it. You’ve used the internet on your phone, it's terrible! You get the baby internet, or the mobile internet -- people want the REAL internet on their phone. We are going to deliver that. We're going to take advantage of some of these investments in bandwidth.
Like any journal or journalistic article, our paper is a simplification of reality, and some may quibble over some of the details — but I think we accurately capture for posterity why the iPhone was a hit. (Or, as a historian told me when I was getting started, “first, get the stories right.”)

I’m really proud of this article, which is probably why I spent more time (more than a day) on the proofs than any previous article — not counting the time Mike spent on it as well.

A decade from now, I think it will rank up there with my most-cited journal article, the 2003 Research Policy piece on open source strategies, as well as my first journal article from the San Diego Telecom project, published a year ago in the Journal of Management Studies.

Updated February 13: Telecommunications Policy emailed me today to say the final (prepress) page proofs are online at the Elsevier website; the article DOI is 10.1016/j.telpol.2009.12.002.

Wednesday, December 30, 2009

Productive publishing period

[HornTooting]
In terms of quantity, 2009 was my most productive year ever for academic publishing. After publishing three journal articles in 2008, in 2009 I published five: one about telecommunications, one about standards, two about telecommunications standards and one about open innovation.

The first two articles were based on a four-year collaboration with my now-friend Rudi Bekkers, looking at patents in W-CDMA. One paper focused more on the case study of standardization, while the second paper looked at the quality and timing of essential patents mapped against the standardization process. The latter paper was published in Telecommunications Policy, the leading academic journal on, well, telecommunications policy.

Building on Rudi’s earlier pathbreaking papers on GSM patenting, we noted several shifts from the 2G to 3G era in European mobile phone standards. The number of essential patents increased eightfold and the number of claimants increased threefold. Equipment makers retained about the same proportion of overall patents, but the network operators virtually disappeared, replaced by component suppliers (notably Qualcomm) and technology licensing firms (notably InterDigital).

These were our conclusions:

The sources of UMTS patent proliferation have often been ascribed to IPR-focused companies outside the ETSI process, particularly Qualcomm and InterDigital. However, this study shows that the largest numbers of patents are held by two firms (Nokia and Ericsson) centrally involved in the UMTS standardization, and the timing of their patenting suggests that they used their knowledge of the standard’s development for anticipatory patenting—further contributing to patent proliferation.

Still, a cozy oligopoly of four main UMTS patent holders might have produced a manageable IPR regime comparable to that of the five major holders of GSM patents. However, the number of firms claiming at least one patent has grown threefold, increasing the risk of holdup, transaction costs and royalty stacking for firms implementing the newer standard. This uncertainty is magnified by the self-determination of essentiality: while it is virtually impossible to determine how many of the 1227 patents are actually necessary to implement UMTS, at the same time other parties may fail to provide an itemized list of essential patents.

This was not even the longest-running collaboration behind these papers. One paper was based on an eight-year collaboration with my friend Scott Gallagher, which began when we met during the bubble-era conference of the Strategic Management Society (2001) in San Francisco.

Then as now, the goal was to re-examine, critique and extend the traditional view of positive feedback in the adoption of standardized goods. We brought together a number of observations that (when we started) were somewhat novel, although the field has not stood still during that period. After many delays (including other projects, work and life), the paper was published in September in the Journal of Engineering and Technology Management, a respectable journal that has attracted papers from some of the top names in the field.

I’ve already mentioned on my other blogs the two other papers published in 2009.

In April, I published the cover article in the Journal of San Diego History, based on my research into the origins of the San Diego telecom industry. The paper was entitled “Before Qualcomm” to make it more relevant to the general readership, and traced the early round of spinoffs of Linkabit, the region’s seminal company. It also included discussions of the role of Qualcomm co-founders Andy Viterbi and Irwin Jacobs in applying Claude Shannon’s information theory to space communications, drawing on my 2008 article in the Journal of Management Studies.

The fifth paper is the first in what I hope will be a series of papers that contrast open innovation with user innovation and related theories. Published in the Washington University Journal of Law and Policy, it (not surprisingly) focuses on policy issues related to open, user (and cumulative) innovation.

I’ll be glad to send out a PDF of the published version of any paper to anyone who’s interested.

This morning I got email notice of acceptance of my first paper for 2010, a paper on the success of the iPhone that will be published by Telecommunications Policy. Michael Mace and I have been working on this paper since 2007 — actually before the iPhone shipped — although our understanding of the phenomenon has shifted significantly since then. Additional details as they become available.

[/HornTooting]

References

Rudi Bekkers and Joel West, “Standards, Patents and Mobile Phones: Lessons from ETSI’s Handling of UMTS,” International Journal of IT Standards & Standardization Research, 7, 1 (January 2009), 13-34.

Rudi Bekkers and Joel West, “The Limits to IPR Standardization Policies as Evidenced by Strategic Patenting in UMTS,” Telecommunications Policy, 33, 1-2 (Feb.-March 2009): 80-97. DOI: 10.1016/j.telpol.2008.11.003

Scott Gallagher and Joel West, “Reconceptualizing and expanding the positive feedback network effects model: A case study,” Journal of Engineering and Technology Management 26, 3 (Sept. 2009): 131-147. DOI: 10.1016/j.jengtecman.2009.06.007

Joel West, “Before Qualcomm: Linkabit and the Origins of the San Diego Telecom Industry,” Journal of San Diego History, 55, 1-2 (Winter/Spring 2009): 1-20.

Joel West, “Policy Challenges of Open, Cumulative, and User Innovation,” Washington University Journal of Law & Policy 30 (2009): 17-41.

Wednesday, May 27, 2009

Academic victory for open standards

One of my major research (and blogging) interests has been the openness of standards. A particular pet interest has been semi-open standards — how firms decide which elements of openness to offer and which ones to block. (I’m also interested in how open standards relate to other aspects of innovation openness, such as open source and open innovation).

A decade before I even knew there was an academic literature on standards, four academics were cranking out the seminal work on the fundamental economic principles of standards creation and adoption — a literature known as network effects. Writing in two teams (Michael Katz with Carl Shapiro, and Joseph Farrell with Garth Saloner), they filled journals like the American Economic Review and the Journal of Political Economy with their papers. (Katz was also an FCC economist and Shapiro a Justice Dept. antitrust economist).

My dissertation is filled with references to these two teams, as well as to Information Rules, the HBS Press book Shapiro co-authored with Hal Varian.

One paper I did not fully appreciate until recently (because it appeared in a journal most libraries don’t carry) is a 1990 paper on open standards by Saloner. I am quite proud of my chapter on open standards (in a 2006 book on the economics of standards), but it’s now clear that Saloner was the first to seriously consider openness in standards as an intentional tradeoff.

All of this is a long intro as to why I was intrigued by a Stanford press release issued Tuesday:

Economist Garth Saloner, a scholar of entrepreneurship and business strategy, will be the next dean of Stanford University's Graduate School of Business, President John Hennessy and Provost John Etchemendy announced today.

Saloner, 54, who joined the Stanford faculty in 1990, is the Jeffrey S. Skoll Professor of Electronic Commerce, Strategic Management and Economics, and a director of the Center for Entrepreneurial Studies at the Graduate School of Business. He will succeed Robert Joss, who is stepping down after 10 years as dean. Saloner's appointment is effective September 1, 2009.
In addition to his prodigious research, Saloner is credited with helping to lead Stanford’s particularly complex re-architecting of its MBA curriculum.

What I find particularly interesting is that Saloner is one of the few people within the GSB who seems to care that Silicon Valley can be found just outside the boundaries of “The Farm.” Rather than problems of local interest, most of the GSB faculty are oriented towards an international disciplinary audience such as economics, sociology, psychology, or applied math. At Stanford, the greatest concentration of Silicon Valley-oriented business scholars is found in one department of the Engineering School.

Will Saloner’s appointment make the GSB (and its new Phil Knight Management Center) a new hotbed for the study of Silicon Valley entrepreneurship? Or will the institutional norms of the various fields drown out whatever preferences the dean and local alumni might have? Stay tuned.

References

Garth Saloner, “Economic issues in computer interface standardization,” Economics of Innovation and New Technology, v. 1, n. 1 (1990), pp. 135–156.

Thursday, December 4, 2008

More apps is not better

As a standards researcher (and pundit and consultant), one of the myths I’ve been trying to fight is that more applications are necessarily better, i.e. a source of competitive advantage. It’s quite clear — as I observed almost a decade ago — that this is one of several over-generalizations from the VHS vs. Beta format war of the 1980s. This morning I found another industry — smartphones — where the evidence is compelling.

The influence of the VHS vs. Beta war was powerful, because it happened at a time when contested consumer electronics format battles were rare. Atari and a handful of firms were producing consoles. All of the 8-bit home computers had pretty much died, supplanted by the 16-bit IBM PC. Due to fortunate timing (and a dramatic outcome), it became the exemplar of what a standards contest is about. However, it was not a typical standards contest. In particular, there are different types of third-party “software” use patterns. A movie (other than a kid’s video pacifier) is something you consume once or twice, and thus novelty and variety are key. That doesn’t generalize to other entertainment, because even music is something that you buy once and may use dozens or hundreds of times.

At the other extreme, PC software is something you get one of and tend to use over and over again. Yes, you want a program to serve your particular (perhaps niche) need, but you don't need five or a dozen. One spreadsheet, presentation and e-mail program are quite enough, even if I have 3 browsers installed on my laptop. (Videogames are somewhere in between, but that’s a whole ’nother post).

In my dissertation and a subsequent paper, I documented a dramatic example in Apple’s recovery in the first few years of the Jobs II era (1997-present). The Macintosh market share was essentially the same as at the bottom, but if it had good implementations of the key applications — office, a browser, an e-mail client, a music player — a large number of users would say that’s “good enough”.

In economics, the Betamax model became accepted dogma through the argument of “positive network effects.” I argued that Apple’s recovery demonstrated that PC users didn’t seek the maximum number of applications but “satisficed” if there was enough variety to cover their needs.
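
The satisficing argument can be sketched as a toy model (my own illustration, not from the dissertation or paper; the platform names, app counts and threshold are invented). Under the textbook maximizing story only the biggest library can win; under satisficing, any platform above a “good enough” threshold stays viable and competes on other dimensions.

```python
# Toy model contrasting "maximizing" vs "satisficing" adoption.
# Platform names, app counts and the threshold are illustrative, not real data.
platforms = {"A": 30_000, "B": 4_000, "C": 150}
THRESHOLD = 1_000  # apps needed to cover a typical user's needs

def maximizing_choice(platforms):
    """Textbook network-effects story: everyone picks the biggest library."""
    return max(platforms, key=platforms.get)

def acceptable_under_satisficing(platforms, threshold=THRESHOLD):
    """Satisficing story: any platform above the threshold is 'good enough',
    so other factors (price, design, brand) decide the sale."""
    return [p for p, apps in platforms.items() if apps >= threshold]

print(maximizing_choice(platforms))             # only the largest library wins
print(acceptable_under_satisficing(platforms))  # more than one platform viable
```

On this story, Apple’s recovery did not require matching Windows app for app; it only required clearing the threshold.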

For a talk I wrote in the past 12 hours (and gave this morning at the Symbian Partner Event in San Francisco) I did some research on the relationship between app quantity and Q3 global smartphone platform market share.

Vendor                | Platform    | Est. Share | Apps
----------------------|-------------|------------|---------
Nokia etc.            | Symbian S60 | ≈ 40%      | > 12,000
Apple                 | iPhone      | 17%        | 10,000
RIM                   | BlackBerry  | 15%        | n.r.
HTC, Palm, Motorola … | Windows     | ≈ 12%      | 18,000
Palm                  | Palm        | ≈ 2%       | 29,000

Even if the data is crude, the results are dramatic. (Disclaimers: Q3 data is estimated from the Canalys press release, which doesn’t provide an accurate measure of Windows or Palm OS market share or non-Nokia S60 devices. Q3 data for Apple is distorted by iPhone 3G launch. Measures of the number of applications are from 2008, but reported by various sources that may not be strictly comparable).

Looking at the data, the biggest driver of the number of apps (stock) is the age of a platform, even if the recent trend (flow) reflects the demand perceived by ISVs, i.e. current market share and momentum. The dramatic example of the latter is the iPhone, where the app store went from 100 applications in July to 10,000 earlier this week. Still, the iPhone — with rapidly rising share — has far fewer apps than the less popular, older and stagnant (or declining) Windows and Palm platforms.

Thus it’s clear that a huge supply of apps doesn’t cause success. A certain number might be a prerequisite, but beyond that, more is not better. In fact, too many applications can create fragmentation and confusion among users until demand coalesces around the 1-3 apps in a given category that gain enough scale to keep going.
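
To put a rough number on it, here is a quick Spearman rank correlation over the four platforms in the table with reported app counts (RIM omitted since its count was not reported). This is a sketch on crude data, using my own helper functions rather than a stats library:

```python
# Rank correlation between estimated Q3 share and app count for the four
# platforms with reported app figures (RIM omitted: apps not reported).
share = {"Symbian": 40, "iPhone": 17, "Windows": 12, "Palm": 2}
apps  = {"Symbian": 12_000, "iPhone": 10_000, "Windows": 18_000, "Palm": 29_000}

def ranks(values):
    """Map each key to its rank (1 = smallest). Assumes no ties."""
    ordered = sorted(values, key=values.get)
    return {k: i + 1 for i, k in enumerate(ordered)}

def spearman(x, y):
    """Spearman's rho via the classic sum-of-squared-rank-differences formula."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((rx[k] - ry[k]) ** 2 for k in x)
    return 1 - 6 * d2 / (n * (n * n - 1))

print(round(spearman(share, apps), 2))
```

On these four data points the correlation is actually negative: the platforms with the most apps had the least share, which is consistent with age of platform (not app count) driving the stock of apps.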

Why didn’t we see this before? Distribution and disintermediation. When you had to win store shelf space, that limited the carrying capacity of the ecosystem: the entry barrier was that firms needed enough scale to win shelf space at dealers (or even catalog stores). In a virtual, post-Amazon electronic goods world, no such natural limit exists, so a hobbyist-programmer in PJs can post an app to the Apple or Google or Nokia app store.

Still, the fragmentation should now discourage (but not eliminate) further entry. New entrants will need to provide a compelling implementation or find a significant unserved niche, or they will fade into the clutter.

Additionally, differences in the rate of application library growth will not significantly impact the market share of the top 2 or 3 platforms. As with videogames, some apps will come out on a particular platform first, but if they succeed on one, they will be quickly ported to the other major platforms. APIs and tools may slow the availability of apps for some platforms, but if there’s enough demand, eventually they’ll be ported.

Sunday, March 23, 2008

Nothing beats a good monopoly (2)

As I’ve said before, nothing beats a good monopoly. Eventually, the monopoly may come to an end due to substitutes, or the monopolist may squander his billions, or the would-be monopolist could even help his enemy get ahead.

If you can’t get a monopoly, then a duopoly or oligopoly can be nearly as lucrative. In fact, if there are high entry barriers and orderly competition (i.e. no price wars), the oligopoly may be preferable because it invites less antitrust scrutiny from the government.

This week saw the biggest IPO in US history, when the bank-owned Visa Inc. (NYSE: V) raised $18 billion in its offering Tuesday. In the first two days of trading the stock was up 17%. The markets were closed Friday for Good Friday, but at Thursday’s closing price of $64, counting the Class B shares Visa has a market cap of more than $62 billion, more than twice that of rival MasterCard (with a higher multiple).

Most people forget that Visa began life as BankAmericard in the 1950s, IIRC a decade or more ahead of MasterCard. At the time, it seems BofA competed with AmEx and Diners Club by being slightly less exorbitant in merchant fees. BofA also pushed beyond the standard T&E (travel & entertainment) business to merchandise, and (IIRC) was less elitist in who it targeted. The network effects made BankAmericard (later Visa) and MasterCard more widely usable, more widely adopted, and so on in a virtuous cycle that swamped the first movers in T&E cards: Diners Club, Carte Blanche and AmEx.

Both consortia leveraged their credit card oligopoly to win comparable merchant fees for debit cards — where they bore lower costs and lower risks — crowding out attempts to build a rival debit card fulfillment system. With two main credit card networks, not surprisingly there is little competition and high prices (hence the lucrative IPO this week).

The banks have even managed to delay (if not defeat) the world’s most powerful retailer. To avoid paying high bank fees, Wal-Mart tried to open its own bank, but the banks were able to exploit Walmartophobia* and round up the usual suspects in their (thus far) successful lobbying to have the government block Wal-Mart from such vertical integration. (Although Wal-Mart is offering its own Visa debit card for undocumented workers and others in extreme poverty).

Of course, when someone sells an asset, you wonder why. Selling shares improves balance sheets for banks at a time when they desperately need it. But, with the debit card shift nearly complete, are Visa’s best days behind it — or will its share of the oligopoly deliver high margins and high growth for decades to come?

* Walmartophobia. n. 1. An irrational fear of large chain store retailers; 2. political demagoguery that seeks to increase and exploit such fears.

Saturday, March 15, 2008

What's Web 2.0 worth?

Catching up on an interesting week.

Social networking site Bebo sold itself this week to AOL for $850 million. Some see it as a validation of the Web 2.0 market, including Microsoft’s seemingly foolish October investment in Facebook that implied a $15 billion valuation for the latter firm (as compared to the $500 million News Corp. paid for MySpace back in 2005).

From the one presentation that I saw in September, Bebo is a cool company with an aggressive push towards the mobile Web 2.0. However, as far as I can tell, it’s thus far a niche player that doesn’t lead in any national market. Although popular in the UK, it’s a distant fourth in the US after MySpace, Facebook and MyYearbook.

One way to look at this is that AOL is buying a fading star because it can get it cheap. Another is that the owners want to cash out, and the VCs are thrilled to get a 9x return after 22 months.

The network effects are pretty strong here, so it’s not clear how a #4 property gains market share. Small entrepreneurial companies tend to lose their focus and drive after acquisition by a big company; Exhibit A is the AOL-Netscape merger. In fact, AOL has demonstrated a reverse Midas touch over the past decade, squandering the tremendous market share advantage it once had from its walled garden and dominance of dialup service in the US.

On the other hand, Hotmail, MySpace and YouTube have continued to grow since their acquisition. So it’s possible (if not likely) that the AOL acquisition could be good for Bebo.

IMHO Bebo needs to dominate a niche rather than strive to be #2 or #3 in various national markets around the world. The idea of cross-promotion and integration with AIM seems the most plausible, but a lot depends on how much talent (and motivation) remains at Bebo after it gets gobbled up.

Thursday, February 21, 2008

Using free to overcome network effects

I heard an interesting news report on the radio yesterday, about a major breakthrough in the use of stem cells to treat stroke victims.

What I thought was interesting was that the breakthrough was not (as is customary) published in Science or JAMA. Instead, it was published by the Public Library of Science (PLoS), a relatively new family of journals that is directly challenging scientific publishers and their high subscription prices.

Some of the traditional journal prices are truly exorbitant — my favorite Elsevier journals annually charge libraries $2100 and $1300, respectively. On the other hand, this is a very thin market: we’re talking a few hundred libraries and a few thousand individual subscribers, not the millions who read Time or buy a Christina Aguilera CD.

So while libraries don’t want to pay $1000+ to subscribe to a journal, the story is a little more complex. In addition to the concerted efforts of publishers, there has already been pushback by moderates against efforts to mandate the use of “open access” (free beer) journals.

This reminds me exactly of the open source debate. Some academics are open source advocates (tipoff — they say “FOSS” or “F/LOSS”, not “OSS”) who want the whole world to use open source. Others welcome the competition in terms of quality, efficiency or price, but leave it to the market to decide what is the best solution. A tiny number just view it as an interesting phenomenon and try not to take sides.

The open access journals don’t charge their readers, but they also save a lot of money by not having dead trees, subscription lists, marketing, or website authentication. They also get more readers by being searchable and linkable from the open Internet, and encourage the reuse and redistribution of their content. However, I have some questions about how widespread they will become, at least in the near term.

First, these “free” PLoS journals are not exactly free, because they charge authors a publication fee of $1K-$3K per article. This works on a $100K-$1 million biomedical research project, but not on a social science or humanities paper that might have a research budget of $100.

Second, for medical research in the US, there’s one agency (NIH) that’s mainly paying for the research to be produced and to be consumed, so it is willing to increase producer prices if it dramatically cuts user prices. The funding of social science and humanities research is much more fragmented, and thus the funders are more likely to pursue their parochial interests rather than seek a systems approach.

However, Wednesday’s PLoS story does suggest they’ve overcome a third problem: the chicken-and-egg problem of reputational network effects facing scientific journal publishers. The problem is that important research isn’t published in 2nd-tier journals because, well, they’re 2nd tier. And without important work, these journals remain 2nd tier.

The stem cell article was published in PLoS ONE, which already has more than 1000 articles under its belt. So it appears that the PLoS crew has gone a long way toward gaining legitimacy among authors and readers.

Monday, December 24, 2007

Profitable Christmas PNE strategies

Positive network effects are normally associated with platform strategies such as those for videogame consoles and PCs, as outlined by the book Information Rules. In the last few years, new PNE strategies have been attempted with social networking websites, but many of these sites are still using unproven (if not dubious) revenue models.

At tonight’s first round of Christmas gift exchange, I became better acquainted with two new business models that seem to combine network effects and direct revenue models in novel (and promising) ways.

The first example was the JibJab animated cartoon site, which is best known for a series of political satires such as “This Land” played on the Tonight Show during the 2004 presidential campaign. The company’s claim to fame is superimposing facial pictures on top of cartoons and then tilting them back and forth to the music.

Now the company has branched out into personalized video greeting cards (not its first attempt to monetize its brand and technology). My sister sent a JibJab Christmas card in which her family’s pictures were superimposed on a movie of Santa’s elves goofing off at work. It turns out my teenage niece got a card from a friend, sent her own card to her friends, and then her mom said “let’s send this to our friends.” Two interesting freemium wrinkles are that some cards are free, and you start out with an initial credit, so it’s easy to get started (as my niece did) before you realize this will end up costing real money (although a lot less money than 20th century technologies like sending Kodak photo cards).

So the more people who send cards to their friends, the more potential customers JibJab gets, and each one gets lured in with a freemium model. My only question is whether, once everyone’s doing it (as with Blue Mountain eCards), the novelty will wear off.
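
The card-sending loop can be sketched as a simple viral coefficient model (all numbers invented for illustration): each sender reaches some friends, a fraction of recipients sign up and send their own cards, and the loop is self-sustaining when invites × signup rate exceeds 1.

```python
# Back-of-the-envelope viral loop for a card-sending freemium product.
# Invites per user and signup rate are invented numbers for illustration.
def viral_generations(seed_users, invites_per_user, signup_rate, generations):
    """Each user sends cards to friends; some recipients become senders."""
    k = invites_per_user * signup_rate  # viral coefficient
    users, cohort = seed_users, seed_users
    for _ in range(generations):
        cohort *= k       # each generation of new senders recruits the next
        users += cohort
    return round(users)

# k = 5 * 0.3 = 1.5 > 1: each cohort is bigger than the last
print(viral_generations(1_000, 5, 0.3, 4))
```

The point of the freemium wrinkles (free cards, an initial credit) is to keep the signup rate high enough that k stays above 1.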

The other new business model is Webkinz, which for $14 gives you a small stuffed animal combined with a paid social networking game site called KinzChat. As with my niece (same niece) and nephew’s Wii, the Webkinz was a hot product in short supply this Christmas shopping season. Participation in the fad has even helped retailers carrying the product.

So Webkinz has to build a website with games, login tools, age-appropriate content, etc., and then it gets to command a high gross margin ($5? $10? per toy). The KinzChat logon only lasts for a year, and then the parents must go buy another premium-priced stuffed animal or junior will be kicked off the website.

No clue as to how long this fad will last — Beanie Babies, Cabbage Patch dolls, pet rocks, and other similar fads eventually faded away. But it seems safe to predict that Webkinz will spawn a whole raft of online re-interpretations of low-tech toys.