Monday, June 30, 2008

Patent cartel formed to fight "trolls"

The Wall Street Journal this morning spins the PR of some big companies tired of paying patent royalties:

Tech Giants Join Together To Head Off Patent Suits

Several tech-industry heavyweights are banding together to defend themselves against patent-infringement lawsuits. Their plan: to buy up key intellectual property before it falls into the hands of parties that could use it against them, say people familiar with the matter.

Verizon Communications Inc., Google Inc., Cisco Systems Inc., Telefon AB L.M. Ericsson and Hewlett-Packard Co. are among the companies that have joined a group calling itself the Allied Security Trust, these people say.
The article goes on to cite horror stories of how those evil patent trolls have been shaking down big companies. The big business alliance has as its CEO Brian Hinman, a big business licensing executive:
Previously he was Vice President, Intellectual Property and Licensing for IBM Corporation. While at IBM, Brian held various positions including Business Development Executive for IBM Research at the Thomas J Watson Research Laboratory. Prior to IBM, Brian was Corporate Director of Business Development and Licensing at Westinghouse Corporation.
The website Q&A says:
What is Allied Security Trust?

AST is a Delaware statutory trust that was originally formed by several high technology companies to obtain cost-effective patent licenses. The Trust provides opportunities to enhance companies’ freedom to sell products by sharing the cost of patent licenses. At the same time, the Trust creates new opportunities for patent holders of all sizes to generate a return on their rights.

AST is not an investment vehicle. Its purpose is freedom of operation and cost reduction. It generates no profits and does not engage in patent assertions against other companies. AST maintains a “catch-and-release” commitment that returns to the market in a timely manner patents acquired on behalf of Trust members after licenses are secured. The Trust also addresses the increasing need for innovative companies to defend against costly patent law suits.

Why was the trust formed?

The Trust was formed in reaction to a marked increase in patent assertions and litigation involving high tech companies by patent holding companies, also known as NPEs (non-practicing entities) also known as “patent trolls.” These organizations produce no products or services of their own, and acquire patents, sometimes hundreds of them, with the sole intention of asserting them against operating companies and conducting patent litigation to extract settlements or licensing fees.

How many members are in the trust and who are they?

Currently, there are eleven members in AST. AST anticipates reaching a goal of between 30-40 members.
While some are hailing this cartel as (for example) bringing “some sanity to the patent-litigation racket,” there are at least four reasons why this effort is suspect.

First, there’s the eligibility. The WSJ article says that any company can pay $250K to join and deposit $5m for the patent buying pool ($1.5-$2 billion). But the website imposes the additional requirement “Any company in the high tech field with annual revenues of a defined minimum level is eligible and encouraged to join.” If someone wants to put in $5.3 million, why does it matter how big they are? (Qualcomm makes $10b/year — are they eligible?)

Second, we have a group of big companies (no more than 40) working in their own common interest against the interests of nonmembers. Sounds like a cartel to me. The business model is that AST buys patents, grants nonexclusive royalty-free licenses to members, and then resells them.
Mr. Hinman said the group doesn't face any antitrust issues because it isn't a profit-making venture and its members don't actually own patents -- they just grant themselves a license to them.
This cooperation for the benefit of members against nonmembers has potentially collusive and anti-competitive implications. I’m no fan of big government, but I hope the DoJ will give this the same scrutiny as any other big business combination.

Third, we have the hypocrisy (conflict of interest) of the big firms suing others to put them out of business or merely to extract rents. Rich Tehrani lists the example of (member company) Verizon suing to crush Vonage — i.e., to use patents as an entry barrier to little companies that would increase competition. I would also list Alcatel’s efforts to squeeze Microsoft for a half-billion dollars.

So all we can say for sure is that this is a bunch of big companies that pay royalties and don’t want to — whether to small companies, big companies or even universities. They intend to reduce the attractiveness of all IP licensing business models (a centerpiece of open innovation) in the name of fighting the dreaded “patent troll.”

That brings me to the fourth and final point. I have been studying IP licensing issues in the telecom industry for almost a decade, first with the Qualcomm teaching case and then for the past two and a half years with Rudi Bekkers studying all the W-CDMA patents. And the problem is, there is no viable definition of a patent troll — from an economic, legal or policy standpoint.

The claimed test (as implied in the WSJ article) that any company that “never produced a (product)” is a troll is just poppycock: that isn’t why the patent system exists. The patent system exists to reward inventors. For every horror story of a post hoc shakedown artist or silly claimed invention, we have a Bob Kearns, who invented the intermittent windshield wiper only to watch the Big 3 automakers ship their own versions without paying a license. Kearns sued for decades to get what was owed to him, ending up divorced, broke and dying without the recognition he deserved.

And while, yes, my perspective is biased because I’m writing a book that has a chapter about Qualcomm, what about Qualcomm? They brought CDMA to the cell phone industry when people said it couldn’t be made to work, designed systems to show it would work, and built handsets, infrastructure and chips to jump-start the industry. Is this a patent troll? (Let’s leave aside the knotty question of what the fair price is for their IP.)

Or what about Dolby — Dolby Noise Reduction, Dolby Surround and Dolby Pro Logic? Again, they don’t make any products, just sell IP licenses.

My hunch (which I cannot yet prove) is that the best solution will be to reform the patent system to eliminate obvious patents, those that reflect prior art, and perhaps some of the incremental patents implied by prior art. I think OSAPA — Open Source as Prior Art — is a great solution, and I hope there will be more. Fewer patents = lower transaction costs = less uncertainty, and the role of patents would shift toward protecting major innovations, like those that create an entire new product category.

Friday, June 27, 2008

Nothing beats a good backup

Non-Mac owners can skip this.

I am nearing the end of the recovery from the March 25 hard disk failure that turned my professional life upside down. I spent 2 hours last night and an hour this morning manually merging three backups of all my research, including papers in progress and the book.

From a technical standpoint, my main mistake was to rely on the Unix rsync command, which (until OS X 10.4?) doesn’t copy both parts of a Mac OS file (i.e., the resource fork). Fortunately, my March rsync backup is supplemented by a November 2007 Finder copy and a few ad hoc .zip files. Also, a friend at Apple recommended FileXaminer, which can be used to fix the file creators (e.g. text files without .txt) of an entire folder of files.
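For anyone facing a similar three-way merge, here is a minimal sketch of the newest-copy-wins bookkeeping, written in Python with hypothetical volume paths. Note that it is no more Mac-savvy than old rsync (shutil.copy2 preserves timestamps but not resource forks), so treat it as a starting point, not a complete solution.

    import os
    import shutil

    # Hypothetical backup locations -- substitute your own.
    BACKUPS = [
        "/Volumes/rsync-2008-03",
        "/Volumes/finder-2007-11",
        "/Volumes/adhoc-zips-unpacked",
    ]
    DEST = os.path.expanduser("~/Restored")

    # For each relative path, remember the most recently modified copy.
    newest = {}  # relative path -> (mtime, absolute source path)
    for root in BACKUPS:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                src = os.path.join(dirpath, name)
                rel = os.path.relpath(src, root)
                mtime = os.path.getmtime(src)
                if rel not in newest or mtime > newest[rel][0]:
                    newest[rel] = (mtime, src)

    # Copy the winners into a fresh tree, preserving timestamps.
    for rel, (_mtime, src) in newest.items():
        target = os.path.join(DEST, rel)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.copy2(src, target)  # data fork and timestamps only, no resource forks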

I have a few object lessons that apply to Mac and non-Mac owners alike:

  • When you have multiple gigabytes and thousands of files, a great backup solution is much better than a good backup solution.
  • You should never rely on a single backup solution: a single type of software, a single piece of hardware at a single location, a single process.
My main backup is now Time Machine, which is convenient and automatic (other than having to plug the hard disk in). I think my secondary backup will probably be using a dedicated cloning utility (like Carbon Copy Cloner or SuperDuper) to a $150 hard disk from Fry’s stored at the office.

Thursday, June 26, 2008

Who cares about mobile phone operating systems?

Obviously I’m interested in Tuesday’s announcement of Nokia’s plan to buy Symbian, convert all its employees to Nokia employees, and then release most (all?) of the technology as open source. Since I do research about operating systems strategy, open source and the mobile phone industry — and have been a consultant to Symbian — this is of great interest to me.

However, this month has been the craziest one of my research career. I have worked on nine different papers this month, four of them (three brand new) in the past 36 hours. So let me offer some quick observations and provide the real post later on. (For thoughtful analysis, see the postings by David Wood and Mike Mace.)

I want to engage one quick point. Jason Hiner of ZDNet asks whether the war of mobile phone operating systems between open source Symbian, Android (the gPhone), and Windows Mobile can ever be won — whether the fragmentation will ever coalesce around a single standard:

They all seem to be assuming that the mobile phone market will mirror the computer market, which is dominated by a small handful of platforms: Windows, Mac OS X, and Linux. The reality is that there is likely to be a much larger diversity of platforms in the mobile world.

In addition to Android, Symbian, and Windows Mobile, there is now the iPhone with its OS X-based platform. And, beyond those four, there’s a plethora of phone makers that run their own proprietary operating systems on a variety of phones, sometimes with a customized OS for each phone.

It’s going to be very hard to put the genie back in the bottle in the phone market. All of these different types of phones are already out there and will be in use for years to come. Some may argue that the smartphone market does not have as many players as the general mobile market, but the lines are blurring between standard mobile phones and smartphones.

All of this means that counting on software platforms to deliver mobile applications and services to a large number of users is probably not going to be very practical. There’s too much platform fragmentation and diversity, and that’s unlikely to change.
I am willing to agree that there will never be a single winner — even Microsoft admits that the fluke of the Wintel monopoly will not be repeated. But that doesn’t mean the contest is not worth fighting: there will be winners and losers in the mobile OS platform wars, and the financial returns are always better for the winners.

Meanwhile Om Malik (in his excellent posting) shows more understanding of the mobile phone industry dynamics by handicapping the likely survivors of this war. In first place is LiMo (which Hiner ignored) at 60%, while Symbian and iPhone are tied for second at 50%.

Malik also notes that platform standardization is really only a factor in high-end smartphones, which account for 10% of the market, while low-end phones run do-it-yourself proprietary operating systems. So there is a lot of growth left for multi-device, multi-generational standards among the vast majority of mobile phones that don’t use one of the major technologies. Nokia shifting S40 to S60 (once it’s in-house) is the most immediate opportunity. The other big opportunity is for Motorola to resolve its fragmented platform strategy — between LiMo, gPhone, Symbian and Windows — and put all its weight behind a single technology.

OK, so I’m a platform guy, and I love platform wars. Even so, I think the relative success (and survival) of these various platforms will influence the market share of handset makers and also how quickly users make regular use of the mobile Internet.

Tuesday, June 24, 2008

Virgin buying Helio, but why?

The FT reports that Virgin Mobile USA has agreed in principle to buy out SK Telecom’s losing US venture, Helio. Both are MVNOs (i.e. virtual operators) that run on Sprint’s CDMA network. The FT story says Virgin has 5.1 million customers and Helio had not quite 200,000 in January (but has likely been losing customers since then); not surprisingly, the combination will be called Virgin.

Googling for other stories on the subject pointed back to the FT report as an exclusive. However, one May story on Silicon Alley saw the combination as complementary, while four days later (i.e. six weeks ago) SK denied that any such discussions were underway.

The thing I can’t figure out: why would Virgin bother? It’s not going to catch the larger carriers (e.g. T-Mobile, with around 30 million subscribers). It’s not going to gain significant scale by increasing its subscriber base by 4%.

I can think of only one possible explanation: money. The deal gives SK a graceful exit from its failed US market entry, and saving face is a big deal for Korea’s largest cellphone operator. I suspect that SK made Virgin a deal they couldn’t refuse: perhaps a cheap price for Helio, perhaps side benefits (such as an investment in Virgin). The terms will probably show up in a Virgin 10-Q or 10-K sometime down the road.

Open source without open governance

(This is a posting that was stuck in my out basket over the weekend. For obvious reasons I’m pushing it out of the out basket.)

Last week I saw an intriguing headline in an InfoWorld e-mail blast:

Nokia: Open Source Developers Should Play By Our Rules

It's becoming clear that the phone maker thinks open source developers need to adapt to the ways of commercial software vendors, not vice versa. Read on:
http://cwflyris.computerworld.com/t/3299928/121450395/121092/0/
The article was intriguing but I had to get some work done, so I didn’t have a chance to investigate right away. The link pointed to a blog entry by Tom Sullivan which pointed to an InformationWeek article by Serdar Yegulalp which pointed to a Business Week article by David Meyer.

When it comes to open source mobile devices, I’ve been writing about Nokia’s Maemo efforts (and Intel’s derivative Moblin) almost since the blog started. I first heard about Maemo at HICSS-40 from friends of mine at ETH Zürich who have written a paper on Maemo. (I can only find the slides online.)

Now, not knowing anything else, one would expect that Intel would have a better hope of getting OSS right, for two reasons. First, software is complementary to Intel’s main hardware business, while Nokia’s systems business (like Apple’s or IBM’s) is based on software system integration.

Second, Intel has long experience with open source as a founder (and the main funder) of OSDL — the late, great Linux promotion entity born back in 2000. It has also done other in-house work with Linux (as part of its successful strategy to supplant RISC Unix boxes with x86 Linux boxes).

As part of my own research on open source, one of the things I looked at was how firms interacted with open source communities. I chose as a co-author Siobhán O’Mahony, because no academic knows more about the dynamics of open source communities than she does.

In April, we finally got our paper published, which we called “The Role of Participation Architecture in Growing Sponsored Open Source Communities.” It was in a special issue of Industry & Innovation (a European innovation journal), based on a track at last year’s EURAM conference on open source, user innovation and open innovation.

I could write a whole paper about Maemo and how it fits into other open source communities, but alas I’ve promised to write other papers in the next 60 days. So let me just quote from our conclusions:
However, our study showed that sponsored open source software communities are fundamentally different from autonomous communities in the potential for goal conflict between sponsor and community members. Although both sponsors and members seek widespread adoption, the primary goal of a corporate sponsor is profiting from its investment, while the goal of an open source community would be improving the capabilities of the shared technology.

To gain interest from a community of contributors, sponsors needed to at least provide transparency. The openness of sponsored communities differed most in terms of accessibility, with most sponsors retaining privileged (monolithic) rights for some portion of the community’s decisions. In a few open cases, the sponsor shared some control with the community—and when sponsors relinquished more control to the community, those sponsored communities were transformed into autonomous ones.

As a consequence, we also found a dramatic difference between most sponsored and autonomous communities in terms of design decision related to accessibility, particularly in terms of governance. Governance of autonomous projects was largely pluralistic, shared widely among community members, whereas the ultimate decisions of sponsored communities were (with rare exceptions) controlled by the sponsor.
So, to put it bluntly: companies want to have their cake and eat it too, but if they exert too much control, people will figure this out and individuals (or other companies) won’t bother to participate.

The problem for Nokia is, the cases where tight control works are those where you’re the only game in town (cf. MySQL, SugarCRM). There are many other initiatives building code based on Linux, so if Nokia tries to hold Maemo too tight, people will just join LiMo (if it ever opens its gates), Android (if it ever opens, and if it ever ships), OpenMoko or (fill in the blank).

Monday, June 23, 2008

McCain's prize idea

Today John McCain unveiled a plan to spend $300 million in government money as a prize for a better battery. ($300 million = $1/American: get it?) This is part of a stampede of politicians seeking to appear to do something, whether or not it does any good (cf. Congressional investigation of energy “speculators”).

Batteries of course are for electric cars, which purport to solve two of the problems of gasoline-powered cars: their carbon emissions, and the increasingly scarce supplies of petroleum that are driving up energy costs.

IMHO this is a bad idea from a policy standpoint — not the prize, but the target. One problem is that batteries are hard: billions of dollars of R&D have already been spent over the past two decades making batteries for cell phones and laptops, and from what I’ve heard, it will be expensive to make something better than a lithium-ion battery, and the next technology will only be slightly better. Also, the ability to run an electric car gets rid of the auto’s emissions, but it doesn’t generate the additional electricity needed to run it (which might come from coal or nuclear plants) nor deliver it across the transmission grid.

Instead, two better things to spend the money on would be cutting production costs by 90% for either photovoltaic cells (creating more energy) or LED residential/commercial illumination (using less energy). Both are known technologies that everyone expects will achieve cost goals in the next 10-15 years, but additional funding could pull that forward by 5 years or so.

But leaving aside the goal, what about the use of a prize?

On the radio, one guy interviewed said “we should spend it with the national labs.” The radio show host suggested another Manhattan Project.

Frankly, I think throwing it at the government is about the worst thing you could do (other than perhaps no-bid contracts as set-asides in a pork-laden spending bill). While the national labs have some smart researchers, they have nowhere near the concentration of talent that worked on government research during World War II at the Manhattan Project in Los Alamos or the MIT Radiation Lab.

Instead, the government needs to do a little open innovation of its own (“Not all the smart people in the world work for us”) and use the market to get the best answers. Today, the top scientific and engineering talent is scattered across academia, industry and government labs. You want a wide range of ideas — in terms of approaches and technologies.

It turns out that the prize idea is actually one that worked in the past — whether in conjunction with or instead of the right of exclusivity (i.e. a patent). In her book, economist Suzanne Scotchmer showed that there are cases where a prize is the optimal incentive mechanism for attracting innovation.

My co-author and friend Karim Lakhani (of Harvard Business School) has also done research on the value of prizes, and is quoted in an April Fast Company article about how prizes are effective at stimulating innovation.

One dirty little secret of prizes: the sponsor often benefits from the losers, even if the losers do not. For the X Prize for space flight, all of the finalists have an incentive to try to develop commercial businesses to earn a return on their aerospace R&D, so the donor gets not one commercial spaceflight company, but probably two or three. eBay’s $100K prize for the “best widget” has even more naked self-interest: it would get dozens (hundreds?) of widgets to make eBay more useful, but only has to pay for one.

However, prizes do have a win-win aspect: free publicity. Even if you finish #2 in the X Prize, you get a lot of publicity and (perhaps) legitimacy that you can use to launch your business. Try getting that with an SBIR award or a patent.

Saturday, June 21, 2008

Switching costs: who decides, who pays?

This weekend at a conference I ran into an iPhone-carrying CIO of a local tech company. Since he runs Microsoft Exchange servers but hates Windows (refuses to run it on his MacBook Pro), I imagine he doesn’t want to be identified. Let me call him “LT”.

LT made a very important point about the switching costs that the iPhone faces in hoping to get adopted by American enterprises. Because RIM has been providing a good solution for years, the most savvy companies have long since installed BlackBerry push e-mail. I speculated that the switching costs for the entrenched BlackBerry users could prove an insurmountable barrier for Apple.

LT was carrying an iPhone running a beta of the iPhone 2.0 software, and it will go live at his firm once the final 2.0 software is released July 11. Employees will then have a choice of using a BlackBerry or an iPhone — so employees will vote with their feet.

Why go to all the trouble? Two words: top management. In most small- to medium-sized companies, if the top executives want a new toy, the IS department has to support it, and that’s what happened to LT.

It reminds me of the pilot study I did for my dissertation: I was studying switching costs, and had to decide whether to study standards adoption by individuals or organizations. I ended up doing my diss on consumers, but I made a conference paper out of what I learned about organizational standards adoption and switching.

What I found — consistent with my later dissertation findings on consumers — is that for customer-facing technology, the psychic switching costs were more important than the costs of the software or the deployment labor. The reason people don’t switch is that it’s a pain (or you can’t make them), not that the actual cost of switching is a deal-killer.

So if top execs want the iPhone, the IS department can support an iPhone. One of the things my study (and subsequent academic work experience) has shown is that, in some environments, the IS staff doesn’t have much say because powerful users make their own decisions. Law firms and their partners are one example; universities and faculty are another. I could imagine that at some tech companies, spoiled geeks would be a third. (Or, worse yet, if you don’t support something, engineers will spend all their time trying to make it work rather than shipping product.)

So I want to thank LT for reminding me of this reality that CIOs face with switching costs: whether or not it’s a good idea (i.e., economically rational), if your boss(es) want it, you have to do it. Thus far, the iPhone wannabes have not caught up to Apple’s software quality (particularly ease of use).

Friday, June 20, 2008

Symbian's second 10 years

My friend, client and soon to be co-author David Wood has started a blog. (I warned David about blogging as a productivity destroyer, but he’s gone ahead anyway).

The blog is not written in his official capacity as executive VP for research at Symbian, but obviously it’s influenced by his work as the only founding executive remaining at Symbian (and many years with Psion before that). One of his first articles (posted Monday) is entitled “Anticipating the next ten years of smartphone innovation.”

David reminisces a little about the first 10 years (Symbian was founded in June 1998), but the post is not primarily about that. The company has a slick official 10th birthday site. I half expected Andrew Orlowski of The Register to write a Symbian retrospective, but I guess he burned out with last year’s long feature on Psion’s last great product, the Series 5 PDA.

Instead — as the lead researcher of an IT company should do — David looks forward. Here are some key smartphone trends he predicts:

  • Component prices will continue to fall.
  • Quality, performance, and robustness will continue to improve.
  • Users will discover that phones can do more than just phone calls and SMS.
  • The smartphone ecosystem will continue to innovate in new services.
  • Even cooler smartphones will come out.
I’m not going to try to summarize the entire posting, because it’s easy enough to read here.

David’s blog is added to the blogroll on the right: a list of people I know personally whose occasional postings I find insightful. So far, I have decided as a matter of policy to omit those who seem to be full-time bloggers — if for no other reason than that competing with them would cause me to spend even more time blogging.

5 billion served

Apple announced Thursday it has passed the 5 billion milestone for songs downloaded from the iTunes Store. I wanted to say it was reminiscent of the old McDonald’s signs, but apparently that number was “over 3 billion served” — and thus more appropriate for the milestone Apple passed last summer. (Note: they didn’t say how many movies they have sold to date.)

This cements Apple’s position as the number one desktop-based music service. A distant second in the US is AmazonMP3. Amazon’s success seems to be growing the market, not stealing share from Apple.

As it turns out, I finally got around to trying AmazonMP3 this week. As with the two earlier Pepsi/iTunes promotions, my tracks came from my wife’s two-case/week Diet Pepsi habit. This time around, with the Pepsi-Amazon promotion, 2½ cases = 5 Pepsi Points = 1 Amazon MP3. (But unlike the iTunes promotion, the points can be used for other products, like her Pepsi T-shirt that arrived this week.)

I decided months ago to use the free (Pepsi) downloads on Amazon to fill in the gaps in my Linda Ronstadt collection (just as I used the free iTunes downloads to fill gaps in my Jackson Browne collection). I once had a complete collection on tape of all her albums from 1967 to 1978, but on CDs only bought the good stuff from 1973 to 1976: Don’t Cry Now, Heart Like a Wheel, Prisoner in Disguise and Hasten Down the Wind.

The Amazon collection of Ronstadt tracks was pretty complete: 321 songs, counting remixes and other variants. I was able to find Linda’s 1970 version of the Jackson Browne hit “Rock Me on the Water”, as well as her 1977 cover of the Elvis Costello song “Alison”. I had less luck with one of the songs from her earlier band, the Stone Poneys — perhaps because the band disappeared 40 years ago and isn’t around to negotiate download rights.

The songs downloaded easily, and once installed, they played great; the lack of DRM makes the songs much more convenient to use than Apple’s original DRM’d downloads. DRM-free MP3 files for 89¢ is about as close to a commodity as you get. The only problem was, the UI was terrible — hard to find songs, even harder to browse. (321 songs, not browsable by album? Gimme a break).

So Amazon will continue to commoditize the DRM-free download business. Perhaps it will help persuade more kids to buy rather than steal music, but somehow I doubt it: Apple seems to have the hearts and minds of the teen set (and gift cards from their parents), so if anyone will get them to pay for music, it’s Apple. I certainly don’t see how Amazon will dislodge iTunes; perhaps a rival in CJK (China, Japan or Korea) will pull it off, but Amazon plays to the same audience that Apple does.

Mobile phones are another story: this market has yet to take off. At least in the US, Apple’s lead in paid desktop downloads gives it a leg up on the sideload market. Real is still hoping to establish its PC-based subscription model (with sideloading), while both Nokia (with “Comes with Music”) and France’s Orange (with “Musique Max”) have proposed mobile-based subscriptions. (Some details of the business model are still being worked out.)

I have about 15 GB of legal MP3s on my laptop and various backup hard disks, and thus I’m pretty sure that any song I buy will go there, and be sideloaded to my iPod or phone. We’ll see what my daughter does in a few years when she gets her first mobile phone, but right now she’s pretty PC (actually Mac) centric.

Thursday, June 19, 2008

Marginal customers and growth

The Rogers diffusion of innovations paradigm — along with common sense — says that less enthusiastic customers adopt a technology later. But this can be applied more broadly to any type of customer: cell phone calls at $1/minute are for pretty serious stuff (or seriously rich phone owners), but at $0.00 a minute, you call home to ask whether to buy Braeburn apples or Fuji apples.

Despite the Rogers paradigm that emphasizes marginal customers (e.g. the “late majority”) for finishing the adoption of a new technology, I’d never made the connection between such customers and growing a mature market. Until today, when I was reading the WSJ at lunch.

Reporter Paulo Prada explained why Southwest and other discount airlines won’t be trying to expand share during this time of high fuel prices:

Until recently, the game for discounters was about increasing the size of the flying public, much of it by luring first-time and infrequent air travelers with cut-rate fares. Earlier this decade, when a barrel of oil cost more than $100 less than it does today, that strategy made sense and made money. Now, "expanding the overall demand doesn't solve anything," says David Cush, chief executive of Virgin America, which earlier this week said it would scale back its planned capacity by the end of the year by about 10%. "You have to contract supply, because that sheds the lowest-paying, marginal traffic. It's all about keeping the higher-paying traffic."
In one paragraph, that pretty much says it all.

Of course, despite Hal Varian’s advice about price discrimination, very few industries sell an identical product at such varying prices as the airlines. So the temptation to sell one last seat (no matter how cheap) is highest during the growth phase of the airline industry — much higher than, for example, the push to sell one more car.

Traditionally the airlines had three major costs — labor, capital equipment and fuel. Selling an additional seat did not raise the first two costs, only the third. It will be interesting to see if marginal pricing falls out of favor if fuel prices remain high.
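To make the marginal-seat arithmetic concrete, here is a toy calculation; every number in it is invented for illustration, not an actual airline figure.

    # Labor and capital are paid whether or not the last seat sells,
    # so only the extra fuel matters at the margin.
    fare = 39.0  # a deep-discount "one last seat" fare (invented)

    for fuel_per_seat in (15.0, 45.0):  # cheap oil vs. oil at triple the price
        margin = fare - fuel_per_seat
        verdict = "sell it" if margin > 0 else "leave it empty"
        print(f"fuel/seat ${fuel_per_seat:.0f}: margin ${margin:+.0f} -> {verdict}")

At $15 of fuel per seat the $39 fare is still worth selling; at $45 it is not, which is exactly why the discounters are now shedding their lowest-paying traffic.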

Wednesday, June 18, 2008

Britannica in search of a business model

Encyclopedia Britannica is in a life-or-death battle of quality vs. commodity, and so far commodity has won every round. In the past 10 days, both Wired and the Merc have reported that its latest plan is: if you can’t beat ’em, join ’em.

The wonderful Merc article by Lisa Krieger spells out the details: sales of the once invincible Britannica dead tree edition peaked in 1990, and the vaunted door-to-door sales team got the axe in 1996.

About the only thing it left out is what happened in between — how the bookshelf of paper got supplanted by the CD-ROM encyclopedia: first, Microsoft’s Encarta, then EB’s World Book, and now its own CD-ROM edition. Of course, the potential audience for a $30 CD (now DVD) is a lot greater than for a $1400 shelf worth of paper.

More recently, the problem is Wikipedia, the free user-generated content that has more articles of lesser quality at no cost. A discussion of the relative merits of EB and Wikipedia is worthy of a journal paper — actually several — but after presenting at Wikimania back in 2006, I realized that I’m not going to have time to write one.

I used to contribute to Wikipedia but got tired of wasting time arguing with people who don’t know what they’re talking about but decided to “fix” my (economic historian’s) contributions. Suffice it to say that I use Wikipedia (it’s cheap and convenient) but never trust it, due to its flawed production process. For example, the article on Symbian lists two companies as “founder shareholders” who didn’t come in until months later, something that takes about 2 minutes with the NYT (or WSJ or FT) database to verify.

However, as with other commoditization, Britannica is finding it can’t compete with free. (Perhaps Chris Anderson will offer some advice in his new book.) Its demonstrably better quality is preferred by serious researchers, librarians and even a few teachers, but today’s K-12 schools are raising a generation of dolts who think looking something up in Wikipedia (or even on the free Internet) constitutes research. I know, because I get them when they turn 20, and have to teach them what real research is — not always succeeding at the task.

Britannica has yet to solve its fundamental problem of not creating enough value that people are willing to pay for it, other than competing with Microsoft in the DVD-ROM market. However, this month Wired and the Merc reported that Britannica would start supplementing its professional content with outside contributors. Unlike Wikipedia (but as with Google), its contributors would have qualifications beyond just being able to type like a room full of monkeys. This seems like it will ultimately be as successful as hybrid open source strategies (i.e. not), but I can see they have to try something.

As an author, I am so there — to be able to write on a few topics where I’m one of a small number of experts in the world and not have to worry about vandals. (Linkabit? Open source business models?) However, I could find no evidence of the new program on the Britannica website, or news coverage beyond these two articles (or direct copies). So either it was a trial balloon, vaporware, or it’s in private beta.

Even if they go live, over the next two months I have five more papers to finish up (all but one co-authored) and send off to journals. That leaves no time for EB, and probably less time than usual for the blog(s).

Tuesday, June 17, 2008

Latest iPhone Flash rumors

This week brings the latest iteration of the Flash-coming-to-iPhone rumors (based on an analyst question to Adobe CEO Shantanu Narayen). Adobe is allegedly developing its own implementation, despite Apple’s longstanding antipathy to the platform. Of course, the rumors of Flash on the iPhone were rampant in March, February and even last July.

Another rumor is that Apple is developing its own Flash competitor called “SproutCore.” It appears that SproutCore is a JavaScript/Ajax framework and a way to access MobileMe (née DotMac). The rumors seem tied to a WWDC session last Friday, and claim that the solution brings Cocoa APIs to Windows and the Web using JavaScript.

This seems more plausible, given Jobs’ history. NeXT licensed Adobe’s Display PostScript, but Apple developed its own Quartz imaging engine for OS X rather than continue licensing. OS X has shipped its own PDF reader so that Acrobat is largely superfluous for Mac owners. (Apple has also sought to bypass Microsoft implementations for its file formats.)

I have no horse in this race: I dislike Flash but am also unlikely to buy a SBC-provisioned iPhone. Still, if I had to bet, I’d bet on Apple bypassing Adobe again to provide its own mechanism for playing all those confounded SWF files.

Monday, June 16, 2008

Steve's new, new, new phone

I hear Apple introduced a new phone a week ago. I didn't see the announcement because I had a meeting at school, but I did watch the 90 second condensed version on YouTube (60 seconds of Steve Jobs and 30 seconds of self-promotion).

Since there has been an onslaught of coverage, let me just offer a brief reaction. What was introduced: a new phone, a new business model (with a new price), and a new software ecosystem.

On the phone, my friend Mike Mace captured it nicely: the announcement “was probably the minimum necessary to please the community.” The hardware changes include 3G, GPS, improvements in size and battery life; however, it still has a mediocre camera.

The remaining changes were in software, and the iPhone 2.0 update will be available free to old iPhone users ($10 for iPod Touch owners). This includes Microsoft Exchange server access, some support for attachments, and of course the third-party software apps.

For the business model, Apple has given up on its revenue share plan and has switched to a more conventional operator-subsidized handset price recouped via a long-term contract. The standard subsidized price is now $200 (£100), but some operators (such as the UK’s O2 or Germany’s T-Mobile) will let you have the phone essentially free if you spend enough money on the service plan.

The good side is that Apple now has a much easier time getting its phone adopted by carriers — the old contentious rev share model didn’t scale, because few were as desperate as SBC AT&T. On July 11, the 3G phone will be available in 21 countries (according to the Apple website) or 22 (adding France, according to Steve’s keynote slides). Another 49 (48) countries are due Real Soon Now.

As the doomsayers predicted, we can now label the iPhone 1.0 business model a failed revenue share experiment. It was worth a try, but it didn’t work, because the industry was used to something else (and competitors are still using it).

The downside is that the hardware price cut from $400 to $200 is more than made up by the $240 increase ($10/month) in the required data plan. The other change is that the phone will be activated before leaving the building, cutting out the gray market (and also the remote activation revenue stream of Synchronoss — as brilliantly predicted last month).
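A back-of-the-envelope tally shows how: the 24-month term and the $20 vs. $30 plan prices below are my assumptions for illustration; only the $400/$200 handset prices and the $10/month increase come from the coverage above.

    MONTHS = 24  # assumed contract length

    old_total = 400 + 20 * MONTHS  # original iPhone + assumed $20/mo data plan
    new_total = 200 + 30 * MONTHS  # iPhone 3G + required plan $10/mo higher

    print(f"iPhone 1.0 over {MONTHS} months: ${old_total}")  # $880
    print(f"iPhone 3G  over {MONTHS} months: ${new_total}")  # $920
    print(f"difference: ${new_total - old_total}")           # $40 more, despite the price cut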

I am interested to see whether the real price of the iPhone will drop over time, as it has with the iPod, or whether Apple will continue to sell only high-end smartphones. I can see the logic of not selling anything less than a 3G phone, because the attraction of the iPhone to carriers is that its users use the mobile Internet more than any other device’s (and thus will take the expensive plans). Still, in the long run, will there be a family of products (as with the iPod and Apple’s laptops) or just a single product with different RAM specs (as today, or with its Mac Pro desktop)?

Apple’s announcement at WWDC emphasized the final point about the new phone — the software ecosystem with distribution via the iPhone App Store, due to go live on July 11 with the new phones. Apple will be selling and fulfilling the apps and taking its 30% cut along the way. Update 2:10 p.m.: John Boudreau of the Merc has a great story this morning on Apple’s successes in winning iPhone developer loyalty.

One survey of developers predicted that 70% of the apps will be free. This is like saying that 70% of the Macintosh apps are free — which is probably a low number. However, if the Mac model holds, all the major apps developed by multimillion dollar programming teams will have a price tag.

The free ones are usually simple one-trick ponies from a small (or single programmer) company trying to get PR. Under this model, the 1.0 will be used as teaseware and a paid 2.0 (or “Pro”) version will be offered.

The ones where the teaseware works will become modest little businesses like Bare Bones Software’s BBEdit (mentioned in the original paper about two-sided markets). The ones where it fails will either be pulled, left as generic free publicity for a company seeking odd-job contracting work, or given away freely by a programmer who wrote the software for fun. The most famous Palomar Software alum, Glenn Andreas, gives away all sorts of OS X widgets to cross-promote his software products.

Overall, I think Apple this year will win the hearts and minds of US developers the way it won high-end iPod users last year. Native iPhone apps are not particularly easy to develop, but as with the original Mac days, Apple will attract a wide range of motivated developers wanting to push the envelope, which will produce lots of new apps (probably mostly consumer oriented) that will make the iPhone more valuable.

Most of these app developers will fail, eke out a modest living, or cross-promote some other business (doing ports for big companies). But if Apple creates some important winners, then this new ecosystem will become a major part of the iPhone value proposition and Apple’s business model. (Not bad for a company that started with a closed device.)

All this will help Apple’s strategy of growing the US consumer smartphone market, because most US consumers are not using smartphones. The business smartphone market seems to have much higher penetration and switching costs, so I wonder how much upside is left there. Exchange support will allow access to the small fraction of sites that are using it as their mobile push solution. (Or, perhaps, the large number of sites with a small number of handsets.)

RIM and BlackBerry are in Apple’s sights: I think this could stunt RIM’s attempted consumer crossover, but it would be a long time (if ever) before it impacts the business market.

As for overseas, the tepid response to the pre-3G phone leaves little to judge from. The apps may make the difference — if the iPhone apps are much better than those from the better established Symbian S60/UIQ community, and manage to stay ahead of Android.

Thursday, June 12, 2008

Do no evil and channel Tom Jefferson

Sometime after declaring independence from the Brits, our 3rd president famously said:

were it left to me to decide whether we should have a government without newspapers or newspapers without a government, I should not hesitate a moment to prefer the latter.
Of course, this was before Medicare, crop price supports and even compulsory public education. (The Church and the Mafia have been able to handle such unrelated diversification, but I doubt any newspaper could pull it off).

In an interview yesterday with writer Ken Auletta, Google CEO Eric Schmidt said it had “a huge moral imperative to help” newspapers.

On that same theme, in a speech Monday in Washington (audio), Schmidt promised to do something to save newspapers:
"We all care a lot about this. Newspaper demand has never been higher. The problem is revenues have never been lower. So people are reading the newspaper they're just not reading it in a way where the newspapers can make money on it. This is a shared problem. We have to solve it. There's no obviously good solution right now."
However, that help doesn’t include bailing out shareholders, as several months ago Schmidt was quoted as rejecting vertical integration:
So far, we've stayed away from buying content. One of the general rules we've had is "Don't own the content; partner with your content company." First, it's not our area of expertise. But the more strategic answer is that we'd be picking winners. We'd be disenfranchising a potential new entrant. Our principle is providing all the world's information.
Blogs aren’t going to replace paid journalists, and clearly the print (or written) media are hurting. (Only this week, the #3 weekly newsmagazine US News announced it is dropping back to biweekly in print while emphasizing online.)

Google obviously has been able to generate online revenue, while newspapers have been facing a traumatic adjustment to online competition — failing dramatically at both charging for content and giving it away free.

Will it be enough? My hunch is that Google won’t pay enough for content to preserve the current newspaper staffing or business models.

Large big-city papers enjoyed monopoly rents, which benefitted both publishers and journalists. Without pressure on profits, the big papers tended not to be very efficient, with a few elite reporters writing a dozen articles a year. (At the smaller papers, we would write a dozen articles in a week, or certainly every 2 weeks.) So further wrenching changes are necessary before the big papers can hope to have a sustainable business model in the 21st century.

Starbucks helps commoditize mobile Internet

One of the topics that I have to cover with my MBA tech strategy students is about related diversification, vertical integration and cross-subsidization. Thirty years ago, SV startups made money selling a product, but clearly over the past decade synergies and economies of scope have brought a major change to all aspects of SV life: exit strategies, monetization opportunities, and (alas) the prevalence of competitors.

I made my post this morning about the challenges facing 3G mobile Internet in competition with wired Internet, and then ran off to meetings with two friends who are running mobile startups. Since the majority of the party were coffee addicts, when the tiny sushi-ya wanted its table back we ended up at the Starbucks down the road. (Since I don’t drink coffee, I only end up at Starbucks for meetings.)

One thing led to another, and my compatriots explained to me what the Starbucks free Wi-Fi (announced in February) really means. Sure enough, I went to the website, which explains:

Complimentary Wi-Fi for Starbucks customers

When you register your Starbucks Card and use it at least once a month, you'll receive two consecutive hours a day of complimentary Wi-Fi, courtesy of AT&T.
This is the ideal division of labor for our household. My wife drinks one or two $3 cups of coffee a month that she was going to drink anyway, and I get free Wi-Fi without having to go to a library (as I am right now).

The network connectivity is provided by SBC (which then provides free access to SBC DSL customers). SBC replaced T-Mobile, which this week sued the-company-that-pretends-to-be-AT&T for advertising the affinity card deal (begun June 3) before T-Mobile was completely gone from Starbucks (but they settled the suit yesterday).

So not only do the 3G carriers have to compete with people’s home, work and school Internet, they also have to compete with free Wi-Fi at 6,800 US Starbucks locations — subsidized by sales of double-mocha nonfat lattes, coffee mugs, and music by Dylan fils and has-been boomer stars (to pick two examples from today’s coffee shop branch). And, in at least one town, by companies subsidizing citywide Wi-Fi for their own purposes.

Add to that the increasing number of midsize airports providing free Wi-Fi. So far this year, I’ve seen it in San Diego, Las Vegas, Denver and now San Jose.

With competitors (or substitutes) like these, no wonder municipal Wi-Fi and WiMax never stood a chance. Among the wreckage is MetroFi, once the Bay Area’s great Wi-Fi hope, now a week away from going dark.

Will mobile broadband dominate?

The Times of London yesterday published an article entitled “Mobile to displace fixed-line internet 'within two years'”. To quote:

The mobile phone network may replace the copper wire as the principal method by which people connect to the internet in as little as two years, broadband experts predict.

Increased sales of laptops - which can be connected to the internet via the owner's mobile phone connection - the widespread roll out of high-speed mobile networks and the falling price of connecting to such networks have all contributed to the uptake of mobile broadband, they said.

One person in ten now regularly accesses the internet on a computer via a mobile phone connection, despite such services only having been on sale for less than a year, according to research released this week by YouGov. Of those, up to a third now connect their computers to the internet solely through the mobile network.

"This trend is as significant as the shift from home to mobile phones that took place in the mid Nineties," a spokesman for Top 10 Broadband, a price comparison site, said. "We predict that by 2010, mobile broadband will overtake home broadband as the default way to access the internet in the UK." A similar claim was made by Broadband Expert, another comparison site.
(It is not clear why they published an article this month based on a February press release by Top 10 Broadband).

It is interesting what the Times did not say. It did not claim that people would be giving up their 15" laptop screens for a 2.5" phone screen, but instead that 3G dongles would supplant DSL as the primary consumer Internet access. (I also have two much larger screens at home and work, but obviously I could use them with a dongle connected to the same laptop.) For my own use, both screen and keyboard size are key elements of the Internet value proposition (as when I research and post articles in this blog).

I don’t have any firsthand knowledge of British broadband (or the British broadband companies). On the one hand, I admit that in the past I have consistently underestimated mobile phone uptake in Europe. In 1996, I did interviews at Nokia and Ericsson and talked with my Finnish co-researchers and didn’t quite believe what they were telling me — including that penetration would go above 100% as some professionals carried two phones.

While I am willing to believe this will happen someday, the two year time horizon seems a little optimistic for several reasons.

One is shared broadband, which is relevant to the majority of households: mobile broadband makes sense for mobile singles, but does it apply to families? Three of us all use the same broadband subscription here at home, and of course we have friends with families of four or five that also share their DSL or cable modem with the kiddies. Will people buy 3? 4? dongles and 3-4 subscriptions?

The second is the assumption that wireless vs. wired technical capabilities will soon be at parity. One part of this is the assumption that mobile carriers can deliver the ITU-claimed speeds over shared bandwidth to a large number of users at the same time. The other is that the wired rivals will sit idly by, and be unable to deploy 20, 50 or 100 mbps fiber to the home (as is happening in Japan, Korea and the US).
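On the first point, the back-of-the-envelope math is unforgiving, since a cell sector’s capacity is shared among everyone using it at once. A crude sketch (the 3.6 mbps sector capacity is an assumed, early-HSDPA-class figure, not a measurement):

    SECTOR_CAPACITY_MBPS = 3.6  # assumed capacity of one 3G cell sector

    for active_users in (1, 5, 20, 50):
        per_user = SECTOR_CAPACITY_MBPS / active_users
        print(f"{active_users:3d} simultaneous users -> {per_user:5.2f} Mbps each")

One user sees something like the advertised speed; fifty simultaneous users each see a small fraction of it.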

The third is that mobility is valued by people enough to switch from their fixed Internet line to their mobile line. This seems applicable to people who spend a lot of time in their coffee shop — or can get a 3G signal to surf the web during a long above-ground train commute. But does it apply to a large number of people who spend most of their day in two places, home and at work?

The argument does appear consistent with the marketing pitch that Verizon Wireless is making in its latest ads: Wi-Fi is a prison that holds you in a very small cell, but their (EV-DO) 3G dongle will allow you to use the Internet anywhere. I don’t know how successful their pitch has been.

But this brings me to my main point. Internet access is a commodity — I don’t care what route my bits take, as long as they get to my computer (or other device) quickly. Commodity services are, well, commoditized: people shop based on price. Mobile phone penetration in the US and Japan went nowhere until there were more carriers, more competition and much lower prices.

So unlimited data plans at $50/month per person are not going to fly at our (admittedly lower middle class) household. A family plan of data service for all of us at $50/month looks attractive. Then we’d just need to find a carrier that has coverage for us here up against the mountain. (The mountain shown at the top of my home page).

Wednesday, June 11, 2008

A dozen milestones in IT openness

InfoWorld has published an interesting retrospective — woulda, coulda, shoulda — of key turning points in the evolution of the IT industry.

Modestly labelled “Tech's 15 turning points,” nearly all the milestones relate to issues of IT openness and the implications both for the firm (that chose to be open) and the rest of the industry.

Below are the titles (and my commentary):

  1. Apple's NeXT move: to deliver its first new OS in nearly 20 years, Caesar recruits Brutus to migrate Apple to open systems
  2. Dawn of Free Software: RMS starts a communitarian movement that occasionally ships software
  3. Microsoft dodges a bullet: Microsoft is closed, but not so closed as to be broken up
  4. Handspring launches the smartphone era: by licensing an off-the-shelf PDA OS, they create a new product category (at least in the US)
  5. That '70s spam: costless open access invites abuse
  6. Rise of MS Office: tying one proprietary platform to another makes both valuable
  7. IBM's second coming: even the best proprietary platform strategy runs out of gas
  8. The ARPANet is for porn: in an open marketplace, man’s basest imperative will eventually be monetized
  9. The Web loses synch: formal standards take too long, and your competitors will copy your best de facto standards
  10. Linux staves off SCO: proprietary Unix company unable to put the genie back in the bottle.
  11. Intel dispels MHz myth: marketing triumphs over technology every time
  12. NetWare falls to Net: the dominant PC proprietary network is vanquished by the open internet
  13. IT made accountable: industry excess invites regulatory excess, which perpetually wastes piles of money
  14. Apple flips chip strategy: you can't fight global network effects
  15. Outsourcing goes global: open borders mean more competition and global job mobility
Interestingly, the intro mentions two even more important turning points that are not in the article: Steve Jobs taking the 1979 tour of Xerox PARC, and IBM’s 1980 decision to outsource the CPU (and OS) for its new PC. But — being the subject of countless Cringely columns and even a movie — maybe those are so obvious as to be not worth mentioning.

Tuesday, June 10, 2008

Is Google toast on the handset?

Yesterday USA Today published a story entitled “Are Google, Yahoo the next dinosaurs?”

Here are a few excerpts:

Microsoft (MSFT) has been pushing its Windows Mobile operating system for years. Today, it's available from 50 handset makers and more than 160 mobile operators worldwide.

Even so, it's been tough slogging, says Phil Holden, director of online services for Microsoft.

"What we've learned is that loyalty on the PC doesn't necessarily transfer to the mobile phone," he says. The wireless world, he adds, "has a lot of different dynamics."

One thing everybody agrees on: The mobile Web is an advertising gold mine just waiting to happen.

The fledgling mobile search industry generated about $700 million in ad revenue in 2007, JupiterResearch estimates. By 2012, revenue is expected to hit $2.2 billion and keep rising. Jupiter analyst Julie Ask says mobile search could eventually eclipse the traditional Web, which currently generates about $20 billion in ad revenue.
If you read to the end, the gist of the story is “Is Google toast on the handset? Or will they use handsets to extend their Total World Domination? It remains to be seen.” You didn’t need to read 1,900 words to know that.

Still, it has a lot of good details on potentially disruptive technologies for the handset.

Hulu, GooTube and iTMS

In the battle of free online video, GooTube has been the preferred destination by far for user-contributed videos — some legal, some not. However, some firms have also cooperated by posting their own videos, as with my favorite band, the Eagles, whose publisher has posted official copies of ten current and classic Eagles music videos.

The iTunes Store (née iTunes Music Store) tried to get into the business of selling TV episodes, but got into a pissing match with NBC over price discrimination. (A principle iTMS was willing to compromise recently to get hit HBO shows).

Viacom (owner of Comedy Central) is trying to help NBC break the Google/Apple stranglehold by posting “The Daily Show” and “The Colbert Report” to Hulu. Of course, Viacom sued YouTube over the unauthorized reproduction of its shows, including these two topical shows from Comedy Central.

It makes sense that Viacom and others will band together to try to increase their power over the distribution channel and reduce Apple’s and Google’s. As long as they don’t offer more favorable terms to related entities — or conspire to put Hulu competitors out of business — this is a pro-competitive effort that sustains multiple distribution channels and more choice.

I think HBO and Comedy Central have something else in mind. Half(?) of the US doesn’t take premium cable channels, or at least not these channels. So this is a way to reach and monetize that remainder. Assuming, of course, that the reason we don’t subscribe to pay cable is that there’s nothing there worth watching.

SCOTUS trims patent excesses

The Supreme Court of the United States issued an opinion eliminating one of the more obvious excesses of patent royalty-seekers. Basically it says that if I license a patent to Intel to make a chip, then I shouldn’t be surprised (or seek additional compensation) when Intel sells that chip to someone to use in a phone or a PC or a router. This is an extension of the existing doctrine of “patent exhaustion,” which in layman’s terms means that once you pay for use of a patent, your customers don’t have to pay again.

The decision by Justice Clarence Thomas in Quanta v. LG Electronics was unanimous. The ruling was covered by the WSJ, the WSJ law blog, SCOTUSblog, and Patently-O, among others.

While this is seemingly a clear-cut victory for sanity, there are two dissents from the peanut gallery. An EFF attorney argues that the limited scope of the ruling may encourage further litigation. And Patently-O blogger Dennis Crouch argues that the SCOTUS ruling seems to allow contractual restrictions on the principle of patent exhaustion (i.e., requiring additional downstream payment), as he quotes from the ruling:

“LGE points out that the License Agreement specifically disclaimed any license to third parties to practice the patents by combining licensed products with other components. But the question whether third parties received implied licenses is irrelevant because Quanta asserts its right to practice the patents based not on implied license but on exhaustion. And exhaustion turns only on Intel’s own license to sell products practicing the LGE Patents.”

“No conditions limited Intel’s authority to sell products substantially embodying the patents. Because Intel was authorized to sell its products to Quanta, the doctrine of patent exhaustion prevents LGE from further asserting its patent rights with respect to the patents substantially embodied by those products”
Certainly it would be a bigger leap for the court to hold that “you can’t grant restricted rights via contract.” Of course, licensees would find a ready loophole if the licensor’s actions implied that full rights were being granted.

So is this a significant patent reform that produces further clarity in patent rights, or will it merely generate more tightly written contracts by licensors?

Friday, June 6, 2008

American consumers, leaders in mobile Internet!

Thursday the Merc had yet another article about Nokia trying to establish a toehold in the US market. Perhaps the coverage has something to do with Nokia’s research center and CTO being down the road in Palo Alto.

This particular article was about trying to get Europeans, Asians and Americans to use more data services. The Finns were surprised to learn that Americans were among the heaviest users of the mobile Internet:

When Nokia compared keystroke usage by European users to a U.S. group, it got a surprise: Some Americans were using non-calling services and applications at a much heavier rate than the Europeans. That flew in the face of long-held assumptions about Europeans (and Asians) being ahead of Americans in using mobile phones for more than calling.

In the European research, Nokia put users in three categories, based on how many megabytes of data they were sending or receiving per month for activities such as e-mail, picture mail, Web browsing or downloading customized features (ringtones, for example). Usage was divided as 0-to-2 megabytes per month, 2-to-4 and more than 4.

In the United States, Nokia planned on using the same categories. But it had to redefine the one measuring the heaviest data traffic. That's because it found the top-end users typically going over 8 megabytes a month. There also were indications, based on applications that Americans downloaded in addition to the ones pre-installed on the phone, of a greater willingness to experiment and customize than Europeans showed.
I agree with the analyst quoted: this is not all that surprising.
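
To make the segmentation concrete: the tier boundaries below come straight from the quote, but the function itself is my own hypothetical sketch (in Python) of how such a bucketing might look, not Nokia’s actual methodology:

    # Nokia's European usage tiers as described in the article; the
    # function is my own hypothetical sketch, not Nokia's methodology.
    def usage_tier(mb_per_month):
        if mb_per_month <= 2:
            return "0-2 MB"
        elif mb_per_month <= 4:
            return "2-4 MB"
        else:
            return "4+ MB"

    # The U.S. surprise: top-end American users routinely exceeded 8 MB,
    # so the top European bucket couldn't distinguish them at all:
    print(usage_tier(9))   # -> "4+ MB": off the original scale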

The reason the mobile Internet will have a problem in the US is that we’re well conditioned over the past decade to do lots and lots and LOTS of stuff with our PCs on the Internet, and it will be hard for a little 2.5" screen and T9 keypad to replicate that. OTOH, once we have a good mobile Internet device — and a reasonably priced data plan — we will do all that stuff with our cellphone at Starbucks or in a library, and not just on our laptop.

Seems to me that's the secret of the sudden success of the iPhone — a mobile device from a PC company that’s most closely replicated the desktop/laptop Internet experience. But the article on Nokia was too polite to make this link.

Ballmer is the nice one?

On June 27, Microsoft Chairman Bill Gates shifts from five days a week to one. It will be the end of a long era in the PC industry, and one of the longest continuous periods of leadership by a tech industry entrepreneur. Ken Olsen of DEC comes to mind, but his run, like Scott McNealy’s at Sun, didn’t end well; I suppose Larry Ellison will hang on for a few more years, just for the bragging rights over his über-nemesis.

Thursday morning, the WSJ had a long article on the close (but occasionally testy) relationship between Gates and his Harvard poker buddy, Steve Ballmer. (Official copy here, unofficial copy here).

There were some interesting revelations (like an abandoned attempt to buy SAP in 2003) and some familiar stories (like the role of Ray Ozzie in taking over Gates’ technical leadership of Microsoft, or Microsoft killing NetDocs long before Google Docs came along). But the big point was the difficulty Gates had in letting go:
Eventually, in January 2000, he gave his chief executive title to Mr. Ballmer. Mr. Gates became Microsoft's "chief software architect," a new position that, in theory, was below that of Mr. Ballmer.

Soon, the two men clashed as Mr. Ballmer tried to assert himself in his new job. As the firm's iconic leader, Mr. Gates still held sway that wasn't tied to a title: In meetings Mr. Gates would interject with sarcasm, undermining Mr. Ballmer in front of other executives, Mr. Gates and other Microsoft executives say.
...
Mr. Gates concluded that it was he who needed to change most. "Steve is all about being on the team, and being committed to the mutual goals," Mr. Gates said. "So I had to figure out, what are my behaviors that don't reinforce that? What is it about sarcasm in a meeting?" he said. "Or just going, 'This is completely screwed up'?"

Mr. Ballmer says that, as the top executive, he had to learn when to override decisions and when to just "let things go," he said. "We got it figured out," he said.

Soon, Mr. Gates started to hold back negative comments in meetings. During one deliberation among the executives who reported directly to Mr. Ballmer, Mr. Gates deferred to Mr. Ballmer on an important decision, prompting Microsoft executives to silently glance at each other with surprise, recalls Microsoft Vice President Mich Matthews.
Normally industry foes think of Ballmer as the prince of darkness — as with his famous “Linux is a cancer” comment. However, the article specifically said Ballmer “worked to settle Microsoft’s many lawsuits, taking a more conciliatory line than Mr. Gates typically had.” That implies that Gates saw the lawsuits as a personal affront to his baby, while Ballmer saw resolving them as just another business problem.

While there were interesting revelations, most of it was about the former CEO, who (supposedly) will be letting go in three weeks. It wasn’t as though there were great intimate insights into what makes Ballmer or Ozzie tick.

The other interesting post is from my friend Prof. Shane Greenstein of the Kellogg School. Shane, who has a regular column in IEEE Micro, has penned a two-part retrospective on the three decades of Gates’ tenure. The basic question: was Gates smart or just lucky? Shane leans towards the latter, and comes down firmly on the side of those who see Microsoft as abusing its market power over the past decade.

Here is Shane’s conclusion:
There is one enormous irony in the long arc of Gates’ managerial career. His temperament, savvy, and intellectual breadth are qualities that would have made him an extraordinary serial entrepreneur, founding one organization after another. Yet, the road he traveled was quite different: continual employment at a single firm for over 30 years.

That ultimately led to new types of challenges in a corporate setting and the singular tragedy of Gates’ career. He tried to retain the unqualified self-serving approach that had worked so well for him as an entrepreneur, even when the actions of a dominant firm required a different touch.
This also came up during the panel discussion Monday — Microsoft’s impression of its dominance lagging the reality until it was forced by various courts to back down.

Microsoft still has persistence, so if it can convince industry (and government) that it’s tamed its ruthless streak, perhaps it still has a few good years in front of it.

Thursday, June 5, 2008

Platform strategies: the academic view

Tuesday night I got back from a quick trip to London, which meant a lot of time in planes and five nights arguing with my body (mostly losing) about when (and when not) to go to sleep.

The reason I got on the plane (other than that I could) was a conference at Imperial College called “Platforms, Markets and Innovation.” The conference was organized by Annabelle Gawer, a lecturer at the Tanaka Business School. For us Yanks, TBS dean David Begg noted Imperial’s role as England’s leading school of technology: 50% medical school (the largest in Europe), 25% engineering, 20% science and 4% business.

Annabelle has spent the last decade studying the business of platform management, the topic of her 2000 MIT dissertation, which became a 2002 book from HBS Publishing.

I’ll admit I was a little slow to warm to Annabelle’s conception of the platform. But after reading her excellent 2007 article on Intel’s platform strategies (with Rebecca Henderson), I blame it on HBSP. Their formula for selling managerial books encourages (nay, requires) overclaiming and broad assertions that the book solves all the world’s problems — past, present and future. (For example, Clay Christensen’s excellent 1997 book suffers from this problem.) Meanwhile, academic papers (through the peer review process) tend to be more modest about claims that can be directly supported by the evidence, at least until you get to the part marked “implications.”

Annabelle and I are among the few people who’ve focused our academic energies on IT industry platform competition. We both build upon the masterful 1999 paper by my friend Shane Greenstein (and his former dissertation advisor Tim Bresnahan), which explains the success of computer platform strategies from 1965 to 1995. My own 2000 paper (with Jason Dedrick) on PC platforms emphasizes the role of technical control of a platform, while my most cited work (a 2003 paper on open source) is one of the first academic papers on open platform strategies. (A 1993 paper on Sun Microsystems by Raghu Garud is about open platforms but doesn’t call them that.) The difference is that Annabelle has devoted her full energies to platforms over the past 8 years, while I’ve also been doing research on open source and open innovation.

Monday’s conference brought together the best platform strategy researchers in the world, including a lot of jet-lagged Americans as well as an equal number of European researchers. The goal was to air and discuss chapters from Annabelle’s forthcoming edited volume called (surprise!) Platforms, Markets and Innovation.

From a 15-minute PPT deck, it’s not always possible to understand the heart of an intellectual argument. Nor can I really capture, in a 750-word blog posting, 12 papers presented over a six-hour period.

Instead, let me highlight those papers that offered new insights for my own work on platform strategies:

  • Annabelle’s introduction to the conference and the book presents her latest refinement of what platforms are and are not. (Unlike some theorists, she goes to great lengths to draw boundaries.) Her current interest is in linking platform design (and strategy) to platform industry dynamics.
  • Jason Woodard of Singapore Management U. (with his advisor Carliss Baldwin of HBS) presented very intriguing work to define a more precise way to represent platform architectural relationships — and to use that to predict which parts of the platform will face competition and which parts won’t.
  • A mathematical model by Geoff Parker and Marshall Van Alstyne about third parties adding value on top of the platform — and when that layer should be subsumed into the operating system to provide a building block for others (e.g., OS X bundling a database, Windows bundling a browser); a toy sketch of this tradeoff follows the list.
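
I can’t reconstruct the Parker/Van Alstyne math from a 15-minute talk, but the flavor of the tradeoff might look something like this toy comparison, entirely my own sketch with made-up parameter names rather than their model:

    # Toy sketch of the bundling tradeoff (my own illustration, NOT the
    # Parker/Van Alstyne model): subsume a third-party layer into the
    # platform when its value as a shared building block for follow-on
    # innovation exceeds what the platform captures by leaving it outside.
    def should_bundle(building_block_value, complement_value_captured):
        return building_block_value > complement_value_captured

    # e.g. a browser: huge value as a building block for web applications,
    # modest value to the platform as a standalone third-party complement
    print(should_bundle(10.0, 2.0))   # -> True: bundle it

The hard part, of course, is estimating those two values; the toy version just makes the decision rule explicit.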
Of great personal interest to me was the paper by Shane Greenstein on openness in the Internet. This is picking up a thread of a conversation between us going back to 2004, when he invited me to present a paper at his conference on standards policy. That conference paper became a chapter in his 2006 book with Vic Stango — probably my favorite book chapter of the 11 I’ve authored thus far. (The 2006 chapter on open standards and the 2003 paper on open source strategies are the two pillars of my research on open IT strategies).

In his presentation Monday, Shane contrasted the two great incipient platforms of 1995 and asked whether openness mattered. His conclusion was that openness made no difference to large business vendors (the IBMs and MCIs of the world), who would have participated in a closed Internet, but it did keep IETF volunteers active and energized. On the negative side, since the platform isn’t owned, there’s no obvious leader to orchestrate a rollout (as with the August 1995 “start me up” celebration). I am hoping the final version will bring out the contrast between the IETF and the W3C, since both fit most definitions of openness but corporations were much more prominent in the W3C.

As I said, this brief summary is not enough to capture the intellectual heft of the arguments: buy the book! As someone who did a 2000 dissertation on standards competition, I thought the field was going to fade away. But Greenstein is continuing the work in economics, while Gawer, Baldwin and Woodard (among others) are developing new insights on platforms. Meanwhile, even the “two-sided markets” literature is telling us things we didn’t know before about standards and platforms.

The session concluded with an industry-academic panel discussion of platform strategies. More on this later.

Wednesday, June 4, 2008

The importance of unlearning

Among academic research on knowledge, creativity and innovation in the 1980s and 1990s was a smaller body of research on forgetting (or unlearning). The importance of unlearning was one of those insights that seemed counter-intuitive at first, but over time the argument grew on me until I realized it captured an essential truth of organizational change.

Perhaps the most-cited article here is the 1986 article by CK Prahalad and Rich Bettis on the “dominant logic” — the idea that managers filter their environment (particularly opportunities) through a cognitive framework rooted in past results. A simpler way to put it is the old saying “When the only tool you have is a hammer, everything looks like a nail” (attributed to Maslow).

Two things happened Monday to link this idea to Microsoft, Windows Mobile and the company’s future role (if any) in the smartphone space. (Time spent in a steel tube prevented me from posting this earlier).

First, serial entrepreneur (and loyal blog reader) Doug Klein got an op-ed published on Forbes.com in which he comments on Microsoft’s trouble thus far in transferring its desktop Windows quasi-monopoly to the handset. His first paragraph is punchy and to the point:

As a start-up focused on delivering services to the next generation mobile Internet devices, we are constantly amazed at the failure of Microsoft to repeat its Windows and Office success in the mobile world. And the problem is just that: Microsoft has a history of seeing all opportunities within the narrow definition of its existing world and past wins. Repeatedly they have seen new devices, for example, as simply "little PCs." This has led to a long litany of disappointments in handhelds that will repeat unless a new approach is realized. [emphasis mine]
Second, I spent all day Monday at a conference on platform strategies hosted by Annabelle Gawer of Imperial College in London (more on the conference later). The closing event was a panel discussion with four industry and six academic representatives (including yours truly).

After Annabelle asked the industry panelists how their platform strategies have changed, I jumped in with a question for the Intel and Microsoft representatives. I didn’t have my laptop on the panel (and couldn’t have typed while asking the question anyway), but roughly what I asked was: “You have a dominant position in the desktop platform, but it hasn’t worked that way in the mobile space. What’s different?”

The Intel rep, Alberto Spinelli, said that the mobile ecosystem is more complex, in that Intel has to try to influence government bodies and standards committees. Also, the industry is facing a convergence of PC and communications — not just in technology but in business and industrial dynamics. (Later on, he also emphasized the technical aspects of success, such as their Atom microprocessor being introduced this week in Shanghai.)

The Microsoft rep was Simon Brown. These are my written notes of his answer:
“Rule #1, there’s no single playbook or rulebook. … It’s pretty clear there will never be another business like the Windows business.” He then added that the second best business is the Office business, so there won’t be anything close to these two.

Microsoft has been working for years on Windows Mobile, and it has been a highly iterative process. “You have to be prepared to unlearn a lot of things.” For example, Microsoft has to do things for handset makers it never did before [for PC makers] and thus has to learn to “rewrite our own rules”.
In the program, Simon was listed as “Vice President, Field Evangelism, Developer and Platform Group”. He took on these responsibilities for EMEA in 2003 and worldwide in 2007, so obviously he’s spent a lot of time thinking about ecosystem management. I thought he was one of the most insightful and articulate speakers on the panel, and the fellow academics that I talked to afterwards seemed to agree.

Ironically, a year ago I blogged on mobile phone platform fragmentation after a visit to Annabelle at Imperial. By “platform” I meant mobile phone operating systems such as Microsoft’s; on the hardware side, ARM-licensed processors have a near monopoly, a monopoly that Intel hopes will someday be its own.

If Microsoft hopes to consolidate the world around Windows Mobile, it has its work cut out for it. I don’t see anyone displacing Symbian in the global market, while in North America both the iPhone and Blackberry will continue to grow more quickly than Windows Mobile. Still, Simon’s answer suggested Microsoft has figured out that “same old, same old” won’t work.

This reminds me of something an Apple friend used to say. The challenge of competing with Microsoft was not their 1.0, which was almost always terrible — but if it was an important market, they had enough money and determination to keep iterating and making it better, until they eventually had a winning product. (Sorta like the Terminator, except that you get at least one chance to live.)