Saturday, August 30, 2008

A positive spin on Wikipedia chaos

Maverick Sen. John McCain’s selection of maverick Gov. Sarah Palin caused a lot of confusion in political and journalism circles Friday. As with other events, it sent a lot of people to Wikipedia to learn more.

One of those going to Wikipedia was Mercury columnist Chris O’Brien, who’s become its most insightful tech columnist since Dean Takahashi jumped ship earlier this year.

O’Brien used his own curiosity and a major news event to study (informally) how a previously little-known topic becomes elaborated by cooperative information production. As he reported:

As I write this late Friday afternoon, there had already been more than 1,200 edits to her entry. What I saw unfold over the course of the day was a chaotic, complex, messy process, but one that ultimately led to an article that was far longer and deeper.

Wikipedia tends to push people's buttons. Founded in 2001, the online encyclopedia allows anyone to edit articles, relying on the wisdom of the crowd to contribute and to improve the amount and quality of information. I've found that people either see this as a symbol of what's best about the Web, or a sign that society no longer cares about accuracy and expertise.

I saw the frenzy around Palin's entry as an interesting test case. While she had a decent-size entry before Friday, she was hardly a major public figure. So there was plenty left unsaid there, plenty of gaps to be filled in.
Eventually, as with other controversial topics on Wikipedia, the site received too many contributions — and too many distortions and fights — and had to ban anonymous contributions to the Palin Wikipedia entry.

It used to be that Wikipedia marked prominently that a page was partially or totally “locked down” by administrators. Today, if you go to a page (without signing in to Wikipedia) on a controversial topic — like Palin, abortion, the Iraq war — the “edit this page” option is missing, unlike the entries for Michael Palin, adoption, or the Iran-Iraq War.

O’Brien seems to see a positive side in Wikipedia’s ability to deal with a sudden interest in the one-term Alaska governor.

The article is very detailed and — from what I can see — relatively balanced. Here are two sentences that are better (in terms of clarity and neutrality) than one often sees nowadays from the AP, once the gold standard in fairness:
Palin has strongly promoted oil and natural gas resource development in Alaska, despite concerns from environmentalists. She also helped pass a tax increase on oil company profits
In some other places, there is subtle editorializing by juxtaposing two facts together, even though the individual sentences are neutral. This is hardly unique to this page (or Wikipedia), although it probably wouldn’t be found in a professional encyclopedia.

It appears that the Palin entry has drawn a wide range of supporters and critics, and that by a very labor-intensive intervention, Wikipedia administrators have shaped a comprehensive (and relatively fair and accurate) profile in a very short period of time. O’Brien sees this as a good thing.

I think the jury is still out. The challenge for Wikipedia has always been in the thinly-populated pages, where any bozo (or any group of bozos) can say what they want and not get detected — or use their persistence to shout down people who actually know something.

The Seigenthaler libel is a serious example of this problem. My own unsuccessful fight (as a business historian) over the misleading use of “Fairchildren” (in a page I created) caused me to quit Wikipedia. It is (slightly) reassuring that the edit war is over and that the current discussion is relatively accurate in how it presents the various claimed uses of the term.

Of course, I recognize that — along with professional journalists, librarians and a small minority of Americans who profess to care about the “truth” — I’m on the losing end of a Sisyphean fight against Wikipedia as being “good enough.” The Internet and a wide range of volunteer wiki contributors, bloggers, citizen journalists or whatever you want to call them are fueling the commoditization of information.

A decade ago, then-UCI professor Yannis Bakos showed that competition for information goods (i.e. those goods with a zero duplication or distribution cost) will reduce the price to zero. So much for the value of the accurate, professionally-developed encyclopedia of my childhood.
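Bakos’s result is basically the familiar undercutting logic of price competition taken to its limit: when the marginal cost of one more copy is zero, the price floor is zero. A minimal sketch, with illustrative numbers (not taken from Bakos’s model):

```python
# Toy Bertrand-style undercutting: each round the trailing seller cuts the
# price by one cent, until cutting further would mean selling below cost.
# Prices are in integer cents to avoid floating-point drift.

def bertrand_price_cents(start_cents, marginal_cost_cents, undercut_cents=1):
    """Return the equilibrium price: undercutting stops at marginal cost."""
    price = start_cents
    while price - undercut_cents >= marginal_cost_cents:
        price -= undercut_cents
    return price

# A paper encyclopedia has real duplication and distribution costs;
# a digital copy costs essentially nothing to duplicate.
print(bertrand_price_cents(100_000, 1_500))  # paper: price floors at cost (1500)
print(bertrand_price_cents(100_000, 0))      # information good: price falls to 0
```

The punchline is the second call: with zero marginal cost, nothing stops the undercutting short of free.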

Friday, August 29, 2008

Steve's not dead yet!

More than 30 years ago, the most famous skit from Monty Python and the Holy Grail concluded with the line “I’m not dead!”.

Thursday, Bloomberg (the news service, not the politician) apparently sent Steve Jobs’ obituary to clients, even though he’s not dead yet. As the (London) Telegraph summarized it:

The opening sentence described Jobs as the man who “helped make personal computers as easy to use as telephones, changed the way animated films are made, persuaded consumers to tune into digital music and refashioned the mobile phone.”

The 2,500-word piece also included praise for Jobs from his rival Microsoft boss Bill Gates, details of his rise from college drop-out to technology billionaire, and a list of his family “survivors”.
Bloomberg later retracted the (contingent) obituary about the Apple CEO.

Of course, the current issue of Forbes has a list of CEO candidates to replace Steve if (when) he leaves. Since Jobs will surrender control of Apple a 2nd time only when they pry it from his cold, dead fingers, such speculation is premised upon Jobs getting sick and/or dying:
Jobs' health status is still unclear. Apple p.r., insisting the boss' health is a "private matter," has done little to end speculation that the pancreatic cancer for which he had an operation four years ago may have returned.

Since anything can happen to a boss at any time, Apple's board would be smart to have an insider on deck to take over as an interim chief, says Anthony W. Scott, president of recruiting firm ChampionScott Partners. But for the longer term, he says, the company may need a star from outside. The problem: Apple has solid role players but none with the Jobsian mix of design, marketing and business smarts. Jobs' demanding approach and tight control has chased away some strong replacements.
Forbes listed two insiders and six outsiders on its roster of possible replacements, including heir apparent and alter ego Tim Cook — the short-term choice — but no plausible long-term successor. The candidates are all bureaucrats, mostly middle managers who rose to the executive (but not CEO) suite — plus Ken Kutaragi, Sony’s PlayStation guru.

If I had to choose from the assigned list, I would probably make Tim Cook (or some other suitable business-oriented grown-up) chairman, and appoint as president a passionate, visionary product genius. Perhaps Jon Rubinstein will be bored with his gig at Palm and come back to the mother ship.

The alternative is to acquire (or merge with) some startup with a genius entrepreneur. (That’s how Steve Jobs and NeXT came aboard). But who? There’s no obvious choice from the fastest growing 25 tech startups. Off the Business Week top 100, the only acquisition that comes to mind is Research in Motion.

Yes, they could hire a passed-over (or ambitious) Google exec. But Motorola didn’t do so well hiring a passed-over Sun Microsystems exec, and HP spent years in turmoil after it hired a Lucent exec who wanted to go far.

But right now, no one’s making products that are kicking Apple’s butt, so where would you find executives who would make its products even more compelling? Apparently, stockholders are also not convinced, and so the shares are down about 2% this week.

Postscript: As I was preparing for bed last night, I realized I was focusing entirely on the company and not at all on the man — who, no matter how much larger than life he might appear, is still just a man. None of us know the time or place of our demise. Steve has three children at home (likely tweens or teens) and one grown daughter, and so whether or not he has imminent health problems, I hope that he’s spent the time recently to give them the father they need — for the day (whether 1 year away or 30) that he’s not around anymore.

Thursday, August 28, 2008

Clay chronicles a decade of disruption

In 1997, Harvard’s Clay Christensen published the book The Innovator’s Dilemma. I’d have to count it as one of the two most influential and widely read innovation books in Silicon Valley — the other being Geoff Moore’s Crossing the Chasm. (A third seminal book on innovation, Hank Chesbrough’s Open Innovation, is having much more influence in the heartland of the US and in Europe than here in the Valley).

The point of Christensen’s 1997 book is that sometimes a cheaper and inferior solution beats out a more sophisticated (but expensive) solution. In this case, talking to your existing customers won’t help because they won’t be interested in the less-capable solution. Christensen did his dissertation on disk drives (going from minicomputer down to laptop sizes) and then extended these ideas to other industries.

This week in Forbes, Christensen published a 10-year retrospective of major disrupters, from 1997-2006. Four of these were IT industry choices that will be very familiar to blog readers: Google (1998), BlackBerry (1999), Skype (2003) and YouTube (2005). Equally familiar were the three entertainment-related choices: Netflix (1997), the iPod (2001) and the Wii (2006).

Christensen must be doing some consulting in health care, because two of his choices were in this arena: MinuteClinic (2000), a drugstore-based diagnostic chain, and Philips’ HeartStart (2004), a $1,500 home defibrillator.

In fact, the Forbes column is a syndication of his consulting company’s Innovator’s Insights newsletter. As with other such newsletters, the goal is to position the consulting company as a thought leader and improve its brand awareness and image. Christensen co-founded Innosight in 2000 with a former Booz Allen consultant.

The 10th disrupter? My personal favorite: the 2002 Roomba, which was a much lower cost (and higher volume) robot than any iRobot had ever previously built. I don’t know if (or when) we’ll buy a Roomba, but when we remodel our house we’re going to design the ground floor for the Scooba so that no one has to hand wash a floor ever again.

Wednesday, August 27, 2008

HP doubles down on services

On Tuesday, HP closed its $14 billion acquisition of EDS (Electronic Data Systems), the former powerhouse of IT services founded by Ross Perot. The acquisition is the 2nd largest for HP, after its $20 billion purchase of Compaq in 2002. (Ex-Merc reporter John Paczkowski makes a snide comparison on the WSJ blogsite).

The Merc reports that HP had 172,000 employees and has added 140,000 EDS employees. An unspecified number of the latter will be axed when CEO Mark “Mr. Efficiency” Hurd makes his Sept. 15 analyst presentation.

However, I have to quibble with one adjective in the report:

The massive deal, which HP says will expand its business in the lucrative field of technology consulting and outsourcing services, is the company's biggest acquisition since HP bought Compaq for nearly $20 billion in 2002.
HP had profits of $7.3b on revenues of $104.3b, a net after-tax margin of 7.0%. EDS had net income of $0.7b on revenues of $22.1b, a net after-tax margin of 3.2%.
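A quick back-of-the-envelope check of that arithmetic, using the reported figures (in $ billions):

```python
# Net margin = net income / revenue, expressed as a percentage.
def net_margin_pct(net_income, revenue):
    return 100.0 * net_income / revenue

hp_margin = net_margin_pct(7.3, 104.3)   # HP, FY revenue and profit
eds_margin = net_margin_pct(0.7, 22.1)   # EDS, FY revenue and net income

print(f"HP:  {hp_margin:.1f}%")   # HP:  7.0%
print(f"EDS: {eds_margin:.1f}%")  # EDS: 3.2%
```

EDS earns less than half of HP’s margin per dollar of revenue, which is the point of the quibble with “lucrative.”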

Services are high growth, but they’re not high margin. They may even become a commodity as services expertise becomes more widely dispersed.

The oft-drawn parallels to IBM are apt: IBM has also bet heavily on services, and HP is following them to the promised land (or over the cliff).

Tuesday, August 26, 2008

New wrinkle #1 on newspaper business models

This week I found two new wrinkles on efforts by newspapers to fix their broken business models, being decimated by the commoditization of information (including news) in the Internet era.

One new wrinkle was the flyer in the San Jose Mercury News, advertising their new “Mercury News e-Edition.” As the flyer says

Starting August 2008, the Mercury News is offering a FREE TRIAL of the new e-Edition to our current print subscribers. And, as an existing subscriber, you can get access to an exact digital replica of the Mercury News in PDF format every day at substantial savings if you subscribe to the e-Edition. You’ll see every story, picture, and ad for as little as 4¢ per day. Offer ends August 31, 2008.
Based on the flyer, I tried the demo, which was very versatile:
  • Unlike the usual HTML version, you see actual page images, with photos and ads.
  • It has a much better browsing model than any HTML or PDF solution. As with the dead tree version, flipping through pages is quick, and clicking on a given story shows it in expanded text to the right.
  • The “jump” is a hyperlink, so you can easily continue with the story.
  • Text can be enlarged, an automatic “large print” edition for aging Boomers and pre-Boomers.
  • It has a low-res dialup version (marked “56K”).
  • You can get a PDF of any page, and download a ZIP archive of one or all of the pages. The PDF is fully printable with selectable text.
  • They claim to allow laptop and PDA browsing, although to read it on an airline you’d need to download the PDFs before takeoff.
Minimal digging shows that the Merc is using NewsMemory, a turnkey service that proudly proclaims
NewsMemory offers publishers high-tech, headache free solutions in creating electronic editions, electronic archives, electronic tearsheets, and electronic clips.

Because the NewsMemory solutions are primarily based on service driven and turnkey models, newspapers are able to get started with little or no investment and can immediately begin to create new streams of revenue.

NewsMemory electronic editions are the most “publisher-friendly” e-editions available on the market.

NewsMemory’s technology automatically converts any publication into an online e-edition without the need for heavy PDF downloads, proprietary viewers, or slow plug-ins.

NewsMemory provides a turnkey solution with no need to change the newspaper’s workflow and it seamlessly integrates with existing technologies.

NewsMemory is “electronic editions made easy.”
The turnkey solution comes from Tecnavia, a Swiss company that has been peddling NewsMemory for more than five years. The service is already being used by Gannett papers, the largest US newspaper chain.

OK, this is a nice technology, generally improved (and more cleanly and cost-effectively integrated) from the conventional dead tree version. The website promotional copy even reminds Bay Area ecofreaks that they can “Save a tree.” (I kid you not).

But what about the business model? It’s the same old same old — pay us a weekly subscription. All electronic is $59/year vs. $130 for all paper and $69 for a hybrid electronic/paper with paper only on Sunday. (There are some contradictions in the website pricing — clicking through on $49/year becomes $59/year). There is also a suggestion that the newspaper plans on using the e-mail addresses for spamming (under the guise of single opt-in e-mail advertising).

It’s a great solution for people who like the idea of newspapers, but what about those who think Yahoo News or CNN.com is good enough? Are they going to go to the trouble of signing up and paying money?

This online edition makes one thing very very clear. Why do I give a hoot for the national or foreign editor sitting in San José? I don’t need most of section “A” or the national business or sports coverage — I can get what I need from the NYT or LAT or WSJ or Time or Sports Illustrated. What I can’t get from these sources is news of my Almaden neighborhood, San José, or California.

My prediction: within five years, this sort of approach will migrate to a free, local-only paper available online, supported by tightly targeted local advertising. For the national/international news, there will be an electronic front-end provided either by the national chains (Gannett, Tribune, NYT, McClatchy) or syndicators (NYT, LAT/Washington Post).

Monday, August 25, 2008

Endless beta

In looking up citations of our 2006 book Open Innovation on Google Scholar, I found that the data miners confused two books on the topic with similar names. Lacking any better recourse, I decided to report the bug to Google. The following day, I got back this reply:

From: "Scholar Support" <scholar-support@google.com>
To: "Joel West" <XXX@sjsu.edu>
Subject: Re: [#XXXXX] Mis-indexed article

Hello Joel,

Thank you for your note. As you may have noticed, Google Scholar is in beta, and we're currently working out some of the kinks. We appreciate your bringing this indexing issue to our attention, and we'll pass it on to our engineering team. Thank you for your assistance in improving Google Scholar.

Sincerely,
Greg
The Google Scholar Team
In looking up when I first tried Google Scholar, I note this comment in a November 2004 e-mail that I sent to my SJSU colleagues:
there are still problems, particularly in listing the same book or article twice (because it was cited two different ways).
These problems of not matching references still exist, and perhaps they are unsolvable without human intervention. (Personally, if everyone used DOI in their references, there would be no problem). Here, I wonder if trying to compensate for this matching problem erred too far in the other direction — reporting a false match.
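The DOI point is that matching becomes a trivial key lookup instead of fuzzy string matching. A sketch of the idea — the records and DOIs below are invented for illustration, not real Google Scholar data:

```python
# Group citation records by DOI: differently-formatted citations of the
# same work collapse into one entry, with no fuzzy title matching needed.
from collections import defaultdict

citations = [
    {"doi": "10.1000/xyz123", "cited_as": "West & Gallagher (2006)"},
    {"doi": "10.1000/xyz123", "cited_as": "West, J. and S. Gallagher, 2006"},
    {"doi": "10.1000/abc456", "cited_as": "Some Other Open Innovation Book"},
]

works = defaultdict(list)
for c in citations:
    works[c["doi"]].append(c["cited_as"])

# Three citation strings, but only two distinct works:
print(len(works))                    # 2
print(len(works["10.1000/xyz123"]))  # 2
```

Without a shared identifier, the indexer has to guess whether two citation strings name the same work, which is exactly where the double-listing and false-match errors creep in.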

That, however, was not my point in raising this. While I love Google Scholar, perpetually hiding behind the “beta” claim is a cop-out. Before Google, no self-respecting software company would run a public beta for 4 years. Of course, this is a bad habit of Google’s: Google News went through beta for at least 3 years, and Grand Central also remains in beta after its 2007 acquisition.

A year ago, Dean Giustini made this exact point in more passionate terms:
At the outset of today's post, let me say that perpetual beta is a pointless Web 2.0 notion (a cop-out) and decidely unhelpful to academics and librarians. Beta-testing. In beta. Not quite finished yet. To be released in full soon. At times, the race that Google scholar seems to be running is against itself - both tortoise and hare. GS has few real competitors, and is cavalier about how it is developing. Why does it do this? you ask...Because it can.
He goes on to berate Google for its secretive responses to an interview with a library-oriented journal.

Dean’s absolutely right: Google does what it can because it can. Total World Domination has its privileges.

Even given that, I don’t understand Google’s motivations in this particular case. For Google Scholar, why is Google either “beta” or secretive about its usage, given that it has no competitors and no discernible business model? (Yes, it certainly would be possible to charge academic databases for paid download click-throughs, but that’s chump change for the $20b/year Internet behemoth).

The thing seems like a public research experiment, rather than a production service — a toy made by PhDs for other PhDs.

Saturday, August 23, 2008

Oxymoronic deception trend

From Friday’s WSJ:

To combat costs, Burger King is testing its $1 Whopper Jr. with smaller hamburger patties — down to two ounces apiece from 2.2 ounces.
From the Random House dictionary via Dictionary.com:
whop·per [hwop-er, wop-]
–noun Informal.
1. something uncommonly large of its kind …


jun·ior [joon-yer]
–adjective …
7. being smaller than the usual size: The hotel has special weekend rates on junior suites. …
Even if BK’s oxymoron once made sense, it’s clear that the latter half of the phrase is winning out over the former.
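The patty shrink is a disguised price increase: the sticker stays at $1 while the price per ounce of beef rises. A quick calculation (assuming the $1 price and the patty weights from the WSJ item):

```python
# Same price, smaller portion: compute the effective per-ounce increase.
def price_per_oz(price_dollars, ounces):
    return price_dollars / ounces

old = price_per_oz(1.00, 2.2)   # original Whopper Jr. patty
new = price_per_oz(1.00, 2.0)   # test-market patty

pct_increase = 100 * (new - old) / old
print(f"{pct_increase:.0f}% more per ounce")  # 10% more per ounce
```

A 0.2-ounce trim sounds trivial, but 2.2/2.0 is a 10% increase in what each ounce costs.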

Of course, this is but the latest in a trend of deceptive price increases via reduced portions. My personal peeve is the vanishing half gallon of ice cream, led by Dreyers/Edy’s and Breyers. See this particularly profane exchange with the CEO of Dreyer’s, as well as Breyers blaming Dreyer’s for the trend. (However, it appears this time Breyers was ahead of Dreyer’s).

Friday, August 22, 2008

An antidote to iPhone complacency

My posting last night on Apple vs. Nokia got picked up by Seeking Alpha. It’s gratifying to get the exposure and discussion, although (as with any online discussion) the quality of the posts was variable.

Most of all, I was surprised to see the suggestion that I was too pessimistic on Apple. Readers of the Seeking Alpha site don't know me the way that my blog readers do, so let me fill in a few blanks.

I bought my first Mac in January 1984 and have never owned a DOS or Windows machine. I wrote a book on Mac programming and wrote columns or articles for 3 Mac publications. I started a Mac-only software company in 1987 and ran it for 15 years. Before the Jobs II era, we would have said "I bleed in six colors."

Today I'm a little more dispassionate as an academic strategy researcher. I did my PhD thesis on Apple losing market share in the US and Japan. I published a book chapter about why the conventional wisdom on Apple's cloning decision was wrong. Now I teach technology strategy to MBA students and consult to software companies.

My long history with Apple is EXACTLY why I think the ahistoric Apple bigots (particularly the iPhonatics) are missing the boat. In the 1990s, Apple had great products and technologies and still almost died. I know, I was there, and it’s why in 1993 I started looking for a new career to replace being a Mac ISV.

Yes, Apple has enjoyed a good run of innovation success. As I’ve noted earlier, in MP3 players Apple is crushing Microsoft and sells the vast majority of standalone MP3 players in the US. It also has dominant mindshare (again in the US) in smart phones.

However, when it comes to innovation, past performance is no guarantee of future success. Look at Apple in the 1990s. Look at Sony. Look at Ford or Chrysler or GE.

OK, some wiseass thinks because I make a blanket statement "don't stand still or (fill in the blank) will catch up," I don't know what I'm talking about. Would you prefer (say as an AAPL shareholder) that management says to the troops "We are so far ahead that no one will ever catch up?" Of course not.

Exhibit A is the old bumper sticker (and T-shirt) "Windows 95 = Macintosh '89". The problem was, Apple’s innovation (with the exception of the first PowerBooks) slowed to a crawl after System 7. Thus, Macintosh '89 = Macintosh '95 = Macintosh 2000.

Exhibit B is that 10x as many people bought Windows 95 as Mac OS 8, even though the latter provided a demonstrably better user experience. For Windows 3.1 the ease of use difference was dramatic, but for 95 it was not, and Windows 95 had other advantages: cheap hardware, more hardware variety, a larger potential installed base, more applications. Ease of use is important but it’s not everything.

A decade ago, Apple got crushed by Microsoft and nearly died. Today, there’s an even wider range of companies that could do to the iPhone what Windows 95 did to the Mac. What would it take?

First, Apple’s competitors would need to recognize what Apple has, and that it’s selling better. Nokia may be in denial, but I don’t think Microsoft or any of the major vendors in the US have missed Apple’s success.

Second, it would require the resources to apply to catching up to Apple. Samsung, LG, Microsoft and Nokia all have the resources to do so, and I think Research in Motion does too. In the short term, I’m ruling out Motorola and Sony Ericsson because their recent record on innovation is more dismal.

Third, it requires the ability to execute, in this case on software and user interface design. Microsoft has already copied the Mac (Windows) and the iPod (the Zune), so there’s no reason it can’t copy the iPhone. The other firms haven’t done well on software, but there’s no reason why they couldn’t procure that expertise. Maybe the gPhone is halfway decent. Or someone buys the PalmSource team. Or companies use the market to find some other open innovation solution.

Once LG or Samsung (or Nokia) has a decent alternative to the iPhone — particularly a CDMA phone — the knockoff will have the upper hand with a majority of the market, thanks to Apple’s foolhardy Cingular exclusive. The Koreans and Europeans will also have an advantage in their home markets where the iPhone has had a much smaller impact, in addition to the global economies of scale that Apple currently lacks.

Finally, other firms catching up to Apple will only happen if Apple is still in roughly the same place when others match Apple’s existing offerings. Sure, Apple is on a roll, and as long as Steve Jobs remains savvy and healthy, their odds look good. But it’s not a lock.

Remember Netscape Navigator? The Motorola flip phone? The Sony Walkman? The Chrysler minivan? (The Boeing jumbo jet?) In many cases, a revolutionary product is all about the concept, and a concept can be copied. It’s not just about innovation activities, but also about the potential for those activities (as Geoff Moore argues) to achieve separation. If you can’t achieve separation, we call that commoditization. (NB: MCI, AT&T, the airlines, banks, or enterprise software vendors).

So I wouldn’t short Apple, but I also wouldn’t bet any sizable sum that all of its competitors will be asleep at the wheel for the next three years. And if Apple management shows signs of being as complacent as the Seeking Alpha iPhonatics, then sell! sell! sell!

Nokia don't get no respect

On Tuesday my friend David Wood of Symbian published a passionate rebuttal to a Forbes article about how the iPhone has won the hearts and minds of Silicon Valley, while Nokia has failed.

The article by Brian Caulfield aptly portrays Nokia as the Rodney Dangerfield of the cell phone industry:

Welcome to the kangaroo court, Silicon Valley style. Nokia may sell a phone somewhere on this planet every 18 seconds, but among the digerati in the Valley, that doesn't get the Finnish handset giant much respect. Here, the natives are all toting iPhones and BlackBerrys and raving about new horizons on the mobile Web.

Tech blog impresario Michael Arrington [said] "I believe that Nokia and Symbian [the software that powers its smart phones] are irrelevant companies at this point," he pronounced from the stage.

Quite a verdict, considering that Nokia sells close to half of all smart phones worldwide (and 40% of all phones) and has 9,200 applications written for its phones. In early July it plunked down $410 million to buy the portion of Symbian it didn't already own.
Unlike David, I think the article is pretty fair — at least from an American standpoint, which is all it claims to be. The article notes Nokia’s global dominance and calls the verdict a “kangaroo court” (i.e. completely unfair).

However, the point of the article is Nokia’s failure to have much of an impact in North America, either with the tech industry or with consumers. Lord knows that it’s trying, by moving its CTO to Palo Alto. It’s also clear that Nokia has the most aggressive US university outreach program of any mobile phone company, with multi-man-year efforts at Stanford, UCLA and MIT. But its handset share and mindshare are almost off the radar.

So it’s indisputable that Nokia (and with it Symbian) has so far lost in the US market, including the high-end smartphone market that it dominates globally. The iPhone and Blackberry are winners and Nokia is an also-ran. The question that the Europeans (and Japanese and Koreans) are asking is: so what?

The so what is that before the iPhone, efforts to kickstart the mobile Internet have largely failed, at least in the developed countries. Operators and manufacturers come up with all sorts of technologies and businesses but they’re not getting adopted.

The iPhone is getting used and is getting the mobile Internet adopted. It’s also winning the hearts and minds of third-party software and services developers — both for the cool factor, but also because it has users that will try these technologies. I know both geeks and housewives that swear by it, just as the Mac is gaining share on Windows on the desktop.

Ease of use is a big deal, and Forbes gets it even if Nokia doesn’t. I probably won’t own an iPhone until they end the Cingular exclusive. However, I do own a Nokia E65, which is a pretty good phone, a mediocre PDA and a useless web device. Overall, the S60 user interface lacks the consistency and regularity of the iPhone or even the early Palm PDAs.

The iPhone-like design is certainly the way forward in North America. It’s possible (but by no means certain) that it’s also the way forward in Europe and Asia.

In a standards war, we assume that winning third party developers feeds the positive feedback loop driven by network effects. However, winning third party developers is no guarantee of success. The Mac had cool apps in the 1980s and 1990s but later got crushed by Windows 95. In Symbian, the UIQ APIs had far more apps but S60 sells more than 80% of the Symbian phones (and thus UIQ is being phased out in favor of S60). Palm did a great job of winning ISVs which did nothing to solve its long-term slide in new products (and thus market share).

Most marketing problems have a basis in fact. Successful companies usually assume that marketing problems are because the market isn’t getting their message (NB: Microsoft, Intel) — but often it’s because they’re not listening to the market. Nokia (and its soon-to-be subsidiary Symbian) can continue to shoot at the messenger, or they can respond to the iPhone challenge by making their products easier to use and more compelling.

My hunch is that Apple has at least another year or two before Nokia gets its software act together. (And if Nokia doesn’t, then Microsoft, RIM, LG or Samsung will). So, as when it faced Windows 95, Apple had better have something up its sleeve to advance its innovations further by the time competitors catch up to its first mobile phone act.

Thursday, August 21, 2008

What's up with that?

Tired of getting its butt kicked by Apple, Microsoft reportedly will use Jerry Seinfeld to anchor a $300 million ad campaign to rebuild its brand. My initial reaction: What’s up with that?

The campaign is intended to address the slow uptake of Windows Vista, as well as the increasing (but still small) market share of Apple’s Mac OS X. IMHO the article in this morning’s Wall Street Journal sugar coats the problem:

Microsoft's immediate goal is to reverse the negative public perception of Windows Vista, the latest version of the company's personal-computer operating system. Windows is Microsoft's largest generator of profit and revenue, accounting for 28% of the company's revenue of $60.4 billion in the year ended June 30.

The software has sold well, and Microsoft retains an overwhelming share of the market for operating system software over Apple. But Apple's computer sales have been rising, and Vista is dogged by the notion that it has technical shortcomings and is hard to use. Apple's latest Mac vs. PC ads take swipes at Vista. Microsoft says early problems with Vista have been largely alleviated.
So that Microsoft has a problem seems clear, and Jerry Seinfeld is an iconic cultural figure who can reach a wide audience.

My question is: why is Seinfeld doing it? Sure, he’s not a movie star (think Brad Pitt or George Clooney) who refuses to do ads in the US but will sell himself overseas to the highest bidder. Beginning in 1992 — at the height of the popularity of his series — Seinfeld signed up to pitch AMEX cards.

Still, why is Seinfeld associating himself with a troubled brand? Is it just the money? He’s already pulling in $85 million a year, so the $10 million from Steve Ballmer (while significant) is not going to change his life.

One possibility I hadn’t considered is that spending hundreds of millions on ads (and $10 million on Seinfeld) would confirm Microsoft’s preference for style over substance. Certainly that was the reaction of the readers of the WSJ blog this morning.

Windows Vista has a lot of problems, with even Microsoft executives having trouble using it. One-third of business PCs are being downgraded from Vista to XP, even though it’s extra work.

Microsoft is convinced the problem is one of image and not of substance (despite ongoing claims to the contrary). The company seems to be ruling out the possibility that the world has grown tired of being forced (by many firms) to upgrade to bloatware, which seems like a risky assumption to make.

Wednesday, August 20, 2008

Copyright common sense

Regular readers know that I’m generally pro-IP and pro-IP business models. Properly structured, intellectual property rights reward innovation and creativity, giving us more and better innovations.

However, I’m cheering a victory tonight against the overreach of the “copyright cartel” (as its opponents demonize it). The case involves a YouTube video, a takedown notice from Universal Music, and the lawsuit that followed.

Here’s tonight’s SF Chronicle update of the AP story:

(08-20) 19:58 PDT San Jose -- In a victory for small-time music copiers over the entertainment industry, a federal judge ruled today that copyright-holders can't order one of their songs removed from the Web without first checking to see if the excerpt was so small and innocuous that it was legal.

The ruling by U.S. District Judge Jeremy Fogel of San Jose was the first in the nation to require the owner of the rights to a creative work to consider whether an on-line copy was a "fair use" - a small or insignificant replication that couldn't have affected the market for the original - before ordering the Web host to take it down.

A 1998 federal law authorized copyright-holders to issue takedown orders whenever they see an unauthorized version of their work on the Internet, without having to sue and prove a case of infringement. Some advocates of Internet users' rights - including the Electronic Frontier Foundation, which represented the individual user in this case - contend the procedure has been abused.
The principle of fair use is that society benefits by allowing certain exceptions to an absolute copyright control, allowing both small amounts (such as my story quote above) and for specific purposes (such as for commentary, research or scholarship). Playing 15 seconds of a song in a documentary about the music scene is different than putting an MP3 file on Gnutella.

The law is also quite clear that “fair use” has limits. In the case of Gerald Ford’s memoirs, The Nation published only 300 words from an illicit copy of the book, but it was enough to kill a deal the publisher had to serialize a larger excerpt in Time. The Supreme Court ruled that even this brief excerpt was not fair use because of the impact on the publisher’s business.

Thus, today’s ruling is no guarantee of victory for the defendant, a mom who used 29 seconds of a Prince song in a YouTube video of her toddler. That is as it should be — whether some use is “fair use” is a question of fact to be decided at a trial.

But it means that takedowns under the DMCA have to conform to the same principles as the rest of copyright law. Perhaps the record industry will now be a little more judicious (and reasonable) in challenging use of its work. Given Hollywood’s difficulty in dealing with the Brave New World, somehow I doubt it.

Monday, August 18, 2008

Apple stock kerfuffle fizzles out

Back at home after a long series of conferences and some time away from Internet access.

Nancy Heinen settled the SEC’s stock option backdating allegations, bringing to a close Apple’s entire backdating kerfuffle.

Veteran Apple watcher Peter Burrows had the best explanation of both Heinen’s decision and the overall allegations against Apple.

It was disappointing to see Heinen issue a PC polemic upon the settlement:

With this lawsuit behind me, I look forward to addressing the greater challenges of social justice and economic justice.
The first problem with such dreck is that it implies that satisfying customers, creating economic growth and providing high returns to investors are not worthwhile societal goals. Yes, we’d expect such malarkey from a lawyer-politician, but not from a former exec of two Silicon Valley companies (NeXT and Apple Computer).

Second, from what I’ve seen, tainted (or disgraced) business execs who start spinning their humanitarian goals are a) guilty as sin and b) not going to do anything economically useful for society ever again. Michael Milken and Bill Gates come to mind.

Friday, August 15, 2008

Don't listen to your partners

Most high-tech executives know Clay Christensen’s 1997 book The Innovator’s Dilemma. In it, he argues that under certain conditions — basically, when a high-priced product category is supplanted by a commodity product category — it’s dangerous for a firm to listen to its existing customers.

In those cases, the customers will say “give us more of the same” and “no, ignore that cheaper, less capable technology.” This is exactly how mainframe and minicomputer makers underestimated the PC.

Wednesday, at the next-to-last session of the 2008 Academy of Management (my last session before Disneyland), Prof. Allan Afuah offered early results from his study that extends this idea in a new direction: the risk of listening to partners. Allan’s study looked at videogame console makers trying to make a transition across technology generations that requires an architectural innovation (as defined by the famous Henderson & Clark paper on this subject).

For a Wednesday morning session, this was an astonishing turnout — 7 on the program but 17 in the audience. In addition to Allan, there were other interesting papers on innovation topics. The one where I had greatest personal interest was by Oliver Alexy (of Technische Universität München) with his paper on how the announcements of open source revenue models affect stock prices.

Saturday, August 9, 2008

LiMo moving forward

Normally every August, I attend the LinuxWorld SF conference. It’s a handy way of catching up on what’s going on in open source. However, this last week while LinuxWorld was in Moscone I was back in the Boston area at the user/open innovation conference.

I’m sorry I wasn’t there, because it sounds like it would have been an opportunity to learn the latest about LiMo and their plans to create an embedded Linux mobile phone platform.

LiMo announced seven new handsets, for a total of 21 now available using the LiMo combined Linux and mobile-specific stack.

On the one hand, LiMo is much further along than the world’s most famous embedded Linux alliance: the gPhone, aka Android, aka the Open Vaporware Alliance. On the other hand, it’s well behind the 77 shipping phones (at last count) that Symbian has attracted while supplying two-thirds of the world’s smart phones.

Right now, the LiMo handsets appear to be mainly Motorola (which has been toying with Linux for years) and NTT DoCoMo (which could hurt Symbian’s Japan business). Samsung, Motorola and LG — the world’s #2, #3 and #5 handset makers — are all hedging their bets between LiMo, OVA and Symbian.

LiMo’s marketing director claims that LiMo is pressuring Symbian with its openness, and dismissed Nokia’s plans for a Symbian Foundation:

“It means we're moving toward a more collaborative environment,” he said. “We see it as an endorsement of the strategy we developed. LiMo has had an advanced and intricate governance model, and now analysts have said the governance model they're working on for Symbian will be very similar, so we see it is a validating point for the idea of collaboration and openness. It allows us to shift the conversation around openness from debating its merits to talking about tactics.”
Although I have personal (and professional) ties to Symbian, I’ve not talked to anyone in authority at LiMo, Symbian Foundation (Nokia) or the OVA. However, as an analyst, I would say this statement counts as blowing smoke. Here’s why:
  • LiMo has shipping product, little market share, but still is a gated source project, with a minimum admission fee of $40k/year.
  • Symbian has market share, has shared source with its licensees (including the world’s five biggest handset makers), but hasn’t started either the gated or open source process.
  • Android has neither open source nor handsets (yet).
Neither LiMo nor Symbian (let alone OVA) is an open source community. As anyone knowledgeable about open source will tell you, gated source is not open source.

There’s no reason for Symbian (or Android) to copy LiMo’s “open source” strategy because there’s nothing to copy. The only successful model for an industry cooperative open source effort is the IBM-founded Eclipse (which borrowed many ideas from Apache), and it’s clear that Nokia is planning to copy Eclipse. The Symbian Foundation’s planned $1,500 annual fee (the same as Symbian’s newest commercial program) is a long way from LiMo’s $40,000.

So someday at least one company will have an open source mobile phone platform that is fully open in terms of production, governance and IP. (It’s possible that LiMo and Android will merge, but given the egos involved, I don’t see it happening before 2010). Who will get there first?

I think it comes down to business models. As I mentioned yesterday, one of my favorite papers is about how “open” turns out to be both a loaded word and a fuzzy concept. It costs a lot of money to run an open source foundation, let alone keep the code current. Charging money for gated source is one way to support the effort, but that model falls apart if you actually move to an open live code repository.

If you can’t charge for source, another way to pay the bills is to cross-subsidize the effort from another profit source. LiMo doesn’t have a sugar daddy, but both Android and Symbian do. So in an economic (rather than technical or organizational) sense, both OVA and Symbian will have a much easier time opening the kimono than will LiMo.

Friday, August 8, 2008

TiVo the Olympics with semi-open, semi-compatible standards

The Olympics begin today, and NBC hopes to earn back its nearly $1 billion payment by broadcasting and webcasting thousands of hours of Olympics coverage. In Thursday’s USA Today, technology columnist Edward Baig wrote about how this coverage is going to tax any TiVo’s storage capabilities — particularly for viewers who go for HDTV (which is 10x as bulky as NTSC).
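The 10x figure can be sanity-checked with a back-of-the-envelope calculation. The numbers below are my own assumptions, not TiVo’s published figures: roughly 19 Mbit/s for an ATSC HD broadcast, 2 Mbit/s for a basic-quality SD recording, and a 160 GB drive of the sort shipped in DVRs of this era.

```python
# Back-of-envelope: why weeks of HD Olympics coverage swamps a DVR.
# Bitrates are assumptions: ~19 Mbit/s for ATSC HD broadcast,
# ~2 Mbit/s for basic-quality SD (roughly the 10x gap cited above).

HD_MBPS = 19.0   # assumed HD broadcast bitrate
SD_MBPS = 2.0    # assumed basic-quality SD bitrate
DISK_GB = 160    # assumed stock DVR drive size

def hours_of_video(disk_gb, mbps):
    """Hours of recording a disk holds at a given bitrate."""
    bits = disk_gb * 8e9                # decimal GB to bits
    return bits / (mbps * 1e6) / 3600   # seconds to hours

print(round(hours_of_video(DISK_GB, SD_MBPS)))  # ~178 hours of SD
print(round(hours_of_video(DISK_GB, HD_MBPS)))  # ~19 hours of HD
```

At those assumed rates, the same drive that holds a week of SD holds less than a day of round-the-clock HD, which is why an eSATA expansion port suddenly matters.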

A sidebar by Baig notes that the TiVo HD and other recent models are expandable with off-the-shelf commodity hard disks that use the eSATA interface. TiVo used to require you to buy a larger TiVo to get more storage, but not any more (perhaps due to competition?).

This reminds me of a favorite among my own papers, a 2006 book chapter that attempts to spell out what makes a standard “open.” The term had previously been abused — either as a marketing slogan, i.e. magic pixie dust to be sprinkled on platform technologies (cf. “OpenVMS”) — or as a binary yes/no classification that grossly oversimplifies a wide range of alternatives. Instead, the paper notes that there are both degrees of openness (ranging from completely open to not at all) and dimensions of openness (e.g. open to competitors or to complementors).

Normally platform owners want complements, but sometimes (as with TiVo selling expanded disk drives) they try to lock them out so they can get all the add-on sales themselves.

This was a big deal in the 1970s and 1980s when IBM and DEC were charging obscene margins for commodity disk platters because they had an IBM- or DEC-compatible controller. Entrepreneur (turned congressman turned professor) Ed Zschau made a fortune working around these problems so he could sell cheaper disk drives to minicomputer owners. During my computer job in high school, we figured out how to write our own disk driver software so we could use a cheap third-party drive (20 MB, if I recall correctly) on our HP minicomputer.

For owners of the newer TiVo models, Baig reviews $200, half-terabyte drives from Western Digital and Iomega, and notes that Seagate has promised its own model Real Soon Now. These same drives also work with the Scientific Atlanta DVR that’s bundled by all the cable TV companies. Interestingly, TiVo recommends the WD but not the Iomega, even though Iomega claims theirs will work. I don’t know what’s going on, but there are two good possibilities.

One possibility is that TiVo doesn’t really want to be open to all third party complements, because it has some sort of marketing deal or strategic alliance so that TiVo recommends WD in exchange for something of value (cash, royalties, support help, comarketing).

The other possibility is that TiVo has found a real incompatibility and doesn’t want to burn the slim margins with thousands of $50 support calls. In this latter explanation, the eSata “standard” is not actually all that complete, and so various host and drive implementations are mostly but not quite entirely compatible.

Normally I’d suspect the marketing explanation, but my luck with hard disks the last few years supports the incompatibility argument. I buy about two hard disks a year for backup (bigger every time) and use them with one of the four Macs we have at home or my personal Mac at work. Between FireWire 400 and USB 2.0, every drive is either flaky out of the box or fails to work reliably (i.e. remain visible to the host) within a year or two. I even switched to name-brand drives, but to no avail.

Once upon a time — more than 30 years ago — I could debug RS-232 serial connections with an oscilloscope to check waveforms and timings. Today, the complexity of some of these interfaces requires an equally complex debugging instrument that may be no better at implementing the host protocol than the host device.

As with the incompatible modems of earlier markets, time-to-market pressure is causing firms (presumably on both sides of the cable, not to mention their component suppliers) to ship implementations that mostly but not entirely conform to the standard. I’m sure there are buyers out there who would pay an extra 10% ($20 on a $200 disk drive) for something that conformed to the standard — and perhaps even was tolerant of slightly nonconforming implementations.

But none of the trades — InfoWorld, CNET or the like — are providing this information. Is it because they don’t have the equipment, because they don’t pay their testing lab directors well enough to get a good EE, or because they don’t want to piss off advertisers? I dunno, but it seems like a missed opportunity.

Thursday, August 7, 2008

Open to user innovation

I’m now finally back in California, after five days and four nights in the Boston area. The first four days were spent at the HBS-MIT conference on user innovation, which was held at Harvard Business School Monday-Wednesday.

This is the sixth conference, but the first to mention “open innovation” and the first one I’d attended. I’ve been blogging on some of the interesting stuff over at my open innovation blog. (BTW, this is not a brand extension too far — the OI blog is intended to be a summary of academic research related to open innovation, for an academic or semi-academic audience).

On Wednesday night and Thursday until I dashed to the airport, I was at the MIT libraries (and also the archives) doing research for my planned book, From MIT to Qualcomm. I found some fascinating tidbits, like the terms of the endowment that made Claude Shannon in 1957 one of MIT’s first endowed professors.

I’m probably going to be offline much of the next week, as I head Friday to the annual Academy of Management conference, which draws some 6,000 professors and graduate students in OB, strategy, entrepreneurship and related management disciplines to listen to workshops, papers and panel discussions.

On Friday I’m speaking to entrepreneurship doctoral students, on Sunday I’m talking about standardization, and on Monday I’m presenting the open innovation side at a panel session on user and open innovation.

Did I mention that the conference is in Anaheim? Anyone who’s met my daughter (and my wife’s aversion to roller coasters) knows that somewhere in there I’ll be spending time at the Magic Kingdom.

A radical step for saving newspapers

In response to my gloomy predictions about newspaper commoditization, newspaper columnist (and editorial cartoonist) Ted Rall says: “News does not want to be free.” Rall’s plan: pull their websites, copyright the stories, and cut off the news services (which for decades have had the rights to republish the content of their newspaper clients).

Here I find myself partly in agreement with Rall, which is a real shocker. If I were to read every one of Rall’s columns for the past 6 months, I’d say I violently disagree with almost all of them — they’re not just wrong, but bone-headedly so.

Here, however, I agree with two main points of Rall’s analysis. First, the non-response by America’s newspapers is failing abysmally. If the managers of these businesses were thinking long-term, they’d take risks — die fighting rather than die quietly — because their current strategy is clearly going to fail. (I might modify the plan to — as one prominent newspaper publisher suggested — stop free websites but keep the content available for a fee).

Second, one of the biggest problems that the newspapers face is that their content is being provided by AP, AFP, Reuters, DJ etc to the websites that are slitting their throats, such as Google and Yahoo news. Anything that helps these competitors just hastens the inevitable.

Rall’s plan is a radical break from the past, but the past strategy has failed. I personally think it would be more fun to do it Rall’s way, and if I were a newspaper heir — with a big city metro — I’d try to convince my cousins to let me give it a try. However, there are at least two management theories that strongly suggest this isn’t going to work.

First, in his 1997 manifesto Clay Christensen basically said that low-end solutions get underestimated until they wipe you out, and that the only way to survive such a threat is to cannibalize yourself. Pulling all your content off the web — rather than trying to build a successful business on the web — is the worst possible thing to do in Christensen’s book.

Second, there’s the ol’ prisoner’s dilemma problem. If the NYT and NY Post and Daily News and Newsday all take down their websites, maybe New Yorkers will actually buy papers. But maybe the New York Sun or New York Observer won’t, and enough readers will consider these papers “good enough”. If one newspaper defects (ink-stained wretches: think newspaper strike) the rest will defect too.
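The defection logic can be sketched as a one-shot prisoner’s dilemma. The payoff numbers below are purely illustrative assumptions (not industry data), chosen only to show why each paper keeps its website up no matter what its rivals do:

```python
# The pullout decision as a one-shot prisoner's dilemma.
# Payoff numbers are illustrative assumptions, not industry figures;
# higher is better for the row paper, and "pull" = take your site down.

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    ("pull", "pull"): (3, 3),   # everyone pulls: print sales recover
    ("pull", "stay"): (0, 5),   # you pull, the rival's site takes your readers
    ("stay", "pull"): (5, 0),   # the rival pulls, your site takes theirs
    ("stay", "stay"): (1, 1),   # status quo: everyone bleeds slowly
}

def best_response(rival_choice):
    """The row paper's best move given what the rival does."""
    return max(("pull", "stay"),
               key=lambda mine: payoffs[(mine, rival_choice)][0])

# Whatever the rival does, staying online is the better reply,
# so the cooperative all-pull outcome unravels.
print(best_response("pull"), best_response("stay"))  # stay stay
```

With payoffs like these, “stay” dominates “pull” for every player even though all-pull beats all-stay, which is exactly the cartel-enforcement problem described above.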

OK, so most cities aren’t like NYC: they don’t have four printed dailies, but one. Still, the big cities tend to have a suburban daily (or chain of mini-dailies). Some cities have a free subway daily, and nearly all have a free weekly that might want to offer an online news update.

Even without competitors, newspapers have the problem of substitutes: they’re called TV stations. Survey after survey reports that local television has 3x the audience that local newspapers do. So Google and Yahoo wouldn’t hire local reporters to counter Rall’s Revolution, but the TV stations already have them. (They’re called “producers,” not to be confused with the sit-down and stand-up news readers).

I can’t see how this cat will ever get back in the bag. The good news is that because the newspapers are largely irrelevant to 90% of the American population, they could probably form a cartel to control the distribution of their proprietary IP (i.e., rein in the AP) and there would be no plausible antitrust argument against it. But the chances that hundreds of newspapers will agree on anything seem slim: perhaps once they’re all owned by three big chains, it will happen.

Wednesday, August 6, 2008

No joint venture is forever

On Wednesday, the WSJ and Forbes reported that Siemens would like to dump the Fujitsu Siemens joint venture. The 9-year-old joint venture has annual revenues of about €6.6 billion ($10 billion).

The original JV made a certain sense. The PC industry is a commodity business with low margins and some economies of scale. Since the industry uses commodity technologies, there were no major incompatibilities to be solved. Fujitsu and Siemens were top vendors in their home markets with no presence in the other’s, so the combined business had some upside and not much downside.

In the last public figure I could find (from 2005), the JV had a 3% global share. I guess if I gave IDC or Dataquest $10K I could see the real estimates. Some stories (without any supporting evidence) claim the JV is losing market share. Of course, we don’t have public information on the JV’s P&L.

Apparently Fujitsu still wants to be in PCs. The reason IBM kept PCs (long after it stopped making money on them) was the claim that it needed a full line of computers to serve MNCs. Hitachi (with far less share) quit the PC market last year.

The new Siemens CEO, Peter Löscher, is dumping losing divisions — either using the (old) Jack Welch rule or just to appear to be doing something. If they are dumping money losing divisions, what about Nokia Siemens telecommunications networks? Speculation says this is also on the block.

Historically, joint ventures fail within a few years. Perhaps the parties can’t agree on a strategic direction. Or perhaps one party wants to learn everything they can from their partner and then use that information to become a direct competitor — think Chinese automakers.

Here the issue is simply choosing to stay in a lousy industry. Some might claim the JV has underperformed, but the real answer seems to be that Siemens should have bailed out of PCs nine years ago — or at least three years ago, when IBM dumped the PC business it created back in 1981.

Saturday, August 2, 2008

As if GooTube weren't enough

The entire GooTube business model is based on taking other people’s content without paying for it, and then giving it away. Viacom doesn’t like having its content stolen, so someday a higher court will let us know which one is right.

But while attending the workshop Friday, I found out about yet another stolen video content site: SurfTheChannel, which tries to use two loopholes to avoid lawsuits by Viacom et al.

First, SurfTheChannel is based in Sweden and thus claims to be immune to the DMCA. Second, it disclaims liability because “SurfTheChannel does not host any content on it’s Servers” [sic].

YouTube today is 425x344, while 1080i would be 14x as much data. However, storage costs and bandwidth are improving so fast that movie publishers are increasingly facing the same download problems that music publishers have been dealing with for the past decade. These sites are further examples that fighting IP piracy on a global Internet (given the inherent lack of global IP enforcement) is a Sisyphean task.
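The 14x figure checks out if you assume the data volume scales with the raw pixel count per frame (holding frame rate and compression efficiency constant, which is a simplification):

```python
# Where the "14x as much data" figure comes from: raw pixel counts.
YOUTUBE = 425 * 344      # 2008-era YouTube player resolution
HD_1080 = 1920 * 1080    # a 1080i/1080p frame

ratio = HD_1080 / YOUTUBE
print(round(ratio, 1))   # 14.2
```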

When we discussed the problem at the workshop, the conclusion was that content will have to be self-supporting, through approaches such as product placement or mid-roll ads. So if Ford pays all the production costs for a Toby Keith movie or music video with Ford F-150 product placement throughout, then the producers don’t have to rely on any subsequent revenue streams from selling views of the video. In fact, if the video gets given away on websites throughout the world, Ford will probably be even happier. (Ignoring for a minute that F-150 sales are minuscule outside the Americas.)

Beyond the normal one-off product placement are Sears' sponsorship of the Bob Vila show and Extreme Makeover: Home Edition. These are far less subtle updates of the General Electric Theater and the Texaco Star Theater.

So is this the future of video production, where everything becomes an infomercial?

“Louis, I think this is the beginning of a beautiful friendship.”

“Rick, how about a Lucky Strike?”

“Certainly. After all, Lucky Strike means fine tobacco.”

Friday, August 1, 2008

Web 2.0: deja vu all over again

Thursday and Friday I was invited to attend the “Understanding the Networked Digital Industry” workshop hosted by USC’s Institute for Communication Technology Management. The workshop had a lot of great insight about what’s happening in telecom, Internet and other information industries. I don’t have time to cover all the interesting talks, but did want to write about my own talk.

I joined a panel entitled “Creating New Value Propositions.” When I was asked to speak, I decided to talk about Web 2.0 business models. So my talk was entitled:

Web 2.0 business models:
Did we learn anything from Web 1.0?

The presentation was in two parts.

The first part summarized the use of the term “Web 2.0,” starting from the definition by Tim O’Reilly back in September 2004. I first summarized the O’Reilly definition, noting examples like Facebook, MySpace (part of News Corp.) and Flickr (part of Yahoo). I then offered my own summary of what defines today’s Web 2.0 companies:
  1. User generated content (which I tied back to the von Hippel work on user innovation, a new wrinkle for some of the audience)
  2. Social embeddedness, with self-defined affinity, either for friends or by similarity in self-revealed preferences (e.g. Amazon).
  3. Integration via APIs, and thus converting websites to platform technologies.
My graduate student project last year on mobile Web 2.0 business models included these three points, but found that mobile Web 2.0 also included ubiquity and location awareness.

I then summarized the Web 1.0 problems and how they apply to Web 2.0. Effectively, Web 1.0 was commoditized due to low entry barriers (compared to, say, retailing or radio stations), too many entrants, low perceived customer value for commoditized content, and questionable revenue models. Web 2.0 has exactly the same problems.

Sure enough, the FT (in May) and the Merc (last Sunday) printed articles that noticed that the rulers of Web 2.0 have no clothes. Quoting from the FT article by Richard Waters and Chris Nuttall:
Many members of the Web 2.0 generation of internet companies have so far produced little in the way of revenue, despite bringing about some significant changes in online behaviour …

The shortage of revenue among social networks, blogs and other “social media” sites that put user-generated content and communications at their core has persisted despite more than four years of experimentation aimed at turning such sites into money-makers.
Meanwhile, Chris O’Brien of the Merc had a great column last Sunday:
I attended the Facebook developers conference to hear founder and CEO Mark Zuckerberg discuss the future of the social-networking site…

What really struck me, though, was his response to a question during a session with reporters after his keynote: How are you planning to make money from all of these new services? His answer: We'll figure that out later.
How will it turn out? Clearly a Web 2.0 shakeout is coming; this feels like 1999 of the Web 1.0 (i.e. dot-bomb) era, which means the shakeout should happen in the next 2-3 years.

I found this observation to be remarkably prescient:
[B]ubbles and consequent shakeouts appear to be a common feature of all technological revolutions. Shakeouts typically mark the point at which an ascendant technology is ready to take its place at center stage. The pretenders are given the bum's rush, the real success stories show their strength, and there begins to be an understanding of what separates one from the other.
What makes this even more delicious is that the comment was made back in 2005 — when Tim O’Reilly was explaining what happened with the end of the Web 1.0 era and how it would mark the beginning of the Web 2.0 era.

Hopefully before “Web 3.0” is coined, someone will take seriously the problem of inadequate revenue models.