Monday, November 30, 2009

Markets for IP and innovation

In discussing open innovation with some visitors today, we got to talking about markets for innovation — which are similar to (but not the same as) markets for IP.

There are two modes of open innovation, outbound and inbound, and the outbound mode depends on being able to monetize the innovation. Maybe if you’re IBM you can rely on indirect monetization (NB: IBM Global Services) but most companies need more direct monetization. Inbound OI works as long as there is a supply of innovations, which might be motivated by money or by non-monetary incentives.

So I was asked, how do firms find innovations? In other words, how are markets organized? We know from economics that markets play a number of important roles, including search, matching buyers and sellers, providing feedback/constraints on claimed quality/performance, and setting prices.

From easiest to hardest, I think there are three types of innovations that might be sourced via open innovation:

  • components, such as semiconductor chips
  • IP, such as non-exclusive (or exclusive) patent rights
  • custom innovations, such as Threadless or other user-generated content
Search can be difficult, but it has been greatly improved by the Internet. As with any market, matching a price to quality/features is the hard task, particularly for new or thinly traded goods.

To me, these three markets for innovation are sufficiently different that advice given for one would not necessarily apply to the others.

Similarly, there are two (or three) different IP business models. For some companies (such as Dolby or Qualcomm) IP licensing is the primary business model and thus revenues have to cover all IP development costs. For others, the IP revenues are incidental or supplemental, and “success” here means incremental revenue but not necessarily enough to justify the R&D in the first place.

A subset of the incidental case (or perhaps a separate case) is the salvage case, as when Xerox is unloading the Xerox PARC patents to boost the bottom line because they never figured out what to do with them in the first place. A salvage operation is particularly misleading as a role model, because usually the IP is being sold for a fraction of its original cost on the theory that some revenue is better than nothing.

So a word to the wise: don’t believe a consultant who tries to sell you a one-size-fits-all innovation market (or IP licensing) strategy. One size does not fit all.

Saturday, November 28, 2009

We deserve better commodity information

It’s no news that Wikipedia, with all its flaws, is the default information source of a generation of skulls full of mush. If this wasn’t obvious enough from my college students, it was brought home a week ago when I interviewed FLL robotics contestants (ages 9-14): nearly all said their project “research” consisted of Google and Wikipedia. (One team said Google and Yahoo.)

However, since then, Wikipedia’s problems have been front-page news: a Wall Street Journal story Monday (and blog entry) on how Wikipedia is losing volunteers, 49,000 in Q1 2009 alone. The Telegraph had the most comprehensive follow-up stories, although the Times of London had good coverage (including a great article on the four sources of error).

The impetus for the original WSJ article was the academic research of Felipe Ortega, who is part of a group studying open source software but actually did his Ph.D. dissertation on Wikipedia (a related but quite different species). He’s been tweeting his comments on the current news coverage. While the bulk of his research hasn’t yet gone through peer review, the abstract suggests he’s taken the research design issues seriously.

After all the articles and the academic study, the official Wikipedia response is pretty unsatisfactory. It changes the subject, arguing that while the tide of new volunteers roughly matches the ongoing losses, at least the site traffic and number of articles continue to grow.

However, none of this addresses two inherent problems in Wikipedia that the current management is unable to solve, plus a third (and potentially catastrophic) consequence of Wikipedia’s commoditization of information.

The first problem is that Wikipedia publishes content by persistent idiots. Now that there are dozens (or even thousands) of individuals trying to edit articles on almost any topic, there are chronic edit wars, with rival editors repeatedly reverting each other’s changes.

Competition is healthy — if there’s a selection mechanism based on quality or performance. Wikipedia has no such mechanism. Instead, what gets published comes from people who whine and bitch and moan, who win out over people who know what they’re talking about but have better things to do with their lives. This works well for chronicling Simpsons episodes but not for summarizing academic research or major historical controversies. (Yes, I know that there are capable contributors, but in every battle between idiots and experts, the idiots are winning.)

I tried to sell this angle to a reporter I spoke with Monday, but I guess he thought it was just the griping of a snobby college professor who gave up years ago after watching his work be mangled by twits. However, among the 27 comments (thus far) on the official Wikipedia response were these five:

  1. Well, I have taken hours editing and polishing a biographical article about a scientist. There is nothing in the article now that is under dispute, yet it is probably going to be taken down and deleted as one editor is exercising his or hers petty power-plays.
  2. My most recent experiences have been quite negative: edits reverted with no reason, pages tagged as grammatically terrible when they were no such thing, or tagged as “not up to WP’s standards” when they were stubs and in some cases *still editing*. These taggings tended to be “drive-by” in the sense that some other editor dropped the tag onto the page or made their reversion but then failed to respond to explanations on the talk page for days.
  3. I used to spend a lot of time writing for Wikipedia, amending entries and creating new articles. Now it seems that a small number of self-appointed editors run the site. If I create new articles then they are nearly always deleted. If I correct information I know for a fact is wrong, it is reverted back and I am warned by the small sub class of elite editors
  4. I took on editing the Albigensian Crusade page a while back, a fairly simple job because what’s known about it comes principally from three contemporary chronicles dealing with the specific subject. A chronicle is self-indexed by time, therefore it should have been adequate to simply point readers in the direction of the sources, but no, that was inadequate, full references please. I got started, went so far, and checked if this was right. The %*$^^@# responsible refused to take the time to feedback, and was quite rude about it, so I stopped. Other appeals to administration went nowhere, and I concluded this is a system full of chiefs who can’t be bothered to get their hands dirty actually editing,
  5. I, for one, am one of those professional contributors who left Wiki in disgust. After spending a lot of time creating pages or adding a lot of content, some amateur came along and dumbed down the content and added fictious pictures that were purported to be of the creatures listed. It became a waste of my time to provide a lot of information that could be cut-and-paste into term papers, dissertations, reports, etc., and have some arm-chair contributor wreck it all.
This is a problem I’ve known about since soon after I joined Wikipedia in November 2003, and the entire production process would have to be ripped up to fix it. Even Amazon has a way of providing feedback on user contributions so that readers know whose reviews have been useful, even if that (like other such processes) is fatally broken for highly polarized topics like politics.

One problem I didn’t see coming was the inevitable shift from original writing to maintenance mode. I started my main burst of Wikipedia contributions (2003-2004) by creating 11 new articles, from venture capitalists Eugene Kleiner and Tom Perkins to two missing campuses (CSULB, CSUSM) of the 23-campus CSU system.

Today, thanks to the law of large numbers (and the long tail), there are very few significant articles left to be written. (Yes, Wikipedia has an article on only one Joel West — and it’s a lame one — but I don’t consider that a major omission.)

This reminds me of what I experienced in my first few years as a professional programmer: it is so much more fun to write new code than to maintain someone else’s. In fact, as I became a manager I learned this is a major recruiting and staffing problem — even when you pay people, let alone when they’re volunteers. Over and over again, I saw that the manager or other “stuck” (high switching cost) programmers had to take the scut work so you could offer the new, exciting stuff to attract the best talent.

Clearly, at Wikipedia existing volunteers don’t want to do the scut work, nor do the newcomers. If it’s de minimis, then (to use an analogy) perhaps good citizens will just pitch in and pick up the candy wrapper, but nobody’s going to spend a weekend clearing the trash along the highway just for the fun of it.

Wikipedia is running out of good jobs to hand out. If you can’t give out fun work, how are you going to attract people? What I didn’t see six years ago was that inevitably Wikipedia’s content base would mature: first in English and eventually in all the major languages. When this happened, the opportunities for adding new content would mainly be limited to current events like new hurricanes or those Simpsons episodes.

However, I find hope in Wikipedia’s current troubles, as they suggest a solution to Wikipedia’s most invidious problem: the commoditization of human knowledge. Monopolies are bad, even if they are for free goods. When I was interviewing open source leaders, the Apache people (and most “open source” types) seemed to get this, while the free software types (Linux, OpenOffice) did not.

Competition is inefficient, but it provides choice. Monopolies at best mean benevolent dictators, and few benevolent dictators remain benevolent forever.

The mind-numbing ubiquity of Wikipedia is teaching a generation of kids to be lazy and uncritical consumers of information — whether it’s truth or merely wikitruth. They take what shows up on the first page of Google or in Wikipedia and assume it’s true, even when it’s not.

When I was a kid, I would do my 5th grade reports using World Book, Encyclopedia Britannica, usually one other encyclopedia like Collier’s or Compton’s, and also the Information Please Almanac. (If the report was important, I would also try to find a real book or two.) This wouldn’t make me an expert, but at least I would get multiple perspectives.

Today, Wikipedia’s commoditization of information means that Encyclopedia Britannica is struggling and its previous nemesis (the Encarta CD-ROM) is gone. At least a five-year-old version of the Columbia Encyclopedia survives as Reference.com.

Once upon a time, I assumed that network effects meant that nothing would ever compete with Wikipedia. This week shows that in less than a decade it’s possible to create a significant body of knowledge with volunteer labor. None of the existing rivals has yet succeeded, whether Citizendium, Conservapedia, Liberapedia or Knol. However, with this large pool of existing (or potential) would-be Wikipedia labor becoming available, they are certainly trying.

I will be curious to see if we can achieve success from volunteer organizations that focus on the quality rather than the quantity of contributions. In this direction, Citizendium (by Wikipedia co-founder Larry Sanger) is using a somewhat modified version of the Wikipedia process, while Google’s Knol is heading in a different direction by emphasizing authorial integrity over cumulative production.

Given the almost total lack of competition, anything that provides a viable alternative to Wikipedia is a good thing. It will be a good thing if a decade from now we have three or four online encyclopedias to choose from, much as today we can choose from three or four cellphone carriers.

It’s likely that one of these alternatives will be Wikipedia. Perhaps if its leaders take its current problems seriously, it will still be the most popular alternative out there and will be able to meet its current modest fundraising goals.

Thursday, November 26, 2009

An ally in demanding Climategate accountability

While nearly all of the Climategate outrage and ridicule has come from global warming skeptics, an ally of the embarrassed scientists has finally argued that it’s time for fellow environmentalists to come clean.

Environmental blogger George Monbiot wrote Wednesday in the Guardian:

I have seldom felt so alone. Confronted with crisis, most of the environmentalists I know have gone into denial. The emails hacked from the Climatic Research Unit (CRU) at the University of East Anglia, they say, are a storm in a tea cup, no big deal, exaggerated out of all recognition. It is true that climate change deniers have made wild claims which the material can't possibly support (the end of global warming, the death of climate science). But it is also true that the emails are very damaging.

The response of the greens and most of the scientists I know is profoundly ironic, as we spend so much of our time confronting other people's denial. Pretending that this isn't a real crisis isn't going to make it go away. Nor is an attempt to justify the emails with technicalities. We'll be able to get past this only by grasping reality, apologising where appropriate and demonstrating that it cannot happen again.
Monbiot likens the loss of credibility by CRU head Phil Jones to the expense padding scandal that has wracked the British parliament recently:
Most of the MPs could demonstrate that technically they were innocent: their expenses had been approved by the Commons office. It didn't change public perceptions one jot. The only responses that have helped to restore public trust in Parliament are humility, openness and promises of reform.

When it comes to his handling of Freedom of Information requests, Professor Jones might struggle even to use a technical defence. If you take the wording literally, in one case he appears to be suggesting that emails subject to a request be deleted, which means that he seems to be advocating potentially criminal activity. Even if no other message had been hacked, this would be sufficient to ensure his resignation as head of the unit.

I feel desperately sorry for him: he must be walking through hell. But there is no helping it; he has to go, and the longer he leaves it, the worse it will get. He has a few days left in which to make an honourable exit. Otherwise, like the former Speaker of the House of Commons, Michael Martin, he will linger on until his remaining credibility vanishes, inflicting continuing damage to climate science.
Monbiot particularly faults the university for stonewalling and denial, “a total trainwreck: a textbook example of how not to respond.”

Monbiot is trying to save the credibility of his global warming allies. As an academic, I want to save the credibility of academia and the scientific process (beyond just meteorology). However, the argument goes beyond just trying to save a cause.

This is a far more important societal problem. If our institutions are to have credibility, then failure must have consequences — a rule to be applied to government, business, academia, churches, charities and all other institutions.

Ethical leaders will demand consequences for their friends, not just their enemies. Alas, this is an all too rare sight nowadays (NB: partisan cover-up by both parties for Congressional corruption within their ranks.)

Hat tip: Chris Morrison, BNET.

Tuesday, November 24, 2009

Climategate and academic accountability

Due to inadequate computer security, 156 megabytes of embarrassing emails from the leading British climate research unit have surfaced in an incident now called “Climategate.” This is reminiscent of the embarrassing Bill Gates emails about crushing competitors (or Richard Nixon and his tapes): in the future, people will make sure that there are no records of dishonest behavior, rather than ending the behavior.

However, it seems unlikely to significantly change the debate over anthropogenic global warming, as pundits, politicians and the public will filter the news through their existing biases. Those who believe in AGW will say the issues are minor or isolated, while skeptics will say it proves a vast left-wing conspiracy. (This also sounds a lot like 1974).

Critics point to snippets in which researchers talk about cherry-picking data, exaggerating claims, and padding results. Although many of these things seem like a bad idea, there are a lot of gray areas, and such exaggeration happens in many scientific endeavors. As economics blogger Megan McArdle notes, “it means we need to be less romantic about the practice of science.”

However, one passage should be setting off alarm bells in every research university in the world. A March 2003 email by Prof. Michael Mann of Penn State — quoted by the Wall Street Journal — advocates a blacklist against an existing journal:

In fact, Mike McCracken first pointed out this article to me, and he and I have discussed this a bit. I've cc'd Mike in on this as well, and I've included Peck too. I told Mike that I believed our only choice was to ignore this paper. They've already achieved what they wanted—the claim of a peer-reviewed paper. There is nothing we can do about that now, but the last thing we want to do is bring attention to this paper, which will be ignored by the community on the whole...

There have been several papers by Pat Michaels, as well as the Soon & Baliunas paper, that couldn't get published in a reputable journal. This was the danger of always criticising the skeptics for not publishing in the "peer-reviewed literature". Obviously, they found a solution to that--take over a journal!

So what do we do about this? I think we have to stop considering "Climate Research" as a legitimate peer-reviewed journal. Perhaps we should encourage our colleagues in the climate research community to no longer submit to, or cite papers in, this journal. We would also need to consider what we tell or request of our more reasonable colleagues who currently sit on the editorial board...
The 2003 paper, written by Willie Soon of Harvard and Sallie Baliunas of the Mount Wilson Observatory, questions whether the 20th century is the warmest of the past 2000 years. (According to objective measures, Climate Research is ranked 29th out of 48 meteorology journals — which would mean it’s reputable but not prestigious.)

This is not how science is supposed to be conducted. You might disagree with other interpretations of the data, but the way you win is by producing better studies. You don’t blacklist journals — for contributions or citations — because they sometimes print contrary points of view. The process of science is supposed to be about the evidence, so trying to destroy the careers of those who disagree with you is beyond the pale.

This makes clear that at least some of the researchers are no longer conducting scientific research — in search of the truth — but only want evidence that supports their position. Is this because their reputations will suffer? Is this because they are part of a social movement? I have no direct knowledge either way.

Legitimate institutions apply the rules equally to friends and enemies, as captured by the old saying “rule of law, not of men.” If the academic institutions valued their legitimacy, they would reprimand those intent on violating the norms of scientific research. Since the Royal Meteorological Society and the American Meteorological Society are strongly lobbying for political action to reverse global warming, this would require criticizing those whose beliefs they share.

Absent strong external pressure, institutions are notably reluctant to admit their own failings. The institutions (like any under attack) will probably ignore the controversy and hope it goes away. I think this will work: unlike Nixon’s efforts at stonewalling, there are no Woodward and Bernstein dogging every lead in this scandal.

Odds are, the professional societies will close ranks rather than sanction clear ethical violations. The offenders will continue to be rewarded with journal publications and generous government support for their research.

Free markets and free societies depend on feedback mechanisms to catch and deter transgressions. Failures of accountability in academia reinforce a pattern that I’ve observed over the past decade: non-profit institutions are less accountable than companies, because there are no shareholders who can throw the bums out after they’ve flown a plane into the side of a mountain.

Monday, November 23, 2009

Now they tell us

Even the official voice of American liberalism, the Gray Lady herself, admits that the US has a spending problem. From the front page (above the fold) of this morning’s New York Times:

Federal Government Faces Balloon in Debt Payments
At $700 Billion a Year, Cost Will Top Budgets
for 2 Wars, Education and Energy

By Edmund L. Andrews

WASHINGTON — The United States government is financing its more than trillion-dollar-a-year borrowing with i.o.u.’s on terms that seem too good to be true.

But that happy situation, aided by ultralow interest rates, may not last much longer.

Treasury officials now face a trifecta of headaches: a mountain of new debt, a balloon of short-term borrowings that come due in the months ahead, and interest rates that are sure to climb back to normal as soon as the Federal Reserve decides that the emergency has passed.

With the national debt now topping $12 trillion, the White House estimates that the government’s tab for servicing the debt will exceed $700 billion a year in 2019, up from $202 billion this year, even if annual budget deficits shrink drastically. Other forecasters say the figure could be much higher.

In concrete terms, an additional $500 billion a year in interest expense would total more than the combined federal budgets this year for education, energy, homeland security and the wars in Iraq and Afghanistan.
Of course, the NYT has to put the most favorable pro-administration spin on the story. The US had “decades of living beyond its means” and the excess spending this year “is widely judged to have been a necessary response to the financial crisis and the deep recession.”

The article makes only indirect mention of where the borrowing is coming from. No mention that the US budget deficit was $1.4 trillion for 2009 — roughly three times (205% higher than) the previous record of $459 billion. Nor does the NYT mention that the Obama administration plans to add another trillion to the deficit during FY 2010.
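For the record, a quick back-of-the-envelope check of that comparison, using only the two figures cited above (the sketch is mine, not the NYT’s):

```python
# Deficit figures cited above: $1.4 trillion for FY2009 versus the
# previous record of $459 billion.
fy2009_deficit = 1.4e12
prior_record = 459e9

ratio = fy2009_deficit / prior_record   # multiple of the old record
pct_increase = (ratio - 1) * 100        # percent above the old record

print(f"{ratio:.2f}x the old record ({pct_increase:.0f}% higher)")
# → 3.05x the old record (205% higher)
```

So the rounded figures (roughly 3x, or 205% higher) check out.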

Nor is there any mention of the trillion-dollar increase in spending that would come if either the House or Senate version of health care reform becomes law. As former CBO head Douglas Holtz-Eakin analyzed the bills Saturday:
First and foremost, neither bends the health-cost curve downward. The CBO found that the House bill fails to reduce the pace of health-care spending growth. An audit of the bill by Richard Foster, chief actuary for the Centers for Medicare and Medicaid Services, found that the pace of national health-care spending will increase by 2.1% over 10 years, or by about $750 billion. Senate Majority Leader Harry Reid's bill grows just as fast as the House version. In this way, the bills betray the basic promise of health-care reform: providing quality care at lower cost.

Second, each bill sets up a new entitlement program that grows at 8% annually as far as the eye can see—faster than the economy will grow, faster than tax revenues will grow, and just as fast as the already-broken Medicare and Medicaid programs. They also create a second new entitlement program, a federally run, long-term-care insurance plan.

Finally, the bills are fiscally dishonest, using every budget gimmick and trick in the book: Leave out inconvenient spending, back-load spending to disguise the true scale, front-load tax revenues, let inflation push up tax revenues, promise spending cuts to doctors and hospitals that have no record of materializing, and so on.
So instead of urging spending cuts, the NYT series seems oriented toward preparing well-to-do elites for punitive tax increases, rather than toward restoring spending to the percentage of GDP where it was two or three years ago (still not a small number). By one calculation, government spending hit 37.4% of GDP in FY2008 and 45.3% of GDP in FY2009. Excluding the peak of World War II (1943-1945), both are the highest figures ever recorded in US history. And — unlike in World War II — most of the spending is locked into the base budget, to continue indefinitely whether revenues increase or not.

Sunday, November 22, 2009

Writing from beyond the grave

Michael Crichton’s latest novel, Pirate Latitudes, is being released this week. Presumably it’s his last novel — since he died a year ago — and will bring his heirs royalties from at least one more Spielberg movie.

As Amazon helpfully explains:

Michael Crichton's novels include Next, State of Fear, Prey, Timeline, Jurassic Park, and The Andromeda Strain. He is also known as a filmmaker and the creator of ER. One of the most popular writers in the world, he has sold over 150 million books, which have been translated into thirty-six languages; thirteen have been made into films. He remains the only writer to have had the number one book, movie, and TV show at the same time.

Pirate Latitudes was discovered as a complete manuscript in his files after his death in 2008.
His first novel, The Andromeda Strain, really influenced me and provided a glimpse into the genius of this doctor-novelist, who kept producing hits for nearly four decades. Most of his books touched on ethical dilemmas faced by intelligent individuals in complex situations, a plot element usually simplified away by other sci-fi authors.

The quality of his work was not directly proportional to its popularity. I loved Timeline, which was made into a long-forgotten (but charming) movie. Meanwhile Jurassic Park turned provocative speculation on cloning extinct species — and theme park business models — into a bombastic blockbuster.

Crichton’s non-scifi social commentaries were less successful. Rising Sun titillated audiences at a time when fear of Japanese economic domination was greatest. State of Fear and its critique of environmentalism as religion briefly made Crichton persona non grata among the PC set, but was eventually forgotten as ER continued promoting Hollywood sensibilities every Thursday night at 10pm.

Airframe gave a rapier critique of the corrupting influence of oligopsony demand upon public safety, but without Jack Nicholson shouting “you can’t handle the truth,” any hope of reform died with its author.

So having recently finished Sphere, I’m looking forward to Pirate Latitudes, even if a 17th-century English pirate won’t satisfy my need for a science fiction fix.

Friday, November 20, 2009

Inevitability of e-book success?

Bloomberg ran a story Friday focusing on the adoption of e-books in college classrooms:

As Sony Corp.’s e-book devices vie with the Kindle to win over readers, the real showdown may come later: when a shift to electronic textbooks at schools threatens to eclipse the current market for the products.

Within five years, textbooks will be the biggest market for e-book devices, dwarfing sales to casual readers, predicts Sarah Epps, an analyst at Forrester Research Inc. in Cambridge, Massachusetts. Corning Inc., which is developing glass screens for e-readers, expects textbooks to fuel about 80 percent of demand for those components by 2019.

“Print will expire faster in the textbook world than in the trade book world,” Epps said. “The technical barriers will disappear and five years is enough for the content to catch up with demand. The potential is there.”

“The Millennials are very comfortable reading things online in a way their parents and grandparents are not,” said San Jose State University Professor Joel West, referring to the generation born in recent decades. “We will be seeing electronic textbooks become commonplace in the next 10 years.”
I said a lot of other things when interviewed about this a few months back:
  • Amazon’s Achilles’ heel is the proprietary mobi format against everyone else’s e-pub, but if college students are using a book viewer for 4 years and renting books for one semester, this becomes almost a non-issue.
  • Moving from selling dead tree books (with printing costs and inventory risk) to renting e-books will reduce the publishers’ costs dramatically. If publishers don’t share those savings with consumers — given the student and politician outcry about textbook prices — there will be hell to pay. I suspect, however, that most will play games with planned obsolescence in hopes of keeping their margins up.
  • I doubt that e-book reader is a separate category over the long term. To me, it seems obvious that the e-reader will go the way of the pocket camera and the MP3 player as a dead-end stand-alone device.
The unfortunate thing for Amazon and its Kindle lead is that it’s much easier for other publishers to attract the relatively small list of best-selling college texts than it is to attract a full range of popular books.

On the other hand, I think the textbook market could give Amazon an opportunity to exit the reader business — as I believe it inevitably will — and focus on its core competence of distribution (presumably at that point indifferent as to format). Under this scenario, rapid growth in the textbook market could very well force a disaggregation of the market into distributors and hardware makers.

So Sony and Apple (and perhaps Nokia and Dell) will be competing on the hardware side and Amazon/B&N competing on the distribution side. Colleges generally shy away from mandating a particular vendor for other hardware, so I think “buy an e-pub reader” is more likely to catch on with college syllabi than “buy a Kindle.”

Saturday, November 14, 2009

Nobel experiment

Economist and author Steven Landsburg notes the economic ignorance displayed by a well-known NYT columnist:

It’s always impressive to see one person excel in two widely disparate activities: a first-rate mathematician who’s also a world class mountaineer, or a titan of industry who conducts symphony orchestras on the side. But sometimes I think Paul Krugman is out to top them all, by excelling in two activities that are not just disparate but diametrically opposed: economics (for which he was awarded a well-deserved Nobel Prize) and obliviousness to the lessons of economics (for which he’s been awarded a column at the New York Times).

It’s a dazzling performance. Time after time, Krugman leaves me wide-eyed with wonder at how much economics he has to forget to write those columns. But today’s, on why America should consider European-style employment protection, is his masterpiece.

[E]xactly which brilliant European policies does Krugman believe the U.S. should now consider with favor? Among others, labor rules that discourage firing and incentives for “short-time work schemes”, where everybody puts in fewer hours. You’ve got to admire the effort it took to get from Nobel-quality economic analysis to the sort of stuff that economics professors around the world work so hard to drill out of their less talented freshmen.

If you want to increase employment by making each worker less productive, there are lots of ways to do it besides short-time work schemes. Instead of making them work half-days, we could require all manufacturing workers to work with one hand tied behind their backs. Or we could really handicap them by filling their brains with nonsense. Krugman to the rescue!
The latest in outsourced economic criticism, as a cost-saving measure in these difficult financial times.

Friday, November 13, 2009

Google commoditizes another layer

The news says that Google is offering “free Wi-Fi” at 47 airports through January 15. Google issued a press release Tuesday, posted it to their blog and created a new website FreeHolidayWiFi.com. (They’re also sponsoring Wi-Fi on Virgin America flights).

My wife was traveling through two airports Tuesday and got to try out the free Wi-Fi. She also saw the airport advertising displays giving Google credit for this “free gift.”

However, there’s a problem with this story. At least two of the 47 airports (SAN, SJC) are ones I regularly frequent that already have free Wi-Fi. They both had it last month and last year. So what does it mean to give us for free something we already had? Does that mean Google (or the airport) is going to take it away? Or does it mean the airport already kept its costs so low that it was relatively cheap for Google to buy sponsorship?

At a broader level, Google favors anything that commoditizes Internet access. An early example was when it began giving away free Wi-Fi in its adopted home town of Mountain View — and helping to end the mirage of paid municipal Wi-Fi networks.

Google wants Internet access that’s fast, ubiquitous and cheap, because in simple economic terms Internet access is strongly complementary to wasting a lot of time feeding keywords into the Google money machine. And of course cheap Internet access also grows the market for all of Google’s Internet services — which will be helpful should it ever find another profitable revenue stream beyond search.

Update, 8:30am: In response to a comment (below) about whether Google’s sponsorship is bad, let me better articulate my late night concerns.

I'm frustrated by a lack of transparency — not the first time with Google.

  1. What does it mean to "sponsor" Wi-Fi that is already free? How much is Google paying? Perhaps a newspaper reporter at the San Jose Mercury, San Diego Union or Las Vegas Sun will ask some pointed questions.
  2. Why is it advertising in the airports that it’s doing us a favor without noting that it was previously free?
  3. What happens when the sponsorship ends? I.e., does Google sponsoring it mean that our formerly free service is no longer free?
Sure, the AP story Tuesday about Google’s free airport Wi-Fi (and smaller efforts by Microsoft and Yahoo) said
The 47 airports include some, such as Mineta San Jose International Airport and McCarran International in Las Vegas, that already provide free Wi-Fi. Sponsorships help the airport keep the service free.
but “help” doesn’t explain whether this is defraying part of the cost, paying the entire cost, or helping airports make a profit — nor does it say what will happen when Google stops “helping.” Airports are almost entirely public entities, so more transparency here is certainly a legal requirement.

This is obviously a trial balloon. If Google pulls out and the service at these airports goes from free to paid, there will be a huge backlash — against the airports and against Google.

If I had to guess, I’d say Google is laying the groundwork for running ad-supported free Wi-Fi at all US airports, with these as pilot projects. Such free Wi-Fi would improve Google’s image and political standing with an influential segment of the public (frequent travelers), at a time when it faces increasingly pointed questions (e.g. in antitrust, Google Books) in its march to Total World Domination.

Thursday, November 12, 2009

We’ve already established what you are

I found it really odd to visit the NYTimes.com Wednesday and have my screen taken over by a Flash animation that turned out to be a paid ad. It wasn’t just the animation, or the intrusive ad picture on the right, but the fact that it took over what should have been the New York Times masthead.

As it turned out, the ad was an amusing spot featuring the Mac guy and PC guy arguing over Windoze users switching to Mac.

Once upon a time newspapers had policies that called for strict separation of editorial and advertising, but as times get tight — and newspapers enter a long secular decline — those boundaries have become blurred. (It also seems to be happening with radio.)

When I saw this, I was immediately reminded of the old joke attributed to GB Shaw, which ends with the punchline: “We've already established what you are, ma'am. Now we're just haggling over the price.”

Wednesday, November 11, 2009

"What were they thinking" department

With my undergraduate strategy classes, we just got through talking about mergers and acquisitions. This year, as in previous years, I made three points:

  1. Due to egos and compensation, CEOs are biased toward creating bigger but not necessarily better companies.
  2. Adding revenues without increasing profitability does not improve a company’s strategic position.
  3. When you are doing a major acquisition, there are two types of companies available: expensive good companies and lousy cheap ones.
I also point out that 99% of “mergers” are actually acquisitions, with a clear dominant partner. (NB: HP-Compaq).

In this morning’s Merc, Chris O’Brien writes about tech companies that are both unwinding acquisitions gone bad while continuing to make other acquisitions. As O’Brien notes, this is not a particularly good time in the economic cycle to get a good price for divestitures (although presumably new acquisitions will be relatively cheap).

A number of these acquisitions were ill-conceived from Day One. Some are dumber than others, as when eBay spent billions to buy Skype but some lawyer (or biz dev guy) forgot to get rights to Skype’s technology.

O’Brien summarizes some of the broader trends of deals gone bad:
Bryan McLaughlin, a partner with PricewaterhouseCoopers' Transaction Services, … said that in the third quarter, which ended in September, about 40 percent of the acquisition deals involved some kind of divestiture, up from 25 percent for the same period one year ago. That is, companies weren't buying smaller, stand-alone outfits; they were buying essentially the castoffs of other companies.

And a recent survey by Pricewaterhouse found that 69 percent of the 215 companies polled expected divestiture activity to either stay the same or increase over the next year.

Many of these divestitures are the fruit of ill-considered acquisitions made over the past few years. This failure rate should come as a surprise to no one in the board room or executive cubicle. A few years ago, McKinsey & Co. published a study indicating that 70 percent of mergers failed to generate the expected returns. Hope, however, seems to spring eternal in boardrooms as companies keep making deals.
Small companies continue to get acquired as an exit strategy, and often these are a cheap way to gain access to talent and technologies. The problems seem to be around the big chest-thumping acquisitions that make headlines for a company and its CEO.

Interestingly, once a company overpays for an acquisition, there’s not much of a reason to divest it unless you can find some greater fool who’s also going to overpay. So the specter of firms divesting their acquisitions suggests they either bought a bad company, or at least the claimed synergy used to justify the acquisition never materialized.

Update 1:30pm: After I posted this article (and after the bell), HP announced it’s spending $2.7b to acquire commodity networking company 3Com, at a 39% premium to its closing price and more than 2x trailing revenues.

Tuesday, November 10, 2009

Success in business is not success in politics

Michael Bloomberg has done it, but Jon Corzine failed miserably. Both Carly Fiorina and Meg Whitman hope to do it: translate business success into political success.

Michael Skapinker of the FT has a provocative column this morning about why this often fails — a column that says more about the failings of the political system than of the leaders.

He saved the best for the final three paragraphs:

The real difference between business and government is that government does not go out of business. When a company boss announces a strategy, the implicit message is “we need to do this to survive”. Employees may think the strategy misguided but they realise that their jobs depend on their leaders getting things right.

Civil servants know that, unless their activity is privatised, their departments will still be around years from now. Some may lose their jobs from time to time but seldom on the scale of the private sector and not without a far bigger public fuss.

I recall talking to a neighbour who had moved from corporate life to a government-owned organisation. He was seething at his new charges’ unwillingness to accept that they had to change. What, I asked him, would happen to them if they didn’t? Not much, he admitted. That is what makes life so hard for the corporate executive who wants to sort out the country.
It’s hard to imagine a more telling indictment of the failure of government. “Public servants” are paid with taxpayer funds, yet they assume they can outwait leaders pushing change. This breeds an utter lack of accountability — or worse.

Based on years of research into government corruption, economist Robert Klitgaard developed a simple formula:
monopoly power + discretion - accountability = corruption
So a government agency that doesn’t face consequences for failure — whether an administrative agency or my pension fund — is one that is (or will become) at some level corrupt, working for the benefit of its employees and not the society that pays them.

No organization or organizational unit in society should think that it will (or deserves to) exist forever. Even the Catholic church — perhaps the oldest continuously operating human institution — is closing branches and having to remake itself in order to continue to attract resources. Certainly my own employer (like so much of public higher education) is scrambling to find ways to become more effective and efficient as politicians allocate ever-decreasing amounts of tax revenues.

Rather than give up, I hope this means we will get more experienced business leaders seeking political careers — pushing for deeper and lasting reforms to make government work better. (Someone like a Fortune 500 CEO rather than a movie star). I think that’s change everyone can believe in, except of course those bureaucrats who rightfully fear losing their lifetime sinecure.

Of course, these executives would first have to get elected. (Most) public employee unions will fight hard to block election of any real reformer. However, arrogant ex-CEOs often shoot themselves in the foot, as when Al Checchi (of Northwest Airlines) lost in the 1998 Democrat primary for California governor. If he’d won, we might have had real reform for the dysfunctional mess in Sacramento and no opening for the governator — or we could have ended up with just another Jon Corzine.

Monday, November 9, 2009

It was 20 years ago today

[NYT front page]The Berlin Wall fell 20 years ago today, but you wouldn’t know it from US news coverage (or the priorities of the Administration). In my local paper, the lead story is about a 40-year-old Palo Alto guitar shop. (Sure, this is a story not otherwise commoditized by Google News, but half the front page?) Perhaps taking her cue from the administration, even the gray lady herself didn’t think it worth mentioning on her front page.

To its credit, LATimes.com listed it among the hot topics, along with the Lakers and the latest movie grosses, including a touching story about a mom born in 1961 (when the Wall was raised) and her son born in 1989. And the LAT website replated its lead (is it possible to replate a website?) to highlight the Berlin bash marking the festive occasion.
This seems like madness: 200 or 500 years from now, the fall of the Berlin Wall will be listed as one of the 4 or 5 most important events of the 20th century, along with D-day and perhaps Armistice Day, the death of Kaiser Wilhelm or the invasion of Czechoslovakia.

Just to remind the American people (and politicians) of the milestone, Angela Merkel became only the second German leader to address a joint session of Congress. She used her speech in part to recall her own doubt that she would ever escape the wall:

In 1957 I was just a small child of three years. I lived with my parents in Brandenburg, a region that belonged to the German Democratic Republic (GDR), the part of Germany that was not free. My father was a Protestant pastor. My mother, who had studied English and Latin to become a teacher, was not allowed to work in her chosen profession in the GDR.

Not even in my wildest dreams could I have imagined, twenty years ago before the Wall fell, that this would happen. It was beyond imagination then to even think about traveling to the United States of America let alone standing here today.

The land of unlimited opportunity – for a long time it was impossible for me to reach. The Wall, barbed wire and the order to shoot those who tried to leave limited my access to the free world. So I had to create my own picture of the United States from films and books, some of which were smuggled in from the West by relatives.

What did I see and what did I read? What was I passionate about?

I was passionate about the American dream – the opportunity for everyone to be successful, to make it in life through their own personal effort.

I was passionate about all of these things and much more, even though until 1989 America was simply out of reach for me. And then, on November 9, 1989, the Berlin Wall came down. The border that for decades had divided a nation into two worlds was now open.
She thanked America for its role in bringing freedom to Central Europe:
I thank the American and Allied pilots who heard and heeded the desperate call of Berlin’s mayor Ernst Reuter as he said “People of the world, … look upon this city.”

For months, these pilots delivered food by airlift and saved Berlin from starvation. Many of these soldiers risked their lives doing this. Dozens lost their lives. We will remember and honor them forever.

I thank the 16 million Americans who have been stationed in Germany over the past decades. Without their support as soldiers, diplomats and generally as facilitators it never would have been possible to overcome the division of Europe. We are happy to have American soldiers in Germany, today and in the future. They are ambassadors of their country in our country, just as many Americans with German roots today act as ambassadors of my country here.

I think of John F. Kennedy, who won the hearts of despairing Berliners during his 1963 visit after the construction of the Berlin Wall when he called out to them: “Ich bin ein Berliner.”

Ronald Reagan far earlier than others saw and recognized the sign of the times when, standing before the Brandenburg Gate in 1987, he demanded: “Mr. Gorbachev, open this gate … Mr. Gorbachev, tear down this wall.” This appeal is something that will never be forgotten.
But after winning the Cold War, America has moved on. The people of Central Europe remember America’s leadership, but America (or at least the current ruling party) wants to forget. The president flew to Copenhagen to lobby for one city’s Olympic bid but snubbed the Germans, declining to fly to Berlin to meet with America’s most important allies (or hear the U2 concert).

Meanwhile, many problems lie unresolved, not the least of which are those of the former Soviet republics (notably Ukraine and Georgia), which are not quite free, not quite vassals of their powerful neighbor. The FT (and other British papers like the Guardian) have run many stories over the past week engaging these ideas: the ongoing challenges, the residual instability of the smaller (or less independent) countries of central Europe, and what might be done in the future.

The problems of Europe are not all solved, and they will come back again in a way that impacts Americans. In the meantime, Slavic studies are considered a marginal and unimportant field, the way that Arabic studies were a decade ago or Chinese studies 20 years ago.

Note on title: I realize the lyrics to Sgt. Pepper aren't exactly appropriate, but there's something appealing about the idea of the normally dour Merkel belting out the Beatles. Besides, the FT had already used “When the wall came tumbling down”.

Sunday, November 8, 2009

Carly reconsidered (I)

Carly Fiorina announced her candidacy for US Senate on Wednesday, two days after my undergraduate strategy students presented their analysis of the HP-Compaq merger. Both events have caused me to reflect on her qualifications as a CEO and a politician — and to admit that I may have been wrong about the former.

Ever since she made noises about entering politics, I’ve been seriously conflicted. On the one hand, I had a front row seat as she single-handedly worked to dismantle “The HP Way.” On the other hand, I would certainly agree with her politics more than many politicians who might run for office here.

Compared to joining HP just before the bursting of the dot-com bubble, Fiorina’s timing for her inaugural run looks to be perfect, both on the issues and on personalities.

Incumbent Barbara Boxer has a near-perfect record as judged by her “liberal quotient” compiled by the Americans for Democratic Action, crossing the ADA only once from 2003-2008 — by voting for George Bush’s trillion-dollar Medicare prescription drug benefit. If last week’s elections are any indication, 2008 could mark the high-water mark for the Democrat party’s liberal wing.

Fiorina has positioned herself as an economic conservative and social moderate (particularly on abortion), perfect for reaching independent and Republican women. My wife is a big fan after reading her memoir, despite having lived through Fiorina’s fight over The HP Way. If my wife and sister-in-law are any indication, Fiorina will get the enthusiastic support of educated soccer moms (both stay-at-home and career-oriented) if she can articulate a coherent story about why the government should live within its means the way that private citizens have to do.

On the one hand, Fiorina’s reputation would be a liability running against a well-liked figure. As a CEO, Fiorina was a polarizing figure — criticized as being abrasive and imperious — even if that might be less noticeable in the US Senate, where every Senator considers him/herself one promotion away from becoming president.

On the other hand, Boxer has shown her own imperious side, whether in dealing with the US military or as a committee chair. When it comes to egos, Fiorina may have met her match challenging a politician who — as columnist Dan Walters put it — “has written two novels with a fictional version of herself as the heroine.” Meanwhile, Fiorina’s recent (thus far successful) fight against breast cancer could serve to humanize her.

In terms of managerial and economic aptitude, Fiorina is grossly overqualified. Law school graduate senators can’t manage their way out of a box, let alone successfully micromanage a $14 trillion economy. Whatever her mistakes, Fiorina credibly initiated and supervised one of the most difficult mergers in US history, and thus should be able to manage a US Senate office of 80 boot-lickers as well as understand broader issues of job creation and economic growth.

I’ll be curious to see whether Boxer underestimates her opponent. Fiorina is one of the smartest and most disciplined public speakers I’ve ever heard, while Boxer seems to generally assume that she’s preaching to the choir (perhaps because she began her career in a safe Marin County district). An open debate between the two women would play to Fiorina’s strengths.

My sense is that if the election is on economic issues, then (barring some unexpectedly rapid economic recovery) Fiorina should win easily. However, Boxer will try to make the election a referendum on Fiorina (and vice versa), and the election will go to whoever is most successful in that regard.

The other thing she’ll need is a campaign staff. California Republicans once had an impeccable brain trust that won all but two gubernatorial elections from 1966 to 1994. But the party has been unable to win statewide elections since then, the governator notwithstanding. And business execs-turned-politicians have a terrible track record of cultivating and heeding the advice of seasoned campaign professionals.

Photo from Fiorina’s campaign tour taken from her campaign website.

Saturday, November 7, 2009

Definition of insanity

From today’s Wall Street Journal:

A familiar definition of insanity is to keep doing the same thing and expecting different results. So in the wake of yesterday's report that the national jobless rate climbed to 10.2% in October, we suppose we can expect the political class to demand another "stimulus." Maybe if Congress spends another $787 billion in the name of job creation, it can get the jobless rate up to 12% or 13%.

It's hard to imagine a more complete repudiation of Keynesian stimulus than the evidence of the last year's job market. We've now had two examples of such stimulus—President Bush's $160 billion effort in February 2008 and President Obama's mega-version a year later—and neither has made even the smallest dent in employment. As the nearby chart shows, Mr. Obama's economic advisers sold the stimulus by saying it would keep the jobless rate below 8%.

The White House says the stimulus created as many as one million new jobs, but this is single-entry economic bookkeeping. … But such spending isn't free. Every dollar in new government spending is taxed or borrowed from the private economy, which might have put it to better use.

If the government takes $1 from Paul, who would have invested it in a new business, and gives it to Peter, who spends it on a new lawn mower, the government records it as a net gain for economic growth via consumption. But the economy is hardly more productive as a result. …

The policy lesson here is for both political parties. President Bush's cave-in to Democrats in 2008 meant that there was no debate in Washington over policies that might have produced a much better stimulus at that early point in the recession. Like so much else in Mr. Bush's final year, he lost his policy bearings and forgot the lesson of 2003: A stimulating tax cut needs to be immediate, permanent and at the margin of the next dollar earned. Instead, for the last two years, the U.S. and most of the world have been pouring money into a Keynesian cul-de-sac.
The latest example of outsourcing economic criticism in these hard times.

Friday, November 6, 2009

New York adopts an industrial policy

I didn’t get why the NY attorney general is suing Intel, other than it’s the same publicity-seeking path his predecessor took to get elected governor. Antitrust enforcement is a Federal issue, whether for the US DoJ/FTC/FCC or the Eurocrats in Brussels.

However, blogger Geoffrey Manne suggests that NY has an interest in hurting Intel and helping AMD, given that AMD is talking about building a $3 billion plant in upstate New York. For some reason, I thought the idea of Federalism and the Constitution was to prevent inter-state trade wars, but I guess both have gone out the window along with original intent.

Like most lawyer-politicians, the AG’s understanding of business and economics is dubious at best. FT’s Lex aptly summarized the likely effect:

Share prices for Intel and competitor Advanced Micro Devices barely reacted. The problem for AMD, which is set to face Intel in a Delaware courtroom in March, is that legal victories offer only consolation, and perhaps the chance of a pay-out to help pay down debt. The period when the company had a clear technological advantage and opportunity to make a dent in Intel’s market share of about 70 per cent has passed.

[E]ven if, as alleged, Intel is shown to have forced customers to guarantee market share levels in return for cash rebates, the structure of the industry will probably remain unchanged.

Thursday, November 5, 2009

Maybe Google is serious about OSS

With its open source release today of its Closure JavaScript tools, Google is starting to suggest that it may eventually become a good open source citizen.

The newly released tools include a JavaScript compiler, a very broad JavaScript class library, and Java/JavaScript-friendly templates. All are released under the Apache 2.0 license, a very permissive license that essentially lets outsiders do whatever they want (rather than a viral or semi-viral license like EPL or LGPL).

Sure, Google has released other code before. Some of it has been experimental (beta-quality) example code. Much consists of libraries (or example code) to get at Google’s APIs and thus provide complementary products that increase the value of Google’s services. Even Android would fit in the latter category — albeit at a larger scale — since it helps make handsets that access Google services.

However, the Closure Compiler, Closure Library and Closure Templates are production-level code that has been used for years. These are the same tools that Google uses to implement Gmail, Google Docs and Google Maps. Theoretically, a competitor could use these tools to make web-based services that compete with Google: Microsoft and Yahoo are unlikely to do so, but I would be surprised if some of the foreign portals (without large internal R&D staffs) don’t quickly adopt the technology.
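To make concrete what distinguishes these tools: the Closure Compiler reads JSDoc type annotations to type-check code and to enable its most aggressive renaming and inlining. A minimal sketch is below — the function and values are made up for illustration, and the snippet runs as ordinary JavaScript even without the compiler:

```javascript
/**
 * Total price for an order. The JSDoc annotations below use the Closure
 * Compiler's type syntax: the compiler uses them to type-check call sites
 * and to optimize more aggressively in its advanced mode, while an
 * ordinary JavaScript engine simply ignores them as comments.
 * @param {number} unitPrice Price of one item.
 * @param {number} quantity Number of items ordered.
 * @return {number} The order total.
 */
function orderTotal(unitPrice, quantity) {
  return unitPrice * quantity;
}

console.log(orderTotal(19.99, 3));
```

Because the annotations live in comments, teams can adopt the tooling incrementally without changing how the code behaves in the browser.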

Of course, Google’s not doing this out of the goodness of its heart, and the ultra-secretive company has not said why it is releasing this code now. Perhaps it doesn’t think rivals will use it, perhaps it has a huge head start, or perhaps it’s just non-strategic commodity code. Conversely, perhaps it’s trying to shape de facto industry standards for JavaScript (now that Netscape and AOL are gone), either to influence complementors or to assure a supply of well-trained newbie programmers.

Still, this is only one step of openness from a semi-open company leading the race for Total World Domination. Even if — as blogger Matt Asay and a Google spokesperson argue — Google is a leading (if not the leading) contributor of open source code, that doesn’t mean it has become a good open source citizen.

Revealing code is only one part of sponsored open source openness, which also includes shared authority over the future of the code and governance of the community, providing both transparency and permeability to outsiders. This has proven impossible for many companies — both big, powerful, autocratic ones used to throwing their weight around, and smaller so-called “open source” startups that use OSS as teaseware for their real (commercially licensed) product.

IBM was the first and one of the few companies to prove it was serious about open source — but then it could sell hardware and services if the OSS commoditized its software offerings. Its Eclipse Foundation remains the largest and most successful multi-vendor sponsored open source community.

Five years ago, HP had a similar business model and comparable principles (if not level of investment) in Linux and other open source efforts. However, under its new penny-pinching CEO it seems to have cut back on many R&D and community-oriented efforts, and the former Linux Systems Division (later Open Source Business Office) now seems to be just a web page.

Apple isn’t normally considered an open (or open source) company, but its WebKit library is perhaps the most successful and most cooperative firm-sponsored effort after Eclipse. Google, Nokia and now even RIM are working to make the library suitable for the industry’s wide range of mobile devices.

Meanwhile, Nokia got off to a bad start last year — following the model that Sun used for so long — with a nominally open project that’s completely controlled by the sponsoring company. Maybe both will get better someday (and to be honest I haven’t checked on Sun recently), but letting go is clearly hard for big companies to do.

So will Google ever let go of strategically important code or projects? Or will it always require that it retain control of such efforts? My crystal ball can’t see through all the walls of secrecy erected by the Monster of Mountain View, so I refuse to hazard a guess. But I suppose anything’s possible.

Hat tip: Matt Asay’s Twitter® feed.

Wednesday, November 4, 2009

Scalable services

IT-enabled scalable services such as Google and other SaaS vendors are quite different from the millennia-old model of labor-intensive services. This was the point I made in an open innovation talk earlier this week — a point I have been mulling over ever since.

Near the end of the talk, I discussed business models and the general shift away from mass production towards services. Below is a slightly expanded version of one of my slides.

Model              | Example                                | Basis
Personal services  | Blacksmith, barber                     | Personal skill, location
Mass production    | Cotton gin, Springfield rifle, Model T | Design cost, manufacturing, economies of scale, cost of goods
Information goods  | Content: WSJ.com article, iTunes song† | Design cost, economies of scale, cost of license
Automated services | SaaS: Google map, search, mail         | Design cost, economies of scale

† excluding information goods delivered in tangible form.

I won’t claim it’s terribly profound, but the act of making the table forced me to think about some implications.

One point was an old one — there are “services” gurus who confound information goods and services to make their area of interest seem more important than it is (even though it would be very important without the exaggeration). Stamping out an identical information good over and over again is not delivering a service — it’s selling an intangible product, as is well discussed in the book Information Rules. (Normally we’d think of this as a $0-COGS product, but certainly an important class of information goods is sold based on royalties.)

Also, in this intuitive taxonomy I want to hold aside “services” involving the selling and renting of tangible goods, since much (or all) of the value comes from the good and not the customized personal experience. Services involving money also seem very different, even though they’re an important recent area of open innovation research.

Some businesses include a combination of products and services. With my MBA students Wednesday night — talking about disruptive innovation — I noted that for some low-priced commodity products, the cost of providing any personalized service (such as a tech support call) will destroy all the margins from the product.

In my talk, I also briefly mentioned the “Pharma 2010” view of systems biology by PwC Consulting (now part of IBM). If I’d had time to track down the PDF, I would have put up the opening paragraph of the PwC study (instead of paraphrasing it):
In 2010, the pharmaceutical industry (Pharma) will not only make white powders; it will sell a variety of products and therapeutic healthcare packages that include diagnostic tests, drugs and monitoring devices and mechanisms, as well as a wide range of services to support patients. Companies that learn how to make “targeted treatment solutions”, as we call them, will deliver bigger shareholder returns than they have ever delivered before.
Even this hastily drawn, over-simplified taxonomy communicated the point that I thought was important for the middle managers to understand: don’t think of services the way we used to do — or perhaps the way Accenture or EDS or IBM Global Services does — as labor-intensive, low-margin businesses. Instead, think of them the way Google does — high up-front cost, positive returns to scale, scalable indefinitely. These 21st-century business models are completely different from the Bronze Age services model that we normally consider, and are quite feasible for companies that have access to unique and valuable knowledge that can be delivered electronically.
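The scale economics behind this contrast can be sketched with a toy cost model (all numbers are purely illustrative): a labor-intensive service scales its costs roughly linearly with customers, while an automated service amortizes a large up-front design cost across an arbitrarily large customer base.

```javascript
// Average cost per customer: up-front design cost amortized over the
// customer base, plus the marginal cost of serving one more customer.
function unitCost(fixedCost, marginalCost, customers) {
  return fixedCost / customers + marginalCost;
}

// Labor-intensive service: low fixed cost, high per-customer labor cost.
// Automated (SaaS) service: high fixed design cost, near-zero marginal cost.
for (const n of [1e3, 1e6]) {
  const labor = unitCost(1e4, 50, n);
  const saas = unitCost(1e7, 0.01, n);
  console.log(`${n} customers: labor $${labor.toFixed(2)}, automated $${saas.toFixed(2)}`);
}
```

At small scale the automated service is far more expensive per customer; at large scale the relationship inverts — which is why this model only pays off for services that can scale indefinitely.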

In making this argument, I was dimly recalling (but had no time to look up) what Randy Stross said in the talk he gave last year about his book Planet Google. The oral presentation emphasized the ideal pursued by Sergey and Larry, in their zealous embrace of algorithmic solutions: humans shouldn’t touch anything; instead, everything important should be delivered by the computer, not by manual labor.

Using the index and browsing my paper copy of Randy’s book (now in paperback), I was unable to find the relevant passage (which I thought would be in Chapter 3, “The Algorithm”). However, thanks to Google Books, I did find a discussion of Sergey and Larry’s scalable business model in Chapter 2 (pp. 48-49).

Tuesday, November 3, 2009

Nook vs. Kindle: next wave of Android disruption

James Fallows has a detailed comparison of the Nook and Kindle. Clearly, by using ePub, Barnes & Noble is challenging Amazon’s proprietary, vertically integrated content distribution system. One interesting angle mentioned by Fallows is that Google Books is making its existing online (out-of-copyright) books available in ePub format, thus increasing the content available for the Nook. (For now, Kindle owners have to convert such books to Kindle’s proprietary mobi format, presumably because Amazon doesn’t want other formats on its device.)

The Nook is based on the Android 1.5 platform and thus will run Android apps. B&N is mealy-mouthed on whether it will provide an SDK. (First it needs to enable the Android WebKit browser.)

Still, the Nook is the highest-profile (and probably the highest-volume) example to date of what my friend Bill Weinberg calls “Android Beyond Mobile.” By solving some of the UI/SDK problems of embedded Linux — without adding a lot of proprietary, royalty-bearing layers on top of it — Android will be an attractive platform for many different single-purpose mobile devices.

Who knows, maybe there will even be GPS navigation devices based on Android. TomTom and Garmin shares plummeted last week when Google announced free turn-by-turn navigation in Android 2.0, suggesting an opportunity for disruptive market entry: rivals could build Android-based car navigation devices without incurring the R&D costs that TomTom and Garmin have borne.

Of course, there’s no reason Android’s ambitions are limited to mobile. Set-top boxes, anyone? Competitors to Windows Media Center and Apple TV?

Hat tip to TeleRead, a very good blog on e-books.