Monday, March 29, 2010

E-reader smackdown!

The iPad is coming Saturday as a combined e-reader, Internet tablet, oversized iTunes Store client and jumbo iPod Touch game platform. Its most direct competitors seem to be the Amazon Kindle and the Android-based Barnes & Noble Nook.

Who will win this e-reader smackdown? In handicapping the efforts of Amazon, Barnes & Noble and Apple to win US users, I can think of a number of possible factors:

  • Hardware device form factor, weight, screen, speed, battery life. Advantage: unknown.
  • Software features, ease of use. Advantage: Apple
  • Content variety and depth. Advantage: for now, Amazon
  • Merchandising promotion of content. Advantage: unknown (all three are strong)
  • Price of the hardware, the content. Advantage: unknown (price wars have yet to begin)
  • Ecosystem of non-book add-ons. Advantage: Apple, with B&N (via Android) possibly catching up
  • Physical distribution for cross-promotions. Advantage: Both Apple and B&N have a strong retail presence, but only the B&N stores are about merchandising content.
  • File format, potentially reducing switching costs. Advantage: unknown, and unknown if anyone will care.
Obviously much of this is a systems play, with both end-to-end competencies and also the ongoing maintenance and development of a platform and platform-based products. Amazon has online services skills second only to Google, so the back-end is the company’s clear strength.

On the other hand, only Apple has managed platform strategies before — and it’s done several very, very well. (Let’s ignore the Newton and Apple /// for now.)

Many startup companies are trying to make stand-alone e-readers, but none of these will be able to make the end-to-end systems. The stand-alone play was tried a decade ago with the Rocket eBook — it didn’t work then, even before consumers had three viable systems to choose from.

However, at least two other (self-imagined) systems integrators and erstwhile Apple rivals are conspicuously un-aligned. Microsoft’s efforts to create a music store (think Zune) have failed thus far, but its hardware partners like HP and Dell certainly want to sell tablets — probably with more to offer than just the Flash-based websites that Apple won’t support.

The other conspicuous omission is Nokia (and perhaps Samsung and some of the Chinese handset makers). They’re not going to sit idly by as Amazon and Apple swoop up customers and create switching costs, but I don’t see how they can realistically create an alternative on their own. Will Nokia try to add books to Ovi? Will its competitors get the Wholesale Applications Community to create a bookstore too? None of these seem viable.

There will be entry, exit and consolidation. Some of the startups will die. There’s no reason to think that Amazon and B&N will make hardware forever, so perhaps some startup’s VCs will merge their portfolio company with a Kindle or Nook spinout.

Perhaps Palm’s investors will merge what’s left of the company with one of these companies, and then offer it as an open-architecture e-reader supplier for the libraries of other publishers or retailers. The Nook is being developed in Palo Alto, so presumably it is populated with veterans of all the local platform companies (Apple, Palm, Sun).

The one clear opportunity is Barnes & Noble going abroad. Although their sales skew towards the US, both Apple and Amazon think of themselves as global companies and are likely to continue their go-it-alone strategy abroad. Barnes & Noble will have its pick of partners in Europe and Japan if it wants to offer its hardware, back-end systems and ecosystem to retailers that cannot realistically establish their own systems abroad.

Sunday, March 28, 2010

Chicago's contribution to economic freedom

There is a certain irony that the city that is today synonymous with corrupt machine-style politics — thanks to the late Richard J. Daley — also brought us the 20th century’s most compelling and influential arguments for economic freedom, thanks to the assembled intellectual might at the University of Chicago’s school of economics.

Milton Friedman may still be dead, but his co-conspirator (and fellow Nobelist) Gary Becker is still very much alive, teaching at Chicago and visiting Stanford’s Hoover Institution.

Fellow Hoover Fellow Peter Robinson interviewed Becker for the WSJ, which ran Robinson’s column (alas, not the interview) on Saturday. Much of the column focused on how ObamaCare could be (or could have been) fixed to make the healthcare system more — rather than less — efficient.

More generally, Becker laments the difficulty of getting voters and policymakers to make good economic decisions:

"Of course that doesn't mean there isn't any systematic bias toward bad policy," he says. "There's one bias that we're up against all the time: Markets are hard to appreciate."

Capitalism has produced the highest standard of living in history, and yet markets are hard to appreciate? Mr. Becker explains: "People tend to impute good motives to government. And if you assume that government officials are well meaning, then you also tend to assume that government officials always act on behalf of the greater good. People understand that entrepreneurs and investors by contrast just try to make money, not act on behalf of the greater good. And they have trouble seeing how this pursuit of profits can lift the general standard of living. The idea is too counterintuitive. So we're always up against a kind of in-built suspicion of markets. There's always a temptation to believe that markets succeed by looting the unfortunate."
Either Robinson or Becker is too kind to mention the converse problem: the public tends to underestimate the tendency of politicians to act in their own self-interest, rather than in the public interest — although a year ago Becker noted the unjustifiable pork-barrel spending in the stimulus bill.

Economist David Henderson last year coined a term for those who impute such good motives, despite evidence to the contrary:
What should we call people who seem to regard government as the solution regardless of the evidence? I propose the term "government fundamentalists."

Economist Jeff Hummel recently captured the essence of government fundamentalism this way: If markets don't work, have government intervene. If government intervention doesn't work, have government intervene further.
Thanks to Becker, Friedman, and others, we have intellectual theory (and evidence) that establishes the value of free markets. Now we just need more voters to appreciate that value.

Friday, March 26, 2010

Time for TiVo to say Ta Ta!

We discussed the prospects for TiVo in class last week, as I’ve done most semesters for the past few years with students in my technology strategy MBA class. As the company moves into its final act, the picture is becoming quite clear.

As in most semesters, there were some highly loyal TiVo fans in the room. TiVo had an early and unusually high adoption rate here in Silicon Valley. However, loyalty is not enough. My grad school classmate Hope Schau (now at Arizona) published several papers (including an oft-cited “A” journal paper) on how Apple Newton owners were highly loyal — but that didn’t make it a viable business.

As we discussed TiVo’s “challenges” — euphemism for problems — it seems to me that TiVo faced a perfect storm, buffeted by old substitutes, new substitutes, and commodity low-cost rivals.

The old substitute is the VCR: that’s how I’m watching 24 this semester while teaching my Monday night honors class. Eventually the old substitute will die off, but then along came the new substitutes. There’s Fox.com (ABC.com, NBC.com, etc.), Hulu, and the iTunes and Amazon stores. There’s even the build-your-own computer solution (e.g. Windows Media Center), which right now is no appliance but could come back to haunt TiVo some day.

Then there are the commodity rivals, made by Asian manufacturers and sold by the cable and satellite TV companies. The cable companies are also promoting an alternate video-on-demand solution to replace time-shifting of mass-market content.

The industry has high economies of scale — particularly with the online program guide — but the competing solutions have enough scale that TiVo’s market share lead no longer serves as an entry barrier. There is also the problem that broadcast television — the content that DVRs time-shift — is clearly a declining distribution medium, if not a declining source of original content.

With all these substitutes, there really wasn't enough time to get market awareness and reduce prices to reach the mass market before the DVR product became a commodity. Yes, TiVo had a better run than Ampex, but it’s yet another example of pioneer disadvantage — the pioneer pays the cost of creating a market and doesn’t reap the benefits. (Peter Golder and Gerry Tellis demonstrated many examples of this almost 20 years ago).

Yes, TiVo got a nice stock bounce out of a favorable appeals court decision in the EchoStar patent lawsuit. But, IP or not, TiVo is boxed in on all sides, like a farmer who finds urban sprawl has paved over his formerly bucolic neighborhood.

The time for fighting the good fight on a point product has long since passed. As with other “Silicon Valley” companies (though it’s not clear the label still fits TiVo), the value is created by diversified rivals who can cross-promote and integrate various complements and substitutes.

While the exit is almost certainly selling the company, I can’t predict who would want to buy at a premium to current prices: whether a consumer electronics company, set-top box company, cable TV company, EchoStar, or even Microsoft (which did, after all, buy WebTV). But I think it’s time for TiVo shareholders to ask what the end game is, because the alternative to an orderly exit is a disorderly one, not living happily ever after as an independent company.

Thursday, March 25, 2010

Field of 4G Dreams

I’m not at CTIA, nor have I been able to keep up with the flurry of coverage. However, an article by Mike Freeman of the San Diego Union-Tribune caught my eye — not least because of the skeptical tone from the news bible of Qualcomm’s home town.

Freeman refers to 4G as “the next big thing,” a fixation nicely captured by the Gartner Hype Cycle or by Michael Lewis’ 1999 dot-com chronicle The New New Thing.

Freeman quotes a number of skeptical analysts who (like me) don’t think mobile phone subscribers will increase their monthly bills (i.e. ARPU) for these new networks. Two wonderful snippets:

There’s almost a “Field of Dreams” quality behind the effort — “If you build it, they will come” — with the mantra that additional bandwidth will lead to new applications and services that will justify the investment.
and:
“What is the killer app for this stuff?” asked Michael King, an analyst with technology research firm Gartner. “Why do I need 10 megabits per second to my handset? I’m not going to pay you a heck of a lot more for streaming video.”

The business model, he said, is “to be determined.”
He also quotes Qualcomm VP William Davidson predicting the long transition period during which 3G and 4G will co-exist:
“I think I’ll be retired and we’ll still have 3G in the networks in the U.S. because the economics of covering this vast geography with a technology that is really no more efficient. … I just don’t see it happening.”
So as a useful palliative to the usual 4G hype, it’s recommended reading.

Wednesday, March 24, 2010

Slew of new Android phones

This week, LinuxDevices reported a flurry of new Android phones, including

  • Motorola i1, the first Android phone for the Sprint Nextel (i.e. Nextel) iDEN network;
  • Dell Aero, the AT&T variant of the Dell Mini 3i being offered in China;
  • HTC Evo 4G, the first Android phone to support the Sprint/Clearwire WiMax network;
  • Google Nexus One, coming to Sprint — the first CDMA carrier to offer it; and
  • Kyocera Zio M6000, the first phone from Kyocera (which made the early 6035 Palm OS smartphone). The phone is rumored to be bound for the Cricket discount cellphone service.
The proliferation of Android phones is good for Google and good for carriers, but the fragmentation of the Android customer base will make it increasingly difficult for handset makers to earn a return on their Android investments, unless (as HTC is doing) they engage in product proliferation.

Tuesday, March 23, 2010

Smartphones for the developed and developing world

On Monday morning (Sunday night PDT) I gave a presentation on the evolution of the smartphone market via a videoconference to the Telecom Regulatory Authority of India, that country’s answer to the FCC.

For a half-day workshop organized by Rafiq Dossani of Stanford, I was one of two remote speakers from Silicon Valley (the other being Greg Rosston of SIEPR, talking about an FCC-funded study of broadband adoption). Four other speakers were live at the TRAI headquarters in New Delhi.

The talk drew upon my iPhone study, the Symbian study, and the (currently underway) study of Android. The slides are up on SlideShare if anyone wants to see them.

A few slides might be new to blog readers. I quoted Cisco’s prediction that mobile data traffic will double every year from 2010-2014. As smartphone share rises — according to Tim Bajarin — to 65% of new US sales (by 2012) and of global sales (by 2015), networks will be straining to keep up.
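To put that prediction in perspective, here is a quick compound-growth sketch (my own back-of-the-envelope arithmetic, not Cisco’s figures):

```python
# If traffic doubles every year, cumulative growth compounds fast.
def traffic_multiple(doublings: int, growth: float = 2.0) -> float:
    """Total traffic relative to the starting baseline after
    the given number of year-over-year doublings."""
    return growth ** doublings

# Four annual doublings between 2010 and 2014 would leave networks
# carrying 16x the starting load.
print(traffic_multiple(4))  # 16.0
```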

As Irwin Jacobs said at CTIA on October 8, there are no significant spectral improvements coming after 3G, so remaining mobile Internet capacity increases will come from more base stations or more spectrum. The alternative is to shift traffic to Wi-Fi (as European carriers do) or reduce demand by variable-use pricing. (Paul Jacobs wants people to use MediaFLO instead). Still, it’s hard to see how the mobile bandwidth can keep up in the next decade with both the increase in home broadband speeds and the increased supply/use of online video.

In considering the big five cellphone vendors and their attitude towards Android, Motorola (#4) is clearly enthusiastic, Nokia (#1) is opposed, and the other three are in between. For now, I think Samsung’s (#2) infatuation with its own Bada platform makes it unlikely to do more than dabble in Android, while both LG (#3) and Sony Ericsson (#5) could join Motorola (and HTC) as Android promoters if it gets sufficiently popular (or their situation gets sufficiently desperate).

My conclusion was that at least four smartphone platforms will survive for the next five years: BlackBerry, iPhone, Android and whichever platform Nokia uses (Symbian S60 or MeeGo).

While researching the talk was instructive, I learned more from the questions from Dr. J.S. Sharma and the other TRAI officials and guests. I was asked whether four platforms meant too much fragmentation — to which I reiterated my earlier blog post that platform competition is a good thing. Three or four is a good number, providing competition while preserving critical mass. Two (and certainly one) is not enough to engender competition.

Another question was about the spread of smartphones to India — which felt really odd, given that I’ve never set foot in the country. (Unlike Japan, China, Germany, the U.K., etc.)

However, to me the gating factor — given that Android is open source — is the availability of a cheap, high-speed main CPU. The minimum for a decent Android phone seems to be about 600 MHz, so when such CPUs get down to the price of existing featurephone chips, India-market smartphones should become common. (Will Apple chase this smartphone market? Nokia?)

The last question came from Anil Kripalani, a former TIA chairman and Qualcomm senior VP turned entrepreneur. He asked what the impact of the iPad and other such devices would be upon US mobile data demand and capacity.

I had to admit that I’d not considered that. Today’s book readers (e.g. Kindle, the Android-derived Nook) don’t do full-motion video, but the iPad will. Such devices will be a much more practical way for teens and young adults (and sports addicts) to watch Hulu, YouTube and other video clips. Perhaps this traffic is even more likely to be shifted to Wi-Fi hotspots. If not, it will further exacerbate bandwidth shortages in the US, given that (IMHO) any further reallocation of spectrum for mobile use is very unlikely in the near term.

Monday, March 22, 2010

Healthcare reform and entrepreneurship

For about 2 hours today, I was scheduled to appear on the local TV news to provide a commentary on the impact of Sunday’s healthcare bill upon local entrepreneurs. So while waiting for my 1 p.m. (later 2 p.m.) interview, I was doing some background reading to be up to speed.

Alas, the interview was cancelled because the reporter had his story changed, when shortly after noon Google announced that Chinese search users would get uncensored results from Google.com.hk. So my mom will have to wait a while longer to see a video of her firstborn being interviewed as an “expert” on TV.

Preparing for the planned interview, I didn’t see a single credible source on the impact of the Senate bill (let alone the planned changes) on small business — nothing equivalent to the November article in Time magazine about the House bill. The most complete factual source I saw was the Tax Foundation timeline published Sunday, but that was bullet points without links to a description of the details.

Obviously, for business, the economy, and the broader society, this massive change brings tremendous uncertainty, particularly during the period between the bill’s enactment and when the major spending is scheduled to begin in 2019. There will be two presidential and five Congressional elections between now and then — not to mention the low probability that either party will actually enact Medicare cuts that are budgeted as the major cost savings.

For California, there is also the uncertainty as to whether the Federal bill will increase or decrease the momentum behind the proposed single-payer monopoly for funding healthcare that has already passed the state Senate. Presumably Arnie (and Meg) would veto such a bill, while Jerry would eagerly sign it.

There are also unresolved questions as to how the various mandates will work, and whether they will result in increased availability (and increased costs) of insurance for small and growing businesses. Even without the law of unintended consequences, I don’t think anyone has a clue as to what the net effect will actually be — even if the CBO had had enough time to do its job right.

However, there are two changes where the results are pretty easy to predict:

  • Increased taxes on the “rich”. Those making over $200k (family income $250k) will pay 0.9% more on earned income and 3.8% on unearned income (such as investments). This will cause the wealthy to choose not to realize income, which over time will reduce the pool of money available for angel investments. (How much? How soon? Who knows?) As an added benefit, like the AMT this surcharge is not indexed for inflation, so this surcharge will eventually become a middle class tax hike — particularly in high living cost areas like Silicon Valley.
  • The 2.3% excise tax on medical device makers. Why those who produce medical innovations should be taxed to pay for increased spending elsewhere is beyond me, but it will shift startups and investments away from this sector.
Both take effect in 2013, presumably to insulate politicians from political consequences until after the 2012 elections.
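As a rough illustration of the first bullet, here is a simplified sketch of the surtax arithmetic. This is my own simplification — it ignores filing-status details, phase-ins, and the precise net-investment-income rules — but it shows why the unindexed threshold matters:

```python
# Hypothetical, simplified model of the two new surtaxes described above:
# 0.9% on earned income over the threshold, 3.8% on unearned income
# once total income exceeds the threshold. Real rules are more complex.
def medicare_surtaxes(earned: float, unearned: float,
                      threshold: float = 250_000) -> float:
    """Extra annual tax for a family under the simplified rules above."""
    extra = 0.0
    if earned > threshold:
        extra += 0.009 * (earned - threshold)      # surtax on wages
    if earned + unearned > threshold:
        extra += 0.038 * unearned                  # surtax on investments
    return extra

# A family with a $300k salary and $50k of investment income:
# 0.9% of $50k plus 3.8% of $50k.
print(round(medicare_surtaxes(300_000, 50_000), 2))  # 2350.0
```

Because the $250k threshold is fixed in nominal dollars, ordinary wage inflation pulls more families over it every year — the same mechanism that turned the AMT into a middle-class tax.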

Anyone who understands economics knows that if you want less of something, then tax it. But then understanding economics is not a prerequisite for law school, let alone elected office.

Friday, March 19, 2010

Economists, Competition and Regulation

The role of economics in antitrust regulation — and the political shifts in regulation under the new administration — was the topic of a great panel early Friday morning at Santa Clara University. It was hosted at the school’s High Tech Law Institute, which is wonderful (and woefully under-appreciated) in running a series of technology seminars for free. (The only other comparable venue is the SVPVS, but those sessions are often too technical for a business audience.)

The formal title was “Antitrust Policy a Year into the Obama Administration: What have we Learned? What's Next?” The main attraction was Carl Shapiro, one of five deputy assistant AGs (and chief economist) of the Antitrust Division of the Department of Justice. Shapiro — the Berkeley economist who co-authored the most famous text on standards competition — has been at DOJ for 12 months, in exactly the same job he held previously in 1995-1996 during the Clinton Administration.

The two other major speakers were (ironically) on opposite sides of the major technology antitrust case of the 1990s, US v. Microsoft. Stanford prof Tim Bresnahan held the same job from 1999-2000, while Greg Sivinski is a senior antitrust attorney for Microsoft. (A Santa Clara law school prof offered some succinct insights into standards and antitrust, while a private attorney went on a rant about losing court efforts to destroy Rambus’ patents until the moderator finally shut him down.)

Bresnahan and Shapiro noted that there are about 60 antitrust economists between the DOJ and the Federal Trade Commission. They work alongside the many more lawyers in these divisions: as Shapiro joked, when something comes into the division, a spreadsheet goes to the economists and the memoranda go to the lawyers. Most of the hours are put into reviewing large mergers ($63+ million in sales) under the provisions of Hart-Scott-Rodino.

Dating to the 1890 Sherman Antitrust Act, the whole idea of competition policy is for the law to prevent anti-competitive behavior. While the lawyers worry about evidence, law and precedent, the economists are necessary to predict whether some future action (such as a merger) will have an anti-competitive effect.

Both Bresnahan and Shapiro note that identifying anti-competitive actions is relatively straightforward for horizontal mergers between discrete 19th century-style industries (such as aluminum cans). It is much more difficult in high tech, Silicon Valley-type industries, where there is a wide range of substitutes, complements, and buyer/seller relationships.

As Bresnahan said, “In high tech industries, it’s quite difficult to figure out what the scope of competition is.” When considering whether a combination is anti-competitive, he said the core question is “if they raise price, reduce quality or innovate more slowly — would there be third firms not involved in the merger who could take that business right over.”

What was most striking about the session was the unanimous agreement that for most antitrust cases, the differences between US administrations — notably Clinton, Bush and Obama — are minuscule compared to the differences between the US and Europe. Shapiro notes that in the name of transparency (and efficiency), the ATD is working to update (codify) its written merger guidelines — capturing the policies used across these three administrations.

For example, Bresnahan cited a visit to an early Bush-era DOJ lawyer, who complained that firms expected a dramatic switch in policy in the Bush administration. Later on, when an audience member complained about Bush settling US v. Microsoft, Bresnahan quipped: “The Gore administration would have settled it too.”

The one place where elections matter is Section 2 enforcement under the Sherman Act — anti-competitive actions not involving a merger: single-firm anti-competitive behavior. These are much more discretionary, under the control of political appointees.

On the other hand, differences between the US and EU are sizable. Some of it is about priorities and values — e.g. the 1930s-era German antitrust law that favors small mom & pop shops.

However, the speakers concluded that the most significant differences are in process. US antitrust regulators are more cautious, because they know they have to convince a third party (a federal judge) that their case meets the written legal requirements. At the European Commission, the DG Competition does not face those constraints, but instead serves as judge, jury and executioner — or, as the Economist termed it last month, is an “Unchained watchdog.”

All the speakers agreed that the problem of multiple jurisdictions is only getting worse. A decade ago, it was only the US and EC, but now many other medium-sized countries think they’re entitled to assert extra-territorial jurisdiction over mergers and other actions by US firms. (Witness the various countries piling on to tax Intel.)

I must admit, I’ve been much more impressed with the ideas (if not the practice) of antitrust under Democrats than Republicans over the past 17 years — in no small part due to their use of some of the smartest antitrust economists around to lead their efforts. Shapiro’s boss, Assistant AG Christine Varney, is out giving speeches evangelizing American ideas of process transparency, evidence-based enforcement and international cooperation in competition policy. This to me seems like a very good thing.

Illustration by David Simonds, from the February 18, 2010 issue of The Economist.

Red iPhone Rising?

The chairman and CEO of China Mobile demonstrated Thursday that even the world’s largest mobile phone company is not immune from iPhone envy.

At the same time CM announced that it had reached 522 million subscribers, CEO Wang Jianzhou said that he hoped the company could land the iPhone. As AFP reported:

"We're hoping we'll come to an agreement (with Apple) on the iPhone as soon as possible," he told a news conference in Hong Kong to release the company's 2009 results.


"We will continue to express our interest in the iPhone. But not just the iPhone, also the iPad."
CM hopes to build on its success in obtaining the promise of a BlackBerry for its non-standard 3G network. As Reuters (via the FT) reported, Wang said “including TD-SCDMA is not that hard to do – RIM is doing it.” Analysts speculate that CM needs a proven handset to drive 3G data demand — as the iPhone has done elsewhere in the world.

The problem is that, despite its size, CM is the only carrier in the world using the homebrew TD-SCDMA standard. China Unicom is using W-CDMA, while China Telecom has been growing rapidly (from a small base) using cdma2000.

Last time I checked, major chip suppliers like Broadcom and Qualcomm were not providing multi-mode (or single-mode) TD-SCDMA chips (despite rumors to the contrary). Among Western suppliers, the chips are available from ST-Ericsson, the joint venture of Ericsson and STMicro, but I’ve seen no evidence of a relationship between Apple and the vendor. (However, STMicro may want to sue Apple for the iPad trademark).

The idea of the iPhone on China’s dominant carrier has been long-rumored. However, beyond the technical issues, there is the business model issue. Apple wants a premium price for its premium product, but China Mobile has been reluctant to transfer wealth to handset suppliers.

FT and the WSJ say CM’s handset subsidies will go up 30% this year — a good sign for Apple and other premium suppliers — while MarketWatch says they’re going down. An increase would suggest that CM has decided it needs to switch to the handset-centric model used by Western carriers to promote 3G adoption, a rare example of convergence between the Middle Kingdom and the Rest of the World.

Thursday, March 18, 2010

Fanciful platform predictions

As a powerful member of the blogosphere, I got an interesting emailed press release Wednesday predicting the 2010 “North American” smartphone market share:

Platform      2009     Share    2010     Share
BlackBerry    23.2M    49.2%    28.0M    43.0%
iPhone        10.9M    23.1%    13.8M    21.3%
Android        4.6M     9.7%    12.3M    18.9%
Microsoft      4.8M    10.2%     4.7M     7.2%
Palm           1.4M     3.0%     3.1M     4.7%
Symbian        1.5M     3.2%     2.1M     3.3%
Others         0.8M     1.7%     1.0M     1.6%
Total         47.2M   100.0%    65.1M   100.0%
The forecast is by Canalys, the London-based mobile phone consultancy. Of course, as for other mobile phone market share estimates, it’s for new sales rather than installed base.

To summarize their forecast: the market is growing rapidly at nearly 38%, but Android’s sales and share will explode while RIM and Apple will lag Android and the market. Android is expected to match the (current) BlackBerry numbers by 2013. (Other numbers are intriguing: For Palm, was 2009 awful or is the broader distribution of webOS going to save Palm? The “other” residual sets a cap on the share for LiMo, Bada and other misc. Linux versions.)
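The growth figures can be checked directly from the table (my own arithmetic, not from the Canalys release):

```python
# Back-of-the-envelope check of the Canalys forecast quoted above
# (unit shipments in millions, taken from the table).
def yoy_growth(old: float, new: float) -> float:
    """Year-over-year growth, as a percentage."""
    return (new / old - 1.0) * 100.0

market  = yoy_growth(47.2, 65.1)   # total smartphone market
android = yoy_growth(4.6, 12.3)    # Android alone
iphone  = yoy_growth(10.9, 13.8)   # iPhone alone
print(f"market {market:.0f}%, Android {android:.0f}%, iPhone {iphone:.0f}%")
# market 38%, Android 167%, iPhone 27%
```

The same 167% and 38% figures come up again below, so the forecast’s internal arithmetic at least is consistent.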

This is not the only prediction for rapid growth by Android in 2010. A Goldman Sachs prediction for 2010 global share has BlackBerry and iPhone flat, while Android rises to 12%.

My question: how do you predict market share without knowing the handsets to go with it? Many (including me) predicted the Nexus One would be a great hit, but it (by one estimate) sold only 174,000 units in its first 10½ weeks — 1/6 the rate of a Droid or iPhone. Perhaps it’s because it was only on T-Mobile (before AT&T, Verizon and Sprint), or perhaps it’s the expensive unsubsidized price.

Still, as the AppleTV and Windows Vista show, major companies do introduce products that turn out to be duds. To me, 167% year-on-year growth — going from half the size of the iPhone to nearly the same — requires more than just a proliferation of models. It also relies on some big hits, like the Droid. And it probably relies on smartphones being sold without data plans, which I think is coming — but not for another 2-3 years.

I can’t speak to share, but I think Android will do well to crack 10 million “North American” handsets this year. (How many of these in Canada? Obviously less than 10%).

Even at these levels, Android would pass iPhone here in 2011. But I’ll never again underestimate Steve Jobs’ ability to pull a rabbit out of the hat, so it’s conceivable that a 27% iPhone growth is low — particularly if the iPhone makes it beyond AT&T. If Steve doesn’t find that next rabbit, then that would be bad for the AAPL growth multiple — its P/E is around 21, above HPQ & MSFT but behind INTC & GOOG.

Wednesday, March 17, 2010

Future scientists and the future of science

One of the hats that I wear outside of my job is as a STEM volunteer and (cringe) activist. Unlike Stem Cells, it’s not a very controversial position, but it does require overcoming huge amounts of inertia.

STEM is the (fairly recent) acronym for K-12 education in Science, Technology, Engineering and Math. The idea is that certain professions (and thus society) will be in trouble if we don’t give our children the basics, keep them interested through adolescence, and expose them to real science and the career opportunities. The NSF, Department of Education and others are making (limited) efforts to correct the problem.

As I have annually since 1989, today I spent the whole day as a judge for the county science fair, now known as the Synopsys Challenge after its sponsor. As an IEEE member, I was part of a team of a dozen or so judges that gave out awards on behalf of IEEE and other organizations that outsource their judging to us (due to our superior processes and scale efficiencies).

Among the junior/senior high projects I saw, perhaps the most exciting was an improved efficiency wind turbine blade where the 12th grader applied for a provisional patent earlier this year. (I also suggested a provisional patent for two Cupertino students, although they are probably more than a year away from a prototype that would justify the cost of a full patent application.)

Mostly I looked at computer science or computer engineering projects, including three by 11th grade boys. One Palo Alto student measured the robustness of the SVD algorithm at the heart of the winners of the $1M Netflix challenge. A San Jose student improved encryption strength using a different randomization technique, solving the indeterminacy of the decryption problem. A project from Cupertino looked at solving differential equations using Nvidia’s CUDA GPU toolkit. A fourth project introduced me to DNA computing.
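For readers unfamiliar with the SVD approach behind the Netflix Prize winners, here is a minimal sketch (my own toy illustration, not the student’s project): approximate the user-by-movie ratings matrix with its top-k singular vectors, and treat the low-rank reconstruction as the predicted ratings.

```python
import numpy as np

def low_rank_predict(ratings: np.ndarray, k: int) -> np.ndarray:
    """Rank-k SVD approximation of a (users x items) ratings matrix.
    The reconstruction smooths the observed ratings and serves as
    a crude predictor for unobserved ones."""
    U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]

# Toy 4-user x 3-movie matrix: two users like dramas, two like comedies.
R = np.array([[5.0, 4.0, 1.0],
              [4.0, 5.0, 1.0],
              [1.0, 1.0, 5.0],
              [2.0, 1.0, 4.0]])
approx = low_rank_predict(R, k=2)
print(np.round(approx, 1))
```

Real Netflix Prize systems handled the (mostly missing) entries with iterative factorization rather than a direct SVD, but the low-rank intuition is the same.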

Today marked the 50th anniversary of the Santa Clara science fair, which is 5 years younger than the San Diego science fair where I was once a contestant and later a judge. An interest in science fairs is a theme of my K-12 efforts, having run the science fair at my child’s elementary school for 3 years.

I also try to recruit volunteer science fair and robotics judges from among MIT’s 10,000+ alumni in Northern California as the coordinator of the MITCNC K-12 outreach efforts. My next task is to help find Silicon Valley judges for the International Science & Engineering Fair to be held in San Jose May 11-14. (Apparently some of our outreach efforts are a model for a nationwide K-12 outreach effort being launched by the MIT Alumni Association.)

The extracurricular stuff is fun, and recognizes the best and the best of the best. But it does little to raise the understanding of science and technology by the average college student, employee or citizen — or to increase the supply of graduates for STEM-related professions.

Last week, I went to the semi-annual regional meeting of SJSU’s Center For STEM Education, to meet local K-12 STEM educators and administrators. What I heard was troubling.

The weakness of STEM education is particularly severe in grades K-5, where one estimate is that California kids get 1 hour/week of science education. There are several root causes. While secondary ed (grades 6-12) has science (“single subject”) specialists, K-5 teachers are generalists — most with a humanities background.

Worse yet, some California principals don’t care about science achievement because it plays a very small part in the standardized performance tests — by one estimate, 5% of the school’s total points. This is an example of the tyranny of measurement — perhaps not so blatant as WorldCom sales commissions, but one that has a far broader and longer-lasting impact on society.

Last Saturday, President Obama announced his intention to fulfill promises to teacher unions to water down No Child Left Behind. But the one change that I would agree with is the idea that school districts could modify the weighting criteria to include subjects such as science.

The actual technical content of improving science education is pretty straightforward. The organizational change to make that happen in more than 15,000 districts across the country is quite daunting.

I’m not sure how to get from here to there, but apparently that’s not enough to discourage me or my fellow STEM activists from pursuing such a Sisyphean task. Perhaps efforts by other alumni organizations — and professional scientific and engineering societies — will help raise the priority and efficacy of K-12 science education.

Sunday, March 14, 2010

In praise of dumb pipes

Last week, our research-oriented faculty gave 20-minute PPT presentations about how they spent their summer support money. (Mine was about an ongoing research project comparing open, user and cumulative innovation.)

On Friday, listening to a talk by colleague Subhankar Dhar about his forthcoming CACM paper on Location Based Services business models reminded me of how far the US has come in just three years both in adopting data services, and also transforming mobile phone carriers into operators of commoditized dumb pipes.

Five years ago, there was an assumption by the carriers that — unlike in the wired Internet — they controlled access to content on their networks. The “deck” (cellphone opening screen) was tightly controlled, and the only applications and content that would make it on-deck were those that made it through their 3-12 month review and paid them a big piece of the action.

This was true not just for applications like games and for streaming content, but particularly for e-commerce. Thinking about the talk on LBS, I realized that if you looked at papers on m-commerce from the period 2001-2005, they all assumed that such operator control of handsets, pipes and monetization was inevitable.

Of course, such centralized bureaucratic control was hardly a recipe for innovation. But that was where we were stuck until the iPhone came along.

Apple was highly controversial for wresting control away from the carriers of the right to determine applications and content for the mobile Internet. Now it’s clear that it has blazed a path for making the innovation (and adoption) of the mobile Internet as open as it was for the wired Internet 15 years earlier.

This success has, in turn, allowed its onetime ally (now frenemy) Google to follow the same path of promoting a platform and proliferating applications for mobile Internet users. I recall that when the iPhone was announced in 2007 (with Google Maps), Yahoo was then doing the most interesting stuff on mobile and the idea of Google as a mobile powerhouse was nascent at best.

As Morgan Stanley analyst Mary Meeker put it last December:

It’s notable that, after years in the backwaters of global mobile development, American companies (led by the likes of Apple, Facebook, Amazon.com and Google) are becoming mobile internet innovation pacesetters.
Of course, this wresting of control and transformation of the mobile Internet is entirely in these firms’ self-interest. And this leadership by the WWW pioneers may only be temporary.

Still, I think almost any analyst would agree we’re in a better place than 5 years ago, because these firms have forced the carriers to relinquish their desired role as tollkeepers on the mobile Internet to become operators of dumb pipes.

Monday, March 8, 2010

Cutting their way to greatness

A company in a downward spiral can never cut its way to greatness — and rarely even to survival. Yes, it should throw losing products and divisions over the side, but in the end, it will never survive unless it can find some profitable core operations — and continue to build upon those operations.

Two examples come to mind. During a run of miserable CEOs, Yahoo was cutting left and right but not building anything. Now Carol Bartz has defined the core focus of Yahoo as a consumer media company. Who knows, it might even work, but at least it’s a plausible shot at turning around a company that’s fallen long and hard.

The other example is HP, which made a wrenching (but successful) shift from an innovative company to a cost-cutter, as designed by Carly Fiorina and implemented by Mark Hurd.

Washington Examiner contributor (and law school professor) Glenn H. Reynolds offers a counter-example of how not to do it, using a once-storied beer brand: Schlitz.

When I began drinking in college — the pre-Jimmy Carter drinking age was still 18 — the word “Schlitz” had become synonymous with swill. The epitome of this was a fellow Baker House freshman who was so cheap and so intent on getting blitzed on weekends that yes, he’d even drink Schlitz. (Today I can’t even finish a Coors, let alone a Bud — give me a Firestone IPA.)

But apparently Schlitz was once a premium beer. Reynolds explains its self-inflicted slide into oblivion:

Schlitz was once a top national brew. But, in search of short-term gains, it began gradually reducing its quality in tiny increments to save money, substituting cheaper malt, fewer hops and "accelerated" brewing for its traditional approach.

Each incremental decline was imperceptible to consumers, but after a few years, people suddenly noticed that the beer was no good anymore. Sales collapsed, and a "Taste My Schlitz" campaign designed to lure beer drinkers back failed when the "improved" brew turned out not to be any better. A brand image that had been accumulated over decades was lost in a few years, and it has never recovered.
The rest of Reynolds’ column would probably raise hackles here in Silicon Valley — a small government criticism of the Federal government’s self-inflicted damage to its own credibility and legitimacy.

Still, Schlitz provides a great lesson illustrating a key point I teach my students about strategy: make your strategic choices internally self-consistent.

Penny-pinching for a premium brand can be done — as Apple did in the late 1990s, when it fixed its production and supply chain cost disadvantages. However, it’s always a tricky combination to pull off. The only two ways I’ve seen it work are to do what Apple did (favor quality over cost), or what HP did (accept commoditization and switch to a generic low-cost strategy).

Wednesday, March 3, 2010

Economic freedom saves lives

The Chilean earthquake has been on my mind since I turned on the TV Saturday morning. It demonstrates the life-and-death benefits of a functioning economy, society and political system, conclusions reinforced by an email I received Tuesday from one of the friends I made during my 2008 trip to Santiago.

As a native Californian, the collapsed freeways in Santiago — while the rest of the city survived intact — reminded me of similar photos from the three major California earthquakes during my lifetime: 1971, 1989 and 1994.

Here earthquakes are a way of life. Just before lunch on Wednesday, my 5th floor office swayed due to a 3.4 earthquake 9 miles away. Since becoming a state, California has had eight major earthquakes:

  • Ft. Tejon (LA), 1857, 7.9
  • Hayward (Bay Area), 1868, 6.8, killing 30
  • Owens Valley (Eastern Sierra), 1872, 7.4, killing 27 and leveling the town of Lone Pine
  • San Francisco, 1906, 7.8, killing more than 3,000 people
  • Long Beach (LA), 1933, 6.4, killing 115
  • Sylmar (LA), 1971, 6.6, killing 65
  • Loma Prieta (Bay Area), 1989, 6.9, killing 63 and causing $6 billion in damage
  • Northridge (LA), 1994, 6.7, killing 60 and causing more than $13 billion in damage
The major quakes in the 20th century brought dramatic improvements in California’s building codes to make new construction among the safest in the world. One place that’s comparably prepared is Japan — and the other is Chile.

The Christian Science Monitor compared California and Chile’s preparation in an article Monday. While it’s much less wealthy, Chile has had stronger earthquakes than California in the past 200 years, including the strongest earthquake of the 20th century — if not recorded history — the 9.5 earthquake of 1960.

Bret Stephens of the WSJ noted Tuesday that Chile’s 8.8 earthquake was 500 times stronger than Haiti’s, but the death toll was 1/200th as large. His explanation:
Chile also has some of the world's strictest building codes. That makes sense for a country that straddles two massive tectonic plates. But having codes is one thing, enforcing them is another. The quality and consistency of enforcement is typically correlated to the wealth of nations. The poorer the country, the likelier people are to scrimp on rebar, or use poor quality concrete, or lie about compliance. In the Sichuan earthquake of 2008, thousands of children were buried under schools also built according to code.
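The “500 times stronger” figure checks out arithmetically. Moment magnitude is a logarithmic scale, with radiated seismic energy scaling as roughly 10^(1.5·M), and Haiti’s January 2010 quake was magnitude 7.0 — so the energy ratio is a one-liner:

```python
# Radiated seismic energy scales roughly as 10 ** (1.5 * magnitude),
# so the ratio between two quakes depends only on the magnitude difference.
chile, haiti = 8.8, 7.0
ratio = 10 ** (1.5 * (chile - haiti))
print(round(ratio))  # → 501, i.e. roughly the "500 times" figure
```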
He attributes this outcome to Chile’s economic growth, which in turn he credits to the success of Friedmanism. However, I think Chile’s successful institutions run deeper than that.

However you slice it, Chile is a unique bastion of freedom in South America if not the Western Hemisphere. In January, it became the first Latin American country to join the OECD. According to the latest WSJ study, its economic freedom is slightly behind the US and slightly ahead of the UK. According to Transparency International, corruption is nearly as low as for the US, ahead of Spain and far ahead of the rest of the continent (except Uruguay). By fiscal measures, it’s the best run country in the hemisphere.

On Tuesday night, I got an email from one of those friends in Chile — ironically, an American expat from Texas who (unlike the Chileans) had never experienced a major quake before. Nathan Young wrote:
It was definitely a traumatic experience for us all, and probably the scariest of my life. I awoke right before it hit … In a matter of seconds the bed started softly shaking. As Chile is on a major fault zone, I thought nothing of this as it is quite a frequent occurrence. However, that lighter shaking quickly progressed into violent throws. I live in a 20 story apartment building on the 8th floor and the entire complex began swaying back and forth, moaning and popping, it was deafening. Glasses were breaking, windows rattling, walls splitting, I felt the entire building was about to fall down on me. I was able to scramble to make my way awkwardly to the front door and got down the stairs to the first floor when it finally stopped.

I am amazed at how quickly the Chilean economy is getting back on track. In the top 10 for largest earthquakes of all recorded time, and yet two days later everyone returned to work, with the majority of supermarkets open for service as well.

Overall I have been very impressed with the way the government and economy itself is recuperating. Most of the city already has electricity again, though looting is taking place in some of the same older and poorer parts of town. I was at a lunch with some friends today and while we were there some hoodlums taking advantage of the chaos and began robbing many of the stores in the central part of Santiago. They shut down the majority of that sector and began patrolling with cops and dogs afterwards.

Have some friends here for a wedding that we actually ended up celebrating that same Saturday after the earthquake. Quite a bit more somber though and the reception was discontinued, though it was good to try and begin reflecting and advancing.
Far from the devastation, the FT reached a similar conclusion about the country’s resilience — first to the global recession and now the earthquake:
[Chile] withstood the global slowdown far better than many of its neighbours because of the policy of saving profits from sky-high copper prices. It has some $16bn of that cash still available – about 12 per cent of GDP – which will provide a handy reserve as Sebastián Piñera, new president, sets about rebuilding the roads, bridges, ports and 1.5m homes affected.

“Chile should have no problem financing things. It has fiscal savings and international financial institutions are ready to finance Chile, whose leverage is very low,” said one analyst at a bank in Buenos Aires who declined to be named.
So transparency, accountability, strong political and economic institutions don’t just provide economic growth — they save lives. If not from earthquakes or hurricanes, then from tropical diseases, basic sanitation, and infant mortality. Other countries that aspire to the OECD and developed country status have an excellent role model, whether they realize it or not.

Tuesday, March 2, 2010

Live by the patent, die by the patent

Apple sued smartphone maker HTC for patent infringement Tuesday in Federal district court, and filed to block imports of HTC phones with the US International Trade Commission.

PC magazine lists the phones and patents, while All Things D has the actual court filings, and TechCrunch has both. CNET summarizes HTC’s oral response. Engadget has details of the patents and what they mean.

The ITC complaint lists both Android phones (Droid Eris, Google Nexus One, the Dream/T-Mobile G1) and others (HTC Touch Pro, the Touch Diamond, the Touch Pro2, HTC Tilt II, HTC Pure, and HTC Imagio). Nearly all of the accusations are against the Android phones.

The reaction by iPhone haters is predictable outrage. When it comes to aggressive assertion of IP against heroes of the open source revolution, Apple is now where Microsoft was almost a decade ago.

To at least some degree, Apple is doing exactly what the patent system was intended to do. Apple created a new interface for the mobile phone industry, it was very popular, and then it was copied by its competitors. I haven’t examined the specific patents and allegedly infringing products, but in principle it seems plausible.

Rationality may not be the entire explanation. Apple oldtimers — and I suspect Steve Jobs — still have a bad taste in their mouths from losing the Windows look-and-feel lawsuit (mainly because of a stupid decision by Apple to grant MS a license in exchange for a renewal of MS’s Basic). It appears that Apple hopes that patents will protect it from competitors where a look-and-feel copyright suit two decades ago did not.

The problem today is that Apple’s patents are mainly in UI, which is a small part of a complex portable network-connected multifunction device. In my study with Rudi Bekkers, we found that there’s a thicket of 3G patents out there, including patents held by four of Apple’s largest handset rivals. Among UMTS patent holders, Nokia and Ericsson were the top two with more than 240 each, while Samsung and Motorola were 5th and 6th with more than 50 each. (HTC didn’t have a single one as of 2005.)

So while HTC might be easy to sue, it could be very dangerous to sue four of the top five handset makers. A no-holds-barred patent war could easily become a lose-lose proposition for Apple, unless Apple obtains protection through a patent pass-through license to large 3G portfolios from its current (Broadcom) or future (Qualcomm) component suppliers.

My guess is that this will slow down the Taiwanese and Chinese commodity rivals for a couple of years, but not those from Europe, Korea or North America that have generous patent portfolios. If it only works against HTC — the fastest growing Android supplier — that might be enough for a while. But even if it does, in the long run Apple will need more than patents to protect it against the commoditization of its smartphone innovations.