Sunday, May 31, 2009

Ovi store fizzles

Catching up on blogging.

On Tuesday, Nokia launched its Ovi Store to sell applications for S60 handsets. The store has direct operator billing in Europe, Russia, Singapore and Australia. Support for Ovi in the US (via AT&T) is due later this year.

Apparently the website was sluggish and hard to navigate (according to Symbian Watch and TechCrunch). The Register in the UK has been particularly merciless in pillorying the European cellphone giant. Writing on Wednesday:

Ovi is Nokia's answer to Apple's iTunes, offering a broader range of content to a broader range of handsets - or at least it would if it could stay up for more than five minutes and operated at a sensible speed when it was available
On Friday, longtime Psion and Symbian fan Andrew Orlowski wrote:
It must be frustrating to sketch out a long-term technology roadmap in great depth, and see it come to fruition... only to goof on your own execution. But to do so repeatedly - as Nokia has - points to something seriously wrong.

The launch was "an utter disaster" according to one blogger, or in a more measured assessment (from Ewan at All About Symbian), "rushed, early and not fit for public consumption". Nokia accepts second-best from Ovi, which apart from Maps is second-best in every category, the company all but admitted recently. But the Ovi application store deserves a Z-grade.

It's now clear that it was simply too ambitious to roll out a store to so many territories and, in particular, to so many device categories in one Big Bang. The list of supported devices goes back six years, encompassing eight versions of Series 40 and three versions of S60.
Nokia is a company that sells more phones than anyone, manages high volume logistics, works with nearly every carrier in the world, designs complex infrastructure and fancies itself a services company rather than a mere maker of handsets. It is a complex operation, with 125,000 employees and revenues of $71 billion.

You would think that it would be a top priority to roll out its main consumer portal against rivals like RIM, Apple, Microsoft and Google. Shouldn’t a company with this sort of scale know whether or not its portal is going to work before it launches?

These sorts of systems are hard to build. Perhaps Nokia should take a hint from Google and release everything as “beta” for a few years so that it can disclaim any responsibility for reliability or responsiveness.

Saturday, May 30, 2009

Mozilla and its frenemies

Following my friend Matt Asay’s tweets (a tentative foray into wasting time with Twitter, time that I don’t waste with NetNewsWire, LinkedIn or Facebook), I noticed he had an interesting tweet on Mozilla.

Longtime steward Mitchell Baker and her (relatively new) CEO John Lilly were interviewed at the WSJ’s D7 conference Thursday. “All Things D” reporter John Paczkowski (formerly of GMSV) has a brief writeup, which Matt tweeted. (A few video highlights are provided by the WSJ).

I’d recommend Paczkowski’s entire summary post, but let me quote the parts related to Mozilla’s coopetition with Google:

How does it feel to be competing with Chrome, Walt asks, noting that Mozilla has long had a relationship with Google. “You’re now where Google’s ‘don’t-be-evil’ bulldozer is heading. How does that feel?” Baker says relations between the two companies are still good. They are still cooperating on geolocation, for example. The next version of Firefox will ship with that and it’s a Google service. It doesn’t have to be a Google service, but Google provides it for free and as such, is the obvious source.

Lilly jumps in: As long as we build a good browser, we’re OK. We’re not without assets. “We’re not simply going to shut down because Google is entering our market.” Our point of view is that the browser can do more for you. That’s not really Google’s vision. We think of the browser as a “user agent.”

Lilly says he likes Chrome. “Really?” asks Walt. Lilly says yes. He notes that rival browsers like Chrome and Safari have made Firefox better. A nice change from competing against IE, apparently.

What’s the value proposition for Firefox now that Chrome exists? Questioner has switched to Chrome because it runs Google Apps better (which is the way Google designed it). So why use Firefox? People like the interface, says Lilly. They can modify it. They can skin it, etc. Lots of legitimate reasons.
There aren’t a lot of direct quotes, but I like Baker’s one-liner: “If you were a business picking a space in which to compete, you wouldn’t pick one with Microsoft, Apple and Google.” So true.

Friday, May 29, 2009

Year of the 'droid

From Silicon Alley Insider:

Android boss Andy Rubin told developers that Google expects 18-20 Android-powered phones on the market by the end of 2009, made by eight or nine different manufacturers.

We'll be especially interested in seeing how manufacturers and carriers differentiate the Android phones, whether Google's brand plays an increasing role in marketing them, and how cheap in the feature-phone market Android phones can get. (The cheaper, the better. Even Motorola still ships a lot of phones every quarter.)
Presumably this means that Google and its allies have solved the serious performance problems that have held up nearly all Android phones since the G1.

With 18-20 phones, this also appears to resolve the question of when Android moves past T-Mobile in the US and on to CDMA networks.

Joel is (mostly) right

Today I find myself agreeing with Joel on Software (not to be confused with Joel-on-open-source-software or Joel, dean of Apple University). In his Inc. magazine column this month, he writes about the failure of Circuit City due (among other reasons) to incompetent salespeople. (Back in January, I also listed three other factors pressuring most brick-and-mortar stores.)

But Joel thinks that B&H Photo in NYC will survive where Circuit City failed due to much better selection and service. I’ve never been to B&H, but I love B&H and Adorama Camera as mail-order camera stores that have been reputable dealers for 30+ years. Of course, as a clicks-and-mortar store B&H enjoys buying power and scale economies that would be unavailable to a local retailer.

However, his article shows the typical East Coastist prejudice of a New Yorker, as immortalized by the New Yorker and its infamous Saul Steinberg cover.

The electronic superstores in Tokyo's Akihabara district are the only other places where I have seen so much gear under one roof.
He obviously hasn’t been to Fry’s: I guess California is “flyover country.”

That said, his B&H sales reps seem to care about their job, which is more than I can say about Fry’s or any other electronics retailer I’ve seen (except for the Apple Store).

Thursday, May 28, 2009

App stores: early or not at all?

In the past 10 months, the Apple app store has been a tremendous success, helping to fuel the success of the iPhone. Now all the other platform owners (Google, Microsoft, RIM) and operators are planning their own app stores.

There are some dubious assumptions that both app store owners and app developers are making about app stores.

  • Early apps were successful so we can be too.
  • Early apps were successful but it’s too late for us.
  • App stores made Apple a success and will make our platform a success too
  • All we need is the “killer app” to make our platform a smash hit.
The answer is: Your mileage will vary.

All this is a preface to a discussion Wednesday in San Mateo by two Symbian Foundation executives — “catalyst” David Wood and interim marketing VP Ted Shelton — on their own planned app store.

The app store plan for Apple is pretty simple: Apple makes the phone, Apple makes the platform, and Apple strong-armed the carriers into providing (on-deck) access to App Store apps on any iPhone. For firms that license their OS (Google, Microsoft, Symbian) the story is quite a bit messier, since both the phone maker and the operator may want to get involved.

At the Nokia Developer Conference last month, Symbian Foundation head Lee Williams announced Symbian will be developing its own app store. It appears most analysts in the US missed the story, to the point that some (like my friend Matt Asay) are growing impatient.

Shelton and Wood talked about the app store Wednesday night to a small group of developers and analysts. The name is not settled, but the name I thought fit best was “Symbian App Warehouse.” Rather than sell to end users, they will certify applications and distribute them to all manner of application stores (e.g. Nokia Ovi) run by handset vendors and operators. The claim was that it would be without a fee, but I (like others) suspect in the long run they will need a minimal fee to cover operating costs; Mike Mace suggested 5-6%.

Symbian is shopping for lead ISVs: 5 developers in July, 100 in October for the 2009 Symbian Exchange and Exposition (“Come and SEE the future of mobile.”) The idea is to scale slowly to work out some of the kinks — not to impose some sort of limit as to the number of applications.

So the offer from Symbian was simple: do you want to be app # 45,678 at the Apple store or one of the first five at the Symbian store? As Shelton said, “the first movers have a greater opportunity to make money because there is a lower signal to noise ratio.” Palm has been making similar claims to prospective Pre developers (but with 0% share for its new platform vs. nearly 50% for Symbian).

Shelton had a point: ceteris paribus, later is worse. When I launched my Mac software company in July 1987, I had a far harder time getting carried by the channel and getting noticed than Silicon Beach Software had three years earlier. (I also had less money and less compelling applications.) Of course, big companies with big budgets are better equipped to enter late than small self-funded (perhaps garage-dwelling) startups.

The iPhone App Store with its high visibility (and high download rate) would be very attractive if I had a narrow niche program, such as a virtual bass guitar. Unfortunately, PocketGuitar (99¢) already does that, and there are also other electronic guitars, several pianos, drum sets and even simulated sax, flute and bugle (blow on the mike). These are among the more than 1,300 applications listed under the “music” category.

So the Apple app market is heavily fragmented, and with a free online course being viewed by 100,000 developers, it will only get worse. Expectations for the Android Market mean that it is likely to head in the same direction. What should a developer do?

In the long run, developers will enter with one platform and then (if their app is hot) port to everything in sight; as with videogames, a platform will have only a temporary exclusive. The web-based apps (Facebook, MySpace, Google maps etc.) are already heading in this direction. A handful of companies (like Pangea Software on the iPhone) will figure out a way to come up with a string of hits.

In the short run, young developers need an open field market entry strategy: go where the competitors ain’t. Videogame developers have done this for more than a decade, just as companies choose geographic markets that are most promising. Once all the app stores have their initial launch ISVs, it will be up to developers to figure out a way to stand out from the clutter.

We know how this story will end

Normally I pontificate about business model problems with newspapers or television. But the third leg of 20th century mass media, radio, has exactly the same problems. Largely unnoticed by the general public — except for the Clear Channel death throes — broadcast radio is heading toward the same ignominious end that awaits dead-tree papers and broadcast TV.

Earlier this month, Boston Globe columnist Jeff Jacoby interviewed radio analyst Michael Harrison. Most of it was about talk radio — both conservative and liberal — but there were some important general points about radio’s viability:

Q: How big is talk radio? How does it fit within the larger radio universe?

A: Well, the talk-radio universe is affected by the economy, and the recession has been brutal to all advertising-based media. AM and FM radio are also faced with technological change taking away its monopoly on mass-appeal audio entertainment and information. You've got competition from cable television; you've got iPods and podcasts and iTunes; you've got satellite radio. And you have the Internet, which is changing everything. That being said, within the world of radio, talk has a huge following that's growing. The baby boomers grew up with radio. Radio personalities meant something in their life. They know how to use a radio.

Q: Doesn't everybody?

A: No. Many kids don't even have radios. The biggest problem facing radio is that the younger generation doesn't think of it as an institutional component of day-to-day life. And if people stop thinking of radio that way, then what's the value of owning a license to broadcast? That's why radio is in trouble.
And then to the proposal (opposed by the right wing blogosphere) to re-impose the “Fairness Doctrine”:
Q: Some people have suggested that instead of making broadcast licenses renewable every eight years, they should last only two years - that would put station owners on the spot more frequently, make them more susceptible to pressure.

A: Look, if we were coming into the golden age of radio, I would say, "Sure, owning a radio license is a privilege. It should serve the community. People should have to jump through hoops to have this privilege." But it's not such a privilege anymore. It's mired in debt, it's choked with regulation. And it's surrounded by competition that's not regulated and not in debt. Why make it even harder?
If you think about it, a broadcast license (TV or radio) was once a license to print money: a government-sanctioned monopoly (premised on RF scarcity) that was valuable in perpetuity. Now, while the Internet has high entry barriers in terms of network effects and advertising costs, it has no formal regulatory barriers and thus (in principle) no upper limit on the number of entrants.

Things have changed dramatically in my mother’s lifetime (my dad died in 1995). For my parents, radio was the first mass medium — the only way that all Americans could participate in the same news during the Depression (FDR’s Fireside Chats) and WW II (Edward R. Murrow). For people my age, radio is what you turn to after an earthquake or during a war; it also keeps me company during long drives across California. For my daughter, NYTimes.com or news.google.com will probably be all the news she ever needs — until she’s 30 and something even cooler comes along.

Indeed, for Harrison, the end is coming much sooner than people think:
Q: How many good years does talk radio have left?

A: AM/FM radio has about five good years left, if that. And what we consider to be radio today will be on the Internet. And the Internet websites will be media stations. The Internet is not only going to change radio; it's going to change humanity. That's how profound this revolution in communication will be.
If that’s true, I wonder who the surviving Internet radio stations will be. Will it be the online versions of the existing broadcast stations (who now have a huge royalty advantage)? Will it be Sirius XM and other premium services? Will it be Live365, Pandora and other web-specific aggregators? Will it be stations bundled under iTunes and the like?

Or will the idea of a stream and “station” go away? For music, Last.FM provides songs on demand. I’ve found that podcasts work much better than streaming for medium-length interviews (10-30 minutes), although (as with HTML pages) only some publishers are committed to making them available on a permanent basis.

I was recently wondering whether to retrofit an HD radio onto my 2000 pickup. I guess at this point I should plan to replace my dead iPod and just download podcasts for the long drives.

Wednesday, May 27, 2009

Academic victory for open standards

One of my major research (and blogging) interests has been the openness of standards. A particular pet interest has been semi-open standards — how firms decide which elements of openness to offer and which ones to block. (I’m also interested in how open standards relate to other aspects of innovation openness such as open source and open innovation).

A decade before I even knew there was an academic literature on standards, four academics were cranking out the seminal work on the fundamental economic principles of standards creation and adoption — a literature known as network effects. Writing in two teams (Michael Katz with Carl Shapiro, and Joseph Farrell with Garth Saloner), they filled journals like the American Economic Review and the Journal of Political Economy with their papers. (Katz was also an FCC economist and Shapiro a Justice Dept. antitrust economist).

My dissertation is filled with references to these two themes, as well as to Information Rules, the HBS book Shapiro co-authored with Hal Varian.

One paper I did not fully appreciate until recently (because it appeared in a journal most libraries don’t carry) is a 1990 paper on open standards by Saloner. I am often proud of my chapter on open standards (in a 2006 book on the economics of standards), but it’s now clear that Saloner was the first to seriously consider openness in standards as an intentional tradeoff.

All of this is a long intro as to why I was intrigued by a Stanford press release issued Tuesday:

Economist Garth Saloner, a scholar of entrepreneurship and business strategy, will be the next dean of Stanford University's Graduate School of Business, President John Hennessy and Provost John Etchemendy announced today.

Saloner, 54, who joined the Stanford faculty in 1990, is the Jeffrey S. Skoll Professor of Electronic Commerce, Strategic Management and Economics, and a director of the Center for Entrepreneurial Studies at the Graduate School of Business. He will succeed Robert Joss, who is stepping down after 10 years as dean. Saloner's appointment is effective September 1, 2009.
In addition to his prodigious research, Saloner is credited with helping to lead Stanford’s particularly complex re-architecting of its MBA curriculum.

What I find particularly interesting is that Saloner is one of the few people within the GSB who seems to care that Silicon Valley can be found just outside the boundaries of “The Farm.” Rather than problems of local interest, most of the GSB faculty are oriented towards an international disciplinary audience such as economics, sociology, psychology, or applied math. At Stanford, the greatest concentration of Silicon Valley-oriented business scholars is found in one department of the Engineering School.

Will Saloner’s appointment make the GSB (and its new Phil Knight Management Center) a new hotbed for the study of Silicon Valley entrepreneurship? Or will the institutional norms of the various fields drown out whatever preferences the dean and local alumni might have? Stay tuned.

References

Garth Saloner, “Economic issues in computer interface standardization,” Economics of Innovation and New Technology, v. 1, n. 1 (1990), pp. 135–156.

Tuesday, May 26, 2009

Balance of power in China

The Financial Times reports this morning that both Android and the iPhone are expected to officially arrive in China Real Soon Now:

Google’s Android mobile phone operating system is set to make its legal debut in China in June when China Mobile launches specially adapted handsets.

The Taiwan-based handset manufacturer HTC said China Mobile would start selling a customised version of the HTC Magic, a handset based on Google’s Android operating system, through its stores.

Analysts believe that a successful launch of a high-end handset for China Mobile subscribers could help remove hurdles to the entry of similar handsets such as Apple’s iPhone into the country, the world’s largest mobile market.

Apple has negotiated for months with China Unicom, China Mobile’s smaller rival, to introduce the iPhone to China, but industry executives say regulators have sought to hold back an agreement until China Mobile has a device that will allow it to compete for 3G customers.
Of course, as the FT reports, grey market Android and iPhone models are available in China already.

The FT notes that the three major carriers — China Mobile, China Unicom and China Telecom — are being played against each other to maximize competition. It makes sense that the government wants China Mobile to go first: it is the world’s largest mobile phone carrier, but forced to deploy China’s home-grown (and China-only) TD-SCDMA technology.

The timing seems right for both the Magic and iPhone. The HTC Magic was released by Vodafone in Europe earlier this month. It is coming to the US (in June?) as the T-Mobile myTouch 3G. It’s also coming to Japan and Canada. In Japan, the Magic (there called the HT-03A) will be sold by NTT DoCoMo, which already sells two other smartphone platforms — MOAP-S (Symbian) and MOAP-L (Linux). The Canada intro is via Rogers Wireless, which is also rumored to be selling the iPhone this summer.

Meanwhile, China Unicom’s iPhone launch is conditioned on the availability of the iPhone 3.0 model(s) next month. (Presumably if Apple releases more than one model, China will pick the more affordable one). Pundits argue whether the iPhone will or won’t be announced at WWDC in 13 days. I lean towards the “will announce” camp, because Apple needs to show developers the new device and demonstrate its features. Educating developers is the whole point of WWDC, and this year’s WWDC is more iPhone centric than ever.

China Unicom is also getting an Alcatel-branded Windows Mobile 6.1 handset next month.

The laggard appears to be China Telecom. I haven’t seen any reports of a 3G smartphone for its rapidly deploying cdma2000 network, which it got from China Unicom last year in the grand reorganization orchestrated by the Ministry of Industry and Information Technology — the same reorganization that is enabling the huge 3G rollout by all three operators. Thus far, China Telecom has denied persistent rumors of its own BlackBerry. Its main foreign handset suppliers are Samsung and LG, which sell Windows Mobile and Symbian handsets. (I’m guessing the first Symbian CDMA handset will come from Samsung or one of the Chinese makers early next year).

The only other major CDMA smartphone handset vendor — Palm — is introducing the Pre in the US June 6, but the only overseas plans I’ve seen are with Bell Mobility (Canada’s #2 carrier) and with Telefonica in the UK (O2), Spain and Latin America. Since there will not be enough Pre handsets to go around, there’s no rush to introduce the phone to the world’s largest market.

Sunday, May 24, 2009

Saving the world before the Internet

To follow up on my posting a week ago, the season of 24 ended. It is the only one I recall where Jack has nothing to do at the end (although at the end of Season 5, he’s immobilized by Chinese kidnappers). Thus, the enhanced interrogation efforts were left to someone else.

Even if Jack can’t solve Hollywood’s business model problem, his short term contribution to Fox’s bottom line is impressive: almost $7.5 million in ad revenues per show, second (on all of US TV) only to American Idol.

But there is a question about how dated the 24 concept is. It debuted in November 2001, soon after 9/11, and for several years was reacting to the implications of that event.

In researching my last column, I came across another way that 24 is dated: its reliance on technology. The staff of CollegeHumor.com imagined a “pilot” for 24 in which Jack Bauer tries to save the world in a pre-PDA, pre-cellphone, pre-mobile Internet era. (It’s available on their website and YouTube).

While I loved the premise, I thought there were two major inaccuracies.

The first error was being too pessimistic. In the video clip, Jack (savior of the free world) doesn’t seem to have a cellphone. In October 1994 my employer bought me my first cellphone, a Motorola MicroTac (probably the MicroTac Elite), an N-AMPS phone running on the AirTouch network (built for the LA Olympics). Wikipedia* helpfully notes that the first MicroTac was released in April 1989.

The second error is being too optimistic. While I can’t speak to CTU or the US government, my recollection of the 1990s is that nearly all documents were stored in paper form — in file cabinets, binders, library shelves. There were three barriers:
  • Capacity. Servers in 1994 had a GB or two; a typical desktop with 5.25" spindles had 300-500 MB.
  • Software. The software for storing arbitrary data on disk wasn’t there. Yes, AutoCAD was out, but that only worked for people generating and viewing drawings in AutoCAD. Adobe Acrobat 1.0 was introduced in 1993 (according to Wikipedia*) with no free reader until Fall 1994.
  • Attitudes. Even if we could store documents on a hard disk, we didn’t. When I took my laptop (PowerBook 140, later PowerBook 145) to night school to take notes in fall 1993, no one had ever seen anyone type notes in realtime on a laptop. I didn’t start collecting journal articles as PDFs until 2000, and didn’t drop paper copies until 2003 or 2004.
Overall, it’s a fun satire — and, like most satires, doesn’t bear too detailed a scrutiny.

* To make a point, I'm using the dubious tertiary data of Wikipedia in lieu of tracking down real evidence.

Saturday, May 23, 2009

Crowding out

As part of outsourced economic criticism, an essay by Seattle surgeon Roger Stark from last week’s Seattle Times:

During the last presidential campaign, at least six national health-care-reform proposals were discussed and debated. Consensus on the part of the Obama administration and congressional leadership has now formed around a single, government-sponsored alternative to the private health-insurance market
...
At face value, the proposed government plan would function like Medicare and "compete" with private, non-Medicare insurance. It would offer employers and individuals an alternative to obtaining health insurance in the private market.

That seems all well and good. But in reality the government would set its tax-subsidized pricing well below private plans and "crowd out" the private insurance carriers. The government would also mandate that the private carriers provide a comparable benefit package, hence eliminating any chance for competition with different product lines.

So what are the actual numbers? The Lewin Group estimates that at Medicare rates, the new government plan would cover 130 million people. Out of that group, 118 million will be forced to join after opting out or losing their private coverage. To put this in perspective, there are currently 170 million people in the United States with private health insurance.

To believe that the government would "compete" with private carriers is naive. The government would cut rates well below the private market and make its plan look much more attractive until it controlled all health insurance. After all, it is impossible to compete against an entity that can draw on the full tax resources of the United States.

This is exactly what happened with Medicare. In 1964, senior citizens had access to a wide selection of private health-insurance policies. Medicare was passed in 1965, and by 1970, no private market existed, except for co-pays and deductibles, for the elderly in the United States.
In other words, 90% of the subscribers to FederalCare will be shifted from existing health insurance and 10% will be those not already insured.
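
A quick back-of-the-envelope check of the Lewin Group figures quoted above (a minimal sketch; the 90/10 split in the previous sentence is my rounding of these numbers):

```python
# Back-of-the-envelope check of the Lewin Group figures quoted above.
covered_by_new_plan = 130_000_000   # projected enrollment at Medicare rates
shifted_from_private = 118_000_000  # those who opt out of (or lose) private coverage
currently_private = 170_000_000     # Americans with private insurance today

newly_insured = covered_by_new_plan - shifted_from_private
print(f"Shifted from private coverage: {shifted_from_private / covered_by_new_plan:.0%}")  # ~91%
print(f"Newly insured:                 {newly_insured / covered_by_new_plan:.0%}")         # ~9%
print(f"Share of today's private market displaced: {shifted_from_private / currently_private:.0%}")  # ~69%
```

The last line is where the roughly 70% figure in the next paragraph comes from.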

Historically, government intervention in the American economy has been designed to be minimally intrusive to fix market failure. Destroying 70% of the private insurance market is clearly an attempt to increase government control over health care, rather than to serve the unserved.

Friday, May 22, 2009

One way to save the auto industry

Instead of financial restructuring and bailing out failed labor contracts, perhaps the auto industry could be saved by coming up with a new way to make better products. (Could happen, right?)

A great article by Charles Mann in the June Wired quotes Henry Chesbrough as recommending open innovation. Noting the slow rate of change in the industry, Henry said

“It's as if the computer industry were still dominated by Wang and Data General and DEC, and they were still selling minicomputers.”
Despite my disagreements with its lousy PR efforts, I think Tesla Motors (a Silicon Valley car company) is well ahead of Detroit in implementing open innovation. For its electric sports car, Tesla utilizes car bodies and assembly from Lotus Cars (using external innovations), as does one competitor. Meanwhile, it has just agreed to sell its batteries to Daimler (licensing internal innovations).

The article also quotes another very smart innovation economist, Steve Klepper of CMU.
A radical reconfiguration may be the only way this vital industry can survive on these shores. "They're going to have to swing for the fences," says Steven Klepper, an economist at Carnegie Mellon University who studies industry innovation. "The only way I can see for them to win the game is to change it entirely."
In his article, Mann reveals his great love for the US auto industry and desire to see it rise again:
I should declare a personal interest here. My father worked as a Big Three executive for much of my childhood, most of that time at Ford. He left to run his own marina, but he always remained loyal to Detroit. He never bought a foreign car. I didn't buy one until after his death, and even then I felt like I was thumbing my nose at his memory. I would like to return to a US product. More than that, I would like millions of Americans—people who don't share my sentimental ties—to come back to vehicles from US companies.
Klepper is right. Ford could survive as the last man standing of a sickly US auto industry (think Renault or Fiat), but on the current trajectory the other two companies have very long odds.

Chrysler’s solution is to become a subsidiary of Fiat, which is likely to bring more of the same — a two-country vertically integrated car company (didn’t work last time). Since it has the least to lose, perhaps GM could take the lead in using external sourcing of technology.

Nationalized and free markets

The NYT report this morning on the pre-packaged GM bankruptcy makes it clear that the Obama administration is going to force the Chrysler model of cramdown on the GM bondholders:

A coalition of small bondholders protested the terms of G.M.’s offer in Washington on Thursday. Larger, institutional bondholders have also opposed the deal, which calls for them to receive 225 shares of G.M. stock in exchange for each $1,000 worth of debt.

G.M., which is subsisting on $15.4 billion in government loans, has until June 1 to meet the broad criteria for restructuring spelled out by a special presidential auto task force.

Under a plan announced last month, the Treasury Department would control at least 50 percent of the stock in a restructured G.M. A health care trust for union retirees would have about 39 percent, with bondholders getting 10 percent and current shareholders the remaining 1 percent.

Advisers to a committee of G.M.’s biggest bondholders, representing about 20 percent of the $27 billion in bond debt, have repeatedly criticized the plan as unfair and designed to fail. They have also accused the government of seeking to use them as scapegoats for a potential bankruptcy filing. Under their own proposal, G.M. bondholders would own 58 percent of the reorganized carmaker. These advisers have said that they are willing to negotiate with the company and the government but have made no headway thus far.
Bankruptcy courts have given priority to the claims of creditors, not to keeping the company running. In the 2006 bankruptcy of Tower Records, the court chose a liquidation plan over a plan to keep the company running, because its bid was 0.3% higher and thus would give more money to creditors.

Now it is clear that there are two sectors of the American economy: one where the government will use its power to impose the solution it thinks best, and one where investment and the allocation of rights is governed by the rule of law.

Investors who put their money into the nationalized sector — whether in equities or bonds — have to realize that their rights as investor/lenders will be subordinated to the national government’s industrial policy. The business school term for this is “political risk,” normally referring to third world countries where the rule of law is not yet established.

In this case, people who bought Chrysler or GM bonds assumed they would be given preference in liquidation, but the rules changed with the new administration. Knowing what they know now, any Chrysler or GM bondholder should have sold their bonds on November 5 or even earlier.

There is no reason to think this is the last example of political risk under the new government’s policies. The intervention is normally justified in the name of helping struggling industries and saving jobs — which means autos and banking are nationalized sectors. Other industries are in deep trouble — newspapers, Hollywood, real estate — will these be nationalized too?

Where will the government draw the line? Energy is a central part of the administration’s new economic policies — will it be controlled the same way as the automakers and the TARP banks?

For every seller, there’s a buyer. Anyone buying such debt is hoping that there will be political pushback that stops the administration (unlikely) or that the economy has bottomed out and no more firms will be facing bankruptcy (also unlikely).

Thursday, May 21, 2009

A failure to align incentives

Somehow I missed this earlier: the UAW plans to dump its shares in Chrysler.

The Obama Administration ignored prior bankruptcy law and forced bondholders (including nonprofits) to take 29¢ on the dollar for $6.9 billion in Chrysler debt. At the same time, the administration gave unions ownership of 55% of Chrysler in exchange for eliminating its $5 billion (unsecured) liability for retirement medical coverage.

The best argument for giving unions equity in a bankruptcy situation is to keep the incentives aligned: the company needs the full cooperation of its workers to succeed, and thus they should have a financial incentive to do everything they can to make the company succeed.

The union claims financial exigencies in its plans to dump the stock. As the London Times reported:

the Voluntary Employee Beneficiary Association (VEBA), which provides healthcare benefits to Chrysler retirees, would be forced to sell its stake in the company in order to keep funding the trust.

"As soon as the VEBA’s in a position where we can sell stock, we will be required to sell stock in order to keep the benefits going," [UAW President Ron] Gettelfinger said.

He said that the VEBA would start reducing benefits for retirees from July 1 because the trust is already struggling with funding.
Perhaps the union leaders are not that financially sophisticated, but this is buy low, sell low. Right now, the market sees Chrysler’s chance of survival as being low, and thus Chrysler equities are as low as they are going to be. Or does the union know something that the markets do not — that the chance of Chrysler surviving is not good, and thus they want to dump the shares while they’re still worth something? Or that once they dump the shares, they can go back to the UAW’s traditional role — a zero-sum fight to transfer money from customers and shareholders into workers’ pockets?

Sometimes employee ownership does a good job of aligning interests, but my impression is that it only works when there are good labor relations to build upon. Building on its strong corporate culture and positive labor relations, Southwest Airlines has long used employee stock ownership as a motivational tool. However, when an ESOP was used at United Airlines, the employee-management conflicts were brought to the fore (as a 1998 paper showed), and employees eventually filed a class action lawsuit when the stock price plummeted after 9/11.

This issue will come up again with the Federalization of GM. So if employee shares are intended to keep employees motivated, it’s a nice idea but probably won’t work given poisonous UAW-Big Three labor relations. If it’s a settlement of an unsecured claim, then the issue of who deserves which payment is one best settled by a judge — following the rule of law — in bankruptcy court.

Wednesday, May 20, 2009

Single party politics and economic stagnation

As part of outsourcing economic criticism in these hard times, I quote from Arnold Kling (on the libertarian economists' blog EconLog) about the possibility of a one-party America and the third-world economy it would bring (emphasis mine):

I am in the middle of reading Violence and Social Orders, by Nobel Laureate Douglass North, John J. Wallis, and Barry R. Weingast (NWW). The theme of the book is that political and economic development is part of the same process, which they call the social order. The developed world enjoys an open-access order, in which both politics and economics are highly competitive. The rest of the world is in a natural state, in which only the members of the governing coalition are fully free to own property, participate in the political process and—most importantly—form durable organizations.

The United States is currently taking a giant step backward in the direction of a natural state. NWW would say that we are still an open-access order. However, the importance of the rule of law is declining, and the importance of political connections to the elite is increasing. I think we will see this trend emerge much more strongly over the next decade, as it becomes clear that the Republican Party is not going to win another national election. Interest groups will lose hope in competitive elections, and instead they will focus on accomodating the Democrats, which in turn will consolidate the power of the ruling party.

In economics this leads to stagnation, as we shift from an economic system dominated by competition and change from the bottom up to a system of rent-seeking and centralized management. There will be less creative destruction and more redistribution.
His fellow EconLog blogger Bryan Caplan disagrees about the likelihood of America becoming a one-party state, but not the consequences if it did happen.

Monday, May 18, 2009

A problem even Jack Bauer can’t solve

Jack Bauer is having a really, really bad day. This has happened six other times (since 24 first aired in 2001). This particular day, Jack lost one of his best friends (Bill Buchanan) to terrorists and found out the other (Tony Almeida) was alive, but a double agent.

Still, Jack is a survivor, now on his 7th president: one truly great, one truly evil, and the rest (as with the last 20 years) of varying degrees of mediocrity. Having run out of the likely threats — Islamic fanatics, the Chinese, the Russians — the enemies get more improbable every season.

Jack fights at all costs to protect a society that he can never fully join, much like Batman or Dirty Harry. As the author of several 24 books summarizes it:

Jack is the sheep dog, the terrorists the wolves. Although the sheep fear the wolves and are guarded by the dog, the dog — with its fangs, claws and willingness to kill — has more in common with the wolves than with the sheep he protects. Despite the dog’s role as protector, he possesses the same predatory instincts and violent tendencies as the wolf, so he can never be a part of the flock. Jack is estranged from his daughter, constantly robbed of a normal life, admired by the audience but alienated by much of the fictional world he inhabits.
As with every season, Jack saves the day. Presumably being infected with an incurable virus will be solved in tonight’s final two episodes — while at the same time, some dear friend will die, or some deadly villain will skate free.

While Jack is a creation of Hollywood, one problem he has not been asked to solve is the impending destruction of its failing television business models. It probably doesn’t fit his skill set, since it can’t be done by pointing a gun or blowing something up.

The core problem is that the 20th century concept of mass media is approaching its end — in this case, the idea of one-to-many prerecorded entertainment. Jack Bauer and 24 are the last gasp of a dying breed, the network television hit show that is watched by a wide enough audience to become part of the popular culture.

The demographics are all wrong for Hollywood. The middle-aged and geriatric set will continue to watch TV, but even the CEO of one of the major motion picture studios admits that coming generations are lost. In an (online) interview posted earlier this month, Howard Stringer of Sony said:
Children today don't watch that much TV. Take my 16-year-old son, for example. Apart from watching some sports, he almost never watches TV with the rest of the family; instead he spends most of his at-home leisure time communicating via the social networking site, Facebook.

It's clear that customer preferences are changing, and I think this fact indicates what the next steps in TV evolution are likely to be. We'll never recapture our customer's hearts by merely offering better color or higher resolution.
Allowing for this new generation, 24 is available online at the Fox website, or on a limited basis at Hulu. Because I’ve been teaching Monday nights, I’ve watched most of this season in a tiny window on my desktop computer: a lousy experience, but a convenient (and free) way to catch up on the plot twists.

The problem is, TV will never monetize as well online as it did in the 60s, 70s and 80s. As with Internet “real estate” vs. physical real estate, there are no natural limits for online video the way that 3 or 5 or 7 local broadcast TV stations created a scarcity (and thus premium pricing) for TV advertising. The market will be more fragmented — with both domestic and overseas content providers — and the per-viewer prices lower than during the 20th century era of the great broadcast networks.

Some will argue that Google, Amazon and others demonstrate that the Internet can create great wealth, and thus the same will happen for online video. Certainly companies that previously had nothing (cf. YouTube and its three founders) can enjoy explosive growth in demand and market cap (if not revenues). But for every winner there are losers. Advertisements placed with Google are ones not placed with newspapers, books purchased through Amazon are not purchased at the corner bookstore.

Internet media (first print, now video) is a classic case of what Clay Christensen (in his 1997 book) termed a “disruptive innovation” commoditizing a fat, dumb and happy industry. The new technology is cheap and at first appears to be a toy, but it finds a new untapped market and eventually gets “good enough” so that most existing users switch over. New entrants are happy, because they enjoy explosive growth. Consumers get what they want at a much lower price.

But for Hollywood, an assumption of higher volume, lower margin will never work: its volume has long since peaked. In its heyday, All in the Family drew 20 million households on a Sunday night in a country of about 65 million households. Except for the occasional man walking on the moon or the Super Bowl, there isn’t anything that networks can do to grow this penetration rate — so a show like CSI has fewer viewers in a country with 50% more people.
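
For the record, the penetration arithmetic looks like this (a rough sketch using the post’s round numbers; the 50% population-growth figure is the same approximation used above):

```python
# Penetration arithmetic behind the paragraph above, using the post's round figures.
aitf_audience = 20e6        # "All in the Family" weekly household audience
us_households_then = 65e6   # US households in its heyday (per the post)
penetration_then = aitf_audience / us_households_then

# With roughly 50% more people today, matching that share would take about
# 30 million households, far more than any scripted show now draws.
equivalent_audience_today = penetration_then * us_households_then * 1.5
print(f"Heyday penetration: {penetration_then:.0%}")                                   # ~31%
print(f"Equivalent audience today: {equivalent_audience_today / 1e6:.0f}M households")  # ~30M
```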

If Christensen’s maxims hold true, then the old media will be unable to adapt to the Internet threat, because it will be too fixated on protecting existing margins to embrace the online world. So far Fox (like Hulu) hasn’t made this mistake: my free episodes of 24 have an occasional pre-roll ad, but certainly fewer ads than if I spent 60 consecutive minutes in front of the idiot box on a Monday night. Giving it away virtually free today builds market share, but (as newspapers have found) makes it hard to monetize later.

Even if Hollywood does emerge the winner in the online media, it will be a Pyrrhic victory: both the volumes and unit revenues will be lower than in the late 20th century. Being a TV star won’t be as lucrative as it once was, and unlike rock stars, there’s no alternative revenue model to make up for it.

More urgently for the Hollywood media companies, being a TV producer or studio executive won’t be as lucrative as it once was. I’m guessing this will spark a frantic round of related diversification (videogame companies?) that prove as successful as the AOL Time Warner merger.

Taxing their way to greatness

As part of ongoing cost reduction via outsourced economic criticism, here are excerpts from a WSJ op-ed this morning by Arthur Laffer and Stephen Moore.

Soak the Rich, Lose the Rich
Americans know how to use the moving van to escape high taxes.

With states facing nearly $100 billion in combined budget deficits this year, we're seeing more governors than ever proposing the Barack Obama solution to balancing the budget: Soak the rich.

Here's the problem for states that want to pry more money out of the wallets of rich people. It never works because people, investment capital and businesses are mobile: They can leave tax-unfriendly states and move to tax-friendly states.

And the evidence that we discovered in our new study for the American Legislative Exchange Council, "Rich States, Poor States," published in March, shows that Americans are more sensitive to high taxes than ever before.

[W]e found that from 1998 to 2007 … the no-income tax states created 89% more jobs and had 32% faster personal income growth than their high-tax counterparts.

We believe there are three unintended consequences from states raising tax rates on the rich. First, some rich residents sell their homes and leave the state; second, those who stay in the state report less taxable income on their tax returns; and third, some rich people choose not to locate in a high-tax state. Since many rich people also tend to be successful business owners, jobs leave with them or they never arrive in the first place. This is why high income-tax states have such a tough time creating net new jobs for low-income residents and college graduates.

States aren't simply competing with each other. As Texas Gov. Rick Perry recently told us, "Our state is competing with Germany, France, Japan and China for business. We'd better have a pro-growth tax system or those American jobs will be out-sourced." Gov. Perry and Texas have the jobs and prosperity model exactly right. Texas created more new jobs in 2008 than all other 49 states combined. And Texas is the only state other than Georgia and North Dakota that is cutting taxes this year.
There is an assumption by California and New York politicians that taxes can be raised indefinitely, because certain high-income residents (Silicon Valley, Wall Street) have no choice but to live here. Given that assumption, there is no incentive to cut costs and improve efficiency.

That philosophy almost destroyed NYC in the 1970s. I will be curious to see whether these high income residents prove more mobile than anticipated.

Sunday, May 17, 2009

That old house

The Mercury News and Fortune have been covering Steve Jobs’ efforts to tear down the 1926 mansion on his Woodside property. Steve bought the 17,250 square foot Spanish colonial (and land) in 1984 — long before he got married — lived there briefly, but is now raising his kids in Palo Alto.

Jobs got approval Tuesday to demolish the mansion. (Fortune reports that Jobs was too weak to sit through a many-hour meeting).

The Merc notes that the old mansion would cost $13.3 million to restore — $5 million more than building a newer, smaller, more energy efficient home.

Not surprisingly, the Merc gives lots of air time to demolition critics — who have done nothing to purchase an economic stake in the future of this private property, but seek to tell other people what they can do with their property.

What I found even more troubling, however, is that both the Merc and an earlier Fortune photo spread blatantly advertised how they trespassed on Jobs’ land in search of a “story.” I guess it’s long past time for Jobs to have to hire security guards to keep sightseers from wandering through the old house (and then suing him if they fall through a rotted floorboard).

Note to overseas readers: “This Old House” is an American TV show (and media empire) in which the celebrity remodeling team shows up to lovingly restore an aging home to the delight of the grateful homeowner.

Saturday, May 16, 2009

iPhone success: browsers, then apps

Last month, I visited the Quello Center for Telecommunication Management & Law at Michigan State University. I was invited to speak by center co-director Steven Wildman, who I met last summer while presenting at a USC-sponsored telecommunications conference.

We debated what I should present. In the end, I chose to present the iPhone paper I’ve been working on with Mike Mace, because it’s almost done and the visit would act as a forcing function.

It was a great choice: we had 35 people in the room — a few faculty, but mostly students. I was told it was the most ever for a Friday lunchtime talk (though perhaps that’s because some students were behind on writing up seminar reports).

I’ve posted my slides at SlideShare.net — my first posting there ever. (I joined the site after I saw speakers use it at the O’Reilly Web 2.0 conference in March.)

Since readers can see the slides, let me just summarize the argument in short form. Most people think of the iPhone as a success because of the app store. However, the app store was part of iPhone 2.0, and the success of iPhone 1.0 was based on a simple core idea: deliver the “real Internet.”

There are plenty of anecdotes to show that the iPhone succeeded in changing how people think about mobile browsing. Clearly iPhone users browse more than owners of other smartphones (at least in North America), as Google discovered in December 2007, and as AT&T is finding as it seeks to keep “all you can eat” from destroying its 3G network capacity. We are trying to come up with more systematic data.

I gave the talk the day after Apple reported that it had achieved 1 billion downloads at the app store. For my talk, I tried to briefly classify the most popular applications, but I was tentative because it’s not clear whether Apple’s “top paid apps” and “top free apps” lists were worldwide or US-only. (TechCrunch has Mobclix data that is a little more useful here).

Clearly the top 20 paid apps are all games or other forms of entertainment. The iPhone/iPod Touch is a hot gaming platform with many satisfied developers. The iPhone scores points both for an easy-to-use SDK and for its convenient distribution channel. This is mostly the “kill a few minutes” casual gaming audience — such as using mom’s iPhone as a video pacifier. But the units are rapidly gaining on the Sony PSP if not the Nintendo DS.

Some are concentrating on the direct revenues to Apple, i.e. from paid apps. Estimating the number of paid downloads depends heavily on the assumed ratio of paid to free downloads, as the Apple 2.0 blog at Fortune noted last week.
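
To illustrate how sensitive those estimates are, here is a minimal sketch. Only the 1 billion total is Apple’s number; the paid shares and the $2.50 average price are hypothetical assumptions of mine, not reported figures:

```python
# How paid-download estimates swing with the assumed paid/free mix.
# Only the 1 billion total comes from Apple's announcement; the paid
# shares and the average price below are hypothetical illustrations.
TOTAL_DOWNLOADS = 1_000_000_000
AVG_PAID_PRICE = 2.50   # assumed average price of a paid app, in USD

for paid_share in (0.05, 0.10, 0.20, 0.25):
    paid_downloads = TOTAL_DOWNLOADS * paid_share
    gross = paid_downloads * AVG_PAID_PRICE
    apple_cut = gross * 0.30   # Apple keeps 30% of paid-app revenue
    print(f"paid share {paid_share:.0%}: {paid_downloads / 1e6:.0f}M paid downloads, "
          f"~${gross / 1e9:.2f}B gross, ~${apple_cut / 1e6:.0f}M to Apple")
```

Depending on the assumed mix, the implied paid downloads range from 50 million to 250 million, which is why published revenue estimates differ so widely.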

(I thought I saw an article around April 23 that noted actual unit sales for some of the top 20 apps, but I have been unable to find the article. Does anyone have such an article?)

However, what I found interesting was the free apps. Sure, there are some freemium offerings in games and entertainment. But there were also iPhone versions of some of the most popular wired Internet apps — Facebook, MySpace, Google Earth, the Weather Channel.

If the iPhone is heavily used for the same thing as the wired Internet, that means it will make progress on substituting for the wired Internet. I’m not ignoring all those motion-sensitive games (or location based services) designed just for the iPhone — only concentrating on evidence where the iPhone is compelling enough to get people to drop (or ignore) their PC.

Friday, May 15, 2009

Cutting your way to greatness

The Big Three auto companies certainly aren’t selling as many cars as they used to. As such, they need to reduce their fixed costs to more closely match their current revenues.

Towards this end, the two sickest “US” automakers are shedding dealers. Thursday, Chrysler gave notices to 789 of its dealers (about 25%) that their contracts end June 9. On Friday, GM gave 18 month notice to 1100 dealers, and is expected to dump another 500 later this year. (GM is also hoping to sell its Saab, Saturn and Hummer divisions, which have their own dealers).

Both claim to have targeted their least profitable dealers. GM said that the 18% of dealers it is cutting account for 7% of its sales, while Chrysler says 25% of its dealers account for 14% of sales. GM hopes to cut 42% of its dealers by the time it’s all over. Meanwhile, Ford is also reducing dealers, but in a less confrontational way.
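
A quick sketch of what those percentages imply about per-dealer sales (my own arithmetic, not the companies’ figures):

```python
# Relative sales productivity of the dealers being cut, derived from the
# percentages in the paragraph above (my arithmetic, not the companies').
def relative_productivity(dealer_share, sales_share):
    """Sales per cut dealer divided by sales per surviving dealer."""
    per_cut_dealer = sales_share / dealer_share
    per_kept_dealer = (1 - sales_share) / (1 - dealer_share)
    return per_cut_dealer / per_kept_dealer

print(f"GM:       {relative_productivity(0.18, 0.07):.2f}")   # ~0.34
print(f"Chrysler: {relative_productivity(0.25, 0.14):.2f}")   # ~0.49
```

In other words, by these figures a cut GM dealer sells roughly a third as much as a surviving one, and a cut Chrysler dealer about half as much.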

The ailing automakers are in a tough spot. Having too many dealers means that sales are spread too thinly, making it hard for each individual dealer to survive. It’s like throwing the sickly out of an overcrowded lifeboat in hope that the rest will survive.

The dealer associations question the logic of the cuts (as in these commentaries in NJ and LA). Dealers are the distribution network for the car companies — more dealers means more sales; fewer dealers means fewer sales.

Cutting dealers makes certain assumptions about the substitutability of demand through alternate distribution channels. In some cases, shifting demand is plausible, as when a dealer is closed in a metropolitan area where the buyer can drive another 10 miles to another franchise.

However, in some cases, substitutability is highly suspect. For example, Chrysler is cutting its only dealer in El Centro, a farming town in Southeast California (population 40,000). Those residents won’t drive an hour to Yuma (or 90 minutes to San Diego) to buy their next Dodge truck: they’ll buy a Toyota or a Ford (or a Chevy if that dealership stays open).

The other problem is that many (or most) of the dealers that GM and Chrysler want to kill aren’t interested in dying. As autonomous economic actors, they are going to find other ways of making money.

Some may become independent used car lots, repair shops, or other auto-related businesses that build on their loyal customer base and service their previous customers. But clearly some are going to sell new cars for someone else. Multi-brand dealerships will just push their other brands; former single-brand locations will be looking for another brand to carry. For example, the (PBS) Nightly Business Report Thursday interviewed one axed Chrysler dealer who’d already signed up to sell Kias.

It’s impossible for a firm to cut its way to greatness: closing dealerships and plants isn’t going to solve the underlying problems. GM, Chrysler (and to a lesser degree, Ford) still have to figure out a way to make cars people want to buy.

At the same time, longer-lasting, high-quality cars mean that per capita auto sales may never return to the levels demonstrated in the 1990s. US sales (for all makers) peaked in 2000 at 17.4 million; early predictions for 2009 were for 11 million units.

Thursday, May 14, 2009

MIT, Stanford and Silicon Valley

On Sunday, the Merc published a long article claiming that Stanford has for 100 years been the center of entrepreneurship in the Bay Area if not the whole universe. I immediately started writing a rebuttal, but when I was done I decided to post it to my blog on engineering entrepreneurship.

Stanford and the Silicon Valley are unique in the world, and it’s understandable why so many people look here for a model as to how to encourage tech entrepreneurship. However, I think the reporter exaggerated the case for effect’s sake — although local boosters are prone to exaggerating too.

To boil down my rebuttal arguments, the first point was that there were not a lot of significant Stanford tech spinoffs until the banner year of 1982 (Cypress, EA and Sun). Stanford didn’t even have an entrepreneurship policy until the 1950s, and that was driven by a need to raise money. Yes there was HP, but one company does not a trend make. (I’m grateful to Silicon Valley historian and native Stephen Adams for helping me with the details).

The second point was that there was an acknowledged home for electronics-based entrepreneurship in the US for more than 50 years: it's called MIT (yes, my alma mater). In the early 20th century, MIT created some of the most durable models for industry-university collaboration, as documented by Henry Etzkowitz. MIT also provided the founders for companies from Raytheon and TI to 3Com and Qualcomm, as well as educating some of the key technologists who created Silicon Valley. (OK, now who’s sounding like a booster?)

If you believe the rankings, MIT is still the top electrical engineering program in the country — a position it has held for a century. (This year it’s tied with Stanford and Berkeley in one ranking, which certainly seems plausible given the strength of all three schools).

What’s different is the environment: Massachusetts was a good place for creating tech startups in the 1960s, but it fell dramatically over the succeeding decades. On the Left Coast, there’s no disputing that the transformation of the Peninsula over the past 30 years has made the Valley a much better environment for launching a high-tech business.

So to a large degree, Stanford benefits from what’s in its backyard — the firms and infrastructure created by the industries that grew here in the 1960s-1990s. Some of these firms were founded by Stanford alums, some were not. In fact, MIT alumni have been taking jobs in the Valley for 30 years, and today I’m among MIT alumni coaching other alumni who want to launch tech startups here.

When did Stanford pass MIT as the top hub of tech entrepreneurship? I’m guessing the evidence would point to sometime in the 1980s, when many successful Silicon Valley firms were created and IPOs demonstrated the advantages of working for a startup.

Since the data is too messy to come up with an exact date, any further effort to nail it down is fodder for a two-beer argument. Anyone want to share a pitcher and come to a definitive answer?

Rule of law

As part of outsourced economic criticism, from Wednesday’s WSJ:

Chrysler and the Rule of Law
By TODD J. ZYWICKI

The rule of law, not of men -- an ideal tracing back to the ancient Greeks and well-known to our Founding Fathers -- is the animating principle of the American experiment.…

Fleecing lenders to pay off politically powerful interests, or governmental threats to reputation and business from a failure to toe a political line? We might expect this behavior from a Hugo Chavez. But it would never happen here, right?

Until Chrysler.

The Obama administration's behavior in the Chrysler bankruptcy is a profound challenge to the rule of law. Secured creditors — entitled to first priority payment under the "absolute priority rule" — have been browbeaten by an American president into accepting only 30 cents on the dollar of their claims. Meanwhile, the United Auto Workers union, holding junior creditor claims, will get about 50 cents on the dollar.

The absolute priority rule is a linchpin of bankruptcy law. By preserving the substantive property and contract rights of creditors, it ensures that bankruptcy is used primarily as a procedural mechanism for the efficient resolution of financial distress.

Chrysler -- or more accurately, its unionized workers -- may be helped in the short run. But we need to ask how eager lenders will be to offer new credit to General Motors knowing that the value of their investment could be diminished or destroyed by government to enrich a politically favored union. We also need to ask how eager hedge funds will be to participate in the government's Public-Private Investment Program to purchase banks' troubled assets.
Zywicki puts it more clearly than anyone else has. The US government has now introduced political risk into the enforcement of US contracts, where decades of legal precedent can be arbitrarily supplanted by the whim of those in power. The treatment of Chrysler’s lenders is the sort of thuggery we’d expect from Chávez, Castro or the Chicago mob — not the POTUS.

Wednesday, May 13, 2009

Ganging up on iTMS

Microsoft is running a new TV ad with “Certified Financial Planner” Wes Moss, promoting the $15/month Zune Pass as being cheaper than buying thousands of songs at $1 each from the iTunes Store.

As Ars Technica writes:

Moss compares $30,000 for iTunes to $15 for the Zune Pass. So where does Microsoft get the $30,000 number? Well, seeing as the 120GB iPod appears in the ad, I'm thinking the company is estimating each song at about 4MB, which really isn't much of an exaggeration. Of course, it's not exactly $15 versus $30,000. The $15 is a monthly fee, so you're likely going to be paying more if you plan on playing music for more than a month. That said, it would take you 166 years and 8 months to shell out $30,000 for the Zune Pass; many of us won't be living that long.

As of November 2008, the Zune Pass allows its users to keep any 10 songs per month. In other words, if you wanted 30,000 songs for keeps, just like the iTunes Store, you would have to wait 250 years. The cost would be a whopping $45,000, however. In other words, it's only really worth it if you're OK with the fact that you have to keep paying the monthly fee to keep access to the songs that you don't yet own. Otherwise, iTunes (or any other à la carte model) is the way to go.
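To sanity-check the arithmetic, here is a minimal Python sketch using only the figures from the quote above (a 120GB iPod, roughly 4 MB per song, about $1 per track, $15 per month, and 10 keeper songs per month); the variable names are mine.

```python
# Back-of-the-envelope check of the Ars Technica numbers quoted above.
# All inputs come from the quote; everything else is arithmetic.

IPOD_CAPACITY_MB = 120 * 1000     # 120 GB iPod, decimal gigabytes
MB_PER_SONG = 4
PRICE_PER_TRACK = 1.00            # dollars, a la carte
ZUNE_PASS_MONTHLY = 15.00         # dollars per month
KEEPERS_PER_MONTH = 10            # songs you keep, post-November 2008

songs = IPOD_CAPACITY_MB // MB_PER_SONG          # ~30,000 songs to fill the iPod
itunes_cost = songs * PRICE_PER_TRACK            # ~$30,000 bought outright

# How long until Zune Pass fees also total $30,000?
months_to_30k = int(itunes_cost // ZUNE_PASS_MONTHLY)   # 2,000 months
years, rem = divmod(months_to_30k, 12)
print(f"{years} years, {rem} months of Zune Pass fees to spend ${itunes_cost:,.0f}")
# -> 166 years, 8 months

# How long until the 10-per-month allowance yields 30,000 songs you actually own?
months_to_own = songs // KEEPERS_PER_MONTH       # 3,000 months
print(f"{months_to_own // 12} years to own {songs:,} songs, "
      f"${months_to_own * ZUNE_PASS_MONTHLY:,.0f} in fees")
# -> 250 years, $45,000
```
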
Good Morning Silicon Valley (the Merc) observes:
The commercial doesn’t even mention the Zune itself; a non-techie viewer could be forgiven for thinking that the Zune Pass was an alternative service for iPod users. But going head to head on hardware probably isn’t the best play for the Zune anyway. The first step is to try to introduce doubt about committing to the Apple ecosystem by arguing the benefits of subscription over ownership. It’s a legitimate argument, depending on your needs, but one that hasn’t helped other subscription services like Real’s Rhapsody slow down the iTunes juggernaut. There’s no particular reason to think that Microsoft’s attempt to make the case will fare any better.
Whether or not I agree with their conclusions or slant, Microsoft is making legitimate comparisons between its offerings and Apple’s. That’s competition, and if it starts to have an impact, Apple will have to stop acting like a music monopolist.

What I found remarkable is that the Beast of Redmond is being more honest in its attack on Apple than is Sony.

In an interview posted this month by Nikkei Electronics Asia, Sony CEO Howard Stringer has suddenly become a convert to open standards. Stringer laments the failure of Sony’s late proprietary efforts to control consumer music libraries:
Q: In your keynote speech at the 2009 International Consumer Electronics Show (CES), you said that open technology is important today. Is that feeling based on the needs of customers?

A: That's right. Customers will refuse to accept it unless the technology is open. Youth in particular really dislikes closed technologies, closed systems and the like. …

Sony hasn't taken open technology very seriously in the past. Its CONNECT music download service was a failure. It was based on OpenMG, a proprietary digital rights management (DRM) technology. At the time, we thought we would make more money that way than with open technology, because we could manage the customers and their downloads.

This approach, however, created a problem: customers couldn't download music from any Websites except those that contracted with Sony. If we had gone with open technology from the start, I think we probably would have beaten Apple Inc of the US.

There was a time when it made sense to divide the market with closed technology, and monopolize a divided market, but that's just not an effective strategy any more. In the Internet universe, there are millions of stars - millions of options that have been created through open technology.
Every CEO is entitled to his woulda, coulda, shouldas. However, after years of Sony going its own way on proprietary standards, I will believe Stringer’s deathbed conversion to open standards when the Memory Stick in every Sony camera is replaced by the de facto industry standard, Secure Digital.

However, Stringer was seriously misinformed (if not flat-out lying) in his next paragraph:
Apple's iTunes Store uses its own proprietary DRM called FairPlay. I think this gives Sony a chance to provide something that Apple can't. And we have to move ahead and grab that opportunity before Apple begins to provide support for other hardware and blocks us out.
Apple announced four months ago that it was going DRM-free. Today, the FAQs on the iTunes Store are very clear:
iTunes Plus Frequently Asked Questions

What is iTunes Plus?
Now all songs on the iTunes Store are iTunes Plus songs. That means every song is available in our highest-quality 256 kbps AAC encoding (twice the former bit rate of 128 kbps), making for a sound that's virtually indistinguishable from the original recordings. Plus, all music on iTunes is available without digital rights management (DRM). There are no burn limits and iTunes Plus music will play on all iPods, Mac or Windows computers, Apple TVs and many other digital music players.

Can I still buy music encoded at 128 Kbps with Digital Rights Management (DRM)?
All songs on the Store are now available in iTunes Plus, so tracks are no longer available as 128 kbps and with DRM.
Steve Ballmer, the honest competitor? I guess that goes with him being the nice one too.

Graphic credit: Joy of Tech.

iPhone scofflaws continue

Some 23 months after the introduction of the iPhone, users (or at least hackers) are still rebelling against Apple’s control over what is and is not allowed as a third party app. The NYT notes the ongoing fight between Apple, hackers who have done a series of “jailbreak” efforts on the iPhone, and the legal supporters of the jailbreakers. As with other circumvention efforts, Apple is citing the Clinton-era DMCA to shut down the hackers.

The controversy for some apps is understandable. Given AT&T’s 3G bandwidth problems caused by the iPhone, it’s not surprising that AT&T doesn’t want the iPhone consuming even more download bandwidth by “tethering” a connected laptop. Thus, Apple tossed such an application last summer. T-Mobile/Google had the same reaction to a tethering app developed for Android.

Other choices of what’s verboten are more surprising. The Times mentions a videocamera app, which seems a fine way to sell iPhones with more flash RAM.

If the hackers fear being locked out (after jail-breaking), their converse fear is that Apple itself will provide the missing functionality in a future release (possibly for free). For example, one November CNET report said that iPhone OS 3.0 would support tethering this year.

Apple’s decisions of what to exclude have at times seemed arbitrary and are rarely explained. That is certainly the inherent problem of having only one (authorized) distribution channel with a near-monopoly on reaching customers.

Apple solved most (but not all) of the demand for the original jailbreaking by providing its own SDK and a distribution channel. Let’s hope that the current strictures are only a transitory issue en route to a fully extensible device.

Tuesday, May 12, 2009

Carrier app stores still alive and growing

The assumption was that the iPhone App Store heralded a tidal shift to phone (or operating system) centric app stores. Many assumed that this also rendered obsolete attempts by mobile network operators to create their own walled gardens.

Apparently two of the four major European operators, Orange and Vodafone, didn’t get the message. The Orange Application Shop (announced last month) and the unnamed Vodafone application store (announced Tuesday) are two examples. (Apparently the Vodafone store will also apply to Verizon Wireless in the US.) Both T-Mobile and Telefonica also have app store experiments.

Each of the app stores has its own APIs, own billing systems, own markets. Of course, this is in addition to the app stores from Apple, Google, Microsoft, Nokia, RIM, Samsung and others — each with its own APIs, devices, billing and so on.

The proliferation of app stores reminds me a lot of music stores: Apple created the iTunes Music Store and everyone wanted to have their own store. Now many of them are gone.

However, there’s actually a better argument for multiple music stores than multiple app stores. Once upon a time, we had thousands of LP (later CD) stores. If the online music stores were all distributing DRM-free MP3 files, then consumers could buy songs from different stores on different days and have them all work together.

However, a fragmentation of MP3 distribution would increase buyer power and thus increase competition (and price competition) between suppliers. This competition could commoditize MP3 distribution — putting the less efficient dealers out of business — or put pressure on music publishers to increase promotions or cut prices.

Unreported MSM decline

Everyone knows newspapers are in trouble: newspapers report it, online websites report it, even TV reports it.

However, the NY Times (a newspaper) reported Monday that the newscasts of the major TV networks are also losing viewers, but TV is not reporting that:

“The television networks have basically not been very interested in talking about television’s problems,” said Michael X. Delli Carpini, dean of the university’s Annenberg School of Communication and one of the study’s authors. The authors combed through reports from 2000 through early 2009 from 26 major newspapers, the evening news broadcasts of ABC, CBS, NBC and PBS, and the prime-time lineups of CNN, CNBC, Fox News and MSNBC.

In the newspapers, they found 900 articles about the drop in newspaper circulation and 95 about the shrinking audience for the broadcast networks’ newscasts. The TV news shows had 38 reports on falling newspaper readership and only 6 about the falling audience for national news broadcasts.
The Times reports that the newspapers (big and small) have lost 16% of their readers since 2000, while the major broadcast networks have lost 28% of their viewers over the same period.

Of course, this is not exactly an apples-to-apples comparison: because about 2.5 million TV viewers shifted from broadcast news to cable, total primetime TV news viewership is only down 19%.

There is no solution yet to the business model problems of what critics (right and left) call the “mainstream media.” As NYT reporter Richard Pérez-Peña drily observed, “all media have new — but not very lucrative — audiences online.”

Monday, May 11, 2009

But making it up on volume

Martin Peers of the WSJ’s “Heard on the Street” column reports today that the data suggests AT&T (formerly Cingular) is losing money on iPhone customers:

Users of iPhone download games, video and other Web data at two to four times the rate of other smartphone users, according to comScore. Yet AT&T charges iPhone subscribers the same fee of $30 a month for data that it levies on other smartphone customers. And aside from restricting certain activities, like file sharing, AT&T doesn't limit how much data can be downloaded.
Peers estimates that iPhone 3G users (from July 11 to March 28) are 7.5% of all AT&T subscribers.

While everyone knows that iPhone users love to surf the web, the bad news is that (based on data compiled by Lucent) web browsing (surprise!) takes 16x the bandwidth of email. Web browsing is 32% of the usage but 69% of the bandwidth, while email is 30% of the usage and only 4% of the bandwidth. (Web browsing is 1.9x as data-intensive per minute as P2P.) Or, as they say, AT&T is losing money on every unit, but making it up on volume.
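The 16x figure falls straight out of those shares. Here is a minimal sketch using only the percentages cited above; nothing else is assumed.

```python
# Recovering the "web browsing takes ~16x the bandwidth of email" figure
# from the usage and bandwidth shares cited in the column above.

shares = {
    # application: (share of usage minutes, share of bandwidth), in percent
    "web browsing": (32, 69),
    "email":        (30, 4),
}

# Bandwidth intensity per minute of use = bandwidth share / usage share
intensity = {app: bw / minutes for app, (minutes, bw) in shares.items()}

ratio = intensity["web browsing"] / intensity["email"]
print(f"web browsing is about {ratio:.0f}x as data-intensive per minute as email")
# -> about 16x, i.e. (69/32) divided by (4/30)
```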

While the supply of iPhone applications is a classic software positive network effect, use of the network is a negative externality. As my friend Rudi Bekkers wrote in his book on 3G networks:
Negative network externalities, for instance, when a telephone or computer network becomes congested or overloaded and the value of that network for an individual user decreases.
Peers argues that for AT&T and Verizon Wireless, the only solution is to abandon “all you can eat” data plans for cellphones, the way they have for laptops. The only problem is that there are cracks in the oligopoly:
With competition, the temptation to discount will be hard to avoid.
That competition will be from Sprint (trying to stem its sliding market share) and T-Mobile (still trying to gain share).

American consumers are used to “all you can eat.” They long enjoyed it for wireline voice communications, and forced it upon AOL for dialup ISPs. The wireless voice business is moving in this direction, with $50 unlimited service plans from niche carriers like MetroPCS and Leap, as well as Nextel’s seven-year-old prepaid division, Boost Mobile.

All-you-can-eat (possibly with some sort of reasonable cap) is the only way that American consumers will adopt the mobile Internet. The iPhone users have shown there’s pent-up demand for mobile web browsing, but if it means the risk of $100 data bills, they won’t do it: instead, they’ll wait until they get back to their wireline Internet.

So if the Big Two aggressively price data services, they may get too many users (shades of AOL’s excess demand for unlimited dialup services). If they charge too much, they’ll lose customers to smaller carriers, or to WiFi hotspots, or people will stay with their wired Internet.

One possibility (as suggested by GigaOM) is congestion pricing: give away megabytes of download bandwidth only when it’s unused, and charge a premium when everyone wants it. In the extreme, it would be like the cellphone (voice) pricing strategy of the 1990s: free night and weekend minutes, but expensive minutes on weekdays and at rush hour.
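As a rough illustration of how such congestion pricing might work, here is a hypothetical sketch; the thresholds and rates are invented for illustration and are not anything the carriers (or GigaOM) have actually proposed.

```python
# Hypothetical congestion-pricing schedule: data is free when the cell is
# idle and carries a premium when it is busy. Numbers are made up for
# illustration only.

def price_per_mb(cell_utilization: float) -> float:
    """Return a per-megabyte price given current cell utilization (0.0 to 1.0)."""
    if cell_utilization < 0.30:      # off-peak: give the bandwidth away
        return 0.00
    elif cell_utilization < 0.70:    # normal load: nominal charge
        return 0.01
    else:                            # congested: premium, like 1990s peak-hour minutes
        return 0.05

for load in (0.10, 0.50, 0.90):
    print(f"utilization {load:.0%}: ${price_per_mb(load):.2f}/MB")
```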

In the short term, the numbers don’t work for 3G unlimited data plans. In the long run, plans to build 4G networks assume high levels of usage, and proponents claim that LTE networks are 2x-4x more efficient than their WCDMA counterparts. I’m not sure that even that is cheap enough bandwidth to support all-you-can-eat.

I think it’s long past time for American carriers to embrace WiFi as a complementary service to their mobile networks, as European carriers like Orange and T-Mobile already do. Much mobile Internet browsing occurs in coffee shops and similar locations, so now that people are starting to embrace the mobile Internet, there’s no reason why phones can’t be programmed to prefer the high-capacity (and easily expanded) hotspot over the scarce 3G bandwidth.
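As a toy illustration of that preference, here is a hypothetical sketch of a bearer-selection policy; real handset connection managers are far more involved, and the function name and thresholds here are mine.

```python
# Toy "prefer WiFi over 3G" radio policy. Thresholds and names are invented
# for illustration; this is not any handset vendor's actual logic.

def choose_bearer(wifi_available: bool, wifi_bars: int, cell_bars: int) -> str:
    """Pick a data bearer, preferring a usable hotspot over scarce 3G spectrum."""
    if wifi_available and wifi_bars >= 2:   # any reasonably strong hotspot wins
        return "wifi"
    if cell_bars > 0:
        return "3g"
    return "offline"

print(choose_bearer(wifi_available=True, wifi_bars=3, cell_bars=4))    # -> wifi
print(choose_bearer(wifi_available=False, wifi_bars=0, cell_bars=4))   # -> 3g
```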

Here, AT&T and T-Mobile are well positioned with their existing hotspot networks, with AT&T growing its network last year when it purchased Wayport. Meanwhile, Sprint has been selling hotspots, and presumably it hopes that its WiMax network will obviate the need for 802.11 hotspots. To catch up, Verizon would have to buy Boingo; rumor has it that it will soon announce a partnering agreement.

Sunday, May 10, 2009

Nyet to philosopher kings

A key axiom of capitalism is that the distributed, decentralized consumers and entrepreneurs are more effective at optimizing an economy than any attempt at centralized control.

As Adam Thierer (formerly of Cato) recounted on Friday, cybervisionaries like George Gilder and Nicholas Negroponte predicted that a wired world would enable decentralized empowerment. (Sound familiar?)

However, Thierer notes that Stanford (now Harvard) law prof Larry Lessig was far more pessimistic in his book Code and Other Laws of Cyberspace. Thierer summarizes the various reasons for the failure of Lessig’s predictions:

Had there been anything to Lessig’s “code-is-law” theory, AOL’s walled-garden model would still be the dominant web paradigm instead of search, social networking, blogs, and wikis. Instead, AOL — a company Lessig spent a great deal of time fretting over in Code — was forced to tear down those walls years ago in an effort to retain customers, and now Time Warner is spinning it off entirely. Not only are walled gardens dead, but just about every proprietary digital system is quickly cracked open and modified or challenged by open source and free-to-the-world Web 2.0 alternatives. How can this be the case if, as Lessig predicted, unregulated code creates a world of “perfect control”?
Thierer reacts to an earlier essay (also at Cato) by Declan McCullagh of CBS News and CNET, also critiquing the Lessig book.

McCullagh attacks the philosophical basis of Lessig’s critique:
Lessig goes out of his way to assail libertarianism and “policy-making by the invisible hand.” He prefers what probably could be called technocratic philosopher kings, of the breed that Plato’s The Republic said would be “best able to guard the laws and institutions of our State–let them be our guardians.” These technocrats would be entrusted with making wise decisions on our behalf, because, according to Lessig, “politics is that process by which we collectively decide how we should live.”
I’ve heard Lessig speak a few times. He always struck me as a smart guy. However, his work never struck me as scholarship, merely opinion couched as advocacy. His presentations tended to assume the audience agreed with him, rather than using empirical evidence to support his positions. (As with open source, a lot of preaching to the faithful.)

Compared to Lessig, I was always more impressed by the work of Pam Samuelson (a Berkeley law prof). Both took similar positions on key copyright issues, but Samuelson’s positions were based on real evidence. Maybe that’s why Samuelson is still an IP expert while Lessig has abandoned IP to move on to the next crusade.