Wednesday, December 30, 2009

Productive publishing period

[HornTooting]
In terms of quantity, 2009 was my most productive year ever for academic publishing. After publishing three journal articles in 2008, in 2009 I published five: one about telecommunications, one about standards, two about telecommunications standards and one about open innovation.

The first two articles were based on a four-year collaboration with my now-friend Rudi Bekkers, looking at patents in W-CDMA. One paper focused more on the case study of standardization, while the second paper looked at the quality and timing of essential patents as laid against the standardization process. The latter paper was published in Telecommunications Policy, the leading academic journal on, well, telecommunications policy.

Building on Rudi’s earlier pathbreaking papers on GSM patenting, we noted several shifts from the 2G to 3G era in European mobile phone standards. The number of essential patents increased eightfold and the number of claimants increased threefold. Equipment makers retained about the same proportion of overall patents, but the network operators virtually disappeared, replaced by component suppliers (notably Qualcomm) and technology licensing firms (notably InterDigital).

These were our conclusions:

The sources of UMTS patent proliferation have often been ascribed to IPR-focused companies outside the ETSI process, particularly Qualcomm and InterDigital. However, this study shows that the largest numbers of patents are held by two firms (Nokia and Ericsson) centrally involved in the UMTS standardization, and the timing of their patenting suggests that they used their knowledge of the standard’s development for anticipatory patenting—further contributing to patent proliferation.

Still, a cozy oligopoly of four main UMTS patent holders might have produced a manageable IPR regime comparable to that of the five major holders of GSM patents. However, the number of firms claiming at least one patent has grown threefold, increasing the risk of holdup, transaction costs and royalty stacking for firms implementing the newer standard. This uncertainty is magnified by the self-determination of essentiality: while it is virtually impossible to determine how many of the 1227 patents are actually necessary to implement UMTS, at the same time other parties may fail to provide an itemized list of essential patents.

That was not even the longest collaboration behind these papers. One paper was based on an eight-year collaboration with my friend Scott Gallagher, which began when we met at the bubble-era conference of the Strategic Management Society (2001) in San Francisco.

Then as now, the goal was to re-examine, critique and extend the traditional view of positive feedback in the adoption of standardized goods. We brought together a number of observations that (when we started) were somewhat novel, although the field has not stood still during that period. After many delays (including other projects, work and life), the paper was published in September in the Journal of Engineering and Technology Management, a respectable journal that has attracted papers from some of the top names in the field.

I’ve already mentioned on my other blogs the two other papers published in 2009.

In April, I published the cover article in the Journal of San Diego History, based on my research into the origins of the San Diego telecom industry. The paper was entitled “Before Qualcomm” to make it more relevant to the general readership, and traced the early round of spinoffs of Linkabit, the region’s seminal company. It also included discussions of the role of Qualcomm co-founders Andy Viterbi and Irwin Jacobs in applying Claude Shannon’s information theory to space communications, drawing on my 2008 article in the Journal of Management Studies.

The fifth paper is the first in what I hope will be a series of papers that contrast open innovation with user innovation and related theories. Published in the Washington University Journal of Law and Policy, it (not surprisingly) focuses on policy issues related to open, user (and cumulative) innovation.

I’ll be glad to send out a PDF of the published version of any paper to anyone who’s interested.

This morning I got email notice of acceptance of my first paper for 2010, a paper on the success of the iPhone that will be published by Telecommunications Policy. Michael Mace and I have been working on this paper since 2007 — actually before the iPhone shipped — although our understanding of the phenomenon has shifted significantly since then. Additional details as they become available.

[/HornTooting]

References

Rudi Bekkers and Joel West, “Standards, Patents and Mobile Phones: Lessons from ETSI’s Handling of UMTS,” International Journal of IT Standards & Standardization Research, 7, 1 (January 2009): 13-34.

Rudi Bekkers and Joel West, “The Limits to IPR Standardization Policies as Evidenced by Strategic Patenting in UMTS,” Telecommunications Policy, 33, 1-2 (Feb.-March 2009): 80-97. DOI: 10.1016/j.telpol.2008.11.003

Scott Gallagher and Joel West, “Reconceptualizing and expanding the positive feedback network effects model: A case study,” Journal of Engineering and Technology Management 26, 3 (Sept. 2009): 131-147. DOI: 10.1016/j.jengtecman.2009.06.007

Joel West, “Before Qualcomm: Linkabit and the Origins of the San Diego Telecom Industry,” Journal of San Diego History, 55, 1-2 (Winter/Spring 2009): 1-20.

Joel West, “Policy Challenges of Open, Cumulative, and User Innovation,” Washington University Journal of Law & Policy 30 (2009): 17-41.

Thursday, December 24, 2009

Stopping others from Christmas evil

One of the functions of free markets is to provide private governance. Self-regulating markets reward good products and services and punish the bad.

However, sometimes buyers don’t have enough information to make good choices. In response, entire companies arise to correct this lack of information — stereo magazines, camera magazines, Consumer Reports, etc. etc. Intermediaries are also supposed to play this role. Reputable retailers, wholesalers and distributors select reputable products and stand behind them.

Of course, this is all fine in theory, but often breaks down in practice. The self-regulators (like government regulators) get lazy, corrupt, or just make a mistake.

And then we have advertising. TV and radio stations accept ads for weight loss programs, male enhancement herbal supplements, and all sorts of products where the “too good to be true” probably is.

All of this being a roundabout way of asking: How much of an obligation does Google have to reject fraudulent ads? Does its promise to “do no evil” require it to avoid complicity in the evil of others?

Do we expect more or less out of a search engine than a TV station, TV network or the New York Times? Does its market dominance give it special obligations?

To me, the Google business model makes it uniquely vulnerable to this problem. Its primary ethos — making its business scalable with no human intervention, and thus no human judgement — means doing as little governance as possible.

From what I’ve seen thus far, it’s insulted — and fights back — if the SEO crowd games its algorithms, going so far as to misappropriate the e-mail term “spam” to tar such efforts. It has also taken steps to block searches that lead to malware sites.

However, it seems to be less intent on blocking companies that pay for ads, and then use the traffic generated by those ads to perpetuate age-old examples of deceptive business practices.

All this came up earlier this month when I was Christmas shopping (for myself). My digital SLR is almost 10 years old. My wife and I have been talking for several years about replacing it because the CPU is too slow to take bursts of pictures of our daughter at sporting events.

Back in February, I’d identified the Nikon D60 as the likely replacement, and so I google’d “Nikon D60”. This gave me a lot of paid links to firms offering to sell me a camera, and links to sites offering me pointers to the best prices on a camera.
Virtually all of the sites that I found directly and indirectly were dishonest to some degree: they had no intention of selling me a D60. (The one exception was Amazon, which would be glad to sell me one today).

The problem with the search is that since my research in February, Nikon discontinued the D60 and replaced it with the D3000. So when Target bought an ad for my search, they were gambling that when I came to their site, I’d buy another camera. A few other reputable companies did the same thing.

After that, what was left was companies that had no intention of selling me a camera. Some of these advertised directly, some were listed by a site called Compare247.us and some were even listed by a shopping.yahoo.com search that showed up.

Going to these sites gave me many examples of “too good to be true” prices — because they were. This reminded me of my college days, when some of the ads in the back of Popular Photography were for reputable mail-order camera stores like Adorama, B&H, Executive and 47th Street Photo, but among the remainder were companies that were not selling what they advertised or would out-and-out rip you off. (Pop Photo had specific policies for reporting such problems, and over time seemed to weed out the worst offenders.)

It turns out that a UC Berkeley alum named Dave Michael (possibly a pseudonym) runs a blog devoted to rooting out fraudulent camera website “deals.” (Alas, to monetize all the traffic he’s getting, he created a separate blog plugging good camera prices that he sees on Amazon.)

So when I went to investigate what was known about these “too good to be true” prices, I kept finding Dave’s website, with articles talking about why these sites are dishonest and full of user comments about their own bad experiences. This included postings on Supreme Camera, SmartChoiceCameras.com, Thunder Cameras, and Need4Digital.com. All of these were mentioned by Google or comparison sites linked by Google.

Although Google is the biggest offender, it’s not as though its competitors are blame-free. Yahoo Shopping also sent me to Supreme Camera and Need4Digital.com. In fact, a Yahoo search for “Nikon D60” this morning (Dec. 24) found another paid ad for Need4Digital.com.

With its billions, Google can’t claim it doesn’t have the resources to investigate complaints. However, its AdWords complaint process seems to worry about other types of problems. For example, if a competitor is clicking on your page to inflate the commission you pay, Google offers advice on how to report “invalid clicks.” In fact, Google’ing “fraud” on the AdWords site turns up this so-called “click fraud,” not AdWords sites that are fraudulent.

Dave’s site has apparently come to the attention of state and local regulators who have been using it to shut down the most obvious frauds. He has also asked Google why they are not doing more. On October 28, he posted this plea on his blog:

Dear Google: Despite the State of New York’s crackdown, the bait and switch websites continue to pop up, and they use Google Adwords to lure unsuspecting Internet users into their fraud. With that in mind, I have an offer. Why don’t you flag all new applicants to your Adwords program that plan on advertising either cameras or camcorders, and then do some research. If they are brand new, put them on probation. Heck, send me their names and I’ll research them for free. In the long run, it’s better for Google not to let these guys use your service to commit fraud.

So in the end, what responsibility does Google bear for the dishonesty of its advertisers? How much effort should it exert towards rooting it out?

I think it can and should do more. If that means paying actual human beings to investigate the most egregious cases, then so be it.

Wednesday, December 23, 2009

Favorite OS X utility of the month

Without AppleTalk, OS X does a reasonable job of auto-detecting and auto-configuring network printers that support the Bonjour family of TCP/IP services (based on the IETF Zeroconf effort). I miss AppleTalk, but when I was listening to Apple pitch this idea back in 2000-2001, it seemed like the most reasonable way to make TCP/IP usable.

However, today I found a really cool freeware utility that makes Bonjour more usable: Bonjour Browser from TildeSoft.

Here’s how I used it: in a strange office, I wanted to know what printers were available. Apple’s default “add printer” feature will do that, but it hides the IP addresses from me.

If I use Bonjour Browser, then I can get the IP address for any or all of the printers. And if they’re HP LaserJet printers, I can throw the IP address into a web browser and thus browse all the useful printer information: status, configuration, installed accessories, etc. etc.
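For the curious, the same browse operation can also be scripted. Below is a minimal sketch using the third-party python-zeroconf package — an assumption on my part, since Bonjour Browser’s own internals aren’t published — that browses for IPP printers and prints each one’s address and port:

    # pip install zeroconf   (the third-party python-zeroconf package;
    # assumes a recent version with ServiceListener and parsed_addresses)
    from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

    class PrinterListener(ServiceListener):
        def add_service(self, zc, type_, name):
            # Resolve the advertised service to get its address(es) and port
            info = zc.get_service_info(type_, name)
            if info:
                print(name, ", ".join(info.parsed_addresses()), "port", info.port)

        def remove_service(self, zc, type_, name):
            print(name, "went away")

        def update_service(self, zc, type_, name):
            pass

    zc = Zeroconf()
    # _ipp._tcp is the Bonjour service type advertised by IPP network printers
    browser = ServiceBrowser(zc, "_ipp._tcp.local.", PrinterListener())
    input("Browsing for printers; press Return to stop\n")
    zc.close()

(The same pattern works for any Bonjour service type — _http._tcp, _ssh._tcp and so on — which is essentially what Bonjour Browser enumerates.)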

Of course it has other uses. And the price is right.

Monday, December 21, 2009

Possible bottom for California higher ed?

It’s no secret that California has a terrible budget situation, with the “perfect” storm of a dysfunctional legislature, unsustainable spending and wildly fluctuating revenues (tied to ordinary income taxes on the sale of personal homes and stock options).

One of the areas hit hardest has been state funding of higher education, which has decreased 17% in the past two years. Both the 10-campus University of California (research universities) and the 23-campus California State University (teaching schools like SJSU) have put faculty and staff on involuntary furloughs with pay cuts of 4-10% and 9.3%, respectively — and made up the rest of the cuts with hefty tuition increases and class size increases. (The 2-year community colleges have also been hurt, but it’s harder to generalize because each sets policy at the local level.)

I’ve held off commenting on this because it’s obviously impossible to write dispassionately about one’s own pay cut — or about salaries that were below market even before the latest round of budget cuts. However, having seen many budget problems in the UC/CSU in the past 20 years, I think this one has more potential than any to do permanent damage to both institutions.

The reality is that if the UC and CSU don’t get money, they have to either raise prices or cut costs (or both). Short-term cuts are hard to make with fixed costs (like campuses) and quasi-fixed costs (tenured faculty). Drastic action — like closing newer, weaker campuses in the UC and CSU system — is unlikely to happen in a three-way stalemate between politicians, administrators and the employee unions.

Certainly self-centered students (like high school and college students from every era of the past 40 years) have gone around protesting “do more for me.” In a telling indictment of our failed K-16 educational system — or the students’ narcissism — they have been blaming trustees of both systems for raising prices and canceling classes in the face of massive budget cuts. Economic ignorance is a terrible thing, but of course students are not the only segment of society so afflicted.

What I find encouraging is that even if the kids are still clueless, some of the grownups are waking up. A report in the LA Times Monday says that parents of current and prospective college students have noticed the worsening conditions, and are vowing to do something about it.

Cal State Chico parent Bob Combs summarizes this view:

"We elect these officials, we donate money and we are the voice of our children," said Combs, a real estate agent. "If one of us is calling an Assembly member or Congress person, we're certainly much stronger if 10 of us are calling."

"The reason California's public higher education system has been so successful is that it provided a good education for a reasonable price, but that is changing," said Combs.

Combs is exactly right: in a republican form of government, the voters should tell the politicians what their budget priorities are and hopefully they’ll listen.

Sacramento doesn’t have magic buckets of money sitting around to give to higher ed, and indeed, most administrators expect the CSU/UC budget situation to be much worse in 2010-2011 than 2009-2010. Still, higher ed has taken a higher proportion of budget cuts than other state spending, probably because our unions are weaker than other public employee unions.

Zero-based budgeting would be nice — e.g. the state cutting out things it didn’t do 30 years ago, because it can no longer afford them. Alas, a more likely outcome will be for legislators to hone their job-killing skills by raising taxes.

Saturday, December 19, 2009

Deceptively viral Nano

The conventional wisdom is that the iPod is on its way out. Unit sales in Apple’s 4th quarter were down 8% YoY, and the fall iPod refresh was disappointing. One could easily conclude that Apple has given up on the product line and is milking it until end of life.

This is certainly evident at both ends of the product family. The classic iPod — hard disk based — is a tiny niche that is going away. The Shuffle is already about as small as it’s going to get, and there are limits as to how many people want a player that offers no control over what you hear.

The iPod Touch is deliberately crippled — if it’s too attractive, it will cannibalize iPhone sales. Perhaps that is why Apple didn’t update the iPT this year — except for larger memory sizes — despite pent up demand for a new model. It could also be (as rumored) there were production delays on a planned new model.

From a strategic standpoint, however, I think the iPT is in a decline. My interpretation is that the main goal of the iPT is to expand the platform installed base to include people who won’t sign up for a carrier (such as AT&T) or won’t switch carriers. (Since almost every sentient human being in the developed world has a cellphone, the latter is a larger category.)

However, Apple now has multiple carriers in Europe and Latin America. That leaves the US as the main market for the iPT, serving the majority of buyers who don’t or won’t use AT&T. (Apple also has single carriers in Japan and China, but the lukewarm reception for the iPhone there does not suggest a lot of pent-up demand to shunt to the iPT.) So I think the pool of people who want the iPhone but won’t or can’t buy one is continuing to shrink.

For the iPod product line, that leaves the iPod Nano. I must admit, I underestimated the potential of the iPod Nano 5G, which had seemed like an overpriced Shuffle with a screen. In particular, the video camera seems to offer a ubiquitous recording device for those carrying a music player but not a cellphone.

In short, the product is well-targeted to tweens and young teens from families that will spoil them with a one-time $150 purchase but not an $800/year iPhone service plan that doesn’t even include many text messages. Enough Nanos are out there that (affluent) kids know about it and how fun it is. I’m getting pressure at my house from a 6th grader, pressure we hope to resist.

An additional push is coming from the world’s largest retailer. Starting today, Wal-Mart is selling the $150 Nano for $145 with a $50 iTunes gift card. (The same Nano is $134 at Amazon without a gift card.) I can’t tell if this is an aggressive effort by Wal-Mart to build traffic — and by Apple to support iPod sales — or Apple dumping inventory that otherwise wouldn’t be selling.

Given this, I expect most of the iPod sales reported in the quarter ending Dec. 26 to be Nanos. Will iPod sales be as good as last year? Certainly not in dollar terms — when more iPod Touch models were sold — and I’m guessing not even matching the 22.7 million iPods sold last Christmas.

Still, as a starter iPod (gateway drug), the iPod Nano will add new young customers to the iTunes Store and iPhone/iPod ecosystem. Building brand and product loyalty among pre-teens is clearly crucial for Apple as iPhone challengers continue to flood the market.

Monday, December 14, 2009

Paul Samuelson, 1915-2009

From the front page of Monday’s New York Times:

Paul A. Samuelson, the first American Nobel laureate in economics and the foremost academic economist of the 20th century, died Sunday at his home in Belmont, Mass. He was 94.

His death was announced by the Massachusetts Institute of Technology, which Mr. Samuelson helped build into one of the world’s great centers of graduate education in economics.

In receiving the Nobel Prize in 1970, Mr. Samuelson was credited with transforming his discipline from one that ruminates about economic issues to one that solves problems, answering questions about cause and effect with mathematical rigor and clarity.

Samuelson was a key JFK economic advisor and won a National Medal of Science from Bill Clinton. He built a braintrust of (mostly) Keynesian economists at MIT, including seven other winners of the Sveriges Riksbank Prize in Economic Sciences. Samuelson won only the second “Nobel” prize in economics, starting a domination of the field by Americans (and more specifically Cambridge, Mass. professors).

I knew he had the best-selling economics textbook, but the NYT said it was the best-selling textbook of any kind for almost 30 years. In the late 1990s, it was selling 50,000 copies a year — clearly one of the most lucrative textbook franchises of all time.

The NYT portrays him as the anti-Friedmanite, but one who apparently recognized some of the limits of his Keynesian worldview:
The experience of nations in the second half of the century, he said, had diminished his optimism about the ability of government to perform miracles.

If government gets too big, and too great a portion of the nation’s income passes through it, he said, government becomes inefficient and unresponsive to the human needs “we do-gooders extol,” and thus risks infringing on freedoms.

One thing I found extremely encouraging from the NYT account is that he kept egos in check at MIT, leading by example with his personal humility. Based on my experience of the last 15 years, humility is in scarce supply among academics — particularly successful ones.

Sunday, December 13, 2009

Not so secret SV leaders

The lead story on the Merc Sunday blared

10 Silicon Valley Superstars*


*who you’ve never heard of
[by] Chris O’Brien

I liked the premise of the story — there are some movers and shakers most people haven’t heard of, and people should know more about them. However, I don’t think they’re quite so unknown. Fortunately, the online version has the same content without the misleading headline.

I’ve met three of them. I can’t imagine anyone who’s done anything in open source who hasn’t heard of Tim O’Reilly and his publishing house; he also popularized the term “Web 2.0.” While I’ve heard his speeches, we only met briefly at a social event after an open source trade show (probably OSBC 2004 or an early LinuxWorld).

I also can’t imagine there are many active in the tech industry who haven’t heard of Vish Mishra and TiE. In my case, he’s a friend of a friend (one of my co-workers), but actually I first met him in 2005 when he took my picture with then-CEO Irwin Jacobs of Qualcomm.

OK, the third one is a fluke. I happened to sit at the same table as Kevin Surace earlier this fall, and when I got home, the more I learned about his company the more serious it appeared to be. I’ve since blogged on the company and interviewed Surace for my study of local cleantech companies.

Overall, I thought it was an interesting list, and O’Brien communicates a few interesting points about each in a small amount of space.

However, as someone who spends most of his days with students and college professors — not wandering Silicon Valley — I’d bet that most of the Merc’s readers who work in the tech industry have met 2 or 3 of these “never heard of” types. But as a former reporter, I’d be inclined to blame the copy editor for the exaggerated headline, since O’Brien makes no similar claims in the body of the article.

Saturday, December 12, 2009

A great CSU success story

Until this morning, I didn’t realize that aeronautical engineer Burt Rutan was a graduate of Cal Poly San Luis Obispo, the flagship campus of the California State University system (my employer).

At SJSU, we are proud of our alumni who achieve success in sports, business, and the arts; our greatest claim to technical success is that Intel co-founder Gordon Moore spent 2 years here (where he met his wife) before transferring to Berkeley. The distribution of famous alumni is similar at San Diego State, our sister campus and closest rival. The breakdown of famous Cal Poly alumni is also broadly similar, although they have two astronauts.

So Burt Rutan is a clear outlier. I learned a lot more this morning about Burt Rutan, the man, who is in the news as the technical brains behind Richard Branson’s efforts to create a commercial space travel industry.

Rutan came to fame designing the Voyager, which flew around the world without refueling over nine days in 1986. It’s one of five Rutan-designed planes held by the Smithsonian. Another is the VariEze, an earlier kit plane that revolutionized home-built airplanes.

Also in the Smithsonian is his first spacecraft, SpaceShipOne — funded by Microsoft billionaire Paul Allen — which won the Ansari X Prize for flying to 100km twice in two weeks in 2004. As intended, the X Prize award provided a stepping stone for commercial space development — in this case Sir Richard and the SpaceShipTwo unveiled Wednesday. (It was modestly termed the VSS Enterprise, presumably to suggest a fleet of 40-50 Virgin Space Ships to be operated by Virgin Galactic.)

Because of Sir Richard, the FT thought it newsworthy to publish an interview Saturday with the greatest aeronautical engineer of our generation. It alludes to his modest roots in central California, and his efforts designing and testing model planes as a kid.

Such modest beginnings are part and parcel of the California State University system, the largest university system in the USA. We take students who don’t have the grades or the money to go to the more prestigious University of California system, as well as those who can’t leave home to attend college. (My dad commuted from home to SDSU for the first two years before transferring to UC Berkeley; my mom started at SJSU before transferring to Berkeley a few years ahead of Gordon Moore.)

Rutan got his B.S. in aeronautical engineering from Cal Poly in 1965. His senior project won the national student paper competition from the AIAA, the industry professional society.

Apparently the mothership for SpaceShipTwo — called WhiteKnight Two — bears the Cal Poly logo. In a 2005 speech at his alma mater, he encouraged students to get involved in creating the next round of space exploration. He also called for students to demonstrate their passion to make things happen.

As a kid, I barely heard of Cal Poly, perhaps because most of its alumni end up living in the Bay Area or the Central Coast. I only knew of it at all because my older cousin left Gilroy to go to Cal Poly before moving to Alaska to build the first Alaska pipeline.

Today, I know from other parents that Cal Poly admissions are tougher than several UC campuses. Cal Poly has one of the top US engineering programs among teaching schools (i.e. those without a PhD program) — along with the service academies and the vastly under-rated Harvey Mudd (east of LA).

In six years, I plan to encourage my daughter to apply to Cal Poly along with the UC campuses, Harvey Mudd and perhaps the Air Force Academy. (This assumes she continues to be a math whiz who likes to make things — and thus an ideal candidate for engineering school). Let’s hope the campus continues to turn out bright engineers for California’s economy, despite the legislative mismanagement that is giving the CSU system a death by a thousand cuts.

Friday, December 11, 2009

Jim Rogers: Tim Geithner & Tiger Woods

From a CNBC interview Friday with professional investor Jim Rogers:

Treasury Secretary Timothy Geithner … "is a very smart person," but "he's been wrong about everything for the last 15 years," Rogers said.

"Why are we listening to any of those guys down there? They're making our situation worse," he said. "They said in writing yesterday the solution to our problem is to spend more money … that's what got us into this problem: too much debt."

"That's like saying to Tiger Woods, 'you get another girlfriend and it will solve your problems' or 'five more girlfriends and you will solve your problems,'" he said.

"We're all going to pay the price for this in, one, two, three years," Rogers added. "The next time that we have problems in the economy, which will not be too long, we don't have any bullets left. We've shot everything we had to solve our problems."

"What are they going to do, quadruple the debt again? Print more money? We don't have any trees left. We're running out of trees."

The latest in outsourced economic criticism during this time of cutbacks.

Thursday, December 10, 2009

Ganging up on the enemy of my enemy

Two of my MBA students presented a project last night on the Kindle and e-readers. One quick takeaway was that everyone else in e-books — Barnes & Noble, Google, Sony (and probably Apple) etc. — wants to ally against Amazon to keep it from taking over the world.

Meanwhile, for music, everyone wants to gang up on Apple to keep them from continuing to dominate the world — particularly content providers. And for video, it’s YouTube which inspires this enemy-of-my-enemy-is-my-friend mentality to create Hulu and Vevo.

I’m not saying it’s wrong, but these ad hoc decisions do make for incoherent strategies and strange bedfellows. And if an ally that was formerly weak becomes strong, will the other rivals (or suppliers) then ally with someone even weaker? Go it alone? Ally with their original enemy?

I was reminded of this when trying out Vevo this morning. I decided to watch a video from the Elvis Presley Christmas Duets album — continuing the raised-from-the-dead trend that started with the 1991 “Unforgettable” duet between Natalie Cole and her late father Nat “King” Cole.

The page linked to Amazon — not iTunes — to buy the album. Is this because Vevo’s owners prefer Amazon to Apple? Or is it because Amazon pays a referral fee? An advantage Amazon has in seeking alliances is that it pays a commission on click-through sales, so that people and firms (like me) will send traffic to its site.

However, some states are seeking to use these payments as a justification for taxing Amazon sales into their state — and in response, Amazon is dropping the program in those states. So if this kills the associates program, will Amazon have fewer allies in its music efforts?

Wednesday, December 9, 2009

Viva Vevo

Tuesday was the North American launch of Vevo, the music video site owned by Universal, Sony and money from Abu Dhabi Media.

I find it encouraging that the record labels are taking this step. Faced with the decimation of their decades-long revenue model — the sale of tangible music discs — they are making pro-active experiments. If the plane is losing altitude, at least they’re trying to pull up — unlike so many other industries afraid of cannibalization.

Of course, this is a very mild step — stealing back unmonetized content views from YouTube in (probably forlorn) hopes of gaining meaningful revenues. Still, if it crashes and burns, it’s better than the controlled flight into terrain that most of Hollywood is on.

It’s interesting to see what it does to YouTube, since — as Wired notes — most of the most popular videos are music related. By my count, before Vevo launched, 14 of the 24 videos with 60+ million views were owned by labels or artists.

As the LA Times notes, this is the music industry’s version of Hulu — i.e. yet another Universal anti-YouTube site. The Times is skeptical about claims made Tuesday about how revolutionary it is:

Bono's opening remarks were quoted by Billboard: "Friends, we are gathered here today to mourn the passing of the old model that was the music business."

Perhaps Bono has some inside information on what Vevo ultimately will become. In a quick summary, Vevo offers on-demand streaming of music content with advertisements. YouTube offers the same, without the ads, and more content.

I agree with the Times that the business model is suspect: it hasn’t worked all that well for YouTube, even if Vevo has less of the dreck cluttering up the site (like all the fake videos).

For now, it also doesn’t have videos from my favorite bands on WEA (or anything else from Warner Music Group). Perhaps the record cartel has a plan for the firms to take turns swimming against the industry tide to make it seem less cartel-like.

Of course, some of this is correcting the original MTV mistake — giving away music videos as advertisements for the real content, rather than treating them as valuable in their own right. But then that horse escaped the barn nearly three decades ago.

Will the labels offer their content on reasonable terms to other online channels? Or will the other channels decide that the numbers can’t be made to work, long before the labels throw in the towel?

Tuesday, December 8, 2009

End the TARP slush fund

From John Gapper of the Financial Times blog, on plans to redirect TARP funds for other purposes:

The suggested use of the Troubled Asset Relief Programme to support US job creation rather than simply to address the financial crisis strikes me as morally hazardous.

As the FT reports, some Democratic members of Congress are pressing for some of the $700bn Tarp fund to be used for tax relief for job creation, or for state spending programmes.

Earlier this year, Congress became extremely angry with hindsight at the success of Hank Paulson, the former Treasury secretary, in getting approval for Tarp and then changing tack on how the money was used. Instead of being used to buy assets, its primary use became to invest directly in banks.

Now, Congress seems to want to change the rules itself and use the Tarp money for its favoured projects, in the same way that stimulus funds were divided among states and districts.

Gapper worries that abuse of TARP funds for pork barrel spending this time would kill chances of creating another TARP during the next financial market crisis. (He thinks that would be a bad thing.)

The Ludwig von Mises Institute notes the problem with claims of tax-funded stimulus efforts. It cites Henry Hazlitt and his example of a tax-funded bridge project:
Two arguments are put forward for the bridge, one of which is mainly heard before it is built, the other of which is mainly heard after it has been completed. The first argument is that it will provide employment. It will provide, say, 500 jobs for a year. The implication is that these are jobs that would not otherwise have come into existence.

It is true that a particular group of bridgeworkers may receive more employment than otherwise. But the bridge has to be paid for out of taxes. For every dollar that is spent on the bridge a dollar will be taken away from taxpayers. If the bridge costs $10 million the taxpayers will lose $10 million. … Therefore, for every public job created by the bridge project a private job has been destroyed somewhere else.

But then we come to the second argument. The bridge exists. … It has come into being through the magic of government spending. …

Here again the government spenders have the better of the argument with all those who cannot see beyond the immediate range of their physical eyes. They can see the bridge. But if they have taught themselves to look for indirect as well as direct consequences they can once more see in the eye of imagination the possibilities that have never been allowed to come into existence. They can see the unbuilt homes, the unmade cars and washing machines, the unmade dresses and coats, perhaps the ungrown and unsold foodstuffs.

Or, as the Foundry nicely summarized:
Washington repeatedly fails because it refuses to admit that almost without exception government actions do not increase employment on net. Only the private sector in pursuit of opportunity can create jobs on net. The best we can hope from government is that it keeps to a minimum the jobs it prevents and the income and wealth it destroys.

The good news is that recoveries happen and they happen because an economy that enjoys flexible markets and flexible prices adjusts. It adjusts in good times and in bad. It adjusts to whatever mistakes led to the recession. It adjusts to whatever damage is done during the recession. To get the unemployment much below 10 percent over the next year will require policies that help the economy to heal itself, to adjust more quickly.

Monday, December 7, 2009

NYT stalking horse for trial lawyers

The cell phone industry made the front page of the New York Times Monday, and it’s definitely not a good thing. Here’s the web headline:

Driven to Distraction
Promoting the Car Phone, Despite Risks
By Matt Richtel

Long before cellphones became common, industry pioneers were aware of the risks of multitasking behind the wheel. Their hunches have been validated by many scientific studies showing the dangers of talking while driving and, more recently, of texting.

Despite the mounting evidence, the industry built itself into a $150 billion business in the United States largely by winning over a crucial customer: the driver.

For years, it has marketed the virtues of cellphones to drivers. Indeed, the industry originally called them car phones and extolled them as useful status symbols in ads, like one from 1984 showing an executive behind the wheel that asked: “Can your secretary take dictation at 55 MPH?”

The way the story was framed, it read like a PR campaign for a class-action suit against deep-pocket cellphone makers and service providers. (When I read it, I wondered whether John Edwards was coming out of retirement to pay his child support.)

Sure enough, a few paragraphs later, after quoting the US industry’s spokesman (a former GOP congressman), the story legitimates its harshest critics:
Critics of the industry argue that its education efforts over the years provided a weak counterbalance to its encouragement of cellphone use by drivers and to its efforts to fight regulations banning the use of cellphones while driving, or at least requiring drivers to use hands-free devices.

The critics — including safety advocates, researchers and families of crash victims — say the industry should do more, by placing overt warnings on the packaging and screens of cellphones.

And, in fact, the lead article promotes a sidebar (also in Monday’s paper) entitled “A Victim’s Daughter Takes the Cellphone Industry to Court.” And the web version helpfully provides a copy of the complaint filed against Sprint Nextel and Samsung. With newspaper reporters, nine times out of ten this means that the main story was written to legitimate the sidebar, rather than the sidebar being sought out to illustrate the main story.

For now, I want to leave aside the bias of the “muckraking” newspapers that always side with litigants against big bad corporations. The article endorses a parallel to the great muckraking talisman of the 20th century, i.e. Watergate:

Clarence M. Ditlow, executive director at the Center for Auto Safety, a nonprofit advocacy group, was invited last month to speak about distracted driving by the Federal Communications Commission. He told the audience that the cellphone industry was selling a product consumers can use dangerously — without properly warning them or providing safeguards.

He added: “The only questions are: what did they know, and when did they know it?”

On a related problem, I’ll set aside the chronic error by reporters (as well-chronicled by John Stossel) of worrying about the wrong risks as members of the “Fear Industrial Complex.” I’m also not offering judgement on specific advertisements, or the (seemingly implausible) claim that drivers wouldn’t know that this is distracting unless the industry warned them.

Instead, I want to confront the ahistoric ignorance of the story’s premise. The reason that mobile phones were promoted as car phones in 1983 is simple: that’s what they’d been for the previous three decades.

One snippet (and photo) from the story examines the October 13, 1983 press conference marking the first “official” US cellphone call, made on the Chicago Bell-operated network by a Bell Labs engineer.

Beginning in 1996, I studied the history and pre-history of the cellphone industry — in the US, Japan, Germany, Sweden and Finland — and published papers in 2000 and 2002.

The reason that 1st generation cellphones were marketed as carphones is that they were designed as carphones — the successor to 30 years of carphones since the first one in St. Louis in 1946. To relieve chronic capacity problems, AT&T tried for 20 years to get FCC permission to launch a cellular system, and ran a test system in Chicago for six years before the “official” launch. (NTT’s 1979 official launch looked a lot like the AT&T field test.)

The Times reporter dismisses this as not relevant — that the niche phones of the 1960s (aka IMTS) had nothing to do with the mass market phones of the 1980s.

In 1983, everyone (except for a few crazy guys at Motorola) was assuming they were carphones. If you read the Bell Labs design explanation for AMPS — as reported in the January 1979 issue of the Bell System Technical Journal — it describes a system with a small control unit on the transmission hump and a big, power-hungry radio in the trunk. In my research, I spoke to an early entrepreneur who made money installing these $2000+ systems in rich folks’ cars.

In fact, as I showed in a 2002 article, AT&T thought so little of the cellphone business opportunity that it gave away the business to the local Baby Bells. Talk about mistakes. Remember that PacBell was bought by Southwestern Bell because — after spinning out AirTouch — it was sickly and dying. Similarly, AT&T was bought by Southwestern Bell after long distance went away as a viable business.

Of course, AT&T thought so little of cellphones because it had a McKinsey consulting report predicting that the US would have a total of 1 million cellphones by 2000. (The actual number was 97 million.)

So again, this says nothing about what was or was not appropriate advertising in 1990 or 1995. But cellphones were initially marketed by the Baby Bells as carphones because that was all that was technically feasible, that was what they’d been selling for decades, and (at least initially) that’s all they thought it would be.

Friday, December 4, 2009

Humorless zealots

An op-ed column from today’s FT by former New Zealand prime minister Mike Moore:

In Berlin, I inadvertently almost reduced a young green to tears of anger when I questioned a green commandment: to buy local food and use “food miles” to determine energy use. She did not think it right that flowers were flown into Europe from Africa. Pointing out that European flowers received unhealthy energy and fertiliser subsidies and Kenyan flowers used less energy, plus that her approach would cost Kenyan workers some of the best jobs in their country, was dismissed.

Trade based on unsubsidised competition is about efficiency, and efficiency is another word for conservation. If their brave new world is to be a world without borders, a new sisterhood of man, why bring back tribal boundaries just for trade?

It is right and proper that politicians and businesspeople face a sceptical media who scrutinise them, hold them to account, and expose their flaws and contradictions. The same should apply to the green agenda, which is all too often accepted at face value because it claims to have the planet’s interests at heart, unlike grubby politicians and greedy businesspeople.

There needs to be scepticism, everywhere, and much more of it. After a long life in public affairs I have a rule of measurement: the sacred law of humour. If someone cannot laugh at the absurdities of life, I get nervous. The enemies of reason throughout history, convinced their way is the only way, usually end up burning books or killing sparrows. Even worse, they do not laugh or blush. Serious environmentalists need to be ready to laugh at their mistakes.
Of course, Moore has a self-interested pro-trade bias: he is the former director-general of the WTO and speaks on behalf of a country that exports agricultural and wood products to pay for essential imports of machinery and energy. But he’s right about totalitarian movements stifling dissent.

Thursday, December 3, 2009

Schmidt is the fox promising to help chickens

The posturing this week over dying newspaper business models has been entertaining. Unfortunately for the papers, it only confirms that most of the industry will be gone a decade from now.

On Tuesday, Google billionaire† (#40 on the Forbes 400) Eric Schmidt pretended (once again) that the Monster of Mountain View has the newspapers’ interests at heart in a WSJ op-ed modestly titled “How Google Can Help Newspapers.”

A mere 20 years late, Schmidt channelled John Sculley in his utopian vision:

It's the year 2015. The compact device in my hand delivers me the world, one news story at a time. I flip through my favorite papers and magazines, the images as crisp as in print, without a maddening wait for each page to load.

Hmmm… My hand holds an iPhone pretty well, and since day one the New York Times has been sold as one of its killer apps. But wait, there’s more:
With dwindling revenue and diminished resources, frustrated newspaper executives are looking for someone to blame. Much of their anger is currently directed at Google, whom many executives view as getting all the benefit from the business relationship without giving much in return. The facts, I believe, suggest otherwise.

Google is a great source of promotion. We send online news publishers a billion clicks a month from Google News and more than three billion extra visits from our other services, such as Web Search and iGoogle. That is 100,000 opportunities a minute to win loyal readers and generate revenue—for free.

Wow! Isn’t that generous of Google?

It’s not clear whether Schmidt had planned the op-ed for a long time, or if he was reacting to the report released Monday by a newspaper coalition showing Google makes more money than anyone else off of “unlicensed” use of newspaper content. The study said Google enabled 53% of the monetization of these articles.

The Fair Syndication Consortium summarized its findings:
  • During a 30-day period (October 15 – November 15, 2009), 75,195 Web sites reused at least one U.S. newspaper article without a license.
  • On these sites, 112,000 near-exact unlicensed copies of articles were found.
  • Among the top 1,000 sites reusing the most articles, blogs represent less than 10 percent of the total.
  • In addition to the 112,000 full article copies (defined as more than 80 percent of the original article and more than 125 words reused), an additional 163,173 excerpts were found (defined as less than 80 percent of original article and more than 125 words).
These results exclude any articles found on Google News.
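In other words, the study buckets each detected reuse using two thresholds: the fraction of the original article reproduced, and a 125-word floor. Here is a quick sketch of that rule — my reconstruction from the bullet points above, not the Consortium’s actual matching code:

    def classify_reuse(original_words, reused_words):
        # Bucket a detected reuse per the study's published thresholds
        if reused_words <= 125:
            return "not counted"        # below the study's 125-word floor
        if reused_words / original_words > 0.80:
            return "full article copy"  # more than 80% of the original reused
        return "excerpt"                # over 125 words, but 80% or less reused
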
Of course, there are some problems with the study, since under the fair use provisions of copyright law, a “license” is not required. Fortunately, the Consortium is willing to let this continue “to start.”

This week, Google also announced it would allow publishers to limit free access to Google-indexed paid content, under an existing program called “First Click Free.” This effort is intended to mollify the few publishers (WSJ, FT) who charge for content.

But in the end, Google’s going to keep its core business model: organizing the world’s information without having to pay for any of it, continuing to siphon off the revenues that once kept newspapers alive. Absent a gun to its head (e.g. in Europe), it’s not going to share any significant amount of the billions it has accumulated from that information and the stickiness it creates on its sites.

Thursday, newspaper bible Editor & Publisher quoted yet another prediction that newspapers will try and fail to institute paywalls:
Fitch is fully expecting that many newspapers are going to try and charge for content next year, only to realize it was a mistake. A handful of properties, notably the Wall Street Journal, the New York Times and smaller local papers will be able to or have pulled off an online pay strategy, according to Fitch, but don’t expect a widespread trend.

Simonton and Rizzo explain that for the rest of the lot, the competition has become too fierce. Furthermore, that competition with free content only will pull in readers, thus gaining share and the attention of advertisers.

In other words, we know how this story will end. The national category killers will be able to charge, while small hyperlocal papers operating on a shoestring will make money either online or in print-only publications. The rest are toast.

The end of the 20th century means the end of the traditional big-city newspaper, with the inevitable conversion to online-only meaning layoffs for more than 80% of the news staff. Bloggers will rule the day, supplemented by headlines from local TV stations.

So what we’ll end up with is a world with more convenient, lower quality news — indexed free by Google but little of it gathered by professionals. Today’s teenagers will say “so what?” while old codgers will lament the loss of intelligent discourse.

Personally, I shudder to think what the lack of (relatively unbiased) in-depth news coverage will mean for municipal elections — probably even greater success for machine politics, vote fraud and family dynasties.

† With my frequent posts on Total World Domination, I figure I have already made it on Schmidt’s blacklist.

Wednesday, December 2, 2009

Is mobile innovation slowing down?

A provocative posting Wednesday to InfoWorld:

Has mobile innovation come to an end?
Eerie parallels to the desktop PC's history suggest that smartphones have reached boring sameness -- or completeness of capability -- even faster

By Galen Gruman

In June 2007, the iPhone instantly obsoleted all previous smartphones (the BlackBerry and Palm families), finally approaching the promise that carriers and device makers had been making about the mobile future for a decade: Real Web access. A touch UI -- that rotates. Accelerometer and location detection. E-mail and instant messaging. Photos and music. A year later came the App Store and the tens of thousands of apps -- from games to time-wasters to serious business tools -- that also made the iPhone into a computing device.

Since then, there's been an ever-increasing number of competitors, but nothing fundamentally game-changing. Apple continues to refine the iPhone and iPod Touch, adding capabilities such as a compass, Exchange e-mail support, and video capture -- but the last round of devices didn't pioneer anything significant. Both Palm and Google delivered their own iPhone-inspired OSes (WebOS and Android, respectively), but did nothing significant beyond adding (very welcome) support for multiple simultaneous apps to what the iPhone had already brought to the table.

Is there no more innovation to be had in mobile? Has mobile matched the PC in becoming a stable platform where innovation happens slowly and mainly around the edges? After all, what does a PC in 2009 do that a PC in 2000 couldn't do -- even if not as fast -- beyond using different ports?

In other words, the changes are incremental and of degree, rather than disruptive and transformational.

What I find intriguing is that, both as an observer and a participant, I think tech industries consistently underestimate the maturation/commoditization of their respective segments.

I think there are a few more revolutions left that would let smartphones supplant laptops (or desktops) for more applications:
  • a large-screen (HDTV 1080p) display, e.g. via goggles
  • a portable keyboard, whether via fold-out, virtual laser, chording or some other design
  • voice input with reliable dictation for arbitrary speakers
Of course, the most interesting radical innovations are unanticipated. It may be that software and platform innovation is slowing down, but there are still some hardware improvements possible.

Monday, November 30, 2009

Markets for IP and innovation

In discussing open innovation with some visitors today, we got to talking about markets for innovation — which are similar to (but not the same as) markets for IP.

There are two modes of open innovation, outbound and inbound, and the outbound mode depends on being able to monetize the innovation. Maybe if you’re IBM you can rely on indirect monetization (NB: IBM Global Services) but most companies need more direct monetization. Inbound OI works as long as there is a supply of innovations, which might be motivated by money or by non-monetary incentives.

So I was asked, how do firms find innovations? In other words, how are markets organized? We know from economics that markets play a number of important roles, including search, matching buyers and sellers, providing feedback/constraint on claimed quality/performance and price-setting.

From easiest to hardest, I think there are three types of innovations that might be sourced via open innovation:

  • components, such as semiconductor chips
  • IP, such as non-exclusive (or exclusive) patent rights
  • custom innovations, such as Threadless or other user-generated content

Search can be difficult, but it has been greatly improved by the Internet. As with any market, matching price to quality/features is the hard task, particularly for new or thinly-traded goods.

To me, the advice given for one of these markets for innovation would not necessarily apply to the others; they are sufficiently different that lessons from one might not transfer.

Similarly, there are two (or three) different IP business models. For some companies (such as Dolby or Qualcomm) IP licensing is the primary business model and thus revenues have to cover all IP development costs. For others, the IP revenues are incidental or supplemental, and “success” here means incremental revenue but not necessarily enough to justify the R&D in the first place.

A subset of the incidental case (or perhaps a separate case) is the salvage case — as when Xerox unloads Xerox PARC patents to boost the bottom line because it never figured out what to do with them in the first place. A salvage operation is particularly misleading as a role model, because usually the IP is being sold for a fraction of its original cost on the theory that some revenue is better than nothing.

So a word to the wise: don’t believe a consultant who tries to sell you a one-size-fits-all innovation market (or IP licensing) strategy. One size does not fit all.

Saturday, November 28, 2009

We deserve better commodity information

It’s no news that Wikipedia, with all its flaws, is the default information source of a generation of skulls full of mush. If this wasn’t obvious enough from my college students, it was brought home a week ago when I interviewed FLL robotics contestants (ages 9-14): nearly all said their project “research” consisted of Google and Wikipedia. (One team said Google and Yahoo.)

However, since then, Wikipedia’s problems made the front page of Monday’s Wall Street Journal, in a story (and blog entry) on how Wikipedia is losing volunteers — 49,000 in Q1 2009 alone. The Telegraph had the most comprehensive follow-up stories, although the Times of London had good coverage (including a great article on the four sources of error).

The impetus for the original WSJ article was the academic research of Felipe Ortega, who is part of a group studying open source software but actually did his Ph.D. dissertation on Wikipedia (a related but quite different species). He’s been tweeting his comments on the current news coverage. While the bulk of his research hasn’t gone through peer review, the abstract suggests he’s taken seriously all the research design issues.

After all the articles and the academic study, the official Wikipedia response is pretty unsatisfactory. It changes the subject, arguing that while the tide of new volunteers roughly matches the ongoing losses, at least the site traffic and number of articles continue to grow.

However, none of this relates to two inherent problems in Wikipedia that the current management is unable to solve, plus a third (and potentially catastrophic) outcome of Wikipedia’s commoditization of information.

The first problem is that Wikipedia publishes content by persistent idiots. Now that there are dozens or thousands of individuals trying to edit articles on almost any topic, there are chronic edit wars, with rival editors taking out each other’s changes.

Competition is healthy — if there’s a selection mechanism based on quality or performance. Wikipedia has no such mechanism. Instead, what gets published comes from people who whine and bitch and moan, who win out over people who know what they’re talking about but have better things to do with their lives. This works well for chronicling Simpsons episodes but not for summarizing academic research or major historical controversies. (Yes, I know that there are capable contributors, but in every battle between idiots and experts, the idiots are winning.)

I tried to sell this angle to a reporter I spoke with Monday, but I guess he thought it was just the griping of a snobby college professor who gave up years ago after watching his work be mangled by twits. However, among the 27 comments (thus far) on the official Wikipedia response were these five:

  1. Well, I have taken hours editing and polishing a biographical article about a scientist. There is nothing in the article now that is under dispute, yet it is probably going to be taken down and deleted as one editor is exercising his or hers petty power-plays.
  2. My most recent experiences have been quite negative: edits reverted with no reason, pages tagged as grammatically terrible when they were no such thing, or tagged as “not up to WP’s standards” when they were stubs and in some cases *still editing*. These taggings tended to be “drive-by” in the sense that some other editor dropped the tag onto the page or made their reversion but then failed to respond to explanations on the talk page for days.
  3. I used to spend a lot of time writing for Wikipedia, amending entries and creating new articles. Now it seems that a small number of self-appointed editors run the site. If I create new articles then they are nearly always deleted. If I correct information I know for a fact is wrong, it is reverted back and I am warned by the small sub class of elite editors
  4. I took on editing the Albigensian Crusade page a while back, a fairly simple job because what’s known about it comes principally from three contemporary chronicles dealing with the specific subject. A chronicle is self-indexed by time, therefore it should have been adequate to simply point readers in the direction of the sources, but no, that was inadequate, full references please. I got started, went so far, and checked if this was right. The %*$^^@# responsible refused to take the time to feedback, and was quite rude about it, so I stopped. Other appeals to administration went nowhere, and I concluded this is a system full of chiefs who can’t be bothered to get their hands dirty actually editing,
  5. I, for one, am one of those professional contributors who left Wiki in disgust. After spending a lot of time creating pages or adding a lot of content, some amateur came along and dumbed down the content and added fictious pictures that were purported to be of the creatures listed. It became a waste of my time to provide a lot of information that could be cut-and-paste into term papers, dissertations, reports, etc., and have some arm-chair contributor wreck it all.
This is a problem I’ve known about since soon after I joined Wikipedia in November 2003, and the entire production process would have to be ripped up to fix it. Even Amazon has a way of letting readers rate user contributions, so they know whose reviews have been useful, even if it (like other such processes) is fatally broken for highly polarized topics like politics.
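
It’s worth noting how little machinery such a feedback mechanism requires. Below is a minimal sketch (in Python) of the kind of “was this helpful?” scoring that Amazon attaches to reviews; the class names, vote fields and smoothing rule are my own illustrative assumptions, not Amazon’s actual algorithm.

    # Illustrative sketch only: rank contributions by reader feedback,
    # the quality signal that Wikipedia's production process lacks.
    # All names and the smoothing rule here are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Contribution:
        author: str
        text: str
        helpful: int = 0      # "this was helpful" votes
        unhelpful: int = 0    # "this was not helpful" votes

        def score(self) -> float:
            votes = self.helpful + self.unhelpful
            # Laplace smoothing, so one stray vote doesn't dominate
            return (self.helpful + 1) / (votes + 2)

    def rank(contributions):
        """Surface the contributions that readers found most useful."""
        return sorted(contributions, key=lambda c: c.score(), reverse=True)

Even something this crude provides a selection mechanism based on perceived quality; the hard part, as Amazon’s experience with polarized topics shows, is keeping the votes honest.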

One problem I didn’t see coming was the inevitable shift from original writing to maintenance mode. I started my main burst of Wikipedia contributions (2003-2004) by creating 11 new articles, from venture capitalists Eugene Kleiner and Tom Perkins to two missing campuses (CSULB, CSUSM) of the 23-campus CSU system.

Today, thanks to the law of large numbers (and the long tail), there are very few significant articles left to be written. (Yes, Wikipedia has an article on only one Joel West — and it’s a lame one — but I don’t consider that a major omission.)

This reminds me of what I experienced in my first few years as a professional programmer: it is so much more fun to write new code than to maintain someone else’s. In fact, as I became a manager I learned this is a major recruiting and staffing problem — even when you pay people, let alone when they’re volunteers. Over and over again, I saw that the manager or other “stuck” (high switching cost) programmers had to take on the scut work so the new, exciting work could be offered to attract the best talent.

Clearly, at Wikipedia the existing volunteers don’t want to do the scut work, nor do the newcomers. If it’s de minimis, then (to use an analogy) perhaps good citizens will just pitch in and pick up the candy wrapper, but nobody’s going to spend a weekend clearing trash along the highway just for the fun of it.

Wikipedia is running out of good jobs to hand out. If you can’t give out fun work, how are you going to attract people? What I didn’t see six years ago was that inevitably Wikipedia’s content base would mature: first in English and eventually in all the major languages. When this happened, the opportunities for adding new content would mainly be limited to current events like new hurricanes or those Simpsons episodes.

However, I find hope in Wikipedia’s current troubles, as they suggest a solution to Wikipedia’s most invidious problem: the commoditization of human knowledge. Monopolies are bad, even if they are for free goods. When I was interviewing open source leaders, the Apache types (and most “open source” types) seemed to get this, while the free software types (Linux, OpenOffice) did not.

Competition is inefficient, but it provides choice. Monopolies at best mean benevolent dictators, and few benevolent dictators remain benevolent forever.

The mind-numbing ubiquity of Wikipedia is teaching a generation of kids to be lazy and uncritical consumers of information — whether it’s truth or merely wikitruth. They take what shows up on the first page of Google or in Wikipedia and assume it’s true, even when it’s not.

When I was a kid, I would do my 5th grade reports using World Book, Encyclopedia Britannica, usually one other encyclopedia like Collier’s or Compton’s, and also the Information Please Almanac. (If the report was important, I would also try to find a real book or two.) This wouldn’t make me an expert, but at least I would get multiple perspectives.

Today, Wikipedia’s commoditization of information means that Encyclopedia Britannica is struggling and its previous nemesis (the Encarta CD-ROM) is gone. At least a five-year-old version of the Columbia Encyclopedia survives as Reference.com.

Once upon a time, I assumed that network effects meant that nothing would ever compete with Wikipedia. This week shows that in less than a decade it’s possible to create a significant body of knowledge with volunteer labor. None of the existing rivals has yet succeeded, whether Citizendium, Conservapedia, Liberapedia or Knol. However, with this large pool of existing (or potential) would-be Wikipedia labor becoming available, they are certainly trying.

I will be curious to see whether volunteer organizations that focus on the quality rather than the quantity of contributions can succeed. In this direction, Citizendium (by Wikipedia co-founder Larry Sanger) is using a somewhat modified version of the Wikipedia process, while Google’s Knol is heading in a different direction by emphasizing authorial integrity over cumulative production.

Given the almost total lack of competition, anything that provides a viable alternative to Wikipedia is welcome. It will be a good thing if a decade from now we have three or four online encyclopedias to choose from, much as today we can choose from three or four cellphone carriers.

It’s likely that one of these alternatives will be Wikipedia. Perhaps if its leaders take its current problems seriously, it will still be the most popular alternative out there and will be able to meet its current modest fundraising goals.

Thursday, November 26, 2009

An ally in demanding Climategate accountability

While nearly all of the Climategate outrage and ridicule has come from global warming skeptics, an ally of the embarrassed scientists has finally argued that it’s time for fellow environmentalists to come clean.

Environmental blogger George Monbiot wrote Wednesday in the Guardian:

I have seldom felt so alone. Confronted with crisis, most of the environmentalists I know have gone into denial. The emails hacked from the Climatic Research Unit (CRU) at the University of East Anglia, they say, are a storm in a tea cup, no big deal, exaggerated out of all recognition. It is true that climate change deniers have made wild claims which the material can't possibly support (the end of global warming, the death of climate science). But it is also true that the emails are very damaging.

The response of the greens and most of the scientists I know is profoundly ironic, as we spend so much of our time confronting other people's denial. Pretending that this isn't a real crisis isn't going to make it go away. Nor is an attempt to justify the emails with technicalities. We'll be able to get past this only by grasping reality, apologising where appropriate and demonstrating that it cannot happen again.
Monbiot likens the loss of credibility by CRU head Phil Jones to the expense-padding scandal that has wracked the British Parliament recently:
Most of the MPs could demonstrate that technically they were innocent: their expenses had been approved by the Commons office. It didn't change public perceptions one jot. The only responses that have helped to restore public trust in Parliament are humility, openness and promises of reform.

When it comes to his handling of Freedom of Information requests, Professor Jones might struggle even to use a technical defence. If you take the wording literally, in one case he appears to be suggesting that emails subject to a request be deleted, which means that he seems to be advocating potentially criminal activity. Even if no other message had been hacked, this would be sufficient to ensure his resignation as head of the unit.

I feel desperately sorry for him: he must be walking through hell. But there is no helping it; he has to go, and the longer he leaves it, the worse it will get. He has a few days left in which to make an honourable exit. Otherwise, like the former Speaker of the House of Commons, Michael Martin, he will linger on until his remaining credibility vanishes, inflicting continuing damage to climate science.
Monbiot particularly faults the university for stonewalling and denial, “a total trainwreck: a textbook example of how not to respond.”

Monbiot is trying to save the credibility of his global warming allies. As an academic, I want to save the credibility of academia and the scientific process (beyond just meteorology). However, the argument is about more than saving a cause.

This is a far more important societal problem. If our institutions are to have credibility, then failure must have consequences — a rule to be applied to government, business, academia, churches, charities and all other institutions.

Ethical leaders will demand consequences for their friends, not just their enemies. Alas, this is an all too rare sight nowadays (NB: the partisan cover-ups by both parties of Congressional corruption within their ranks).

Hat tip: Chris Morrison, BNET.

Tuesday, November 24, 2009

Climategate and academic accountability

Due to inadequate computer security, 156 megabytes of embarrassing emails from the leading British climate research unit have surfaced in an incident now called “Climategate.” This is reminiscent of the embarrassing Bill Gates emails about crushing competitors (or Richard Nixon and his tapes): people will make sure that there are no records of dishonest behavior, rather than ending the behavior.

However, it seems unlikely to significantly change the debate over anthropogenic global warming, as pundits, politicians and the public will filter the news through their existing biases. Those who believe in AGW will say the issues are minor or isolated, while skeptics will say it proves a vast left-wing conspiracy. (This also sounds a lot like 1974.)

Critics point to snippets in which researchers talk about cherry-picking data, exaggerating claims and padding results. Although many of these things seem like a bad idea, there are a lot of gray areas, and such exaggeration happens in many scientific endeavors. As economics blogger Megan McArdle notes, “it means we need to be less romantic about the practice of science.”

However, one passage should be setting off alarm bells in every research university in the world. A March 2003 email by Prof. Michael Mann of Penn State — quoted by the Wall Street Journal — advocates blacklisting an existing journal:

In fact, Mike McCracken first pointed out this article to me, and he and I have discussed this a bit. I've cc'd Mike in on this as well, and I've included Peck too. I told Mike that I believed our only choice was to ignore this paper. They've already achieved what they wanted—the claim of a peer-reviewed paper. There is nothing we can do about that now, but the last thing we want to do is bring attention to this paper, which will be ignored by the community on the whole...

There have been several papers by Pat Michaels, as well as the Soon & Baliunas paper, that couldn't get published in a reputable journal. This was the danger of always criticising the skeptics for not publishing in the "peer-reviewed literature". Obviously, they found a solution to that--take over a journal!

So what do we do about this? I think we have to stop considering "Climate Research" as a legitimate peer-reviewed journal. Perhaps we should encourage our colleagues in the climate research community to no longer submit to, or cite papers in, this journal. We would also need to consider what we tell or request of our more reasonable colleagues who currently sit on the editorial board...
The 2003 paper, written by Willie Soon of Harvard and Sallie Baliunas of the Mount Wilson Observatory, questions whether the 20th century is the warmest of the past 2000 years. (According to objective measures, Climate Research is ranked 29th out of 48 meteorology journals — which would mean it’s reputable but not prestigious.)

This is not how science is supposed to be conducted. You might disagree with other interpretations of the data, but the way you win is by producing better studies. You don’t blacklist journals — for contributions or citations — because they sometimes print contrary points of view. The process of science is supposed to be about the evidence, so trying to destroy the careers of those who disagree with you is beyond the pale.

This makes clear that at least some of the researchers are no longer conducting scientific research — in search of the truth — but only want evidence that supports their position. Is this because their reputations will suffer? Is this because they are part of a social movement? I have no direct knowledge either way.

Legitimate institutions apply the rules equally to friends and enemies, as captured by the old saying “rule of law, not of men.” If the academic institutions valued their legitimacy, they would reprimand those intent on violating the norms of scientific research. Since the Royal Meteorological Society and the American Meteorological Society are strongly lobbying for political action to reverse global warming, this would require criticizing those whose beliefs they share.

Absent strong external pressure, institutions are notably reluctant to admit their own failings. These institutions (like any under attack) will probably ignore the controversy and hope it goes away. I think this will work: unlike Nixon’s efforts at stonewalling, there is no Woodward and Bernstein dogging every lead in this scandal.

Odds are, the professional societies will close ranks rather than sanction clear ethical violations. The offenders will continue to be rewarded with journal publications and generous government support for their research.

Free markets and free societies depend on feedback mechanisms to catch and deter transgressions. Failures of accountability in academia reinforce a pattern that I’ve observed over the past decade: non-profit institutions are less accountable than companies, because there are no shareholders who can throw the bums out after they’ve flown a plane into the side of a mountain.

Monday, November 23, 2009

Now they tell us

Even the official voice of American liberalism, the Gray Lady herself, admits that the US has a spending problem. From the front page (above the fold) of this morning’s New York Times:

Federal Government Faces Balloon in Debt Payments
At $700 Billion a Year, Cost Will Top Budgets
for 2 Wars, Education and Energy

By Edmund L. Andrews

WASHINGTON — The United States government is financing its more than trillion-dollar-a-year borrowing with i.o.u.’s on terms that seem too good to be true.

But that happy situation, aided by ultralow interest rates, may not last much longer.

Treasury officials now face a trifecta of headaches: a mountain of new debt, a balloon of short-term borrowings that come due in the months ahead, and interest rates that are sure to climb back to normal as soon as the Federal Reserve decides that the emergency has passed.

With the national debt now topping $12 trillion, the White House estimates that the government’s tab for servicing the debt will exceed $700 billion a year in 2019, up from $202 billion this year, even if annual budget deficits shrink drastically. Other forecasters say the figure could be much higher.

In concrete terms, an additional $500 billion a year in interest expense would total more than the combined federal budgets this year for education, energy, homeland security and the wars in Iraq and Afghanistan.
Of course, the NYT has to put the most favorable pro-administration spin on the story. The US had “decades of living beyond its means,” and the excess spending this year “is widely judged to have been a necessary response to the financial crisis and the deep recession.”

The article only indirectly mentions where the borrowing is coming from. No mention that the US budget deficit was $1.4 trillion for 2009 — 205% higher than (i.e., three times) the previous record of $459 billion. Nor does the NYT mention that the Obama administration plans to add another trillion to the deficit during FY 2010.
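
To spell out that arithmetic (figures in billions of dollars):

    \[
    \frac{1400 - 459}{459} \approx 2.05 = 205\%\ \text{higher},
    \qquad
    \frac{1400}{459} \approx 3.05 \approx 3\times
    \]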

Nor is there mention of the trillion-dollar increase in spending if either the House or Senate version of health care reform becomes law. As former CBO head Douglas Holtz-Eakin analyzed the bills Saturday:
First and foremost, neither bends the health-cost curve downward. The CBO found that the House bill fails to reduce the pace of health-care spending growth. An audit of the bill by Richard Foster, chief actuary for the Centers for Medicare and Medicaid Services, found that the pace of national health-care spending will increase by 2.1% over 10 years, or by about $750 billion. Senate Majority Leader Harry Reid's bill grows just as fast as the House version. In this way, the bills betray the basic promise of health-care reform: providing quality care at lower cost.

Second, each bill sets up a new entitlement program that grows at 8% annually as far as the eye can see—faster than the economy will grow, faster than tax revenues will grow, and just as fast as the already-broken Medicare and Medicaid programs. They also create a second new entitlement program, a federally run, long-term-care insurance plan.

Finally, the bills are fiscally dishonest, using every budget gimmick and trick in the book: Leave out inconvenient spending, back-load spending to disguise the true scale, front-load tax revenues, let inflation push up tax revenues, promise spending cuts to doctors and hospitals that have no record of materializing, and so on.
So the NYT series seems oriented toward preparing well-to-do elites for punitive tax increases, rather than toward spending cuts that would restore spending to the share of GDP it held two or three years ago (still not a small number). By one calculation, government spending hit 37.4% of GDP in FY2008 and 45.3% of GDP in FY2009. Excluding the peak of World War II (1943-1945), these are the highest figures ever recorded in US history. And — unlike World War II — most of the spending is locked into the base budget, to continue indefinitely whether revenues increase or not.