Tuesday, September 30, 2008

iPhone Flash due Real Soon Now

Apparently Adobe has finished the long-rumored Flash for the iPhone, and is awaiting Apple approval to distribute it. Apple front man Steve Jobs has been dissing Flash for months. The case for Apple to endorse Flash for the iPhone is no more compelling than it was eight months ago.

It seems inevitable that Flash will come to the iPhone, unless of course Apple is waiting to establish the SproutCore web application framework as a Flash substitute.

Of course, SproutCore is based on Apple’s idiosyncratic Cocoa APIs, instead of Adobe’s Flash APIs. So clearly Apple prefers programmers to develop skills for their technologies rather than Adobe’s competing proprietary (cross-platform) solution.

Nokia CTO MIA

At the beginning of the year, to great fanfare, Nokia installed its first American CTO in Palo Alto, the first C-level exec located outside the mother ship in Finland. Now Bob Iannucci is gone, and all the news reports will say is that the former DEC and Compaq executive resigned for “personal reasons.”

There have been some small warning signs for a while. He started a blog in June with two postings and never continued. Iannucci had been a Nokia research executive since 2005, but a Google News search doesn’t show many press mentions after June.

The problem is, “personal reasons” is often a euphemism for “I made a mistake in taking this job and want out” or even “they’re forcing me out and all they gave me is this fig leaf.” So when people actually do resign for personal reasons, observers are unfairly suspicious because of all the other times the phrase has been used to intentionally mislead.

I guess we’ll know better once we see who Iannucci’s successor is and where he (or she) is working. If he’s replaced by a Finn located in Espoo, then that suggests that having an overseas CTO was a doomed experiment by the Finnish cellphone giant. In that case, one could presume that the American was an alien body rejected by an otherwise highly Finland-centric culture and organizational structure.

Monday, September 29, 2008

World's biggest WiMax bet

Other than Intel (which has poured in some of its excess billions), no firm in the world has placed a bigger bet on WiMax than Sprint. The falling (if not failing) wireless carrier is praying that bringing WiMax to market now will allow it to steal market share from its three major US rivals, who are all waiting for the WCDMA-derived LTE.

Today Sprint’s XOHM service (soon to be combined with Clearwire) rolled out its first market in Baltimore. As various analysts have noted, having only a single market limits its value to business travelers until it can build out its national network. Sprint is also waiting for wider availability of WiMax “modems” for laptop and desktop computers.

The pricing plans are quoted at $25 for home and $30 for “on the go” service (these are the “temporary” discounts). Not being in Baltimore, I’m not clear what the household price is for multiple devices: we have four computers at home, so can we share one modem between them, as we do with our cable modem and before that with our DSL “modem”?

Beyond the capital for the rollout and people’s unwillingness to join due to network effects (i.e. limited markets), it seems to me that Sprint faces three big issues:

  1. Is it fast enough compared to wireline substitutes? With claimed download speeds of 2-4 Mb/sec, they should be able to beat wireless alternatives until LTE, but their revenue model suggests they also want some of the last-mile residential business.
  2. Will their prices be aggressive enough? Home broadband is a commodity, so people won’t pay a premium for it. They will pay extra for combined mobile/home service, but I suspect the pool of end consumers who want to pay a lot (out of their own pocket) for mobile broadband is pretty small.
  3. Will their networks have enough capacity? Obviously congestion has been a huge problem for the rollout of new networks since the earliest days (cf. 1984 Los Angeles AMPS rollout), and “all you can eat” wireless plans encourage consumers to use a lot of bandwidth. If they provide poor service they will lose customers, and if they have to build more infrastructure than planned they’ll lose money.
I’ve been a big WiMax and XOHM skeptic. Now that they’ve launched, my opinion doesn’t matter — what matters is whether they can get enough customers and serve them cost-effectively before LTE shows up.

Sunday, September 28, 2008

IBM's stand for open standards

Last week, IBM announced that it was going to exit some standards bodies. The policy is the outcome of a summer online discussion with various standards experts.

Analysts attribute IBM’s move to being upset at Microsoft’s success packing various standards bodies to win approval of OOXML, a less open format than the IBM-Sun ODF alternative. Of course, experts in standardization (including IBM’s) have long known that some Standards Setting Organizations (SSOs) are more open than others — whether in process or outcome. To say that politics influences SSO (or SDO) outcomes is like Claude Rains being shocked to hear there’s gambling in Casablanca.

IBM has published recommendations from its “independent, forward-thinking experts across the globe,” which seem to subsume best practices that have (in some cases) been debated for a decade.

Not surprisingly, the proposals took aim at the impact of patents upon standardization. Carl Cargill of Sun has been complaining about patents in standards for five years, and of course this has been a source of ongoing conflict with the open source community.

Is this more than just a press release? We all assume that IBM is hoping that making a dramatic statement will de-legitimate some standards bodies and thus their influence. But will others follow along?

Also, IBM calls for better standards education and research, but will it put up any money for universities to make it happen? Will it start asking for standards education in the bench engineers it hires?

Friday, September 26, 2008

What is Web 2.0?

Today at Temple University in Philadelphia, I’m listening to a talk by Prof. Burt Swanson, an expert on how organizations adopt IT, who’s (literally) a greybeard of the UCLA information systems department.

Swanson noted that recently there were 4 “Web 2.0” conferences at UCLA, two organized for industry types and two organized by students. (I’m sympathetic, having talked about Web 2.0 at USC in August). He noted that half of each conference seemed to focus on trying to define the term (which I suppose might resemble nailing jello to the wall).

What struck me was an early slide that contrasted four key comments on Web 2.0:

1. In 2006, Tim O’Reilly offered his most compact definition ever:

Web 2.0 is the business revolution in the computer industry caused by the move to the Internet as platform, and an attempt to understand the rules for success on that new platform.
2. Earlier in 2006, the Web 1.0 inventor Sir Tim Berners-Lee was quoted in an interview as saying:
I think Web 2.0 is of course a piece of jargon, nobody even knows what it means.
3. Writing in the April 2007 Atlantic Monthly (“The Web 2.0”), Michael Hirschorn said:
In the Web hype-o-sphere, things matter hugely until, very suddenly, they don’t matter at all. … Really cool people now like to talk about Web 3.0.
4. Finally, the man who did more to deliver Web 1.0 to the masses, Marc Andreessen, wrote in his blog:
In the beginning, Web 2.0 was a conference. As conferences go, a good one -- with a great name. … From there, it was easy to conclude that "Web 2.0" was a thing, a noun, something to which you could refer to explain a new generation of Web services and Web companies.
Swanson’s conclusion? Web 2.0 is not a thing; instead, it’s an “organizing vision” as he defined the term a decade ago in his oft-cited paper. The money quote from that earlier paper:
[A]n interorganizational community, comprised of a heterogeneous network of parties with a variety of material interests in an IS innovation, collectively creates and employs an organizing vision of the innovation that is central to decisions and actions affecting its development and diffusion. That organizing vision represents the product of the efforts of the members of that community to make sense (Weick 1995) of the innovation as an organizational opportunity.
Swanson even has a plot of how the length of the Wikipedia entry evolved from February 2005 to February 2008, accreting over time. During Q&A, asked why he used Wikipedia, Swanson replied: “We kinda liked the irony of trying to stay within the [Web 2.0] system.”

Of course now I’m using a blog to talk about a Web 2.0 irony built upon a wiki that uses Web 2.0 processes to debate what Web 2.0 means. And members of the workshop will be using a gated wiki to discuss the implications of Swanson’s presentation for our research stream.

Thursday, September 25, 2008

Warning: blog delays ahead

Notice to readers: That the gPhone announcement is two days old without a blog posting shows how swamped I’ve been with my day job. If nothing else, it’s now past time for me to retire “Open Vaporware Alliance,” since the Google-led Open Handset Alliance is only days away from shipping its first product.

The load is less than this time last year (when I disappeared for a month), but still heavy. I hope to be able to blog at the end of the week and on weekends, but (other than pre-scheduled posts) I probably won’t be blogging much on Mondays and Tuesdays.

Tuesday, September 23, 2008

Inside the Googleplex

Attempts by mere mortals to understand Google’s march towards Total World Domination have been hindered by its legendary secrecy. At trade events, Google reps make sure not to tell the audience anything that isn’t already announced, and all employees (my friends tell me) are under strict orders not to divulge real numbers.

Naturally this secretive juggernaut attracts journalists and other authors like moths to a flame. One book is out this month, two are due in 2010, and I’ve been corresponding with the author of a fourth.

First out of the chute is my coworker Randy Stross, whose new book Planet Google went on sale today. In addition to the audiobook versions of PG, Randy is also the author of six other books, including his biography of Thomas Edison published only 18 months ago.

Planet Google has already won favorable reviews by Fortune and the Wall Street Journal. Subtitled “One Company’s Audacious Plan to Organize Everything We Know,” Publisher’s Weekly describes it this way:

In this spellbinding behind-the-scenes look at Google, New York Times columnist Stross (The Microsoft Way) provides an intimate portrait of the company’s massively ambitious aim to “organize the world’s information.” Drawing on extensive interviews with top management and the author’s astonishingly open access to the famed Googleplex, Stross leads readers through Google’s evolution from its humble beginnings as the decidedly nonbusiness-oriented brainchild of Stanford Ph.D. students Sergey Brin and Larry Page, through the company’s early growing pains and multiple acquisitions, on to its current position as global digital behemoth.
For Bay Area readers, Randy is giving a one-hour talk on his book next week here at SJSU. The talk begins at noon Tuesday (Sept. 30), on the second floor of the MLK Library. Parking is available at street parking meters or in the 4th Street municipal garage.

Update Sept 29: For readers who miss that talk, he’ll be presenting his book at the Google Author Series (inside the Googleplex) on Oct. 7, but that talk is only for Googlers (Google employees). He will present again in Menlo Park on Oct. 15.

Sunday, September 21, 2008

Cultural impacts of LDC cellphone adoption

This “comic” is normally more concerned with pontificating than with being funny, but today’s strip is relevant to OpenIT readers.


In this case, the comic is actually based on reality, as reported by the London Telegraph and Guardian.

Graphic credits: Comics.com and The Telegraph.

Friday, September 19, 2008

A Clever Rickroll

Unless you’ve been living under a rock, you probably have heard that Electronic Arts has released a videogame called Spore, developed by Will Wright, the author of The Sims, SimCity, etc. etc. etc. Among other features, Spore uses a clever approach to user-generated content to attract volunteer labor, raising the value of the game for prospective buyers (and increasing add-on sales).

The game has an obvious creation vs. evolution tension. Despite its evolutionary tendencies, some Christians think it can also be interpreted as consistent with Intelligent Design. After all, if the game buyers are intelligent (perhaps debatable?) and designing creatures, that would be intelligent design if not Intelligent Design.

But what was funny this morning was to discover the term “Rickroll” and how it was used by a nominal Spore critic. Rick Astley was a 1987 one-hit wonder, and “rickrolling” is an Internet joke that quotes from that hit, often by using the YouTube video of that hit, “Never Gonna Give You Up.”

The website Antispore.com (complete with Google AdWords) claimed to be a Christian response to the alleged heresies of the EA game. In the Sept. 11 posting, the author quoted from the Astley lyrics before adding an even more direct tipoff (emphasis and hotlink added):

But the Bible teaches us that God was not done with man. For we were His creation and He then spoke to Noah in Genesis 8:21-27 after the flood.

“21. The LORD smelled the pleasing aroma and said in his heart: “Never again will I curse the ground because of man, even though every inclination of his heart is evil from childhood. And never gonna give you up. 22. “Never gonna let you down.” 23. “Never gonna run around and desert you.” 24. “Never gonna make you cry.” 25. “Never gonna say goodbye.” 26. “Never gonna tell a lie and hurt you.” 27. “Never truly believe anything you read on the Internet. There will always be cases of Poe’s Law.

It’s these teachings that I’ve spent my life learning, believing and becoming, that have made me the woman that I am today.
Not exactly what my copy of the NIV Bible says for Genesis 8:21-22 (there are no verses 23-27).

Somehow, a few of the 2,252 comments on the article did not appear to get the joke, but one of the more amused readers wrote:
Best spoof site I’ve ever seen; you’ve succesfully proved atheists are bigger whiners and every bit as petty, venal, abusive, aggresive, malicious and generally unpleasant and enormously f***ing stupid as the God squad. And you’ve proved it in such a short time.
Mr. Astley is now 42 and apparently doing the British equivalent of the County Fair circuit. Still, this is the sort of immortality (or at least 15 minutes of fame) that no one can buy, unless perhaps your first name is Paris. Meanwhile, a few more people have learned what “Rickroll” means.

Thursday, September 18, 2008

Even Seinfeld can’t save Microsoft

Microsoft has given up on Jerry Seinfeld as the savior of their tarnished brand image, and will find other ways to spend their $300 million.

It seemed like an ill-conceived plan in the first place.

Publicity Yahoo doesn’t need

By now, everyone knows that Governor Sarah Palin uses Yahoo for her e-mail, thanks to the anonymous group (named “Anonymous”) that broke into her e-mail account and shared it with the world. While Federal agents are investigating the invasion of privacy, and pundits filter the revelations through their pre-existing opinions (either for or against), to me it was remarkable how banal the revelations were.

But from a business standpoint, what seemed important was the reaction by security experts that of course no enterprise should use free webmail services for official business. As someone who used to plan security policies for his (small) firm’s IT infrastructure, my initial reaction was that this was just snobbery on the part of these “experts” to sell their expertise (along with the obligatory slew of press releases by firms seeking to capitalize on the revelation).

Since remotely-accessible e-mail systems are only as good as their passwords, the one key issue for any service is how fascistic the password security algorithm is. If the guv used “ToddTrig” as her password, then anyone could have guessed it with a hacking attack — whether the mail was hosted by yahoo.com or state.ak.us. If the service requires a number and a letter and rejects things that are too easy, that would be better. However — as any CS-educated user will tell you — if it requires changing the password every 6 months, all that means is that people will write down their passwords (a no-no), and it would provide no security at all against this attack.
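To make the rule-based approach concrete, here is a minimal sketch of such a check; the specific rules and the “too easy” word list are my own illustration, not Yahoo’s (or any provider’s) actual policy.

```python
import re

# Illustrative only: a toy password check of the kind described above.
# The rules and the "too easy" list are hypothetical, not any
# provider's real policy.
TOO_EASY = {"password", "letmein", "qwerty", "toddtrig"}

def password_acceptable(pw: str) -> bool:
    if len(pw) < 8:
        return False              # too short
    if not re.search(r"[A-Za-z]", pw):
        return False              # requires a letter
    if not re.search(r"[0-9]", pw):
        return False              # requires a number
    if pw.lower() in TOO_EASY:
        return False              # rejects things that are too easy
    return True

print(password_acceptable("ToddTrig"))   # False: no digit (and guessable)
print(password_acceptable("Wasilla99"))  # True under these toy rules
```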

(Many organizations require a VPN for remote access to any corporate information, which used to seem like overkill but today does not. However, requiring a VPN means that people will say “use my personal e-mail account” when business associates want to contact them on vacation).

The one line of argument that did seem persuasive is the area of password recovery:

Password recovery procedures are an area where the balance between security and usability is so blurred that most times the security aspect is non-existent, despite appearances. The leading theories about how the breach to Sarah Palin's account came about were that it was through the password recovery options associated with the Yahoo webmail interface.

Even if a user has selected non-standard secret questions, or has linked other email accounts, this sort of information isn't going to take a determined hacker very long to dig up, especially if the target is already someone in the public eye.
Having recently had to reset the password for one of my online banking services, it is quite clear that some firms do a much more serious job than others at coming up with password reset systems. My bank required a series of questions — and doesn’t use the same questions every time, so someone looking over my shoulder last time might not know what to answer this time. They also show me a secret picture to discourage “man in the middle” type attacks.

I just tested the password reset mechanisms at Yahoo and Google, and (today) both seemed better than most. Both use a captcha to prevent automated attacks. Yahoo gave me my custom challenge question, one where I won’t forget the answer but it’s so obscure no one will know it (although they could mechanically try to guess it). After L’Affaire Palin, perhaps I’ll pick a different obscure question with an even more obscure answer.

Google refused to let me reset it online, but instead forced me to use my secondary email address. If I don’t have access to it, then I have to wait:
If you don't have a secondary email address, or if you no longer have access to that account, please try the 'Forgot your password?' link again after five days. At that point, you'll be able to reset your password by answering the security question you provided when you created your account.

To prevent someone from trying to break into an account you're actively using, the security question is only used for account recovery after an account has been idle for five days. The Gmail team cannot waive the five day requirement or access your password under any circumstances.

If you're unable to answer your security question or access your secondary email account, we regret that the Gmail team cannot provide further assistance. If you're concerned about the security of your account, please visit our Security Center.
Certainly this delayed gratification approach seems like it would prevent hacking of an actively used account.
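As a sketch of how that gating logic might work (the function and field names here are hypothetical, not Gmail’s actual code):

```python
from datetime import datetime, timedelta

IDLE_REQUIREMENT = timedelta(days=5)   # per the quoted Gmail policy

def may_use_security_question(last_activity: datetime, now: datetime) -> bool:
    # The security question only works against idle accounts: every
    # login by the real owner resets the clock, so an actively used
    # account can't be hijacked this way.
    return (now - last_activity) >= IDLE_REQUIREMENT

# An account used yesterday is still protected; one idle a week is not.
print(may_use_security_question(datetime(2008, 9, 17), datetime(2008, 9, 18)))  # False
print(may_use_security_question(datetime(2008, 9, 11), datetime(2008, 9, 18)))  # True
```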

Even so, this is the sort of publicity that Yahoo (and Google and Hotmail) don’t really need, particularly when large bureaucratic IT departments start to ban the use of webmail accounts. Even famous people without IT departments will (not unreasonably) think twice about using such services for their mail.

Update Thursday 2:30pm. The Associated Press reports (on the Yahoo News site) that the hacker claimed to have guessed the answer to easy password challenge questions to get onto Palin's account:
The hacker guessed that Alaska's governor had met her husband in high school, and knew Palin's date of birth and home Zip code. Using those details, the hacker tricked Yahoo Inc.'s service into assigning a new password, "popcorn," for Palin's e-mail account, according to a chronology of the crime published on the Web site where the hacking was first revealed. …

Palin's hacker was challenged to guess where Alaska's governor met her husband, Todd. Palin herself recounted in her speech at the Republican National Convention that the pair began dating two decades ago in high school in Wasilla, a town near Anchorage.

"I found out later though (sic) more research that they met at high school, so I did variations of that, high, high school, eventually hit on 'Wasilla high'," the person wrote.
This is clearly an argument for individuals to choose their own challenge questions, and make sure the answers are obscure enough to protect against identity fraud.
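The Wasilla episode also shows how small the search space really is. Here is a hypothetical reconstruction of the attacker’s candidate list; the phrases are my guesses at the variations described above, not the actual ones tried:

```python
from itertools import product

# Hypothetical: enumerate plausible answers to "Where did you meet
# your spouse?" once the target's biography is public knowledge.
bases = ["high", "high school", "wasilla", "wasilla high"]
suffixes = ["", " school"]
candidates = sorted({(b + s).strip() for b, s in product(bases, suffixes)})
print(candidates)
print(len(candidates), "guesses")   # a handful of tries, not millions
```

A good challenge answer needs enough entropy that this kind of enumeration fails.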

Tuesday, September 16, 2008

SV quaking in its boots

Regular readers know I’m skeptical of claims of Silicon Valley exceptionalism. Nothing lasts forever: the sun set on Pittsburgh, Detroit, and the British Empire, among other places.

So at first I was sympathetic to the arguments made on GigaOM last week about “5 Reasons to Move Your Startup Out of Silicon Valley.” The oft-Dugg (and trackbacked) article lists some reasons why SV types shouldn’t be complacent and other cities should try to recruit startups. The 5 points:

1. The weather sucks in some of these towns (not Tallahassee) so your people will actually work instead of bugging out at 5:15 to train for a marathon, triathlon or Ultimate Frisbee.
There’s a longstanding hypothesis in cultural anthropology about Northern Europeans being more industrious than Southern Europeans due to the winter weather. This argument seems to work for Nokia and its ecosystem companies, at least until you get to August, when everyone takes the whole month off (though that’s no worse than Mediterranean France).
2. You can recruit better outside the fishbowl. Every technology company hits the wall — some multiple times. In the Valley your employees will bail at the first sign of trouble and jump to a better job in the next parking lot. That means you will have to spike salaries to rebuild your team. Other places in the world aren’t quite so spoiled - or they come to you already cynical and stay through the rough times.
When we were building software companies (and the SDSIC) in San Diego, we used to make this argument. However, recent college graduates are wise to this argument and choose regions where they have multiple career options.
3. You won’t get lost in the startup maze. In the Valley, every VC has a portfolio company in each flavor - their own LP’s can’t tell them apart.
OK, but conversely in most cities you will have a so-called “VC” who’s never invested in your industry.
4. In my experience, other startup communities aren’t as pre-occupied with the “exit” as Da Valley. SV VC’s have attention spans measured in picoseconds and will sell/merge your company at the first sign of trouble. I can say that in Boston, at least, we are used to gutting out long “winters.”
The excessive fixation on flipping companies and making a YouTube-type exit is a problem, although VCs will eventually have to correct. However, Boston VCs may hang onto their losers too long, because they have more of them.
5. Academics make great board members. Each of these cities has a rich educational environment and are great places to recruit sartorial advisors. And unlike at Stanford, you wont have to give up 1 percent of your equity just to put the provost’s name on your board!
Sure, you get the provost, but where in the US do you get Stanford-quality engineering or science professors? For engineering you could go to Boston, Pittsburgh (CMU), Atlanta (Georgia Tech), downstate Illinois (UIUC) but not in Florida or Philadelphia.

While some of the arguments seem plausible, the recommendations are so lame as to undercut the credibility of the arguments:
My top non-Silicon Valley cities are: Boston; Pittsburgh; Philadelphia; Austin; Research Triangle, N.C.; Minneapolis; Tallahassee; Toronto; and Basking Ridge, N.J.
This is apparently someone who’s never set foot on the West Coast (ever hear of Microsoft? Qualcomm?) and rarely ventures West of the Mississippi. As Saxenian noted a decade ago, Boston (once home of the Massachusetts miracle) got stomped by SV due to societal attitudes toward entrepreneurial activity — cultural attitudes that all of the West Coast (and much of the Southwest and Mountain West) share to some degree with the Bay Area.

Of course, the author is more than a little biased towards Boston:
Howard Anderson is a founder of The Yankee Group, a cofounder of Battery Ventures, and a professor of business at the MIT Sloan School of Management.
My list of places outside the Bay Area would be equally biased in the other direction, although obviously some regions are better than others for specific industries such as software, telecommunications, semiconductors, biotech, biomedical or cleantech.

Also, too much is made of the anchor role of colleges, which (outside life sciences) mainly provide talent rather than technology. Thus, Seattle is a hotspot for IT because of the engineering talent pool among Microsoft alumni, not the faculty housed within the Paul G. Allen Center for Computer Science and Engineering.

All this speculation is mostly hot air, anyway, because investors put their money where their mouths are. Of the $7.4b of VC invested in Q2, LA/Orange County got more VC than Texas and Philadelphia/NJ combined. San Diego (population 3 million) got the same VC as five Midwestern states, and more than twice as much as the six North Central states (which include top schools like Minnesota and Wisconsin).

Of course Silicon Valley took 40% of the national total — or more money than the combined take of the next six regions (comprising all or part of 20 different states). So I don’t think the entrepreneurs and their local infrastructure here in Santa Clara County or the Peninsula are lying awake at night worrying about Tallahassee or Basking Ridge.

Monday, September 15, 2008

Airlines discover supply and demand

United has increased its second-bag fee from $25 to $50 — blaming fuel prices, even though those prices have lately been falling. Together, American and United had raised bag fees from $0 to $80 round trip, and now United wants to take that to $130 round trip for those bags (unless a lot of flyers complain).

Not surprisingly, adding fees has caused travelers to change their behavior. USA Today reports Tuesday:

American Airlines, the nation's largest carrier, says the average number of bags checked per passenger has dropped since it began imposing fees ($15 for the first bag, $25 for the second) earlier this year. Prior to introducing the fees, an average of 1.2 bags were checked per passenger. Now, it's slightly below one, spokesman Tim Smith says. "The biggest percentage drop is in the second bag (checked). It was more noticeable."

United Airlines, which said Monday that it will raise the fee on the second checked bag to $50 from $25, has also seen a decline in the average number of bags checked per person since February, spokeswoman Robin Urbanski says.
Southwest is still leading the industry in low fees, with the Big Four carriers (AA, UA, US Airways and Delta) dead last. Until people switch their flights, the airlines will continue to tack on these surcharges, making for an unpleasant carry-on baggage experience for all.

Sunday, September 14, 2008

WEB 2.0: GOOG UGC N ROK

On Thursday, Google bought Korean blogging software company TNC (Tatter and Company). Along with an improvement to its earlier Blogger software, it’s one of two developments this week that put Google on a collision course with MySpace and Facebook.

One of the most interesting reports comes from the Korea Times, the notoriously nationalistic Seoul business publication:

Google said the takeover will help it expand localized products, as well as provide advanced services to Korean users. …

The current Web search market is almost completely dominated by local firms, with Naver.com atop and its rivals, including Daum.net and Nate.com, following closely behind.

Portal Web sites in Korea have been deeply engaged in the business. The service has now become the biggest meal ticket for them in combination with advertising.

Google, the world's biggest search engine, has failed to flex its muscle here, as it has a relatively small base of user-produced contents compared to its Korean competitors.

Since the success of Jishik-iN, a Q&A database service, it was contributions from active users, which lifted Naver to the no.1 spot.

Industry officials said that is the reason for the acquisition. Google will spur development of user-oriented contents using the blogging tool, which will contribute to strengthening its search ability as well, they said.
Chang-Won Kim, CEO of the acquired company, also argues that TNC will help Google overcome a fundamental difference of the Korean market:
Speaking of Google in Asia, one piece of fact that my American friends have really hard time perceiving is that Google is an underdog in this part of the globe. Korea is the world's sixth largest market in terms of internet users, and yet Google has a market share that can only be described as "minor" in Korea.

Why? Korean web users mostly use Yahoo-like "portal" services and never really venture out. Part of the reason for that is, Korean portals are so good. But portals have built too thick of a comfort zone for Korean web users, leaving little room for startup innovations. Hence less motivation for startups, hence less diversity and more portal domination (in this age of de-portalization, that is), and so on and so forth - the cycle goes on.

Now as a part of Google, TNC will try to better the situation. We will commit ourselves to increasing Google's market share in Korea. Of course, Google isn't entitled with God-given right to become #1 in every region it operates in, just because it's Google. It's actually more about the Korean web industry than about Google.
Even more interesting than Google’s continuing push into user-generated content, Kim links the acquisition to Google’s ongoing efforts to expand the role of social media in its business model:
While other blog services seem to be exploring the idea of integrating social networks with blogs only lately, our new blog service Textcube (link in Korean) had already implemented the feature much earlier.
Which brings me to a second and related Google Web 2.0 development last week. The official Google blog reports the introduction of the concept of a “Follower” to Blogger (the service I use for my five blogs). The blurb by Blogger product marketing manager Mendel Chuang explains:
At Blogger we're passionate about helping communities form around blogs. To further that goal, we've introduced a new feature that lets you easily follow your favorite blogs and tell the world that you’re a fan. To follow a blog with the Followers' Gadget, simply click the “Follow This Blog” link. You can show your support for the blog by following it right from your Blogger Dashboard or in Google Reader. …

We’re also in the process of integrating with Google Friend Connect to add even more engaging social features to Blogger.
In other words, Blogger will infer community from self-identified “Followers,” all integrated through Google’s single sign-on “OpenID”. Through the OpenSocial APIs, it brings along orkut (owned by Google) as well as LinkedIn, Facebook and Plaxo, among others. Updated Sept. 15: Although once outside the OpenSocial network, MySpace has subsequently provided its own implementation.
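For the curious, the OpenSocial REST protocol exposes the friends list as a simple resource. A hedged sketch of that style of call follows; the container URL is a placeholder, and real containers also require OAuth request signing, which is omitted here:

```python
import json
import urllib.request

# Placeholder container base URL; orkut, MySpace, etc. each host
# their own endpoint and require OAuth-signed requests in practice.
CONTAINER = "https://container.example.com/social/rest"

def fetch_friends(user_id: str = "@me"):
    # OpenSocial-style collection resource: /people/{userId}/@friends
    url = f"{CONTAINER}/people/{user_id}/@friends"
    with urllib.request.urlopen(url) as resp:   # assumes an open endpoint
        return json.load(resp)["entry"]         # list of person objects

# for person in fetch_friends():
#     print(person.get("displayName"))
```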

As part of Total World Domination, Google is aggressively pursuing social media, too. Given that Google has a business model and Facebook and MySpace (like most other Web 2.0 startups) do not, this is even more evidence that the days of standalone social media plays are ultimately numbered. MySpace still has its sugar daddy, but I wonder whether Mark Zuckerberg will rue not selling more shares to Microsoft last year (even if he was wise to turn down the earlier Yahoo offer).

Saturday, September 13, 2008

Omnigoogle supplants Microsoft

As part of the celebrations this week for Google’s 10th birthday, a number of commentators have been doing retrospectives on the firm.

One of the more interesting retrospectives is by Nick Carr, the former executive editor of Harvard Business Review. Carr is best known for arguing that “IT doesn’t matter”, i.e. IT is a commodity that rarely provides opportunity for competitive advantage.

In his blog posting at the beginning of the week, Carr makes a number of important observations. One is that Google’s ad-based model enables its perpetual betas. In other words, if Google were selling software it couldn’t sell a beta, but if it’s selling eyeballs to advertisers then there’s no disincentive against betas.

More fundamentally, Carr (correctly) draws the parallels between Google and Microsoft. Both make money off of cost reduction and expanding the supply of complements:

As more and more products and services are delivered digitally over computer networks — entertainment, news, software programs, financial transactions — Google’s range of complements expands into ever more industry sectors. That's why cute little Google has morphed into The Omnigoogle.

Because the sales of complementary products rise in tandem, a company has a strong strategic interest in reducing the cost and expanding the availability of the complements to its core product. It’s not too much of an exaggeration to say that a company would like all complements to be given away. If hot dogs became freebies, mustard sales would skyrocket. It’s this natural drive to reduce the cost of complements that, more than anything else, explains Google’s strategy. Nearly everything the company does, including building big data centers, buying optical fiber, promoting free Wi-Fi access, fighting copyright restrictions, supporting open source software, launching browsers and satellites, and giving away all sorts of Web services and data, is aimed at reducing the cost and expanding the scope of Internet use. Google wants information to be free because as the cost of information falls it makes more money.
Google’s business model is just Microsoft’s, only updated and done better:
Just as Google controls the central money-making engine of the Internet economy (the search engine), Microsoft controlled the central money-making engine of the personal computer economy (the PC operating system). In the PC world, Microsoft had nearly as many complements as Google now has in the Internet world, and Microsoft, too, expanded into a vast number of software and other PC-related businesses - not necessarily to make money directly but to expand PC usage. Microsoft didn't take a cut of every dollar spent in the PC economy, but it took a cut of a lot of them. In the same way, Google takes a cut of many of the dollars that flow through the Net economy. The goal, then, is to keep expanding the economy.

God or Satan? When you control the economic chokepoint of a digital economy and have complements everywhere you look, it can be difficult to distinguish between when you're doing good (giving the people what they want) and when you're doing bad (squelching competition). Both Google and Microsoft have a history of explaining their expansion into new business areas by saying that they're just serving the interests of "the users." And there's usually a good deal of truth to that explanation - though it's rarely the whole truth.
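Carr’s hot-dogs-and-mustard point is easy to illustrate with made-up numbers. In this toy model, every hot dog sold consumes one serving of mustard, and hot dog demand follows a simple linear curve (all figures invented):

```python
# Toy complements model: one serving of mustard per hot dog,
# linear demand for hot dogs. All numbers are invented.
def hotdogs_sold(price: float) -> float:
    return max(0.0, 1000 * (2.0 - price))   # demand rises as price falls

MUSTARD_MARGIN = 0.10   # mustard maker's profit per serving

for hotdog_price in (2.00, 1.00, 0.50, 0.00):
    units = hotdogs_sold(hotdog_price)
    print(f"hot dogs at ${hotdog_price:.2f}: "
          f"mustard profit ${units * MUSTARD_MARGIN:,.0f}")
```

Free hot dogs maximize mustard profit, which is Carr’s point: the cheaper Internet use becomes, the more Google makes on the advertising complement.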

Friday, September 12, 2008

Yet more evidence of Cingular mediocrity

Regular readers know that I’ve been singularly unimpressed with Apple’s choice of Cingular (later AT&T) as the exclusive US carrier for the Jesus Phone. It’s not just the idea of a five-year exclusive (which made sense under the old business model but not the new one), but also the mediocre quality of Cingular’s US mobile phone network.

The iPhone 3G was supposed to take advantage of AT&T’s wonderful new HSDPA network. Promoters of this UMTS (W-CDMA aka 3GSM) technology claimed it would deliver downloads at “8-10 Mbps”. AT&T invites prospective customers to “Download and surf on the nation’s fastest 3G network.”

At the same time, iPhone 3G users are unhappy with their network performance. Wired asked its readers around the world to run a test reporting their actual download speeds, to distinguish between iPhone performance problems and network performance problems. Here is what they found:

  • Tests in Germany and the Netherlands achieved 2,000 Kbps.
  • Tests in Canada delivered 1,330 Kbps.
  • AT&T provided an average speed of 990 Kbps.
  • The only carriers that were worse were two Australian carriers, with an average speed of 390 Kbps.
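To put those averages in perspective, here is the arithmetic for fetching a typical 500 KB web page at each reported rate (assuming Wired’s figures are in kilobits per second):

```python
# Seconds to fetch a 500 KB page at each reported average rate.
PAGE_KILOBITS = 500 * 8   # 500 kilobytes = 4,000 kilobits

for carrier, kbps in [("Germany/Netherlands", 2000),
                      ("Canada", 1330),
                      ("AT&T (US)", 990),
                      ("Australia", 390)]:
    print(f"{carrier}: {PAGE_KILOBITS / kbps:.1f} seconds")
# AT&T users wait twice as long as the Europeans: about 4 s vs. 2 s.
```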
It gets better:
In some major metropolitan areas that are supposedly 3G-rich, 3G performance can be very slow. For example, zooming in on San Francisco, you'll see that 10 out of 30 participants reported very slow 3G speeds — barely surpassing EDGE.
The hypothesis is that AT&T didn’t buy enough 3G radios in the cities where the iPhone is most popular, and thus the network is getting overloaded.

As skeptical as I am about WCDMA and “wireless broadband” in general, AT&T here may have a slight advantage over its US rivals. On the wireline side, it finally has a solution that beats all DSL (although not a cable modem, FiOS or U-verse). On the wireless side, neither of its EVDO rivals (Verizon, Sprint) does much better, claiming only 600-700 Kbps — although a July review of a Sprint modem measured an actual speed of 966 Kbps for its EVDO service.

Thursday, September 11, 2008

Microsoft as the enemy of my enemy

Briefly blogging while working to catch up on my day job.

During the 1990s, the Clinton Justice Department and the European Commission saw their #1 antitrust goal as preventing Microsoft’s rush to Total World Domination. Today, Google is tops in every regulator’s sights, but lingering suspicion of Microsoft remains (and not just among the open source crowd).

Thus I find tremendously ironic this week’s announcement that Nokia is licensing Microsoft’s ActiveSync technology, to allow Nokia smartphone owners to receive push e-mail from Microsoft Exchange servers. A free download will be available for all 43 of its current Symbian/S60 phones (an installed base of 80 million users worldwide), and future phones will have it built in.

In adopting ActiveSync, Nokia joins Apple in embracing Exchange, the world leader in enterprise mail servers (at 37%). For many activists — particularly US-based open source fans — Exchange is still the big evil monopolist to be fought. So Nokia — which helped create Symbian 10 years ago to keep Microsoft from dominating mobile phones — is now embracing Microsoft to help it grab business customers.

Of course, Microsoft, Nokia and Apple are allied against the other main push e-mail solution, Research in Motion. Right now RIM has more smartphone sales in North America than Nokia, Apple or Microsoft’s licensees — hence they share a common enemy in trying to slow increasing BlackBerry penetration.

The other irony is that Nokia had previously been using the BlackBerry Internet Service as its push e-mail solution. The Nokia E61 (and its AT&T-lobotomized E62 variant) won widespread popularity because of Nokia phone engineering, the BlackBerry-style keyboard and the BlackBerry Connect service (also available for 10 other E-series and 9000-series phones). When the Nokia E71 was introduced without BlackBerry Connect, Nokia E61 owners refused to upgrade and lose their e-mail solution. Now the other shoe has dropped, and Nokia’s shift in strategic alliances is clear.

Update 10pm: As someone pointed out to me today, RIM is also the only push solution with a Lotus Notes gateway. So that's another set of customers that Nokia loses by switching to Exchange.

Keeping trade secrets secrets

Deven Desai on Madisonian (one of the blogs listed to the right) has an amusing update on the steps KFC is taking to move its secret formula for fried chicken while ensuring that the formula remains secret.

Perhaps because he’s writing for lawyers, Desai does not mention the number one rule for trade secrets: if you don’t keep it a secret, you lose all protection.

There is some gray area about how long the genie can accidentally be out of the bottle — ask a lawyer for the relevant case law — but the risk is if he gets out of the bottle, you may never be able to get him back in. In that case, all the lawsuits for injunctive relief or damages will be unable to unring that bell.

Wednesday, September 10, 2008

Where's my free iPod?

The not-dead-yet Steve Jobs introduced new iPod models Tuesday in San Francisco. Most of the attention was devoted to the new iPod Nano — shaped like the old iPod Mini — with a bigger video screen and more capacity in two models from $149-$199. Apple did nothing with its iPod Shuffle line, priced at $49-$69.

It’s interesting to compare how far Apple has come since it introduced the first iPod back in 2001.

Model | Original iPod | iPod Nano (4th generation)
Introduction date | Oct. 2001 | Sept. 2008
Mass storage | Hard disc | Flash RAM
Capacity | 5 GB | 8 GB
Height | 102 mm | 90.7 mm
Width | 61.8 mm | 38.7 mm
Depth | 19.9 mm | 6.2 mm
Weight | 185 g | 36.8 g
Screen | 160x128 b/w | 320x240 color
Content | Sideloaded music | Sideloaded music, purchased music and video, rented video, podcasts
Price | $399 | $149

This is certainly a good, clear trend line for the pace of electronics miniaturization over those seven years.
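A quick back-of-the-envelope calculation makes the rate concrete (assuming the dimensions in the table above and a seven-year gap between models):

```python
# Annualized shrink rates implied by the table above.
years = 7
vol_2001 = 102 * 61.8 * 19.9          # mm^3, original iPod
vol_2008 = 90.7 * 38.7 * 6.2          # mm^3, 4th-generation Nano
vol_rate = (vol_2008 / vol_2001) ** (1 / years) - 1
price_rate = (149 / 399) ** (1 / years) - 1
print(f"volume: {vol_rate:+.0%} per year")   # roughly -22% per year
print(f"price:  {price_rate:+.0%} per year") # roughly -13% per year
```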

However, it does raise the question: when will iPods be free? If Apple is pursuing a razor and razor blade model, why not give away the iPods (or at least an entry-level model)?

A preview article Monday in Forbes noted the declining importance of iPod sales:
Apple's thriving digital content business gives Steve Jobs & Co. plenty of room to slash the price of the iPod to keep digital music and movie sales growing, and to use the company's increasingly powerful digital content business as a way to segue into sales of tablet computers and other gizmos, as it has with the iPhone.

"With iPod price cuts, Apple is choosing revenue over unit cannibalization," Credit Suisse analyst Bill Shope wrote in a research note Monday.

To be sure, Apple's lineup of digital music players could use a boost. IPod sales are up just 7% from the year-ago period. In part, that's just the law of large numbers at work. However, fresh designs, coupled with a price cut, could reignite demand for the stylish gadgets and keep customers rolling into Apple's increasingly lucrative iTunes store.

That's a business with a strong future ahead of it, even as the hardware business that launched it slows. Apple reported that sales of "other music-related products and services"--chiefly iTunes content--jumped 34.7% to $819 million for the quarter ending in June from $608 million during the year-ago period.
So if iPods are getting cheaper and becoming a smaller portion of the company’s revenue stream, why not give one away?

As a practical matter, it will probably never happen, because the cross-subsidy is imperfect. The closest thing we have to a perfect cross-subsidy is the videogame console, where Sony or Microsoft or Nintendo capture royalties on every videogame to pay back the subsidized console. Even so, a few hackers figured out how to use an Xbox as a Linux box rather than a royalty stream for Redmond.

To give away an iPod, Apple would have to make it useless except for playing iTunes content, and that’s not likely to happen. The company could sell an iPod for $100 with a $100 iTunes Store gift card, but if the card wasn’t locked to that iPod then the buyer would just sell it on the open market. And there’s always the problem of multiple freebies per person, which seems to be why razor blade handles are no longer free.
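A hypothetical sketch of what “locked to that iPod” could mean in practice: credit redeemable only from the device it shipped with (all names here are invented):

```python
from dataclasses import dataclass

@dataclass
class BundledCredit:
    device_serial: str   # the iPod this card shipped with
    balance: float

def redeem(credit: BundledCredit, purchasing_device: str, amount: float) -> bool:
    # A resold card is worthless: redemption must come from the
    # original device, so there's no open-market arbitrage.
    if purchasing_device != credit.device_serial:
        return False
    if amount > credit.balance:
        return False
    credit.balance -= amount
    return True

card = BundledCredit(device_serial="IPOD123", balance=100.0)
print(redeem(card, "IPOD999", 9.99))   # False: different device
print(redeem(card, "IPOD123", 9.99))   # True: the bundled device
```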

There’s also the fact that Apple sees itself as a premium brand, and you never cheapen the brand. The closest they’ve come is to give away nearly stale iPods (i.e. a month before they become obsolete) with back-to-school laptop sales, which is more of a bundle than “free.”

Instead, Apple is holding its price points while the rest of the industry commoditizes, and is intent on proving what a good price discriminator it is, squeezing the maximum revenue out of every sale. Moving up from 8 GB to 16 GB will cost you $50 for an iPod Nano but $70 for an iPod Touch. Is the memory more expensive? No, but people will pay more.

Still, the $400 price point became $150 after about seven years, and I suspect it won’t be long before there’s an under-$100 device that plays video. So for most teenagers and their parents, under $100 is close to an impulse buy.

Tuesday, September 9, 2008

Kindle-killing vaporware

When I was at MIT, our journalism advisor was Ed Diamond, who was (briefly) named editor of Newsweek. I didn’t have much interaction with him, but there was a story that stuck with me for a long time. Quoting from 16 months ago:

As I recall the story, he asked students to devise an information carrying device that could convey 10,000 (100,000) words with color pictures, be used in a variety of locations including under a tree, on a plane or in a bathtub, and mass produced and sold for only a dollar or two. His prediction has held up 30 years, and it could be another 10-20 years before e-book readers really become a practical replacement (except for the bathtub).
Clearly I was unduly pessimistic. With its magazine subscriptions, the Kindle has the content, although the current product is too heavy. Still, I should have extrapolated the trends to see what miniaturization will bring us in the next five years.

Monday at the Demo conference in San Diego, startup Plastic Logic demonstrated a product that is much closer to Diamond’s ideal. As the local paper reported:
The device, which has not been given a name, has roughly the same cover dimensions, thickness and weight of a typical issue of Newsweek. And like the magazine, it can store hundreds of pages of content.
In addition, the device is flexible — it can be bent (or dropped) like a magazine.

Not surprisingly, some are calling it a Kindle killer (or merely “thinner, less ugly” as Wired put it).

It can be used on an airplane if not a bathtub. The marginal cost will be comparable to a magazine, even if the reader is hundreds of dollars. I’d be curious to ask Ed his thoughts about the new technology, but alas he’s been gone for 11 years.

Still, it has no distribution and no content. As with any vaporware product, the world will change between now and when it ships. It’s a cool technology, but a long way from being a product.

Monday, September 8, 2008

Best of bad options

I normally study IT not GSEs, and thus my posting yesterday was argued from first principles rather than original research.

However, the FT’s economics correspondent Martin Wolf has given it far more thought. Here are a few choice quotes:

As a result, US housing finance has been brought under direct government control and, in the process, the gross liabilities of the US government, properly measured, have increased by $5,400bn (€3,800bn, £3,000bn), a sum equal to the entire publicly held debt and 40 per cent of gross domestic product.
...
Was there an alternative to such measures? I am talking here not of the precise details, but of the broad decision. The answer is No, for two reasons.

First, the institutions were unable to raise the capital they needed to offset the losses on their lending in the collapsing US housing market. ... Second, the liabilities of these enterprises were held widely abroad, particularly by central banks and governments.
Agreeing with yesterday’s point — what goes up must come down — he is very caustic in his conclusions:
What, finally, are the lessons, beyond the obvious one that it is idiotic to believe that the prices of any asset class can only go up? It is that the US unwillingness to recognise that socialised risk demands public control has created not just a scandal, but a gigantic mess.

The US public has ended up with an open-ended guarantee of the liabilities created by supposedly private entities. It is a bad place to be. As Mr Paulson says: “There is a consensus today that these enterprises pose a systemic risk and they cannot continue in their current form.”

Amen to that. At some point, they will have to be broken up and sold off. Given the state of the housing market, that happy day is a long way off.
The one issue that Wolf doesn’t talk about — perhaps because it’s merely a political matter on this side of the pond — is how many hundreds of millions of taxpayer dollars were transferred to politically connected executives and lobbyists seeking to protect their lavish paychecks. Of course, those hundreds of millions are chump change compared to the mismanagement (if not deliberate fraud) wiping out $70b in shareholder equity and sticking taxpayers with $5 trillion in debt.

For my RSS reader friends

Some people read blogs through an RSS reader rather than a web page (I’m one of them). For them, I want to point out postings on Spore (the video game) and two clever DSP applications posted to my other blogs. (One can safely ignore the coconut oil posting).

I still haven’t decided what the appropriate allocation of time and content is between these blogs, and if push comes to shove, how much attention each will get. So readers who want to follow me (rather than the topic) should probably RSS subscribe to all of them — the others are only weekly blogs so the traffic is much less.

Sunday, September 7, 2008

What goes up

One of my blog readers wrote me this morning:

Just read your articles on luxury goods and Steve Jobs.

What prompted you to write the luxury goods article? How about the FNM and FRE take over by the Feds this morning?
OK, I’ll bite.

Today the Feds finally revealed how they plan to clean up the Fannie Mae and Freddie Mac mess. As government bailouts, free market interventions and rewarding losers go, it’s relatively encouraging. The managers are getting fired and the shareholders who enabled their failures are also getting wiped out. If the Feds can turn the firms around, perhaps the government will enjoy the lion’s share of the financial rewards.

Perhaps next time there’s a government guarantee, the employees won’t assume they have a lifetime right to feed at the trough — or shareholders will ask for an honest accounting and accountability out of the top management.

However, there was one line in this morning’s NYT preview story that demonstrated remarkable economic ignorance, even by newspaper standards:
But the plan to bail out the firms will probably do little to stop home prices from falling further. And foreclosures are almost certain to rise.
The problem is, any financial investment has risks. They go up and they go down. If housing prices overshot due to cheap money and recent speculation — or pressure from Congress to make more loans to risky borrowers — then prices still have a ways to fall before the market corrects itself. As someone who’s long in California real estate, this is not a result I seek, but it’s a realistic outcome given the circumstances. And if housing prices have risen too much, then a market correction will make houses more affordable for first-time buyers.

Two types of people buy volatile assets: long term investors and speculators. If there are temporary drops in housing prices, it doesn’t matter to long term investors, because over time — in most parts of the country — housing is a reasonable, tax-advantaged investment.

Some people buy a house hoping for a quick kill — expecting rapid short-term appreciation to bail them out of a purchase they can’t afford. Or they take risks they can’t afford to take. So perhaps next time, people who can’t afford risky real estate investments — or brokers and lenders who should know better — will make decisions more consistent with such risks.

Another bubble about to pop

At the beginning of the year, I gave up on the dead tree Wall Street Journal because a) I thought I didn’t use it and b) it cost too much; to save money, I went online-only. A few weeks later, I quietly went back: it turned out I didn’t read the online version, so when I found the right discount code I reinstated the paper edition for $100/year (plus tax).

On Saturday, Dow Jones launched the first issue of its new luxury magazine called WSJ (originally intended to be called Pursuits). The cover used a clever photo in which a model was wearing the Wall Street Journal.

A copy was included with the newspaper delivered to my driveway Saturday. According to the NYT, not everyone got one:

The Journal starts with a major advantage in that it can offer advertisers the wealthiest readership of any American newspaper. An even more affluent subgroup of subscribers will receive the magazine, Mr. Rooney said, with an average household income of $265,000.

Out of The Journal’s domestic Saturday circulation of about 2 million, 800,000 copies — those sold by subscription or at newsstands in 17 large markets — will include WSJ. In addition, 160,000 copies will be distributed on Fridays overseas. The magazine begins as a quarterly, with plans to go monthly next year.
Apparently WSJ newspaper households have an average income of $253k, so this “more affluent subgroup” isn’t that much more affluent.

Still, whoever does their demographic targeting clearly needs to be demoted or fired. Our household is probably in the bottom income quartile for the newspaper; our middle class tract home might appear to be expensive, but it’s near the median for Santa Clara County. The last page of this month’s magazine talks about “Technologically advanced, great-looking watches,” but I don’t have any use for a $5,000 dive watch, let alone a $17,000 one.

Another dad from my daughter’s school brought his brand new Porsche Carrera to a pizza dinner we were at Saturday night, so I guess such income is in our neighborhood somewhere. Still, this shows the limits of targeting ads based on demographic averages; Amazon (or Comcast or the DMV) probably has a better idea of our income than the WSJ.

The new magazine also reminds me a lot of the Forbes FYI luxury insert (now called ForbesLife). Both the WSJ and Forbes tend to be more free market-oriented than Business Week or Fortune, so perhaps their readership skews Republican and includes more plutocrats.

WSJ is also reminiscent of Forbes FYI in how it represented the surreal conspicuous consumption of the late 1990s, just as the dot-com wealth era was coming to an end. To me, such consumption is the sign of a market top, and apparently I’m not the only one.

Earlier this year, a blog about “the lives and culture of the wealthy” included an article about “The Luxury-Magazine Bubble”. Drawing an analogy to the ad-heavy tech magazines during the dot-com era, the blogger observed:
In the latest wealth boom, luxury magazines became the new Red Herrings. My desk is piled high with dozens of lush, glossy mags that serve as catalogs for the super-rich. …

I’ve said for months that the wealthy are getting hit by this recession just like everyone else. What financial markets giveth, they also taketh away. …

The luxury market and luxury magazines have been built on a myth: that there’s an endless supply of new jet-setters dying for glossy spreads telling them how to spend their millions. David Arnold, an exec at Robb’s parent company, is right to say in the Times article that “if you have assets of 30, 50, 100 million dollars, even a 5 percent loss doesn’t really impact their lifestyle.”

But how many Americans are worth that much? Maybe 50,000 at most? Surely not enough to support 20 new luxury magazines and all the mass-luxury companies that fund them. …

You can’t blame the luxury-magazine executives for putting on a happy face during the coming shakeout. That’s their job. But it doesn’t mean we have to believe them.
That blog entry was written back in March, but it clearly applies today. Who wrote it?
The blog is written by Robert Frank, a senior writer for the Wall Street Journal and the author of Richistan: A Journey Through the American Wealth Boom and the Lives of the New Rich, published in June.
The entire commentary can be found at blogs.wsj.com.

Saturday, September 6, 2008

Dell catches a 10-year-old trend

On Thursday, Dell’s 10-Q report suggested that it’s getting ready to dump some or all of its factories. The development was noted in a front page Wall Street Journal story on Friday and picked up by the NYT and FT this morning.

The irony of Dell outsourcing the manufacturing that has formed its historic source of competitive advantage is hard to overstate. To quote from the 10-Q:

We were founded on the core principle of a direct customer business model which included build to order hardware for consumer and commercial customers. The inherent velocity of this model, which included highly efficient manufacturing and logistics, allowed for low inventory levels and the ability to be the industry leader in selling the most relevant technology, at the best value, to our customers.
But the key paragraph of the 10-Q announces the intended reversal:
We are actively reviewing all aspects of our logistics, supply chain, and manufacturing footprints. This review is focused on identifying efficiencies and cost reduction opportunities while maintaining a strong customer experience. Two examples of this include our announcement on March 31, 2008, that we will close our desktop manufacturing facility in Austin, Texas, and the sale of our small package fulfillment center in the second quarter of Fiscal 2009. … In addition, we anticipate taking further actions to reduce total costs in design, materials, and operating expenses.
The WSJ counts the factory sale as a done deal, while the other reports are more tentative. All attribute the change to increasing cost pressures (i.e. commoditization) of the PC industry.

The WSJ notes that Dell had already moved to outsource laptop production to Taiwanese makers such as Foxconn. While IBM once made its own laptops, now all remaining US laptop makers (i.e. HP and Apple) have their laptops made by Taiwanese companies. Gateway solved its laptop supply problem by selling itself last year to Acer (of Taiwan).

Actually, Dell’s decision to sell its factories and switch to contract manufacturing (CM) is a decade behind the times. During my postdoc in 2000-2001, I studied Apple’s supply chain revitalization, including its decision to sell all of its factories from 1996-1999. HP also began shedding its factories and switching to CMs in this same period.

The FT notes that Dell’s new supply chain guru is Michael Cannon, former CEO of Solectron (one of the earliest CM firms). This presumably played a role in Dell being willing to let go of its historic source of advantage.

But it does raise the question: if Dell can’t gain competitive advantage from manufacturing (and its direct-build IT logistics model has been widely copied), what will it do to avoid commoditization? The temptation will be to move upmarket to a differentiated product, but Dell has failed in its previous attempts to do so, because differentiation contradicts Dell’s corporate culture (and customer reputation) as the low-cost leader.

On the other hand, you can’t be the low-cost leader if you have the same cost structure as all your competitors. Dell will be using the same component parts, assembled in the same sort of CM factories, as HP (and Acer and Lenovo and Toshiba). Other than the occasional feature on a new laptop, there’s negligible differentiation among the top five, so their similar features and cost structures, in the face of softening demand, are a recipe for further price wars.

Photo of a Dell factory from the BBC

Friday, September 5, 2008

New EE/RE blog

Every year or so, I teach a strategy elective for SJSU MBA students that is entirely about the issues facing high-tech companies. Although I’ve occasionally brought in biotech, those who know me and my resume would not be shocked to hear that it’s pretty much all IT, all the time.

The last few times I taught the class, students have been interested in pursuing alternative energy or other cleantech topics (such as plug-in hybrids) as their term project. However, given the structure of the class, it’s hard to tie the project to the class concepts (and thus get a good grade) without course readings that bear on the topic.

So this year I vowed to push outside my comfort zone and do something to meet this student demand. For Fall 2008, I have added a week on alternative energy, starting from a very comprehensive Stanford case (actually, industry note) on the energy industry. I also hope to write my own case on photovoltaic energy, with help from a local tech entrepreneur.

As part of this journey of teaching, research and personal discovery, I decided to do what any good blogger would do: create a new blog. My hope is to stay focused on energy efficiency and renewable energy, since many of those technologies will allow me to stay in the EE (rather than chemistry) technical domain.

However, what I think is new is the focus on the economics (i.e. business viability) of these topics. I believe this is relatively under-covered, although I’ve found a few sites that talk about these issues (sometimes). That also means no more cleantech topics on this blog, because I think the two blogs serve separate (if overlapping) audiences.

I expect that blog to be relatively low volume, perhaps a post every week or two, rather than the 3x-5x/week of this blog. I have a fixed amount of time available to blog, so I will need to allocate it carefully to those topics where I can make the most original observations (in a reasonable amount of research time).

Thursday, September 4, 2008

Steve and I have been wrong

I bought my first Mac (the 128k variety) in January 1984. As part of the rationalization for spending $3,000 for a computer toy, I promised my wife I’d find a way to make money off of it — so I wrote a book on Basic programming. (Which wasn't published, but that’s another story).

Of course the Mac reflected Steve’s uncompromising vision of what “insanely great” was. One question on which he was notoriously inflexible was the cooling fan. The advantage of a fan was that it would keep the computer from overheating and malfunctioning; the disadvantage was that it made noise.

The original Mac 128/512 and Mac Plus all lacked a cooling fan, and I found the quietness of the computer appealing, at least at first. (After Steve left Apple, the Macintosh SE and SE/30 in the same case did have a fan.)

However, writing my book nights and weekends in Southern California, I found in the summer of 1984 that without a fan, once the room temperature reached about 85° or 90°, the computer would malfunction in unpredictable ways. To be able to work under typical summer conditions, I cobbled together a solution with velcro and a fan from a surplus parts store, and later bought one of the sleek add-on products sold by third parties.

Fast forward 24 years and probably 20 Macs later. Between 4 and 6 this afternoon, my MacBook Air was acting strangely. I kept rebooting and closing applications, but it would take 15 minutes to do something that should take 30 seconds. I finally gave up and did something else.

On tonight’s TV news, I found the answer: record temperatures from our latest heat wave. The high today in San José was 99°, and 101° at the reporting station closest to my house. I’m guessing it was above 90° inside my home office today. (Of course there’s A/C at work, but none at home.)

My use of the Air over the past six months has shown a consistent pattern: when the computer gets hot, it seems to slow down, much like older power management schemes that reduce power (and clock cycles) to the CPU to cut its heat output. So today the computer slowed itself down to avoid overheating. Months ago I put the Mac on a box roughly 4"x4"x1" to improve the cooling and heat transfer under the case, but that obviously wasn’t enough.
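The behavior is easy to picture in code. Below is a minimal sketch in TypeScript (certainly not Apple’s actual power management code, just the general shape of such a throttling feedback loop); readTempC and setClockMHz are hypothetical stand-ins for platform sensor and driver hooks.

```typescript
// A toy thermal throttling loop: not any vendor's implementation, just
// the general idea of trading clock speed for heat.
const MAX_TEMP_C = 90;                          // throttle above this
const CLOCK_STEPS_MHZ = [1600, 1200, 800, 400]; // slower = cooler

let step = 0; // index into CLOCK_STEPS_MHZ; 0 = full speed

// readTempC and setClockMHz are hypothetical platform hooks.
function regulate(readTempC: () => number,
                  setClockMHz: (mhz: number) => void): void {
  const temp = readTempC();
  if (temp > MAX_TEMP_C && step < CLOCK_STEPS_MHZ.length - 1) {
    step++; // too hot: drop to the next slower clock
  } else if (temp < MAX_TEMP_C - 10 && step > 0) {
    step--; // comfortably cool: claw back some speed
  }
  setClockMHz(CLOCK_STEPS_MHZ[step]);
}
```

Run a loop like that all afternoon on a 99° day with almost no airflow, and the machine stays pinned at its slowest clock, which matches what I saw.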

Mac users have been complaining (like Steve) about fan noise for years, including on the MacBook Air. Still, the computer needs an aggressive variable-speed fan that goes full blast when it’s needed.

Apple claims

  • Operating temperature: 50° to 95°F (10° to 35°C)
  • Storage temperature: -13° to 113°F (-25° to 45°C)
but I don’t buy it. It’s not much use having a portable computer that gives up before I do.

Wednesday, September 3, 2008

Final requiem for AT&T

Tuesday’s paper reported the final death of AT&T. It had been dying a slow and lingering death since the end of the dot-com bubble killed the demand for telecommunications equipment.

When I say AT&T, I don’t mean SBC, the San Antonio-based Baby Bell that bought a struggling long distance company in 2005, and with it, rights to call itself “AT&T.”

No, I mean the most innovative part of AT&T prior to the 1984 divestiture: the part that brought us the transistor, cellular phones, information theory, communications satellites, lasers, electronic switching, Unix, C, C++, and a host of other key technologies. In short, the part that housed Bell Telephone Laboratories aka Bell Labs.

After 1984, as AT&T kept getting reorganized, Bell Labs went with Western Electric, which became Lucent in the 1996 trivestiture. In his 2003 book Bell Labs: Life in the Crown Jewel, Narain Gehani does a great job of capturing the spirit of Bell Labs during its last lingering days, a commendable spirit despite being obviously long past its 1950s-1960s peak.

In 2006, Lucent disappeared from the face of the earth when it was acquired by the French company Alcatel, née Compagnie Générale d'Electricité. The new company was headquartered in Paris, and despite claims of a merger of equals, the French side was clearly dominant.

While a certain amount of power sharing between the two countries and cultures has been maintained since the merger, that ended Tuesday when the company announced its new CEO and chairman. The Wall Street Journal noted that the two appointments are a clear indication that the Francophone culture has become dominant. As Heidi Moore noted:

The choices also signal that the combination wasn’t a merger in the first place: it was a takeover, by Alcatel, and future leaders ought not to forget it. It also shows the dangers of not working out such details when negotiating a cross-border deal in the first place.
Moore noted that former Lucent CEO Patricia Russo refused to learn French. Advice to company executives: if you don’t want to learn French, don’t sell your company to French owners.

Moore continued on:
We have seen this movie before, most famously in the 1998 combination of Daimler and Chrysler. From the beginning there was an uneasy fit between Mercedes’ upscale, expensive parts and processes and Chrysler’s more workaday functions. There also was the cultural rift: in the weeks before the combined company took on its new NYSE listing, executives bickered over whether to use business cards in American sizes or European sizes. (The European sizes won).

The deal, initially billed as a merger of equals, clearly came to be seen as a takeover by the company that had the stronger will and the more-forceful culture: Daimler.
In her list of acquisitions, Moore forgot to mention the 2002 HP-Compaq “merger of equals,” now run by HP with the HP brand and HP headquarters.

I don’t know why any reporter or analyst ever buys into the fiction of a “merger of equals.” They don’t exist: one side wins and the other loses. Occasionally the surviving company is not the one that put up the money, as with the 1996 reverse acquisition of Apple by NeXT that brought Steve Jobs and his management team in to turn around the company.

When companies are pretending to merge, the battle may be resolved over years rather than up front. However, in any acquisition one set of values, systems, culture and executives will be left standing when the integration is finally resolved. The only exception is when the alien body is spit out, as with Daimler’s 2007 de-acquisition of Chrysler.

This is why small acquisitions seem to me the least troubling: everyone knows which culture and leaders will survive, so no time is wasted pretending that the acquiree will play a meaningful role in defining the overall corporate strategy or culture.

Tuesday, September 2, 2008

Google, the integrated software company

Everyone has been going gaga over Google’s new browser, Chrome. I actually had to work today (teaching classes), so since I’m coming to this late, let me point to other coverage. The announcement was made on the Google blog, which referenced the Google comic book (officially released on Google Books but more readably presented by Blogoscoped).

Om Malik (as is often the case) has an incisive summary of the new software’s features, while Walt Mossberg and the Merc each have a first test drive.

There’s lots of speculation about Chrome being aimed at Firefox rather than Internet Explorer. As of August, market share was 72.2% for IE, 19.7% for Firefox, 6.4% for Safari, and 0.7% for Opera and Netscape. (Of Safari’s share, 0.3 points come from the iPhone, up 58% from July to August.) Kara Swisher (of the WSJ-owned site AllThingsD) is skeptical about Google’s chances of winning a browser war:

Google’s in no danger of foundering, given its search business still dominates and quite profitably, of course.

But, for all the halo of that, Google has also never had any other similar true home run with any of the other products it has released so far.
Of course, Chrome is the Android browser, since (as with Android) it’s Google’s browser based on WebKit. Apple already has a browser based on WebKit: it’s called Safari, and it’s available for Windows.

While Chrome has more features than Safari, the main difference appears to be the new JavaScript virtual machine (called V8), built by Google’s virtual machine development group in Denmark to replace SquirrelFish, the most recent WebKit JavaScript interpreter. The key advance (according to the comic book) is the multithreaded VM.

Still, why does Google have to go off and make its own browser, given that Apple is already making one? Why not work together? (After all, Google’s CEO is on Apple’s board). Google’s hubris (some would say arrogance) en route to Total World Domination seems to require its go-it-alone strategy, just as with Android it’s calling all the shots and not cooperating with other embedded Linux efforts.

The answer is that this is not about browser wars: this is what strategy professors call “multipoint competition,” in this case over competing hopes for Total World Domination. Microsoft dominated the software industry in the 1990s, and Google is determined to claim that mantle within 5 years.

This means that any software Microsoft offers, Google will eventually offer as well. If it can overcome its hubris, perhaps Google will partner with other members of the anti-Microsoft camp to deliver it. I suspect it will always choose weaker partners (like Yahoo or Sun) to keep control, rather than strong ones (such as Apple or IBM), at least until that time later this century when it abandons its hopes for Total World Domination.

By competing with Microsoft, Google is en route to becoming a fully integrated software producer, one that happens to deliver more (but not all) of its software as a network service rather than as a client-based application. This is a fully integrated stack, from its Android variant of Linux through Chrome to Google-hosted services, rather than the vertical integration of inputs and outputs that characterized 20th century innovation leaders.

Google’s mantra (repeated at the official Chrome intro) is that the browser is not a browser, but a computing platform. This is an update (15 years later) of Sun’s old mantra “the network is the computer.” Unlike with Sun, Google will own both sides of the network used by hundreds of millions (if not billions) of consumers: the client browser, applets that run on the client, and the server stack that delivers those applets and services over the Internet.

Update: It is now clear that Chrome and Android have separate forks of the WebKit open source project, even if someday they may be merged back together.

Monday, September 1, 2008

Evaluating Ajax alternatives

Ajax has been a central part of Google’s efforts to design richer, client-independent Internet applications. I think by any measure it’s been a success, in part fueled by (and fueling) the success of WebKit, the open source HTML rendering engine developed by Apple engineers.
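For anyone who hasn’t looked under the hood, the core trick is small: use JavaScript to fetch data in the background and update part of the page in place, without a full page reload. Here’s a minimal sketch (written in TypeScript; the URL and element id are invented for illustration):

```typescript
// The basic Ajax pattern: an asynchronous XMLHttpRequest plus an
// in-place DOM update, instead of a full page round trip.
function refreshQuote(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/quotes/latest", true); // true = asynchronous
  xhr.onreadystatechange = () => {
    // readyState 4 = request complete; status 200 = HTTP success
    if (xhr.readyState === 4 && xhr.status === 200) {
      const el = document.getElementById("quote");
      if (el) el.textContent = xhr.responseText; // update just this element
    }
  };
  xhr.send();
}
```

Everything else (toolkits, compilers, widget libraries) exists mainly to generate and manage this sort of plumbing across browsers.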

In the August issue of IEEE Computer, George Lawton authored a three-page summary, “New Ways to Build Rich Internet Applications.”

The article lists two major Ajax development tools: the Google Web Toolkit and Microsoft ASP.Net Ajax. It also lists the non-Ajax alternatives: Adobe solutions (Flash, AIR and Flex), Microsoft Silverlight and Sun’s JavaFX.

I haven’t been a code-level engineer since Palomar closed in 2004, and I haven’t had occasion to create any Internet apps other than many pages of HTML and a few small fragments of Perl and Javascript. However, I found the article provided a helpful overview of the competing RIA technologies, with no obvious axes to grind.

The one odd thing was a single passing reference to Eclipse. Its Rich Ajax Platform reached version 1.0 eleven months ago, and it leverages the existing Eclipse Rich Client Platform architecture that has been the basis of many Eclipse applications over the past five years.