Showing posts with label infrastructure. Show all posts

Wednesday, June 27, 2007

Openness in First Monday

This afternoon’s e-mail brought notice from Brian Kahin of publication of his special issue of the online academic journal First Monday. The special issue captures the papers presented during the DCCI conference at the National Academies in January. Included in the issue is my own paper, “Seeking Open Infrastructure: Contrasting Open Standards, Open Source and Open Innovation,” which elaborates on my own earlier talk.

Why should anyone care? Imagine if in 1992 a bunch of smart academics had talked about what they thought the Internet would look like. Many of them would have been wrong, but some would have spotted key trends 3, 5 or 10 years in advance. If Kahin’s vision is correct, this issue describes how the next-generation digital infrastructure will evolve over the coming decade.


Saturday, February 3, 2007

Is Standardization Broken?

As I’ve said in my research and on this blog, open product compatibility standards provide the essential interoperability for the creation and use of most IT products. Of course, openness is a matter of degree: even the most proprietary standard usually has some degree of openness to allow third party use and complementary products such as software applications.

When creating standards for a given technology, firms face four competing imperatives:

  • technical interoperability, the nominal (and usually easiest) goal of a standardization effort;
  • creating value, both through features of the technology, and by encouraging a supply of products that deliver or build upon that technology;
  • capturing value, since if the various producing firms don’t make money there’s not much point in getting involved; and
  • timeliness, because if the standard comes too late (as with 56K modems or 802.11n), firms will go ahead and ship products without a standard.
On Tuesday, Carl Cargill delivered an impassioned plea at the DCCI conference arguing that the standardization regime as we know it will be unable to meet the demands of the 21st century IT industry. (His slides are not yet posted, so for now I’m working off my notes and a private copy of the slides.) A sample quote:
We’re post-industrial society and we have a pre-industrial standards regime … We still have the basic model for ISO as we had in 1945.
Cargill focused on standards consortia, which he described as "where we get together and help each other". Some of his major points:
  • Consortia are increasingly balkanized, with declining participation.
  • Standards are no longer an open effort to encourage interoperability but an attempt to block rivals. [NB: Pam Jones of GrokLaw recently used documents from a Microsoft court case to show how proprietary extensions eliminate interoperability and buyer choice.]
  • Standards are largely ignored by teachers, researchers and consultants.
  • China uses standards as an industrial policy, Europe for social and political goals, but US policy largely ignores them.
While the railroads and Gutenberg had 50 years to get their act together, by Cargill’s estimate, we have only 5 years to fix standardization. If cooperative standardization fails, the alternative is proprietary standards.

Some readers might say Carl who? Cargill did standards at DEC, Netscape and now is Chief Standards Officer of Sun. He’s been the most visible proponent of U.S. IT standardization for more than a decade, having testified before Congress, written various books and articles, funded an online library of standards research, and sponsored a series of provocative industry workshops on standardization issues. We first met at the 1999 edition of SIIT, the main academic conference on IT standards.

From any other source, I might accuse the speaker of having a lack of perspective: after all, cooperative standardization has often been about jockeying for position and digging in your heels if you don’t get what you want. But Carl’s no Pollyanna: he not only lived through the Unix wars, he was a field commander for one side.

If there’s a canary in the coal mine, it’s last year’s IEEE 802.20 mess, where Intel and others aligned with the competing 802.16 (WiMax) standard sought to block a standard backed by Qualcomm. In an unprecedented move, the IEEE suspended the committee and its leaders until it straightened the mess out.

Tim Simcoe noted how firms that try too hard to capture value won’t create any. I’d like to think that this would lead firms to self-correct, but obviously there are cases where such self-regulation has failed. One cause may be that a firm prefers the status quo to a successful standard; as the lawyers like to say, cui bono?

Update February 4 (08:45 a.m.) — Carl’s slides have been posted to the program.


Friday, February 2, 2007

The GPL Won’t Solve This NIH Problem

I almost called this “NIH — Not Just for Programmers Anymore,” “Physiologists Who Need a GPL” or perhaps “Forking the Human Body.” The first choice is particularly fun because NIH has a double meaning, one for the IT sector and another for health care.

There was an interesting tale of forking at this week’s DCCI conference at the National Academies of Science and Engineering. The story was told by Brian Athey, a bioinformatics professor at the University of Michigan. Because his slides are not yet posted, I need to reconstruct the story from my sketchy notes.

[Head cross-section] Back in the early 1990s, the federal National Library of Medicine began its Visible Human Project. There’s a nice article at GE explaining how the various 3D imaging technologies (plus cadaver slices) were used to generate the dataset. (For the IT types, the GE study talks about using the big iron of the day — SGI dual 150 MHz screamers). As Athey told it, after the NLM had a virtual man (and then woman), everyone else wanted one, including DARPA with its Virtual Soldier, NASA with its Virtual Astronaut, and others I couldn’t type quickly enough. Even Ray Kurzweil wants to get involved.

If I understood his point, there’s a lot of re-inventing the wheel, everyone creating their own human by starting over from scratch, rather than doing cumulative innovation where one researcher builds upon (and contributes back to) the infrastructure created by others. BTW, if you go to the NLM website, their license even has a form of reciprocity.

It’s yet another reminder to the GPLniks and others who believe that compulsory sharing is the answer to all the world’s problems. Take embedded Linux. Even if you can force people to share changes, that doesn’t mean that those changes will be used: it took several years to get essential embedded Linux features incorporated into the mainstream kernel. For almost 15 years, the various BSD variants have been (without compulsion) sharing their respective code, but still pursuing separate projects — a canonical example of the exaggerated fear of forking held by many GPLniks.

Sometimes forking is justifiable, if, for example, different efforts have irreconcilably different goals. A better choice than forking would be a modular design that allows meeting goals via incremental improvements rather than re-implementation (a pet peeve of mine as a software engineer and manager for more than 25 years). In other cases, “duplicative” investment harnesses competition as a way to choose the best solution from a range of possibilities.

But in many cases, it’s merely pride — i.e. NIH (Not Invented Here) — that leads to forking. Fortunately for medical research, the other NIH (National Institutes of Health) directly or indirectly funds the bulk of U.S. public research. So if the NIH says “no” to NIH — and yes to cumulative innovation — then there’s hope for at least one sector to channel resources towards areas that do some good.


Wednesday, January 31, 2007

Building an Open “Cyberinfrastructure”

Monday and Tuesday I was at an NSF-sponsored conference at the National Academies of Science and Engineering. Because it’s in DC, it had a real mouthful of a name: “Designing Cyberinfrastructure for Collaboration and Innovation: Emerging Frameworks and Strategies for Enabling and Controlling Knowledge,” at least until they realized it was a distraction and shortened it.

The title and the cyberinfrastructure jargon made it sound more intimidating than it was: although participants were expected to use the cyberinfrastructure buzzword (tied to an NSF program of the same name), it contained an interesting assortment of 12-minute talks about real problems of openness from a cross-section of views. Based on their printed bios, the 47 scheduled speakers came from academia (44%), government (US plus UK plus OECD, 26%), companies (17%) and nonprofits (13%).

Most of the talks related directly to IT, although a few focused on IP and innovation issues related to the sciences (mainly biopharma).

Major Themes
There were three major themes:

  • The Need for Cyberinfrastructure. Why do we need cyberinfrastructure? After releasing the Atkins report in 2003 — named for the blue-ribbon panel led by Dan Atkins — the NSF was already sold on the idea. This part was mainly an introduction for the rest of us who haven’t been thinking about the problem.
  • How Infrastructure is Different. The dictionary defines infrastructure as shared facilities necessary for economic activity. Many of the speakers offered intriguing examples of how infrastructure enables research, innovation and economic activity.
  • Partitioning Between Public Good and Private Gain. My own talk fit in this category, as did the many talks about the IP system. It was clear that infrastructure is not the same as taxpayer-supported: the government pushed rural electrification and telephone deployment (or the Internet) even though facilities were owned by private firms.
As Yogi Berra would say, this was like déjà vu all over again. Organizer Brian Kahin noted that the issues of cyberinfrastructure today resemble those of 1994 with plans for the NII (a term that Al Gore and other policy wonks were then using to refer to what became the Internet). Then at Harvard’s Kennedy School, among his various efforts Brian hosted a 1996 conference and published a 1997 book on global NII policy, which is where I first met him. The issues of creating a digital communications infrastructure are similar to those of a decade ago: architecture, implementation, funding and use.

What’s different is the scale of data transmission and storage required for things like patient trials, medical imaging and digital astronomy. Mark Ellisman of UCSD (founder of the Biomedical Informatics Research Network) talked about 5 petabytes (that’s about 5 million gigabytes) for a single 3D image of a rodent organ. Imagine what movies would require.
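To make that scale concrete, here’s a back-of-the-envelope sketch (my own arithmetic, not from Ellisman’s talk; the link speeds are hypothetical) of how long moving one such 5 PB image would take over various network links:

```python
# Rough illustration: days needed to move a 5 PB dataset over a given link.

PETABYTE = 10**15  # bytes (decimal convention, matching "5 million gigabytes")

def transfer_days(size_bytes: float, link_bps: float) -> float:
    """Days to move size_bytes over a link of link_bps (bits per second)."""
    return (size_bytes * 8) / link_bps / 86_400  # 86,400 seconds per day

image = 5 * PETABYTE
for label, bps in [("100 Mbps", 1e8), ("1 Gbps", 1e9), ("10 Gbps", 1e10)]:
    print(f"{label}: {transfer_days(image, bps):,.0f} days")
```

Even a dedicated 1 Gbps pipe would need well over a year of continuous transfer, which is why the data-movement problem dominates the cyberinfrastructure discussion.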

What Was Interesting
It’s hard to capture 16 hours of meetings in 750 words, which is why I have separate posts on IP (coming soon), a general question of “what is open,” and my own talk about open standards/open source/open innovation.

My favorite talks were (in no particular order)
  • Siobhán O’Mahony (Harvard) and Fiona Murray (MIT), with 2 interesting case studies of how private innovation is getting more public and vice versa. They concluded that there’s a need to create boundary organizations if resources (technology, IP, other assets) are going to be distributed between shared (public) and private interests.
  • Shane Greenstein (Northwestern), excerpted from his in-progress book on the history of the U.S. internet service provider (ISP) industry. He talked about how individual (often small) firms performed economic experiments, and how these economic (i.e. business) innovations spread and thus were cumulative across the industry.
  • Steve Jackson (Michigan): once an infrastructure is created, there is an “inside,” an “outside” and often “orphans.” Certainly the decisions made in creating the infrastructure define winners and losers.
  • Brett Frischmann (Loyola Chicago) talked about how infrastructure is a sharable generic input into a wide variety of output, and how the value of infrastructure is actually realized downstream by consumers of outputs.
  • Sara Boettiger (PIPRA) whose talk I mentioned earlier.
Carl Cargill (Sun) also spoke forcefully about how the standards system is broken, while others talked about the issue in less apocalyptic terms. Because standardization is a central issue of openness in the IT industry, I’ll summarize these arguments later.

The slides for some of the talks have been posted and more will be posted later in the week. The individual papers are likely to end up in an online journal in a few months.


Monday, January 29, 2007

WiFi trapped by its own success?

WiFi (aka 802.11a, b, g) has been a tremendous success. In fact, given its modest goals as a way to connect handheld computers in a warehouse, its widespread adoption in every laptop and an increasing number of PDAs and cell phones is remarkable.

If anything, it’s been too successful. Too successful, you say? Isn't that like being too rich or too thin?

The problem is that a large installed base creates an upward compatibility constraint that can be irresistible. Inertia for an existing standard is the cumulative effect of the number of customers times the individual switching costs (plus producer-related switching costs — in this case the base station and chip makers). As Brian Dipert of EDN reports, the committee took its time in standardizing, and meanwhile various greedy and impatient vendors shipped so many “draft 802.11n” products that no one would vote for a final standard that was incompatible with all the nonstandard product in the field.
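The inertia idea above can be sketched as a toy calculation (my own formalization, not from Dipert’s article; all the dollar figures and counts below are purely hypothetical):

```python
# Toy model of installed-base inertia: the total cost of abandoning an
# existing (even draft) standard is the number of customers times each
# customer's switching cost, plus the producers' own switching costs.

def standard_inertia(n_customers: int, customer_switch_cost: float,
                     producer_switch_costs: list[float]) -> float:
    """Total cost of walking away from the installed base."""
    return n_customers * customer_switch_cost + sum(producer_switch_costs)

# Hypothetical numbers: a few million draft-11n cards at $50 each to
# replace, plus chip and base-station makers' redesign costs.
inertia = standard_inertia(3_000_000, 50.0, [20e6, 15e6])
print(f"${inertia:,.0f}")
```

Even with made-up inputs, the point is visible: once the customer term grows large enough, no incompatible final standard can win a vote.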

Meanwhile, George Ou has a provocative post where he argues that the 802.11n standardization committee wimped out, deciding to create something that's not really all that much better than 802.11g. As Ou tells it, the problem was that rather than spend a few extra bucks (initially) on a chip that also supported 5 GHz, they stuck with the crowded 2.4 GHz band. That band supports only 3 (or 4) simultaneous channels and is already congested, so (my reading of it is) unless you’re on a deserted mountaintop you’ll never see the claimed 100 Mbps throughput.

If I were the Enhanced Wireless Consortium, when the final standard gets blessed I’d get the press some sample units to demonstrate actual performance.
