Building an Open “Cyberinfrastructure”
Monday and Tuesday I was at an NSF-sponsored conference at the National Academies of Science and Engineering. Because it was in DC, it had a real mouthful of a name — “Designing Cyberinfrastructure for Collaboration and Innovation: Emerging Frameworks and Strategies for Enabling and Controlling Knowledge” — until the organizers realized the name was a distraction and shortened it.
The title and the cyberinfrastructure jargon made it sound more intimidating than it was: although participants were expected to use the cyberinfrastructure buzzword (tied to an NSF program of the same name), it contained an interesting assortment of 12-minute talks about real problems of openness from a cross-section of views. Based on their printed bios, the 47 scheduled speakers came from academia (44%), government (US plus UK plus OECD, 26%), companies (17%) and nonprofits (13%).
Most of the talks related directly to IT, although a few focused on IP and innovation issues related to the sciences (mainly biopharma).
Major Themes
There were three major themes:
- The Need for Cyberinfrastructure. Why do we need cyberinfrastructure? After releasing the Atkins report in 2003 — named for the blue-ribbon panel led by Dan Atkins — the NSF was already sold on the idea. This part was mainly an introduction for the rest of us who haven’t been thinking about the problem.
- How Infrastructure is Different. The dictionary defines infrastructure as shared facilities necessary for economic activity. Many of the speakers offered intriguing examples of how infrastructure enables research, innovation and economic activity.
- Partitioning Between Public Good and Private Gain. My own talk fit in this category, as did the many talks about the IP system. It was clear that infrastructure is not the same as taxpayer-supported: the government pushed rural electrification and telephone deployment (or the Internet) even though facilities were owned by private firms.
What’s different is the scale of data transmission and storage required for things like patient trials, medical imaging and digital astronomy. Mark Ellisman of UCSD (founder of the Biomedical Informatics Research Network) talked about 5 petabytes (that’s about 5 million gigabytes) for a single 3D image of a rodent organ. Imagine what movies would require.
What Was Interesting
It’s hard to capture 16 hours of meetings in 750 words, which is why I have separate posts on IP (coming soon), a general question of “what is open,” and my own talk about open standards/open source/open innovation.
My favorite talks were (in no particular order):
- Siobhán O’Mahony (Harvard) and Fiona Murray (MIT), with 2 interesting case studies of how private innovation is getting more public and vice versa. They concluded that there’s a need to create boundary organizations if resources (technology, IP, other assets) are going to be distributed between shared (public) and private interests.
- Shane Greenstein (Northwestern), excerpted from his in-progress book on the history of the U.S. internet service provider (ISP) industry. He talked about how individual (often small) firms performed economic experiments, and how these economic (i.e. business) innovations spread and thus were cumulative across the industry.
- Steve Jackson (Michigan): once an infrastructure is created, there is an “inside,” an “outside” and often “orphans.” Certainly the decisions made in creating the infrastructure define winners and losers.
- Brett Frischmann (Loyola Chicago) talked about how infrastructure is a sharable generic input into a wide variety of output, and how the value of infrastructure is actually realized downstream by consumers of outputs.
- Sara Boettiger (PIPRA) whose talk I mentioned earlier.
The slides for some of the talks have been posted and more will be posted later in the week. The individual papers are likely to end up in an online journal in a few months.
Technorati Tags: biotech, infrastructure, intellectual property, Internet, open standards