What's true for the users of networks is doubly so for the producers of the goods that create them. From the perspective of a producer of a good that shows strong network effects, such as a word processing program or an operating system, the optimal position is to be the company that owns and controls the dominant product on the market. That ownership and control would probably be maintained by means of intellectual property rights, which are, after all, the type of property rights one finds on networks. The value of that property depends on those positive and negative network effects. This is the reason Microsoft is worth so much money. The immense investment in time, familiarity, legacy documents, and training that Windows or Word users have provides a strong incentive not to change products. The fact that other users are similarly constrained makes any change difficult to coordinate. Even if I change word processor formats and go to the trouble of converting all my documents, I still need to exchange files with you, who are similarly constrained. From a monopolist's point of view, the handcuffs of network effects are indeed golden, though opinions differ about whether or not this is a cause for antitrust action.
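The lock-in arithmetic behind those golden handcuffs can be made concrete. As a rough sketch only — the quadratic "Metcalfe" valuation and the function names below are illustrative assumptions, not anything from the text — suppose a network's value tracks the number of possible pairwise connections among its users:

```python
# A toy model of network effects: assume, as Metcalfe's rule of thumb does,
# that a network's value tracks the number of possible pairwise connections
# among its users: n * (n - 1) / 2.

def network_value(users: int) -> int:
    """Distinct user-to-user connections -- a crude proxy for network value."""
    return users * (users - 1) // 2

def cost_of_switching(incumbent_size: int, defectors: int) -> int:
    """Total connection value destroyed when `defectors` leave the incumbent
    network for a brand-new rival: they keep only the connections among
    themselves, and everyone loses the links that crossed the split."""
    before = network_value(incumbent_size)
    after = network_value(incumbent_size - defectors) + network_value(defectors)
    return before - after

# Doubling the users roughly quadruples the value -- the "golden handcuffs":
print(network_value(100))          # 4950 connections
print(network_value(200))          # 19900 -- about 4x, not 2x
print(cost_of_switching(100, 5))   # 475 connections lost to 5 defectors
```

On this stylized model, a handful of defectors destroy far more value than they take with them, which is why — as the paragraph above puts it — even a user willing to convert her own documents still hesitates so long as everyone she exchanges files with stays behind.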

But if the position that yields the most revenue is that of a monopolist exercising total control, the second-best position may well be that of a company contributing to a large and widely used network based on open standards and, perhaps, open software. The companies that contribute to open source do not have the ability to exercise monopoly control, the right to extract every last cent of value from the network. But they do have a different advantage: they get the benefit of all the contributions to the system without having to pay for them. The person who improves an open source program may not work for IBM or Red Hat, but those companies benefit from her addition, just as she does from theirs. The system is designed to continue growing, adding more contributions back into the commons. The users get the benefit of an ever-enlarging network, while the openness of the material diminishes the lock-in effects. Lacking the ability to extract payment for the network good itself—the operating system, say—the companies that participate typically get paid for providing tied goods and services, the value of which increases as the network does.

I write a column for the Financial Times, but I lack the fervor of the true enthusiast in the "Great Game of Markets." By themselves, counterintuitive business methods do not make my antennae tingle. But as Larry Lessig and Yochai Benkler have argued, this is something more than just another business method. They point us to the dramatic role that openness—whether in network architecture, software, or content—has had in the success of the Internet. What is going on here is actually a remarkable corrective to the simplistic notion of the tragedy of the commons, a corrective to the Internet Threat storyline and to the dynamics of the second enclosure movement. This commons creates and sustains value, and allows firms and individuals to benefit from it, without depleting the value already created. To appropriate a phrase from Carol Rose, open source teaches us about the comedy of the commons, a way of arranging markets and production that we, with our experience rooted in physical property and its typical characteristics, at first find counterintuitive and bizarre. Which brings us to the next question for open source. Can we use its techniques to solve problems beyond the world of software production?

In the language of computer programmers, the issue here is "does it scale?" Can we generalize anything from this limited example? How many types of production, innovation, and research fit into the model I have just described? After all, for many innovations and inventions one needs hardware, capital investment, and large-scale, real-world data collection—stuff, in its infinite recalcitrance and facticity. Maybe the open source model provides a workaround to the individual incentives problem, but that is not the only problem. And how many types of innovation or cultural production are as modular as software? Is open source software a paradigm case of collective innovation that helps us to understand open source software and not much else?

Again, I think this is a good question, but it may be the wrong one. My own guess is that an open source method of production is far more common than we realize. "Even before the Internet" (as some of my students have taken to saying portentously), science, law, education, and musical genres all developed in ways that are markedly similar to the model I have described. The marketplace of ideas, the continuous roiling development in thought and norms that our political culture spawns, owes much more to the distributed, nonproprietary model than it does to the special case of commodified innovation that we think about in copyright and patent. Not that copyright and patent are unimportant in the process, but they may well be the exception rather than the norm. Commons-based production of ideas is hardly unfamiliar, after all.

In fact, all the mottos of free software development have their counterparts in the theory of democracy and open society; "given enough eyeballs, all bugs are shallow" is merely the most obvious example. Karl Popper would have cheered.14 The importance of open source software is not that it introduces us to a wholly new idea. It is that it makes us see clearly a very old idea. With open source the technology was novel, the production process transparent, and the result of that process was a "product" which outcompeted other products in the marketplace. "How can this have happened? What about the tragedy of the commons?" we asked in puzzlement, coming only slowly to the realization that other examples of commons-based, nonproprietary production were all around us.

Still, this does not answer the question of whether the model can scale still further, whether it can be applied to solve problems in other spheres. To answer that question we would need to think more about the modularity of other types of inventions. How much can they be broken down into chunks suitable for distribution among a widespread community? Which forms of innovation have some irreducible need for high capital investment in distinctly nonvirtual components—a particle accelerator or a Phase III drug trial? Again, my guess is that the increasing migration of the sciences toward data- and processing-rich models makes much more of innovation and discovery a potential candidate for the distributed model. Bioinformatics and computational biology, the open source genomics project,15 the BioBricks Foundation I mentioned in the last chapter, the possibility of distributed data scrutiny by lay volunteers16—all of these offer intriguing glimpses into the potential for the future. Finally, of course, the Internet is one big experiment in, as Benkler puts it, peer-to-peer cultural production.17

If these questions are good ones, why are they also the wrong ones? I have given my guesses about the future of the distributed model of innovation. My own utopia has it flourishing alongside a scaled-down, but still powerful, intellectual property regime. Equally plausible scenarios see it as a dead end or as the inevitable victor in the war of productive processes. These are all guesses, however. At the very least, there is some possibility, even hope, that we could have a world in which much more of intellectual and inventive production is free. "Free as in 'free speech,'" Richard Stallman says, "not free as in 'free beer.'"18 But we could hope that much of it would be both free of centralized control and low- or no-cost. When the marginal cost of reproduction is zero, the marginal cost of transmission and storage approaches zero, the process of creation is additive, and much of the labor doesn't charge, the world looks a little different.19 This is at least a possible future, or part of a possible future, and one that we should not foreclose without thinking twice. Yet that is what we are doing. The Database Protection Bills and Directives, which extend intellectual property rights to the layer of facts;20 the efflorescence of software patents;21 the UCITA-led validation of shrinkwrap licenses that bind third parties;22 the Digital Millennium Copyright Act's anticircumvention provisions23—the point of all of these developments is not merely that they make the peer-to-peer model difficult, but that in many cases they rule it out altogether. I will assert this point here, rather than argue for it, but I think it can be (and has been) demonstrated quite convincingly.24

The point is, then, that there is a chance that a new (or old, but underrecognized) method of production could flourish in ways that seem truly valuable—valuable to free speech, to innovation, to scientific discovery, to the wallets of consumers, to what William Fisher calls "semiotic democracy,"25 and, perhaps, to the balance between joyful creation and drudgery for hire. True, it is only a chance. True, this theory's scope of operation and sustainability are uncertain. But why would we want to foreclose it? That is what the recent expansions of intellectual property threaten to do. And remember, these expansions were dubious even in a world where we saw little or no possibility of the distributed production model I have described, where discussion of network effects had yet to reach the pages of The New Yorker,26 and where our concerns about the excesses of intellectual property were simply the ones that Jefferson, Madison, and Macaulay gave us so long ago.

LEARNING FROM THE SHARING ECONOMY