Thursday, August 13, 2009
Lawrence Lessig's "CODE Version 2.0": an existential journey into a new way to regulate us -- through the Internet
Author: Lawrence Lessig
Title: "Code Version 2.0"
Publication: New York, Basic Books, ISBN 0-465-03914-6, 410 pages, indexed, paper
This book is an upgrade of this Stanford law professor’s earlier “CODE and Other Laws of Cyberspace”. And I like its existential approach. We need to think about what we mean by regulation, what we mean by freedom, and what exactly we have to lose. His basic premise is that the “code” behind our technology effectively implements “regulation” and that can be as important a limit to our experience of freedom as any government’s laws. He is somewhat skeptical of libertarianism (“what Declan doesn’t get” is his last chapter); sometimes you may need government when the asymmetric regulation by private interests impacts people more. His last paragraph warns that we are not in a “great time, culturally, to come across revolutionary technologies.” Compared to the Soviets who were caught by their revolution, “we, unlike they, have something to lose.”
The book is organized into five parts, with a sonata-like format, following his argument. He starts with the “admission” of the unregulability of the original Internet, but says that “Code” develops to implement regulation by various interests. There follows a “latent ambiguity,” especially in areas like privacy and free speech, requiring new “fundamental” choices that the framers of the Constitution never encountered. He goes on to describe jurisdictional conflicts, and argues that governments will inevitably face pressures to make the Internet more regulable. In the world of regulation, there is a basic antipathy between “East Coast Code” (the formal legal system) and “West Coast Code” (the practical regulation implied by the architecture and “code” of applications on the Net). And when Washington plays on the road in San Francisco, the home team wins.
Along the way, he makes many interesting observations. Early in the book, he talks about the enhancement of an “Identity Layer” in protocols, so that certain properties of a visitor could be ascertained without disclosing full identity (for example, whether the visitor is a minor) — to the point that, contrary to popular belief (and some court opinions), visitors could be kept from accessing content that is illegal for them. Later, he discusses COPA (which I cover in more detail on my Internet filtering blog) and suggests that web publishers self-label with a simpler scheme than PICS (a W3C proposal).
The free speech section is quite interesting. He equates legal pornography to material “harmful to minors” – a notion that was challenged in COPA. But the most interesting part of his discussion of the free speech paradigm concerns its corollary – publication and distribution. He discusses the constitutionality of FCC regulation of broadcast, with its spectrum-scarcity rationale, and indicates that today’s Net, amplified by wireless, makes the entire broadcast regulatory system (however well motivated politically) moot. He points out that America in the late colonial and Revolutionary era had a cottage pamphleteering enterprise a bit like today’s Internet blogs in psychological terms.
He does give a lot of attention to the idea that the Web has made everyone a publisher, and he sees the collection of self-published materials (blogs, tweets, sites, videos, social networking profiles and wallpapers) as a good antidote to the “establishment,” in that the sheer diversity of material offered by so many speakers is a counterweight to concentration of power in the media. So far so good. In that sense, for example, any individual speaker would generally maintain some bias influenced by his or her own circumstances and even family responsibilities, chosen or not; the collection of speech offers the “objectivity.” (That collective “objectivity” is the result of Code, especially Google’s, he would say.) But what has happened is that certain individuals and small interests have developed code infrastructures that in some sense give them puppetmaster control over the architecture of speech. It seems as if Mark Zuckerberg or Jimmy Wales has social power comparable to that of Barack Obama, in the “Coast” analogy (sorry, Wikipedia is actually housed in Florida, I think). I think my sites and blogs take this a step further, in that I have “encoded” the actual content, expressing opposing viewpoints and projecting a certain objectivity or neutrality within my own content. Perhaps that steps over the line: I can draw attention to myself, and perhaps unwanted attention to others connected to me (because of the way others perceive social norms, however wrongfully), in a way that suggests I won’t accept a partisan or automatic filial responsibility for others (a claim that society could some day decide it cannot live with). Lessig doesn’t get quite that far (I thought he might) but does mention the “implicit content” problem, where the effect of content depends functionally on the speaker.
He gives the example of an account of an alien landing: in a supermarket tabloid it would not be believed, but if a major network reported one, it would be believed (call it the Orson Welles problem). In fact, I got into trouble when substitute teaching just because a screenplay that I wrote as fiction was seen as “evidentiary” (sort of like the military’s “rebuttable presumption” in the “don’t ask, don’t tell” policy), whereas LionsGate films had once made a commercial film for Lifetime with a similar story and message, and hardly anyone noticed. Do I have the same free speech rights as LionsGate (a Canadian company, by the way)? I guess not. Without obvious commercial gain, some people will see such asymmetric "universal speech" as enticement, or as throwing sand in "beasts' eyes", or (as I explained on my main blog May 30, 2009) as an "existential" threat to confidentiality in most business dealings. "CODE", for all its origins in "chaos" and a neo-freedom, can bring back social hierarchy with a vengeance.
Lessig does cover the DMCA problem, tracing it back to the gradual evolution of copyright law, pretty well. He covers the paradox that some artists depend on a free content model (a paradigm that the ISP and telecommunications industry might not be able to sustain or indulge forever – again, Code is law) while others must jealously protect their “property” as their livelihood. Copyright law complications have grown as copying technology evolved (the Sony Betamax case), but digital copies, with their perfection, posed threats never imagined before, which Congress reacted to with overkill, prohibiting the circumvention of copy-protection technology even when the copying might really only support Fair Use. Yet a couple of provisions in late-1990s law (Section 230 and the DMCA safe harbor) may, however criticized, be responsible (by limiting downstream liability) for allowing the self-publishing freedom on the Net that we count on today. We should not take it for granted forever. In his analysis of intellectual property, Lessig refers to founding father Thomas Jefferson’s writings on the “nonrivalrous” nature of such property.
Lessig gives a more condensed discussion of privacy than GWU law professor Daniel Solove does in his book on privacy, but Lessig explores the interesting notion that privacy rights could be construed as a form of property rights (a libertarian notion).
I've come to see myself as a dweller on two or three planets, with Cyberspace as the newest of them, and my urban exile as the second. And they all must become reconciled.