When the Supreme Court overturned the Communications Decency Act (CDA) last summer, its decision seemed to put to rest much of the controversy over internet free speech. But there are now a host of more limited efforts afoot to prune back the range of internet content and limit access to various kinds of online material. Such technical innovations as "content filtering" and "censor-ware" make it possible for individuals, employers, internet service providers, and others to block out selected portions of the online world. While the CDA's criminal penalties for publishing "indecent" material made an easy mark for free speech advocates, these new forms of control pose more subtle and incremental threats—and should force us to confront whether keeping the government out of the censorship business will be sufficient to assure freedom online.
The new world of online media is inevitably changing the terms of debate about freedom of speech and of the press. Words, ideas, and images are being liberated from their original connection to such physical objects as books, papers, magazines, and photographs, and the costs of copying and transmitting information are dropping sharply. Just what constitutes "publishing," for instance, becomes blurred when books, articles, and even casual notes can be distributed to the entire world, instantaneously and at negligible cost. Much of the difficulty of crafting good public policy for the Internet stems from the fact that the Net removes all the incidental and often overlooked ways in which we have traditionally used physical space to segregate and restrict information. Consider the fact that Playboy magazine is behind the counter and not on the magazine rack at the local convenience store, or that certain subterranean activities can only be found in the seedier sections of our central cities. If these tacit ways of organizing information are to be reproduced on the Internet, they must be explicitly reconstituted. But often these barriers can only be rebuilt with meddlesome and obtrusive changes in the way the Web works.
The PICS Is In
Much of the debate over how to reconstitute the old barriers and regulate the flow of online information centers on "content filtering" and something called PICS (the Platform for Internet Content Selection). PICS originated in the minds of the men and women who designed the World Wide Web. While Congress was hashing out what would become the Communications Decency Act, a group of internet policy planners began to formulate a system that would allow individual users to decide what could and could not appear on their computer screens. Rather than banning information at the "sending" end, internet users would be able to block offensive material at the "receiving" end. Everybody could then carve out his or her own zone of comfort on the Internet, with just the right mix of Puritanism and prurience. It was an ingenious solution—a kinder, gentler version of the CDA. It would assuage the fears of parents, conciliate free speech advocates, and short-circuit the political argument for a broad regime of internet censorship.
The PICS project was coordinated and directed through the World Wide Web Consortium, an independent body that has taken a leading role in formalizing standards and protocols for the Web, with support from many of the biggest internet industry companies. The designers went to great lengths to make the system unobjectionable to both civil libertarians and those who wanted to limit the circulation of indecent material. In fact, their literature betrays an almost quaint sensitivity to the theory and language of multiculturalism. They designed PICS not as a set of ratings or categories but as a format for devising a variety of different ratings systems, each reflecting different cultural and political perspectives. To understand the distinction, consider the difference between a word processing format like Microsoft Word and the infinite variety of documents that one could author in that format. PICS is not a rating system; it is a format that can be used to create many different rating systems.
PICS envisions at least two basic models in which rating systems might operate. The first—and conceptually more straightforward—is self-rating. Publishers of Web sites rate their own material, alerting viewers to coarse language, nudity, or violence. Publishers would choose whether to rate their sites and, if so, what ratings system to use. PICS would also allow third-party rating. Different organizations or companies could set up "rating bureaus" that would rate sites according to their own political, cultural, or moral standards. Thus the Christian Coalition might set up its own rating bureau, as could the National Organization for Women. Individual users could then decide whether to filter material using the voluntary self-ratings or subscribe to a rating bureau that suited their personal sensibilities.
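To make the mechanics concrete, here is a minimal sketch, in Python, of how receiving-end filtering might work. The rating categories (language, nudity, violence) and the numeric scale are hypothetical stand-ins for whatever vocabulary a given PICS-based rating system might define; the point is only that the blocking decision happens on the viewer's machine, against thresholds the viewer chooses.

```python
# Hypothetical rating vocabulary: higher numbers mean stronger content.
# A real PICS-based system defines its own categories and scales.

def should_block(page_rating, user_thresholds):
    """Block the page if any rated category exceeds the user's threshold.

    Categories the page does not rate are treated as acceptable here;
    a stricter filter might instead block anything unrated.
    """
    return any(
        level > user_thresholds.get(category, float("inf"))
        for category, level in page_rating.items()
    )

# A site's self-rating, and two users' chosen comfort zones.
page = {"language": 2, "nudity": 0, "violence": 3}
strict_parent = {"language": 1, "nudity": 0, "violence": 1}
permissive_adult = {"language": 4, "nudity": 4, "violence": 4}

print(should_block(page, strict_parent))     # True: violence 3 exceeds 1
print(should_block(page, permissive_adult))  # False: everything within bounds
```

The same comparison could just as easily run at an employer's proxy or an internet service provider's servers, with the thresholds set by someone other than the viewer, which is the scenario the article turns to next.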
Given the obvious similarities, many have compared PICS to an internet version of the much-touted V-chip [see Richard Parker, "Screening a la Carte"]. But the V-chip analogy is only partly correct, and the differences are telling. The weight of the argument for the content filtering approach is that individuals decide what they will and will not see. But PICS-based content filtering is actually much more flexible and scalable than this standard description implies. There are many links in the information food chain separating your personal computer from the source of information. And what you see on the Internet can potentially be filtered at any of those intermediate points. You can block material at your computer, but so can libraries, your employer, your internet service provider, your university, or even—depending on where you live—your nation-state. With the V-chip you control what comes on your television set. But with PICS the choice may not be yours.
There are already a host of new software products on the market that allow this sort of "upstream" content filtering. They are being introduced widely in the workplace and, to a lesser degree, in schools and libraries. This so-called internet access management software makes possible not just filtering and blocking but also detailed monitoring of internet usage. It can monitor what individual users view on the Web and how long they view it. It can even compile percentages and ratios of how much viewing is work related, how much is superfluous, and how much is simply inappropriate. These less savory capabilities won't necessarily be put to use. But the opportunities for abuse are obvious, and they reach far beyond issues of free speech into elemental questions of personal privacy.
The other problem with PICS is more subtle and insidious. You often do not know just what you are not seeing. Because of a perverse but seemingly inevitable logic, companies that provide content filtering or site blocking services must keep their lists hidden away as trade secrets. The logic is clear enough. The companies expend great resources rating and compiling lists of prohibited sites; to make those lists public would divest them of all their value. But whatever the rationale, this practice leads to numerous tangled situations. Public libraries that have installed site blocking software are in the position of allowing private companies to determine what can and cannot be viewed in the library. Even the librarians don't know what is blocked and what is not.
The possible integration of search engine technology and PICS-based rating holds out the prospect of a Web where much of the material that would not appear on prime-time television just slips quietly out of view. Even more unsettling, many internet search engine companies—with a good deal of prodding from the White House—have announced plans to begin refusing to list sites that will not, or cannot, rate themselves. Again, the implications are far-reaching. With the increasing size and scope of material on the Web, most people use search engines as their gateway to finding information online. Not being listed is akin to having the phone company tell you that you are welcome to have as many phone numbers as you like but no listings in the phone book. This is one of the ways in which "voluntary" self-rating can quickly become a good deal less than voluntary. There are also bills pending before Congress that would either mandate self-rating or threaten sanctions for "mis-rating" internet content. This is the sort of creeping, indirect censorship that makes PICS so troubling.
One of the compensations of real-world censorship is that school boards and city councils actually have to ban unpopular books and look like fools doing it. The crudeness and heavy-handedness of the state's power to censor is always one of the civil libertarians' greatest advantages in battles over the banning and burning of books. But content filtering makes censorship quiet, unobtrusive, and thus all the more difficult to detect or counter. It is difficult to quantify just what is different about the new information technology. But the essence of it is an increasing ability to regulate the channels over which we communicate with one another and find out new information.
To all these criticisms the creators of PICS say simply that they and their technology are neutral. But this sort of "Hey, I just make the guns" attitude is hardly sufficient. To their credit, they also point to the more positive uses of content filtering. And here they have a point. In its current form the Internet is a tangled jumble of the useful, the useless, and the moronic. PICS could help users cut through the clutter. Topic searches could become more efficient. In one oft-cited example, content filtering could allow internet searches for information about a particular medical condition that would produce only material from accredited medical organizations. Of course, the question then becomes, who accredits? There are standards of authority and discrimination we will gladly accept about information for treating breast cancer that we would never accept if the topic is, say, art or political speech. And in any case none of these potentially positive uses negates, or really even speaks to, the reality of possible abuses.
The Appeal of Censorship
This new debate over content filtering has sliced apart the once potent coalition of interests that banded together to defeat the Communications Decency Act. One of the striking features of the anti-CDA fight was how it lined up technologists, civil libertarians, and major corporations on the same side. What became clear in the aftermath, however, was that companies like Microsoft, Netscape, and IBM were not so much interested in free speech, as such, as they were in preventing government regulation—two very distinct concepts that we now too often conflate.
In fact, the seamless and adaptable censoring that makes civil libertarians shudder is precisely what makes it so attractive to business. Businesses do not want to refight culture wars in every locale where they want to expand internet commerce. If parents from the Bible Belt are afraid that their children will find gay rights literature on the Web, they won't let them online to buy Nintendo game cartridges either. The same logic is even more persuasive when commerce crosses international borders. International internet commerce is widely seen as one of the most lucrative prospects for the internet industry, and much of that trade would take place with countries that either do not share American standards of cultural permissiveness or that routinely censor political material. Content filtering will let American companies sell goods to China over the Internet without having to worry that pro-Tibetan independence Web sites will sour the Chinese on the Internet altogether. Content filtering allows us to carve the Internet up into countless gated communities of the mind.
These concerns about "cyber-rights" can seem like overwrought digital chic—an activism for the affluent. And often enough, that is just what they are. But it is important to take a broader view. Today the Internet remains for most a weekend or evening diversion and only relatively few of us use it intensively in the workplace. But the technologies and principles that we formulate now will ripple into a future when the Internet—and its successor technologies—will be more and more tightly stitched into the fabric of everyday communication. In a world of books and print, the "Government shall make no law" formulation may be adequate. But in a world of digitized information, private power to censor may be just as deleterious as public power, and in many respects may be more so.
Free Expression at Risk
There is also an unfortunate convergence between this growing power of nongovernmental censorship and the declining value of open expression as a positive social ideal. In a political climate such as ours, which is generally hostile to government power, a subtle and perverse shift can take place in our understanding of the First Amendment and the importance of free speech. We can begin to identify the meaning of free speech simply as a restriction on governmental power and lose any sense that free speech has value on its own merits. One might say that it is the difference between free speech and free expression, the former being narrow and juridical, based largely on restrictions on government action, and the latter being a more positive belief not in the right but in the value of open expression for its own sake. We seem to be moving toward a public philosophy in which we would shudder at the thought of government censoring a particular book or idea but would be more than happy if major publishing companies colluded together to prevent the same book's publication.
Our political and cultural landscape is replete with examples. We see it in support for the V-chip, government's strong-arming of TV networks to adopt "voluntary" ratings, and in the increasingly fashionable tendency for political figures to shame entertainment companies into censoring themselves. The sort of public shaming of which Bill Bennett has made a career has a very good name in our society, and too few speak up against it. The move to rate television programming may well be benign or, at worst, innocuous in itself. But it points to a broader trend for government to privatize or outsource its powers of censorship. This sort of industry self-regulation is said to be voluntary. But more and more often it is "voluntary" in the sense that Senator John McCain must have had in mind when he threatened to have the Federal Communications Commission consider revoking the broadcasting licenses of NBC affiliates if the network did not agree to adopt the new "voluntary" TV rating system.
The idea that there will be a great multiplicity of rating systems may also be deceptive. Despite the possibility of an infinite variety of rating systems for a multitude of different cultural perspectives, everything we know about the computer and internet industries tells us that pressures lead not toward multiplicity but toward concentration. Aside from Microsoft's various anticompetitive practices, basic structural forces in the computer and software industries make it likely that we will have one or two dominant operating systems rather than five or six. The Web browser market has followed a similar trend toward consolidation. There would likely be a greater demand for a diversity of options in the market for content filtering and site blocking services. But the larger, overarching economic pressures—and the need to create vast economies of scale—would be simply overwhelming. Effectively rating even a minute portion of the Web would be an immense undertaking. The resources required to rate the Web and constantly update those ratings could be recouped only by signing up legions of subscribers. Far more likely than the "let a hundred flowers bloom" scenario is one in which there would be a few large companies providing content filtering and site blocking services. And these would be exactly the kind of companies that would become the targets of crusading "family values" politicians trying to add new candidates to the list of material to be blocked.
The First Amendment, Updated
The novelty of this new information technology calls on us to think and act anew. We cannot now foresee what changes in technology are coming or what unexpected implications they will have. What is clear, however, is that there is no easy translation of real-world standards of intellectual freedom into the online world. Our current conceptions of First Amendment rights are simply unequal to the task. It is easy enough to say that the First Amendment should apply to cyberspace, but crude applications of our current doctrines to the online world involve us in unexpected and dramatic expansions and contractions of intellectual freedoms and free speech. In the architecture of the new information economy, private power will have a much greater and more nimble ability to regulate and constrict the flow of information than state power will. Taking account of this will mean updating both the jurisprudence and the public philosophy of free speech rights. Much like the law of intellectual property, public policy toward free speech must undertake a basic reconsideration of the values it seeks to protect and the goals it seeks to serve.
Partly this will mean focusing more on the goals of First Amendment freedoms and less on the specific and narrow mechanics of preventing government regulation of speech. It may even mean considering some informational equivalent of antitrust legislation—a body of law that would intervene, not to regulate the content, but to ensure that no private entity or single corporation gained too great a degree of control over the free flow of information. What it certainly does mean is that we must abandon that drift in public policy that allows government to outsource its power to censor under the guise of encouraging industry self-regulation. Government may not be fully able to alter some of the pernicious directions in which information technology is evolving—and it may be good that it cannot. But government can at least avoid policies that reinforce the negative tendencies. "Voluntary" industry self-regulation should really be voluntary, and we should inculcate within ourselves—and particularly our policymakers—a critical awareness of the implications of new technologies. Whatever the merits of creating PICS and the infrastructure of content filtering, now that it exists we must be vigilant against potential abuses. We should make critical distinctions between the narrow but legitimate goals of content regulation, such as providing mechanisms for parents to exercise control over what their children see, and the illegitimate uses to which these technologies can easily be applied.
There are many ways in which we can subtly adjust the law of intellectual property, civil liability, and criminal law to tip the balance between more or less restrictive regimes of free speech, privacy, and individual expression. The federal government might limit the ability to claim intellectual property rights in lists of blocked sites. Such a policy would limit the profitability of commercial ventures that compiled them. We can also limit, as much as possible, internet service providers' liability for what material flows through their hardware. This would remove one of the incentives that they would have for filtering content before it reached the individual user. Yet another tack is to rethink the civil liabilities we impose on employers when rogue employees download obscene or conceivably harassing material on their computer terminals. This, again, would remove at least one of the rationales for pervasive content filtering in the workplace. Sensible public policy can be devised to safeguard the values of an open society in the information age. But too often we are letting technology lead public policy around by the nose.
What we need is a wholesale reevaluation of our collective attitudes toward the meaning and value of free speech and the role it plays in our society. Though we strain mightily to avoid government censorship, there is little public commitment in our society today to a culture of free expression on its own merits. Public calls from Bill Bennett to shame media companies into "doing the right thing" are widely acclaimed. Political leaders too often take a wink-and-a-nod approach when private bodies take on the censoring role that government itself cannot. But the myopic focus on government as the singular or most significant threat to free speech rests on a basic misreading of our history. In America, the really pointed threats to free speech and free expression do not come from government. They never have. They have always come from willful majorities intent on bullying dissenters into silence. The new information technology and content filtering make that even more feasible than it has been in the past. And that is the problem.