To be sure, the quantitative estimates of the rates of overblocking apply only to those four commercially available filters analyzed by plaintiffs' and defendants' expert witnesses. Nonetheless, given the inherent limitations in the current state of the art of automated classification systems, and the limits of human review in relation to the size, rate of growth, and rate of change of the Web, there is a tradeoff between underblocking and overblocking that is inherent in any filtering technology, as our findings of fact have demonstrated. We credit the testimony of plaintiffs' expert witness, Dr. Geoffrey Nunberg, that no software exists that can automatically distinguish visual depictions that are obscene, child pornography, or harmful to minors, from those that are not. Nor can software, through keyword analysis or more sophisticated techniques, consistently distinguish web pages that contain such content from web pages that do not.
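
To illustrate the structural problem Dr. Nunberg identified, consider a minimal sketch of keyword-based classification. The keyword list, page texts, and matching rule below are hypothetical, invented solely for illustration; they do not represent the algorithm of any filtering product in evidence. Even so, the sketch shows why such techniques cannot consistently distinguish: innocuous pages containing a flagged word are blocked, while objectionable pages that avoid the list pass through.

```python
# Hypothetical keyword filter; illustrative only, not any vendor's algorithm.
BLOCKED_KEYWORDS = {"sex", "porn", "nude", "xxx"}

def is_blocked(page_text: str) -> bool:
    """Block the page if any word, stripped of punctuation, is on the list."""
    words = page_text.lower().split()
    return any(w.strip(".,;:!?\"'()") in BLOCKED_KEYWORDS for w in words)

# Overblocking: a public-health page trips on an innocuous use of "sex".
print(is_blocked("Safer sex education resources for teens"))        # True
# Underblocking: an explicit page that avoids every listed word passes.
print(is_blocked("Graphic adult imagery, described in euphemisms"))  # False
```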

In light of the absence of any automated method of classifying Web pages, filtering companies are left with the Sisyphean task of using human review to identify, from among the approximately two billion Web pages that exist, the 1.5 million new pages that are created daily, and the many thousands of pages whose content changes from day to day, those particular Web pages to be blocked. To cope with the Web's extraordinary size, rate of growth, and rate of change, filtering companies that rely solely on human review to block access to material falling within their category definitions must use a variety of techniques that necessarily introduce substantial amounts of overblocking. These techniques include blocking every page of a Web site when only some of its content falls within the filtering companies' category definitions; blocking every Web site that shares an IP address with a Web site whose content falls within the category definitions; blocking "loophole sites," such as anonymizers, cache sites, and translation sites; and allocating staff resources to reviewing the content of uncategorized pages rather than re-reviewing pages, domain names, or IP addresses that have already been categorized to determine whether their content has changed. A filtering company could forgo these techniques because of the overblocking errors they introduce, but a filter that does not use them will be ineffective at blocking access to speech that falls within its category definitions. Thus, while it would be easy to design, for example, a filter that blocks only ten Web sites, all of which are either obscene, child pornography, or harmful to minors, and that therefore completely avoids overblocking, such a filter clearly would not comply with CIPA, since it would fail to offer any meaningful protection against the hundreds of thousands of Web sites containing speech in these categories. As detailed in our findings of fact, any filter that blocks enough speech to protect against access to visual depictions that are obscene, child pornography, or harmful to minors will necessarily overblock substantial amounts of speech that does not fall within these categories.
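
The overblocking introduced by IP-address blocking, in particular, follows from the mechanics of virtual hosting: many unrelated Web sites can be served from a single shared IP address. The following sketch uses a hypothetical hosting table, with invented host names and documentation-range addresses, to show how blocking one offending site's IP address sweeps in every other site served from that address.

```python
# Hypothetical DNS table: three unrelated sites served from one shared
# (virtually hosted) IP address. All names and addresses are invented.
HOST_TO_IP = {
    "adult-content.example": "203.0.113.7",
    "church-choir.example": "203.0.113.7",   # same shared host
    "garden-club.example": "203.0.113.7",    # same shared host
    "news-daily.example": "198.51.100.22",   # separately hosted
}

# The filter targets only the first site, but it blocks by IP address.
blocked_ips = {HOST_TO_IP["adult-content.example"]}

for host, ip in HOST_TO_IP.items():
    status = "blocked" if ip in blocked_ips else "allowed"
    print(f"{host:24} {ip:16} {status}")
# The choir and garden-club sites are blocked along with the target,
# even though their content falls outside every category definition.
```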

This finding is supported by the government's failure to produce evidence of any filtering technology that avoids overblocking a substantial amount of protected speech. Where, as here, strict scrutiny applies to a content-based restriction on speech, the burden rests with the government to show that the restriction is narrowly tailored to serve a compelling government interest. See Playboy, 529 U.S. at 816 ("When the Government restricts speech, the Government bears the burden of proving the constitutionality of its actions."); see also R.A.V. v. City of St. Paul, 505 U.S. 377, 382 (1992) ("Content-based regulations are presumptively invalid."). Thus, it is the government's burden, in this case, to show the existence of a filtering technology that both blocks enough speech to qualify as a technology protection measure, for purposes of CIPA, and avoids overblocking a substantial amount of constitutionally protected speech. Here, the government has failed to meet its burden. Indeed, as discussed in our findings of fact, every technology protection measure used by the government's library witnesses or analyzed by the government's expert witnesses blocks access to a substantial amount of speech that is constitutionally protected with respect to both adults and minors. In light of the credited testimony of Dr. Nunberg, and the inherent tradeoff between overblocking and underblocking, together with the government's failure to offer evidence of any technology protection measure that avoids overblocking, we conclude that any technology protection measure that blocks a sufficient amount of speech to comply with CIPA's requirement that it "protect[] against access through such computers to visual depictions that are (I) obscene; (II) child pornography; or (III) harmful to minors" will necessarily block substantial amounts of speech that does not fall within these categories. CIPA Sec. 1712 (codified at 20 U.S.C. Sec. 9134(f)(1)(A)). Hence, any public library's use of a software filter required by CIPA will fail to be narrowly tailored to the government's compelling interest in preventing the dissemination, through Internet terminals in public libraries, of visual depictions that are obscene, child pornography, or harmful to minors.

Where, as here, strict scrutiny applies, the government may not justify restrictions on constitutionally protected speech on the ground that such restrictions are necessary in order for the government effectively to suppress the dissemination of constitutionally unprotected speech, such as obscenity and child pornography. "The argument . . . that protected speech may be banned as a means to ban unprotected speech . . . . turns the First Amendment upside down. The Government may not suppress lawful speech as the means to suppress unlawful speech." Ashcroft, 122 S. Ct. at 1404. This rule reflects the judgment that "[t]he possible harm to society in permitting some unprotected speech to go unpunished is outweighed by the possibility that protected speech of others may be muted . . . ." Broadrick v. Oklahoma, 413 U.S. at 612.

Thus, in Ashcroft, the Supreme Court rejected the government's argument that a statute criminalizing the distribution of constitutionally protected "virtual" child pornography, produced through computer imaging technology without the use of real children, was necessary to further the state's interest in prosecuting the dissemination of constitutionally unprotected child pornography produced using real children, since "the possibility of producing images by using computer imaging makes it very difficult for [the government] to prosecute those who produce pornography using real children." Ashcroft, 122 S. Ct. at 1404; see also Stanley, 394 U.S. at 567-68 (holding that individuals have a First Amendment right to possess obscene material, even though the existence of this right makes it more difficult for the states to further their legitimate interest in prosecuting the distribution of obscenity). By the same token, even if the use of filters is effective in preventing patrons from receiving constitutionally unprotected speech, the government's interest in preventing the dissemination of such speech cannot justify the use of the technology protection measures mandated by CIPA, which necessarily block substantial amounts of constitutionally protected speech.

CIPA thus resembles the Communications Decency Act, which the Supreme Court facially invalidated in Reno v. ACLU, 521 U.S. 844 (1997). Although on its face the CDA simply restricted the distribution to minors of speech that was constitutionally unprotected with respect to minors, as a practical matter, given Web sites' difficulties in identifying the ages of Internet users, the CDA effectively prohibited the distribution to adults of material that was constitutionally protected with respect to adults. Similarly, although on its face CIPA, like the CDA, requires the suppression of only constitutionally unprotected speech, it is impossible as a practical matter, given the state of the art of filtering technology, for a public library to comply with CIPA without also blocking significant amounts of constitutionally protected speech. We therefore hold that a library's use of a technology protection measure required by CIPA is not narrowly tailored to the government's legitimate interest in preventing the dissemination of visual depictions that are obscene, child pornography, or in the case of minors, harmful to minors.

For the same reason that a public library's use of software filters is not narrowly tailored to further the library's interest in preventing its computers from being used to disseminate visual depictions that are obscene, child pornography, and harmful to minors, a public library's use of software filters is not narrowly tailored to further the library's interest in protecting patrons from being unwillingly exposed to offensive, sexually explicit material. As discussed in our findings of fact, the filters required by CIPA block substantial numbers of Web sites that even the most puritanical public library patron would not find offensive, such as http://federo.com, a Web site that promotes federalism in Uganda, which N2H2 blocked as "Adults Only, Pornography," and http://www.vvm.com/~bond/home.htm, a site for aspiring dentists, which was blocked by Cyberpatrol as "Adult/Sexually Explicit." We list many more such examples in our findings of fact, see supra, and find that such erroneously blocked sites number at least in the thousands.

Although we have found large amounts of overblocking, even if only a small percentage of the sites blocked were erroneously blocked, whether measured against the state's interest in preventing adults from viewing material that is obscene or child pornography and preventing minors from viewing material that is harmful to minors, or against the state's interest in preventing library patrons generally from being unwillingly exposed to offensive, sexually explicit material, this imprecision is fatal under the First Amendment. Cf. Reno, 521 U.S. at 874 ("[T]he CDA lacks the precision that the First Amendment requires when a statute regulates the content of speech."); League of Women Voters, 468 U.S. at 398 ("[E]ven if some of the hazards at which [the challenged provision] was aimed are sufficiently substantial, the restriction is not crafted with sufficient precision to remedy those dangers that may exist to justify the significant abridgement of speech worked by the provision's broad ban . . . .").

While the First Amendment does not demand perfection when the government restricts speech in order to advance a compelling interest, the substantial amounts of erroneous blocking inherent in the technology protection measures mandated by CIPA are more than simply de minimis instances of human error. "The line between speech unconditionally guaranteed and speech which may legitimately be regulated, suppressed, or punished is finely drawn. Error in marking that line exacts an extraordinary cost." Playboy, 529 U.S. at 817 (internal quotation marks and citation omitted). Indeed, "precision of regulation must be the touchstone in an area so closely touching our most precious freedoms." Keyishian v. Bd. of Regents of the Univ. of the State of N.Y., 385 U.S. 589, 603 (1967) (internal quotation marks and citation omitted); see also Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 66 (1963) ("The separation of legitimate from illegitimate speech calls for sensitive tools.") (internal quotation marks and citation omitted). Where the government draws content-based restrictions on speech in order to advance a compelling government interest, the First Amendment demands the precision of a scalpel, not a sledgehammer. We believe that a public library's use of the technology protection measures mandated by CIPA is not narrowly tailored to further the governmental interests at stake. Although the strength of different libraries' interests in blocking certain forms of speech may vary from library to library, depending on the frequency and severity of problems experienced by each particular library, we conclude, based on our findings of fact, that any public library's use of a filtering product mandated by CIPA will necessarily fail to be narrowly tailored to address the library's legitimate interests. Because it is impossible for a public library to comply with CIPA without blocking substantial amounts of speech whose suppression serves no legitimate state interest, we therefore hold that CIPA is facially invalid, even under the more stringent standard of facial invalidity urged on us by the government, which would require upholding CIPA if it is possible for just a single library to comply with CIPA's conditions without violating the First Amendment. See supra Part III.

3. Less Restrictive Alternatives

The constitutional infirmity of a public library's use of software filters is evidenced not only by the absence of narrow tailoring, but also by the existence of less restrictive alternatives that further the government's legitimate interests. See Playboy, 529 U.S. at 813 ("If a less restrictive alternative would serve the Government's purpose, the legislature must use that alternative."); Sable, 492 U.S. at 126 ("The Government may . . . regulate the content of constitutionally protected speech in order to promote a compelling interest if it chooses the least restrictive means to further the articulated interest."). As is the case with the narrow tailoring requirement, the government bears the burden of proof in showing the ineffectiveness of less restrictive alternatives. "When a plausible, less restrictive alternative is offered to a content-based speech restriction, it is the Government's obligation to prove that the alternative will be ineffective to achieve its goals." Playboy, 529 U.S. at 816; see also Reno, 521 U.S. at 879 ("The breadth of this content-based restriction of speech imposes an especially heavy burden on the Government to explain why a less restrictive provision would not be as effective . . . ."); Fabulous Assocs., Inc. v. Pa. Pub. Util. Comm'n, 896 F.2d 780, 787 (3d Cir. 1990) ("We focus . . . on the more difficult question whether the Commonwealth has borne its heavy burden of demonstrating that the compelling state interest could not be served by restrictions that are less intrusive on protected forms of expression.") (internal quotation marks and citation omitted).

We find that there are plausible, less restrictive alternatives to the use of software filters that would serve the government's interest in preventing the dissemination of obscenity and child pornography to library patrons. In particular, public libraries can adopt Internet use policies that make clear to patrons that the library's Internet terminals may not be used to access illegal content. Libraries can ensure that their patrons are aware of such policies by posting them in prominent places in the library, by requiring patrons to sign forms agreeing to comply with the policy before the library issues them library cards, and by presenting patrons, when they log on to one of the library's Internet terminals, with a screen that requires them to agree to comply with the library's policy before they are allowed access to the Internet. Libraries can detect violations of their Internet use policies either through direct observation or through review of the library's Internet use logs. In some cases, library staff or patrons may directly observe a patron accessing obscenity or child pornography. The library's Internet use logs provide a second means of detecting violations. These logs, which can be kept regardless of whether a library uses filtering software, record the URL of every Web page accessed by patrons. Although the logs do not ordinarily link particular URLs with particular patrons, if library staff discover in the logs the URL of a Web page containing obscenity or child pornography, it is possible to identify the patron who viewed that page. For example, David Biek, Director of Tacoma Public Library's main branch, testified that in the course of scanning Internet use logs he has found what looked like attempts to access child pornography, notwithstanding the fact that Tacoma uses Websense filtering software. In two cases, he communicated his findings to law enforcement and turned over the logs in response to a subpoena.
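
As an illustration of the kind of after-the-fact log review Mr. Biek described, the sketch below scans an access log for URLs matching a watch list. The tab-separated log format, field layout, and watch-list pattern are all assumptions made for illustration; actual library log formats vary and were not specified in the testimony.

```python
import re

# Hypothetical watch-list pattern; a real review would rely on vetted sources.
SUSPECT_PATTERNS = [re.compile(r"known-illegal\.example")]

def scan_log(lines):
    """Yield (timestamp, terminal, url) for entries matching the watch list."""
    for line in lines:
        timestamp, terminal, url = line.rstrip("\n").split("\t")
        if any(p.search(url) for p in SUSPECT_PATTERNS):
            yield timestamp, terminal, url

# A single invented log entry: timestamp, terminal identifier, URL requested.
sample = ["2002-05-31T10:14:07\tterminal-12\thttp://known-illegal.example/page"]
for when, terminal, url in scan_log(sample):
    print(f"{when}: review terminal {terminal} for {url}")
```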