If you are concerned about the web content that you or your children are exposed to, there is help! Several companies offer search engines and filtering products that act as filters. They are largely designed to screen out pornography and material such as bomb-making instructions. But no system is perfect, and because of how they work, these systems sometimes filter out information on topics like breast or testicular cancer. Parents concerned with web content should also be aware of, and watchful over, their children's Newsgroup access. While websites are unlikely to let anyone see anything other than teasing advertisements, the Newsgroups are totally uncontrolled. See our "How to Snoop in a Computer" section for information on revealing what has been viewed and typed on a computer.


Web Browser Filters

Seen by some as a powerful tool for protecting children from online pornography and by others as "censorware," Internet content filters have generated much controversy, debate, and confusion. This document attempts to describe the concerns and issues raised by the various types of filtering software. It is hoped that these questions and answers will help parents, libraries, schools, and others understand the software that they may be considering (or using).


Questions

1) Basics

1.1) What is a content filter?
1.2) Why do many people want filtering?
1.3) Can filtering programs be turned off?
1.4) I don't want to filter, but I do want to know what my child is viewing. Is that possible?
1.5) What is the scope of Internet content filtering? Do filters cover the WWW? Newsgroups? IRC? Email?

2) Stand-alone Systems

2.1) What is a stand-alone system?
2.2) Who decides what gets blocked and what doesn't?
2.3) How do stand-alone programs determine what should be blocked?
2.4) What's wrong with list-based filtering?
2.5) What's wrong with filtering based on keyword searches?

3.0) The Platform for Internet Content Selection (PICS)

3.1) What is PICS?
3.2) How does PICS-based filtering differ from stand-alone systems?
3.3) What is a ratings system?
3.4) How are ratings systems developed?
3.5) Who rates sites?
3.6) What PICS-based ratings systems can I use?
3.7) How do I use PICS?
3.8) Should I rate my Site?
3.9) What should a publisher consider before self-rating?
3.10) What concerns are raised by Third-Party Ratings?
3.11) What about sites that aren't rated? What if someone puts the wrong rating on a site?
3.12) What if I don't like the ratings systems that are available? Can individuals and organizations start new ratings systems?
3.13) What's wrong with PICS and Internet ratings in general?

4.0) Alternatives

4.1) Can anything work?
4.2) I understand that there are many problems with filters and ratings. What can I do to protect my children?
4.3) What roles can ISPs play?

5.0) Where Can I Find More Information?

6.0) Credits

6.1) Who gets the credit?
6.2) Who is CPSR?


Answers

1) Basics

1.1) What is a content filter?

A content filter is one or more pieces of software that work together to prevent users from viewing material found on the Internet. This process has two components.

Rating: Value judgments are used to categorize web sites based on their content. These ratings could use simple allowed/disallowed distinctions, as found in programs like CyberSitter or NetNanny, or they can have many values, as seen in ratings systems based on the Platform for Internet Content Selection (PICS, see question 3.0).

Filtering: With each request for information, the filtering software examines the resource that the user has requested. If the resource is on the "not allowed" list, or if it does not have the proper PICS rating, the filtering software tells the user that access has been denied and the browser does not display the contents of the web site.
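
To make these two components concrete, the core decision reduces to a few lines of code. The following Python sketch is purely illustrative (the blocklist, threshold, and function names are invented for this example); real products are far more elaborate but follow the same outline:

    # Hypothetical sketch of a content filter's core decision;
    # not any vendor's actual logic.
    BLOCKED_SITES = {"www.example-adult.com"}  # assumed vendor-supplied list
    MAX_ALLOWED_RATING = 2                     # assumed user-chosen threshold

    def is_allowed(host, rating=None):
        """Return True if the browser may display the requested site."""
        if host in BLOCKED_SITES:
            return False   # site is on the "not allowed" list
        if rating is not None and rating > MAX_ALLOWED_RATING:
            return False   # rating falls outside the user's settings
        return True

    # The filtering layer would run this check on every request:
    if not is_allowed("www.example-adult.com"):
        print("Access denied by content filter")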

The first content filters were stand-alone systems consisting of mechanisms for determining which sites should be blocked, along with software to do the filtering, all provided by a single vendor.

The other type of content filter is protocol-based. These systems consist of software that uses established standards for communicating ratings information across the Internet. Unlike stand-alone systems, protocol-based systems do not contain any information regarding which sites (or types of sites) should be blocked. Protocol-based systems simply know how to find this information on the Internet, and how to interpret it.

1.2) Why do many people want filtering?

The Internet contains a wide range of materials, some of which may be offensive or even illegal in many countries. Unlike traditional media, the Internet does not have any obvious tools for segregating material based on content. While pornographic magazines can be placed behind the counter of a store, and strip-tease joints restricted to certain parts of town, the Internet provides everything through the same medium.

Filters and ratings systems are seen as tools that would provide the cyberspace equivalent of the physical separations that are used to limit access to "adult" materials. In rating a site as objectionable, and refusing to display it on the user's computer screen, filters and ratings systems can be used to prevent children from seeing material that their parents find objectionable. In preventing access, the software acts as an automated version of the convenience-store clerk who refuses to sell adult magazines to high-school students.

Filters are also used by businesses to prevent employees from accessing Internet resources that are either not work related or otherwise deemed inappropriate.

1.3) Can filtering programs be turned off?

It is assumed that parents or other authoritative users who install filtering programs would control the passwords that allow the programs to be disabled. This means that parents can enable the filter for their children but disable it for themselves. As with all other areas of computer security, these programs are vulnerable to attack by clever computer users who may be able to guess the password or to disable the program by other means.

1.4) I don't want to filter, but I do want to know what my child is viewing. Is that possible?

Some products include a feature that will capture the list of all Internet sites that have been visited from your computer. This allows a parent to see what sites their child has viewed, albeit after the fact. Similar software allows employers to monitor the Internet use of their employees. Users of these systems will not know that their Internet use is being watched unless they are explicitly told.

Whether used in homes or workplaces, these tools raise serious privacy concerns.

1.5) What is the scope of Internet content filtering? Do filters cover the WWW? Newsgroups? IRC? Email?

While some stand-alone systems claim to filter other parts of the Internet, most content filters are focused on the World-Wide-Web. Given the varied technical nature of the protocols involved, it's likely that filtering tools will do well with some of these, and poorly with others. For example, filtering software can easily block access to newsgroups with names like "alt.sex". However, current technology cannot identify the presence of explicit photos in a file that's being transferred via FTP. PICS-based systems currently only filter web sites.
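
Blocking a newsgroup by its name really is the easy case, since the name itself carries the signal. A hypothetical Python sketch (the patterns are invented for illustration):

    from fnmatch import fnmatch

    # Invented name patterns; real products ship their own lists.
    BLOCKED_GROUP_PATTERNS = ["alt.sex*", "alt.binaries.pictures.erotica*"]

    def newsgroup_blocked(group):
        return any(fnmatch(group, pat) for pat in BLOCKED_GROUP_PATTERNS)

    print(newsgroup_blocked("alt.sex.stories"))  # True: the name matches
    print(newsgroup_blocked("sci.med"))          # False: nothing in the name
    # No comparable check can see inside an image file arriving by FTP.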

2) Stand-alone Systems

2.1) What is a stand-alone system?

A stand-alone filtering system is a complete filtering solution provided by a single vendor. These filters block sites based on criteria provided by the software vendor, thus "locking in" users. If a customer does not like the vendor's selection of sites that are to be blocked, she must switch to a different software product.

2.2) Who decides what gets blocked and what doesn't?

This is the biggest practical difference between stand-alone systems and protocol-based systems. Stand-alone systems limit users to decisions made by the software vendor, although some let parents or installers add and remove sites. Protocol-based systems provide users with a choice between alternative ratings systems, which publishers and third parties can use to develop ratings for content. See question 3.2 for more information.

2.3) How do stand-alone programs determine what should be blocked?

Currently available filtering tools use some combination of two approaches to evaluate content: lists of unacceptable (or acceptable) sites, and keyword searches.

List-based blocking works by explicitly enumerating sites that should either be blocked or allowed. These lists are generally provided by filter vendors, who search for sites that meet criteria for being classified as either "objectionable" or "family-friendly".

Filtering software vendors vary greatly in the amount of information and control they make available to users. Most vendors do not allow users to see the actual list of blocked sites, as it is considered to be a kind of trade secret. However, some vendors provide detailed descriptions of the criteria used to determine which sites should be blocked. Some vendors might allow users to add sites to the list, either in their own software or by sending sites to the vendor for review.

Stand-alone filtering tools also vary in the extent to which they can be configured by users. Some software packages allow users to make selections from a list of the categories they would like blocked. For example, a parent may wish to block explicit sex but not discussions of homosexuality as a lifestyle. Others might allow users to choose from a range of choices in any given topic area. For example, instead of simply blocking all nudity, these tools might allow users to choose to allow partial nudity while blocking full nudity.

Keyword-based blocking uses text searches to categorize sites. If a site contains objectionable words or phrases, it will be blocked.
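
In practice, keyword blocking amounts to little more than a word search over the page text, as in this hypothetical Python sketch (the word list is invented). Note that it flags a breast-cancer support page just as readily as an adult site, the failure mode discussed in question 2.5:

    # Hypothetical keyword blocker; the word list is invented.
    BLOCKED_WORDS = {"breast", "xxx"}

    def keyword_blocked(page_text):
        words = (w.strip(".,!?") for w in page_text.lower().split())
        return any(w in BLOCKED_WORDS for w in words)

    print(keyword_blocked("Our breast cancer support group meets Tuesdays."))  # True
    print(keyword_blocked("Grilled chicken with lemon and herbs."))            # False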

2.4) What's wrong with list-based filtering?

There are several problems with filtering based on lists of sites to be blocked.

First, these lists are incomplete. Due to the decentralized nature of the Internet, it's practically impossible to definitively search all Internet sites for "objectionable" material. Even with a paid staff searching for sites to block, software vendors cannot hope to identify all sites that meet their blocking criteria. Furthermore, since new web sites are constantly appearing, even regular updates from the software vendor will not block out all adult web sites. Each updated list will be obsolete as soon as it is released, as any site that appears after the update will not be on the list, and will not be blocked. The volatility of individual sites is yet another potential cause of trouble. Adult material might be added to (or removed from) a site soon after the site is added to (or removed from) a list of blocked sites.

Blocking lists also raise problems by withholding information from users, who may or may not have access to information describing the criteria used to block web sites. While some vendors provide descriptions of their blocking criteria, this information is often vague or incomplete. Several vendors have extended blocking beyond merely "objectionable" materials. In some instances, political sites and sites that criticize blocking software have been blocked.

This obscurity is compounded by practices used to protect these lists of blocked sites. Vendors often consider these lists to be proprietary intellectual property, protecting them with encryption that renders the lists incomprehensible to end users. As a result, users are unable to examine which sites are blocked and why. This arbitrary behavior demeans the user's role as an active, thoughtful participant in their use of the Internet.

2.5) What's wrong with filtering based on keyword searches?

Keyword searching is a crude and inflexible approach that is likely to block sites that should not be blocked while letting "adult" sites pass through unblocked. These problems are tied to two shortcomings of this approach:

Keyword searches cannot use contextual information. While searches can identify the presence of certain words in a text, they cannot evaluate the context in which those words are used. For example, a search might find the word "breast" on a web page, but it cannot determine whether that word was used in a chicken recipe, an erotic story, or in some other manner. In one notable incident, America Online's keyword searches blocked a breast cancer support group.

Keyword searches cannot interpret graphics. It is not currently possible to "search" the contents of a picture. Therefore, a page containing sexually explicit pictures will be blocked only if the text on that page contains one or more words from the list of words to be blocked.

3.0) The Platform for Internet Content Selection (PICS)

3.1) What is PICS?

The Platform for Internet Content Selection (PICS) was developed by the W3 Consortium - the guiding force behind the World-Wide-Web - as a protocol for the exchange of rating information. Paul Resnick - University of Michigan professor and the creator of PICS - described PICS in a Scientific American (March 1997) article:

The Massachusetts Institute of Technology's World Wide Web Consortium has developed a set of technical standards called PICS (Platform for Internet Content Selection) so that people can electronically distribute descriptions of digital works in a simple, computer-readable form. Computers can process these labels in the background, automatically shielding users from undesirable material or directing their attention to sites of particular interest. The original impetus for PICS was to allow parents and teachers to screen materials they felt were inappropriate for children using the Net. Rather than censoring what is distributed, as the Communications Decency Act and other legislative initiatives have tried to do, PICS enables users to control what they receive.

There are two components involved in the practical use of PICS: ratings systems, and software that uses ratings systems to filter content.

3.2) How does PICS-based filtering differ from stand-alone systems?

Stand-alone filtering products generally include lists of sites to be filtered and explicit filtering criteria. Purchasers of these products are tied to the filtering decisions made by the software vendor.

PICS-based software uses an alternative approach based on distributed sharing of ratings information. Instead of using blocking lists or keyword searches, programs that use PICS rely on standardized "ratings systems" to determine which sites should be blocked. Available from software vendors or from Internet sites, these ratings systems are used to describe the content of Internet sites (see question 3.7 for a description of how PICS works in practice). Users of PICS-based software are usually given the ability to choose which ratings system they would like to use.

As an open standard, PICS can be used for a wide range of applications. In addition to providing a means for blocking content deemed unsuitable for children, PICS might also be used to describe sites in terms of their educational value, potential for violations of privacy, or any other criteria that involve rating of Internet sites.

In some senses, programs that use PICS are much more flexible than stand-alone filtering software. Users of PICS software are not tied to the judgments of the software vendor, and the descriptions of the criteria used by the ratings systems are publicly available. However, users are currently limited to choosing between a small number of ratings systems, each of which has its own biases and viewpoints. Users that disagree with the popular ratings systems may be unable to use PICS in a manner that fits their needs and viewpoints.

3.3) What is a ratings system?

A ratings system is a series of categories, and gradations within those categories, that can be used to classify content. The categories are chosen by the developer of the ratings system, and may include topics such as "sexual content," "race," or "privacy." Each of these categories would be described along different levels of content, such as "Romance; no sex," "Explicit sexual activity," or somewhere in between. Prominent ratings systems currently in use include RSACi, SafeSurf, and NetShepherd.

A rating is a description of some particular Internet content, using the terms and vocabulary of some ratings system.

3.4) How are ratings systems developed?

The PICS developers and the W3 Consortium built PICS to be an open standard, so anyone can create a ratings system. Individuals and groups can develop ratings systems by defining categories and describing ratings within those categories. Once a ratings system is developed, it must be publicized to users and publishers.

3.5) Who rates sites?

The PICS standard describes two approaches to the rating of sites:

Self-Rating: Web site publishers can evaluate their own content and put PICS rating information directly into their web pages. Currently, this evaluation can be done through Web pages provided by developers of the major ratings services. (A sketch of what an embedded rating looks like follows this list.)

Third-Party Ratings: Interested third parties can use PICS ratings systems to evaluate web sites and publish their own ratings for these sites. Educational groups, religious groups, or individuals can rate sites and publish these ratings on the Internet for users to access.
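
As a rough illustration of self-rating, a PICS label is embedded in a page's HTML as a META tag. The Python sketch below emits a label in the general shape of the PICS-1.1 syntax; the service URL and the single-letter RSACi category codes (n, s, v, l for nudity, sex, violence, and language) are our assumptions based on published examples, so publishers should rely on a rating service's own label generator rather than anything like this:

    # Sketch: emit a PICS-1.1 label as an HTML META tag.
    # Service URL and category codes are assumptions, not authoritative output.
    def pics_meta_tag(service_url, ratings):
        pairs = " ".join(f"{cat} {val}" for cat, val in ratings.items())
        label = f'(PICS-1.1 "{service_url}" l r ({pairs}))'
        return f"<META http-equiv=\"PICS-Label\" content='{label}'>"

    print(pics_meta_tag("http://www.rsac.org/ratingsv01.html",
                        {"n": 0, "s": 0, "v": 1, "l": 2}))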

3.6) What PICS-based ratings systems can I use?

From a technical perspective, you can use any PICS-based ratings system. However, your practical options are somewhat more limited. While you might configure your browser to use "Joe's Internet Ratings", it's unlikely that many sites have ratings for Joe's system, so it wouldn't be of very much use.

Your browser software may influence your choice of ratings service. If you use Microsoft's Internet Explorer, only one choice (RSACi) is built into the initial distribution. To use other ratings services, IE users must download files from the 'Net and install them on their PCs.

Currently (as of September 1997), there are three PICS services that are being widely used or promoted:

RSACi: Sponsored by the Recreational Software Advisory Council (known for its ratings of video games), RSACi is probably the most widely used PICS ratings system today. RSACi's ratings categories include violence, nudity, sex, and language, with five rating levels within each category. As of September 1997, RSACi claims to have over 43,000 sites rated.

SafeSurf: Developed by the SafeSurf corporation, this system's categories include "Age Range," "Profanity," "Heterosexual Themes," "Homosexual Themes," "Nudity," "Violence," "Sex, Violence, and Profanity," "Intolerance," "Glorifying Drug Use," "Other Adult Themes," and "Gambling," with nine distinctions for each category.

SafeSurf and RSACi both rely on self-rating of Internet sites by web publishers.

NetShepherd: Based in Calgary, NetShepherd rates sites based on quality levels (1-5 stars). Unlike SafeSurf and RSACi, NetShepherd conducts third-party ratings of web sites. They claim to have rated over 300,000 sites. NetShepherd has also announced partnerships with firms such as AltaVista and Catholic Telecom, Inc.

3.7) How do I use PICS?

To use PICS, users start by configuring their browsers or PICS software to use a ratings system (such as RSACi or SafeSurf). Once the ratings system is chosen, users must examine each of its categories and choose a preferred level for each. In practical terms, this means deciding how much of each type of content they are willing to allow. For example, one ratings system's choices for nudity include "none," "revealing attire," "partial nudity," "frontal nudity," and "explicit."

Once these choices have been made, the browser software uses them to filter sites. When an Internet site is requested, the browser compares the site's rating with the user's selection. If the site has ratings for the chosen system and those ratings fit within the parameters chosen by the user, it is displayed as usual. If the appropriate ratings fall outside of those parameters (perhaps the site has "frontal nudity," while the user was only willing to accept "partial nudity"), access to the site is prohibited, and the user is shown a message indicating that the site is blocked.
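
In code, that comparison is a per-category check. A hypothetical Python sketch, reusing the assumed RSACi-style category codes from the sketch in question 3.5:

    # Hypothetical per-category comparison; category codes are assumed.
    user_limits = {"n": 1, "s": 0, "v": 2, "l": 2}  # at most "partial nudity"

    def site_allowed(site_ratings, limits):
        """Allow a site only if every category is within the user's limit."""
        return all(site_ratings.get(cat, 0) <= limit
                   for cat, limit in limits.items())

    print(site_allowed({"n": 3, "s": 0, "v": 0, "l": 0}, user_limits))  # False
    print(site_allowed({"n": 1, "s": 0, "v": 1, "l": 0}, user_limits))  # True

(This sketch treats a missing category as zero; as the next paragraph notes, real software usually lets users block unrated sites outright.)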

Since most web sites are not currently rated, most software provides users with the option of blocking out sites that do not contain PICS ratings.

In order to prevent mischievous children from changing ratings or disabling PICS altogether, most browsers can be configured to require a password before disabling PICS.

3.8) Should I rate my site?

The answer to this question will depend upon who's being asked.

RSACi, SafeSurf, and other proponents of ratings would obviously like everyone to rate their sites, while civil libertarians and opponents of ratings argue against any ratings.

Publishers of family-oriented sites or those who are trying to reach audiences concerned with Internet content might consider rating. Similarly, purveyors of adult material might rate their sites in order to be "good citizens".

3.9) What should a publisher consider before self-rating?

Web site publishers must decide which (if any) ratings systems to use. Since each ratings system requires a separate evaluation process, and separate modifications to web pages, it may not be practical for web-site publishers to use all of the popularly available ratings.

In evaluating ratings systems, publishers may want to examine the categories used by each system and the distinctions within those categories. Different systems will classify sites in different ways, some of which may misrepresent the content of web sites. For example, sites discussing safe sex might not want to be placed in the same category with pornographic sites.

Web site publishers might also consider the popularity of the ratings services. Currently (as of September 1997), there are only a few major ratings services. Publishers are free to use other ratings, but these may not be useful to the Internet users who rely upon the popular systems. This presents a dilemma for some publishers, who can either accept the ratings of the popular systems, even if those ratings misrepresent their material, or refuse to rate their sites, knowing that this might make their sites unavailable to some users.

Versions of Microsoft's Internet Explorer have provided an extreme example of this problem. Although IE allows users to use any PICS ratings system, RSACi is the only system built into its selection list. Since Internet Explorer is the most widely used PICS-capable browser (as of fall 1997, Netscape's Navigator does not support PICS), it seems likely that many PICS users will be relying upon RSACi. For publishers interested in reaching a wide audience, this market force may determine their choice of ratings system.

Finally, philosophical concerns may cause some people to decide not to rate. Web-site publishers who are not comfortable with the general content of available ratings systems, or who object to the concept of ratings, may choose not to rate their own sites.

MSNBC's troubles with ratings provide an ironic illustration of this possibility. Displeased with the RSACi ratings that its news reports would have required, MSNBC management removed all rating information from the site. MSNBC and other news organizations briefly discussed the possibility of creating a new ratings system specifically for news reporting.

While this proposal was eventually rejected, it illustrates some of the problems with content ratings. Well-funded publishers like MSNBC might be able to effectively create ratings systems that meet their needs, but smaller publishers who want to rate their sites may be forced to accept unsatisfactory ratings.

3.10) What concerns are raised by third-party ratings?

Since third-party ratings aren't validated by any technical means, they can easily be misused. Just as stand-alone filtering software can block sites for political or business reasons (even if those sites do not contain adult content), third-party raters might apply inaccurate labels to web sites in order to make sure that they would be blocked by PICS-compliant software.

To make matters worse, third party rating does not require the consent or even notification of a web-site publisher. Since third party ratings are distributed by third party "label bureaus," a web-site publisher may not know if her pages have been rated, or what the ratings said.

Third-party ratings also present significant technical challenges that may discourage their development. Unlike self-ratings, third-party PICS ratings do not reside on publishers' web pages. Instead, they must be distributed to users in one of two ways: through "label bureau" servers that filtering software queries for ratings as sites are requested, or through rating lists that users download and install locally. Either approach demands infrastructure and ongoing maintenance well beyond what self-rating requires.

3.11) What about sites that aren't rated? What if someone puts the wrong rating on a site?

PICS ratings can be truly useful for parents only if a significant percentage of the Internet's web sites are accurately rated. Currently, this is not the case. The 40,000 sites that have self-rated with RSACi, or even the 300,000 sites rated by NetShepherd, represent a small fraction of the total number of web sites available.

Some software, such as Microsoft's Internet Explorer, provides users with the option of blocking out any site that does not have a rating. This choice may be appropriate for some, but it severely restricts the available options. By blocking out most of the Web (including possibly some sites designed for younger users), this approach presents children with a severely restricted view of the world.

The accuracy of PICS ratings is obviously a concern. For example, unscrupulous purveyors of adult material might attempt to use an inaccurate rating to slip through PICS filters. In its terms of use, the RSAC reserves the right to audit sites in order to guarantee the accuracy of ratings. SafeSurf takes this a step further: its proposed Online Cooperative Publishing Act calls for legal penalties for sites that label inaccurately or refuse to rate. In June 1997, Sen. Patty Murray (D-Washington) proposed the Child-safe Internet Act of 1997, which called for similar penalties. While these legislative suggestions might be effective in promoting the use of ratings, they raise serious concerns in terms of First Amendment rights and possibilities for overly aggressive enforcement. Question 4.1 discusses these possibilities in more depth. There are currently no quality controls on third-party ratings.

These issues of quality and accountability would become even trickier if numerous schemes were to come into use. If there were dozens of PICS ratings schemes to choose from, publishers would not know which to choose, and users might not know which to trust.

3.12) What if I don't like the ratings systems that are available? Can individuals and organizations start new ratings systems?

Currently, there are two choices for individuals and organizations that are uncomfortable with the existing ratings systems.

The first - and currently the only viable alternative - is to avoid PICS entirely: to decline to self-rate, and to leave PICS filtering disabled in Internet browsers.

The second approach would be to develop a new ratings vocabulary, as an alternative to RSACi, SafeSurf, or other currently available ratings systems. This involves several steps:

The first step is the creation of a ratings system, including the categories to be covered and the distinctions within those categories. This would require a discussion of the values that the ratings system will represent, and of how those values should be expressed.

Once the system has been developed, sites must be rated. This can be done in one of two ways: publishers can be persuaded to self-rate their sites under the new system, or the system's backers can rate sites themselves as third parties (see question 3.5).

Once the ratings have been generated for the web sites, the new ratings system must be publicized to potential users. As described above, this could be expensive and difficult.

Given the significant resources that will be needed to effectively deploy a new ratings system, it seems unlikely that there will be a large number of PICS alternatives available in the near future. The developers of PICS are trying to change this through the PICS Incubator project, which offers resources to organizations interested in developing new ratings systems.

3.13) What's wrong with PICS and Internet ratings in general?

In theory, there are many useful applications of rating information.

Book reviews and movie ratings are only two examples of the many ways in which we use information filters. Used in conjunction with other information sources - including advertising and word-of-mouth - these ratings provide a basis for making informed decisions regarding information.

Unfortunately, PICS does not currently provide users with the contextual information and range of choices necessary for informed decision making. When deciding which movies to see, we have access to reviews, advertisements and trailers which provide information regarding the content. These details help us choose intelligently based on our values and preferences. On the other hand, PICS-based systems do not provide any contextual detail: users are simply told that access to a site is denied because the site's rating exceeds a certain value on the rating scale.

Furthermore, the limited range of currently available PICS ratings systems does not provide users with a meaningful choice between alternatives. Parents who are not comfortable with any of the current ratings systems may not find PICS to be a viable alternative.

Continuing with our analogies to other media, consider book reviews in a world where only two or three publications reviewed books. This might work very well for people who agree with the opinions of these reviewers (and, of course, for the reviewers themselves!), but it would work very poorly for those who have differing viewpoints.

Some might argue that the "success" of a single set of movie ratings offers a model for PICS. However, movie ratings are generally applied only to entertainment films from major producers. Documentaries and educational films are generally not rated, but similar web sites could be rated under PICS.

Movie ratings also provide a cautionary lesson that should be considered with respect to the Internet. Unrated movies, or movies with certain ratings, often have a difficult time reaching audiences, as they may not be shown in certain theaters or carried by large video chains. This has led to self-censorship, as directors trim explicit scenes in order to avoid NC-17 ratings. This may be appropriate for commercially-oriented entertainment, but it could be dangerous when applied to safe-sex information on the Internet.

Ratings systems also fail to account for the global nature of the Internet. Legal or practical pressures aimed at convincing Internet publishers to rate their own sites will have little effect, as these businesses or individuals have the option of simply moving their material to a foreign country. Furthermore, the existing ratings systems are of limited value to those in countries that do not share western values.

Concerns about unrated international material or differing cultural values could be addressed through direct censorship. For example, governments might use PICS ratings or proprietary filtering software to implement "national firewalls" which would screen out objectionable material. Alternatively, ratings might be used to "punish" inappropriate speech. If search engines chose to block sites with certain ratings (or unrated sites), or if browsers blocked certain ratings (or lack of ratings) by default, these sites might never be seen.

It is possible that a wide range of PICS ratings systems could come into use, providing families with a real opportunity to choose ratings that meet their values. The utility of PICS might also be increased by use of new technologies like "metadata" (data about data, used to describe the content of web pages and other information resources), which might be used to provide contextual information along with PICS ratings. However, these tools may not be available for general use for some time, if at all.

Some people confuse ratings with the topical organization that is used in libraries and Web sites like Yahoo. While no system of organization of information is neutral, topical schemes attempt to describe what a resource is "about". Rating rarely helps us find information resources topically and is usually too narrowly focused on a few criteria to be useful for information retrieval.

4.0) Alternatives

4.1) Can anything work?

The answer to this question will depend largely on the perspective of the asker.

If this question is taken to mean "Are there any solutions that would provide children with the ability to use the Internet without ever seeing material that is explicit or 'adult'?", the answer is probably yes. This would require a combination of three factors:

  1. Legislation requiring "accurate" ratings and specifying penalties for those who do not comply.
  2. Technical measures to prevent the transmission of unlabeled material, or any material from foreign sites (which would not be subject to US laws).
  3. Mandatory use of filtering software, using mandated settings.

The obvious legal, political, and practical problems with this scenario would certainly doom it to failure. While mandated standards have been suggested by some groups, it is quite likely that they would be found unconstitutional and in violation of the Supreme Court's Reno v. ACLU decision that overturned the Communications Decency Act. Furthermore, the accuracy of content ratings is a matter of judgment that could not easily be legislated. Practically, laws requiring the use of filtering software would be virtually unenforceable. Finally, if efforts aimed at "sanitizing" the Internet somehow managed to survive legal challenges, they would have a chilling effect upon speech on the Internet.

If the question is interpreted as meaning: "Are there any solutions that provide some protection from adult or objectionable material without restricting free speech?" the answer is much less clear. Stand-alone systems clearly don't meet these criteria, as they place users at the whims of software vendors, who may block sites for arbitrary reasons. In theory, PICS might fit this role, but the lack of a meaningful choice between substantially different ratings systems leaves parents and publishers with the choice of using ratings that they may not agree with, or that fail to adequately describe their needs or materials.

Describing speech as "adult" or "appropriate for children" is inherently a tricky and value-laden process. In the U.S., many people have attempted to prevent schools and libraries from using mainstream works like Huckleberry Finn and descriptions of gay/lesbian lifestyles. The fierce debates over these efforts show that no consensus can be reached. Increased use of filtering software would likely be the beginning, rather than the end, of debates regarding which Internet materials are "appropriate" for children, and who gets to make that decision.

4.2) I understand that there are many problems with filters and ratings. What can I do to protect my children?

The first thing parents should do is consider the extent of the problem. While some news reports might leave parents with the impression that the Internet is nothing but pornography, this is far from the case. In fact, it's unlikely that children would randomly stumble across pornographic material. Furthermore, many adult sites have explicit warnings or require payment by credit card, which further decreases the chances of children "accidentally" finding pornography.

Secondly, parents should take an active role and interest in their children's use of the Internet. For some children, this might mean restricting Internet use to closely supervised sessions. Other children might be able to work with clearly defined rules and guidelines. Parents should also work to educate children about proper use of the Internet. Just as parents teach children not to talk to strangers on the street, parents might discourage children from visiting certain web sites, divulging personal or family information, or participating in inappropriate chats.

Some parents might consider using filtering software, despite all of the potential drawbacks. Parents considering this route should closely examine the available products in order to understand their options and the implications of any choice.

For stand-alone filtering systems, this means investigating the criteria used to develop blocking lists, along with news reports describing the software. If possible, parents might look for stand-alone systems that allow users to view and edit the lists of blocked sites.

Parents considering the use of PICS systems should investigate the categories used by the various ratings systems, in order to find one that meets their needs. Information about PICS-based systems can be found at the home pages of the respective ratings systems.

In general, the use of a filtering product involves an implicit acceptance of the criteria used to generate its ratings. Before making this decision, parents should take care to ensure that the values behind the ratings are compatible with their beliefs.

Finally, parents should realize that the Internet is just a reflection of society in general. Much of the "adult" content on the Internet can be found on cable TV, at local video stores, or in movie theaters. Since other media fail to shield children from violence or sexual content, restrictions on the Internet will always be incomplete.

4.3) What roles can ISPs play?

Some have called upon ISPs to play a greater role in helping parents filter the 'Net for their children. There are two ways that ISPs might participate in these efforts:

ISP-Based Filtering: ISPs might do the filtering themselves, preventing their customers from accessing objectionable materials, even if those customers do not have their own filtering software. This requires the use of a proxy server, which would serve as a broker between the ISP's customers and remote web sites. When a customer of a filtering ISP wants to see a web site, his request goes to the proxy server operated by the ISP. The proxy server will then check to see if the site should be blocked. If the site is allowable, the proxy server retrieves the web page and returns it to the customer.

This approach is technically feasible. In fact, it is currently used by many corporations and by some ISPs that offer filtering as a service. However, proxying requires significant computational resources that may be beyond the means of smaller ISPs. Even if an ISP can afford the computers and Internet bandwidth needed, this approach is still far from ideal. In order to do the filtering, proxy servers must use stand-alone or PICS-based systems, so they are subject to the limitations of those technologies (see questions 2.4, 2.5, and 3.13). The shortcomings of existing filtering systems may prove particularly troublesome for ISPs that advertise filtering services, as these firms could be embarrassed or worse if their filters fail to block adult material. Finally, ISPs that filter material may lose customers who are interested in unfiltered access to the Internet.
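
For the technically curious, the proxy idea can be sketched in a few dozen lines of Python. This is an illustration only, assuming a plain blocklist (the host names are invented); a production proxy would add caching, error handling, header forwarding, and PICS support:

    # Minimal filtering HTTP proxy sketch: checks each requested host against
    # a blocklist before fetching the page on the customer's behalf.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse
    from urllib.request import urlopen

    BLOCKED_HOSTS = {"www.example-adult.com"}  # assumed blocklist

    class FilteringProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            host = urlparse(self.path).hostname or ""
            if host in BLOCKED_HOSTS:
                self.send_response(403)
                self.end_headers()
                self.wfile.write(b"Blocked by your ISP's content filter.")
                return
            with urlopen(self.path) as upstream:  # fetch for the customer
                body = upstream.read()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Browsers configured to use this machine as an HTTP proxy send full
        # URLs in the request line, which appear here as self.path.
        HTTPServer(("", 8080), FilteringProxy).serve_forever()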

Providing Filtering Software: Others have suggested that ISPs should be required to provide users with filtering software. While this might be welcomed by parents who are thinking about getting on to the 'Net (and by software vendors!), it could present a serious financial burden for smaller ISPs.

5.0) Where Can I Find More Information?

World-Wide-Web Consortium PICS home page

The PICS Incubator Project

RSACi

SafeSurf

NetShepherd

CyberPatrol

NetNanny

Fahrenheit 451.2: Is Cyberspace Burning? - The ACLU's Report on Filtering Software

Peacefire (anti-ratings group)

Computer Professionals for Social Responsibility (CPSR)

6) Credits

6.1) Who gets the credit?

This document grew out of discussions held by CPSR's Cyber-Rights working group and other concerned individuals during the summer of 1997. Andy Oram, Craig Johnson, Karen Coyle, Marcy Gordon, Bennett Haselton, Jean-Michel Andre, and Aki Namioka provided invaluable assistance. Please feel free to distribute or copy this document. Comments can be sent to hhochheiser@cpsr.org.

6.2) Who is CPSR?

CPSR is a public-interest alliance of computer scientists and others concerned about the impact of computer technology on society. We work to influence decisions regarding the development and use of computers because those decisions have far-reaching consequences and reflect our basic values and priorities. As technical experts, CPSR members provide the public and policymakers with realistic assessments of the power, promise, and limitations of computer technology. As concerned citizens, we direct public attention to critical choices concerning the applications of computing and how those choices affect society.