Important Information About CIPA Compliance For Teachers, Librarians, Administrators, and Tech Coordinators
Welcome to FilteringInfo.org, N2H2's award-winning source of information about CIPA compliance for schools and libraries.
The Children's Internet Protection Act (CIPA) requires K-12 schools and public libraries that receive certain types of federal funding and provide Internet access to meet certain requirements, including adopting an Internet safety policy and using a technology protection measure that blocks or filters Internet access.
FilteringInfo.org provides a step-by-step guide to CIPA compliance, as well as other important compliance resources. FilteringInfo.org is a public service provided by N2H2, Inc., the leading supplier of Internet filtering software to K-12 schools. N2H2 was honored with a Market Engineering Award from Frost & Sullivan in 2019 for Website Leadership for FilteringInfo.org. We hope you find FilteringInfo.org a valuable resource in helping you understand and comply with CIPA.
Simply defined, Internet filtering software enables organizations or individuals to block access to specific types of web content that may be deemed inappropriate for a user or group of users.
There are multiple ways to accomplish content filtering, such as blocking web pages based on the appearance of certain words and phrases, or restricting access to a "white list" of approved sites. Studies have found the most accurate method of filtering is URL filtering or list-based filtering. URL filtering solutions rely on a database of identified and categorized Web sites. These databases can be very large, and to be truly effective must be updated on a regular basis to accommodate the thousands of new pages added to the Web each day.
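At its core, the list-based approach described above is a lookup of the requested host against a categorized database. The following minimal sketch illustrates the idea; the database contents, category names, and URLs are invented for illustration and do not reflect any vendor's actual data:

```python
# Minimal sketch of list-based (URL) filtering: look up the requested
# host in a categorized database and block it if its category is one
# the administrator has chosen to filter. All entries are invented.
from urllib.parse import urlparse

URL_DATABASE = {
    "badsite.example": "pornography",
    "casino.example": "gambling",
    "encyclopedia.example": "reference",
}

# Categories the administrator has chosen to block.
BLOCKED_CATEGORIES = {"pornography", "gambling"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host falls into a blocked category."""
    host = urlparse(url).hostname or ""
    category = URL_DATABASE.get(host)
    return category in BLOCKED_CATEGORIES

print(is_blocked("http://casino.example/poker"))        # True
print(is_blocked("http://encyclopedia.example/cipa"))   # False
print(is_blocked("http://unknown.example/"))            # False (uncategorized)
```

Real products layer group policies, category choices, and daily database updates on top of this basic lookup, which is why the size and freshness of the database matter so much.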
A number of scientific studies have evaluated filtering effectiveness and have repeatedly shown filtering to be highly effective at blocking offensive content while leaving appropriate content unblocked.
A quality filtering solution should give the administrator the ability to choose which categories are filtered, as well as create settings for multiple groups. Additionally, a quality filtering solution should allow the filtering to be disabled at individual workstations.
Since 1995, N2H2's server-based filtering solutions have relied on the industry's most comprehensive filtering database. Built and maintained by a combination of artificial intelligence and a staff of human reviewers, N2H2's database of over four million web sites is updated daily. Trusted by over 25,000 schools, 1,000 public libraries, and millions of users worldwide, N2H2 has set the standard for the industry.
Contact us for more information on how N2H2 can provide a filtering solution that's right for you.
In early December, Qwant launched an SEO contest on the query "qwanturank". A month later, it is possible to draw some initial conclusions from a number of tests carried out over that period. And the first conclusions are surprising, to say the least...
As you may know, the French search engine Qwant has been running an SEO contest on the query [Qwanturank] since December 2. In a previous article, I explained what I believe the engine's objectives are, and incidentally why I did not take part. I have also said several times that this contest could be an opportunity to test a few things and to see how the engine behaves, after an avalanche of criticism throughout a very hard 2019.
As the days went by, I therefore followed the results of the contest, and here are my few (modest) conclusions, four weeks after its launch.
First of all, as explained above, Qwant has faced much criticism since its creation, with some people claiming that the engine had neither its own index nor its own algorithm, but depended more or less on those of Bing (an agreement with Microsoft's search engine that was not disclosed when Qwant launched, but was finally revealed some time later). So it seemed interesting to test this point during the contest.
So I added the following directives to the site's robots.txt file, prohibiting Bingbot (Bing's crawler) from crawling my first article on the Qwanturank contest:
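The directives in question would have looked something like the following sketch; the article path shown here is a placeholder, not the real URL of the post:

```
# Block Bing's crawler from the contest article (path is a placeholder)
User-agent: bingbot
Disallow: /qwanturank-article/
```

A `Disallow` rule scoped to a single `User-agent` blocks only that crawler, which is what makes the test meaningful: any other crawler, including Qwant's, remains free to fetch the page.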
In this case, there are two possibilities:
Either Qwant has its own crawler, its own index, and its own algorithm, in which case the article would be indexed by the engine without any problem, independently of Bing.
Or Qwant relies on Bing's technology, in which case the article, being off-limits to that engine, would not appear in its SERPs.
Following the publication of the article, the queries tested (such as [Qwanturank]) clearly showed that the link in the Qwant SERP came from Bing, since the message "We would like to show you a description here, but the site you are consulting does not allow us to" appeared. No ambiguity possible here: the result was indeed provided by Bing:
At the same time, this was not entirely surprising: one can imagine that Bing's crawler is more "alert" than Qwant's, and that once the page had been crawled by Qwantify (Qwant's robot), the article would then find its place in the SERP... Except that... No! Three weeks after the publication of the article, it still does not seem to have been crawled by Qwant's robot, or at any rate not indexed (which would have been the expected outcome if Qwant had its own crawler), and the "by Bing" result is still present in the Qwant SERP.
Results of this test:
It is Bing's results that have been displayed in Qwant, from the start, for the queries tested.
Qwant never indexed the article, even though its robot was authorized to do so.
Note: I did not run the same robots.txt test with this article.
Note that, by contrast, a few minutes after the publication of my article, it was in first position for the query [Qwanturank] on Google Search and Google News:
Result of the query [Qwanturank] on Google just after the article was published.
Also for your information, for the past few days Google seems to have "taken note of" the contest query and now displays just about anything as "blue links" when it is entered. No doubt a way of fighting this type of contest…
One might imagine that this contest was also an opportunity to "test" how the Qwant algorithm works (if there is an algorithm, some wags will say) by observing the results of the contest. However, typing the query [Qwanturank] into Qwant, you quickly realize that the results are very strange:
Only about fifty blue links are displayed, then no more organic results.
Only home pages are offered, and no internal pages.
The results are extremely static over time, with very few changes:
To go further, I took it upon myself to record, every day between Christmas and early January (on the assumption that, given the money at stake, the SEOs taking part in the contest would not be idle during this period, any more than robots and algorithms, which know nothing of ski holidays and public holidays), the positions of the Top 10 of the Qwant SERP for the contest query. Here are the results:
It's pretty clear: apart from a slight swap of 2nd and 3rd place between two URLs for three days, no change occurred in the SERP for more than a week! No "serious" engine returns this kind of static SERP over time for a given query, let alone for an SEO contest where changes are, by their very nature, incessant. It's impossible... The result looks more like an ad-hoc, bolted-on ranking, outside any conventional "engine" algorithm, to the point that many people mocked it on Twitter, speaking of "manual, human work" without any algorithmic contribution or automation.
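The day-to-day comparison described above, spotting position swaps in a Top 10 between two snapshots, can be sketched as a small ranking-diff helper. The URLs below are placeholders, not actual contest participants:

```python
def ranking_changes(previous, current):
    """Compare two ranked lists of URLs and report position moves.

    Returns a list of (url, old_pos, new_pos) tuples, 1-indexed,
    for every URL whose position changed between the two snapshots.
    """
    old_positions = {url: i + 1 for i, url in enumerate(previous)}
    changes = []
    for i, url in enumerate(current):
        new_pos = i + 1
        old_pos = old_positions.get(url)
        if old_pos is not None and old_pos != new_pos:
            changes.append((url, old_pos, new_pos))
    return changes

# Example: the 2nd and 3rd results swap places, the rest are stable.
day1 = ["a.example", "b.example", "c.example", "d.example"]
day2 = ["a.example", "c.example", "b.example", "d.example"]
print(ranking_changes(day1, day2))
# [('c.example', 3, 2), ('b.example', 2, 3)]
```

Run daily against stored SERP snapshots, a helper like this makes the anomaly obvious: for a contested query, a live ranking algorithm should produce non-empty diffs almost every day.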
In any case, it seems obvious that the results returned have absolutely nothing to do with what a "normal" engine would provide for such a query in this context, to an almost caricatural degree. Doesn't this hand ammunition to those who accuse Qwant of not having its own technology? One has to wonder... I admit I was quite stunned by the results returned by the engine, and by their "simplicity", for the query targeted by the contest...
In any case, it is clear that if we wanted to test the Qwant "engine" algorithm on this query, it would simply not be possible, since the SERP for this contest is obviously decorrelated from the "classic" SERPs. That is unfortunately what I feared in my previous article…
This contest therefore gave me the opportunity to run a few tests on Qwant, which I have related in this article. Following this analysis, I really have to wonder whether Qwanturank is not a shot in the search engine's own foot, because my conclusions are, alas, telling:
The engine never crawled my first article and has displayed, from the start, only the description-less version provided by Bing.
The search results for the contest query are an ad-hoc bolt-on that has nothing to do with the "normal" SERP processing performed by a "serious" search engine.
Isn't this a godsend for the people who have been criticizing Qwant for some time?
In other words, what positives can Qwant take from the analysis of the first weeks of this contest? No matter how hard I look, I must admit that I can't find any. As a result, I am stopping my tests here, because I see no real point in continuing. Besides, people will say that I am shooting at an ambulance that is unfortunately already in very bad shape, if the many articles published about it in the French press over the last quarter are to be believed. Yet I was sincerely very interested in these first experiments, and I would have been the first to be happy to draw positive conclusions from them, conclusions showing that the engine had learned lessons from the past and from the enormous communication and strategy problems present from the start. Alas, it seems not, quite the contrary...
Hopefully others will carry out this type of analysis (or other tests) before the end of the contest, and perhaps they will draw different conclusions, more favorable to Qwant. For my part, I think I have tested and analyzed this contest enough. Good luck to its participants!
Here's an easy, step-by-step guide including a quick-reference chart for determining what requirements apply to your school or library, as well as information on how to take full advantage of available federal funding.
Congress has passed legislation that imposed certain requirements on K-12 schools and public libraries that provide Internet access AND receive certain types of federal funding. The requirements are different for schools and libraries. Further, the requirements also differ depending on what type of federal funding you receive.
› Find out which requirements apply to you
Even if your school or library has already installed a technology protection measure that blocks or filters Internet access and has an Internet safety policy in place, there still may be additional requirements. Do not assume that simply purchasing a technology protection measure will bring your institution into compliance. In order to be compliant, your software must be able to block access to visual depictions of obscenity, child pornography, and material deemed harmful to minors.
If the technology protection measure you are using does not meet these requirements, you must acquire one that does. The law does permit you to use technology that could be configured differently for children and adults and that could be disabled.
› Find out how N2H2 can help you meet the new filtering requirements
In order to provide schools and libraries with all the information they'll need to understand and comply with the new federal requirements for technology protection measures that block or filter Internet access, this convenient resource center has been created. Below you'll find links to a wide variety of useful tools to help you fully understand, comply with, and implement a technology protection measure that blocks or filters Internet access.
› Institute of Museum & Library Services (IMLS) guidelines for CIPA compliance for libraries — August 1, 2020
› FCC Order on CIPA Compliance (PDF, 740 KB) — July 24, 2020
› N2H2 Filtering FAQ for Libraries (PDF, 62 KB)
N2H2 now has an FAQ for public libraries that specifically addresses known areas of concern for libraries, such as unblocking at individual workstations, submitting anonymous unblocking requests to the library, and configuring different levels of filtering for minors and adults.
› U.S. Supreme Court Decision Upholding CIPA (PDF, 766 KB)
By a vote of 6-3, the U.S. Supreme Court has reversed a lower court ruling and upheld the constitutionality of CIPA.
› "Web Content Filtering Software Comparison" (PDF, 232 KB)
eTesting Labs for the U.S. Department of Justice, October 2001
Kaiser Family Foundation/University of Michigan, December 2019
› "The Facts on Filters: A Comprehensive Review of 26 Independent Laboratory Tests of the Effectiveness of Internet Filtering Software" (PDF, 73 KB) — N2H2, 2019
This downloadable document contains the section of HR 4577 which pertains to filtering requirements for K-12 schools and libraries that receive certain types of federal funding and provide Internet access.
The complete text of HR 4577.
To help schools and libraries understand what requirements apply to them, FilteringInfo.org has created an easy quick reference guide for determining what requirements apply to your school or library.
To help schools and libraries understand what requirements apply to them, FilteringInfo.org has created an easy quick reference guide for determining what policy requirements apply to your school or library, with links to sample policies that include the minimum policy requirements.
Public libraries and public schools receiving E-rate funding are required to "provide reasonable public notice and hold at least 1 public hearing or meeting to address the proposed Internet Acceptable Use Policy."
The Institute of Museum and Library Services (IMLS) is an independent federal grantmaking agency that fosters leadership, innovation and a lifetime of learning by supporting museums and libraries. It was created by the Museum and Library Services Act of 1996 (P.L. 104-208), which moved federal library programs from the Department of Education and combined them with the museum programs of the former Institute of Museum Services.
Detailed information on obtaining ESEA funding, ESEA programs, compliance, etc.
The Schools and Libraries Division (SLD) of the Universal Service Administrative Company (USAC) provides affordable access to telecommunications services for all eligible schools and libraries in the United States. Funded at up to $2.25 billion annually, the program provides discounts on telecommunications services, Internet access, and internal connections.