Guest post: The operation of NZ’s internet censorship filter

The following is a guest post from Matt Taylor about the operation of the government’s internet censorship filter in New Zealand.

The Digital Child Exploitation Filtering System is New Zealand’s internet filter, run by the Department of Internal Affairs. If you’re with one of the participating internet service providers, you can’t access the content on the blacklist. The filter is meant to be used only to block images of child sexual abuse (and “think of the children!” gains buy-in very effectively). But unlike other censorship decisions, which must be released, the list and the process around it are secret, so no one really knows what’s blocked, and overseas experience suggests that filters rarely stay limited to their stated purpose.

Apart from the secret list, and the secret processes around the list, the filter is meant to be open and transparent. Here’s a quote from the February 2010 Independent Reference Group minutes:

“The Group suggested that the Department publish as much information about the system as possible. This would include regular statistics and a copy of the presentation [in the presentation given to the IRG, the operation of the filtering system, the compilation of the filtering list, and the appeal process was explained].”

The presentation never made it to the DIA website, so I requested it. Sticking with the trend of being a complete mess in regards to keeping records, they have lost it.

Here’s a similar PowerPoint they sent, shown at the NetSafe Conference in April 2010 (pdf).

To encourage some additional transparency, I submitted an Official Information Act request last month, along with Joshua Grainger. If you’d like to see the full responses, they’re here and here (pdfs).

Scope of the filter

From the February 2010 minutes:

“The Department has no intention to expand the scope of the filter beyond child sexual abuse images and has entered into a contractual agreement with the system’s supplier that it not do so.”

From the letter to ISPs advising them of the filter’s availability (pdf):

“The Department recognises that, to ensure public confidence in the DCEFS, the scope of the system must remain on child sexual abuse material and its operation must be open to scrutiny. Accordingly, the Department’s contract for the use of the software that supports the DCEFS constrains its use to filtering to child sexual abuse material.”

From the Common Questions and Answers page:

What assurances are there that the filter will not in future be extended to block content other than that intended?
The Department’s contract for the use of the software that supports the DCEFS constrains its use to filtering child sexual abuse material.

I requested the section of the filter contract that discusses the limitations of the filter (the DIA refuse to provide the full contract). I received a summary of it:

“While the Department has previously refused to release the whole contract with Netclean, it has referred to clauses in that contract as one of the reasons why the scope of the filtering system can’t expand. The following is a summary of the relevant conditions of the Customer Licence Agreement.

  • The primary goal of the NetClean Whitebox is to block access to child pornography.
  • In order to achieve the main objective, NetClean allow that even non-child pornography is filtered, as long as it is material which is illegal to possess under the country’s law and that the main objective for the installation is to block access to child pornography.
  • The filter must not be used to restrict freedom of expression, nor to prevent the transmission of information which in itself is legal to possess.
  • Furthermore, the installation of NetClean Whitebox must not violate the articles 18 and 19 of the Universal Declaration of Human Rights.”

Does this mean only child sexual abuse material that isn’t strictly child pornography can be blocked (the DIA say that the “bad” content is wrongly called child pornography)? Or anything that’s illegal to possess?

To me it seems like it’s wide open:

“NetClean allow that even non-child pornography is filtered”.

Appeals and anonymity

From the Code of Practice (pdf):

“5.4 The process for the submission of an appeal shall:
• be expressed and presented in clear and conspicuous manner;
• ensure the privacy of the requester is maintained by allowing an appeal to be lodged anonymously.”

I wondered how, if appeals are meant to be anonymous, the DIA can process appeals when no URL is given by the appellant.

The DIA says:

“If a user does not submit a URL when appealing, the Department does its best to identify the site that appellant was referring to. This is done by looking at the block logs to identify sites blocked shortly before and after the appeal form was accessed and ISP of the appellant. The sites identified during that period will then be reviewed.”

This makes sense, because blocks probably don’t happen that often. However, if the appellant’s ISP can be compared, other information is still being collected with appeals. It doesn’t seem like this comparison actually happens, though.

Here’s what one of the appeal reports says:

“Checked logs for sites blocked between 13:19 and 13:21. Sites in that timeframe identified as …”

No mention of ISP comparison.
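
As a sketch of how that log-window matching might work: take the block log, keep entries within a minute or so of the time the appeal form was accessed, and optionally narrow by the appellant’s ISP. The log fields and window size below are my assumptions for illustration; the Whitebox’s actual log schema hasn’t been published.

```python
from datetime import datetime, timedelta

# Hypothetical block-log entries; the field names are assumptions.
block_log = [
    {"time": datetime(2012, 3, 12, 13, 19, 40), "isp": "ISP-A", "url": "http://example-1.test/"},
    {"time": datetime(2012, 3, 12, 13, 20, 15), "isp": "ISP-B", "url": "http://example-2.test/"},
    {"time": datetime(2012, 3, 12, 14, 2, 0), "isp": "ISP-B", "url": "http://example-3.test/"},
]

def candidate_sites(appeal_time, appellant_isp=None, window=timedelta(minutes=1)):
    """Sites blocked shortly before or after the appeal form was accessed."""
    hits = [e for e in block_log
            if appeal_time - window <= e["time"] <= appeal_time + window]
    if appellant_isp is not None:
        # The DIA say the appellant's ISP can be compared too, though the
        # appeal report quoted above suggests this step isn't actually done.
        hits = [e for e in hits if e["isp"] == appellant_isp]
    return [e["url"] for e in hits]

# The "sites blocked between 13:19 and 13:21" check would look like:
print(candidate_sites(datetime(2012, 3, 12, 13, 20)))
```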

Number of sites filtered

The DIA stated in a 2009 press release that the number of sites being filtered was over 7000. By 2011 this had dropped to 400–700, which is much closer to the 500 or so URLs the Internet Watch Foundation blocks.

The DIA on the massive decrease:

“The number quoted in the press release on 16 July 2009 related to the sites on the list during the trial. As there was a 2 year gap between the trial and going live with the filter system, the majority of the sites were taken down or had ceased to operate.”

So were the blocked URLs not being regularly reviewed during the trial, which would explain the high number, or did the policy on what sites could be filtered change?

In an email from Peter Pilley at the DIA to Richard Baalham, Networks Manager at Callplus (Slingshot), he claims:

“During the trial we had 0 false positives as we [reviewed] the list each month to ensure it [was correct] and current.”

The high figure of 7000 could well have influenced ISPs to join the filter. Here’s what Allan Freeth, CEO of TelstraClear, sent to Rick Barker, Minister of Internal Affairs, regarding TelstraClear joining:

“We will add a filter to all web browsing by Clearnet and Paradise customers that stops browsers from accessing known child sex abuse sites around the world. There are more than 7,000 such sites…”

Another curious thing about the number of sites filtered: no URLs were removed from the filter between April 2011 and August 2011. All URLs are supposed to be reviewed monthly, and it seems unlikely that none were due for removal.

When asked whether the list was reviewed every month in 2011, the DIA said:

“Yes, it is this strict checking that is currently keeping our list so low.”

When asked why no URLs were removed between April 2011 and August 2011:

“On review, all sites continued to contain abuse material and subsequently had not been taken down by enforcement agencies in other countries or were still under investigation?”

“Were still under investigation” seems to be the key phrase in this answer. Whether a site still contains child sex abuse images is pretty clear-cut, and checking should take only seconds. An investigation isn’t required; it’s more likely that some sites simply weren’t checked at all, and were therefore still “under investigation”.

I raised the prospect last month that if child sex abuse sites could identify DIA access, they could serve innocuous content to the DIA instead of the child sex abuse content they show to everyone else.

The DIA say that in January 2012 they received 449 URLs for review from the public through the ChildAlert site.

The NetSafe PowerPoint says that reports through ChildAlert are received and pre-processed by ECPAT to remove false reports and filter out bad ones.

This means that a high proportion of the URLs received by the DIA should be suitable for filtering.

Yet only 50 of the 449 URLs submitted by the public in January 2012, roughly 11%, were subsequently added to the filter list.

(If you’re curious, they say that they reviewed over 500 URLs in January 2012, excluding the monthly review of all URLs on the filter list. That included 21 URLs supplied through the Interpol Worst Sites Project; “a number of other sites” that came to their attention as a result of investigations were also reviewed.)

Material filtered

A Russian child model website was discussed during the IRG’s March 2011 meeting:

“The Group was asked to look at a child model website in Russia. The young girl featured on the site appears in a series of 43 photo galleries that can be viewed for free. Apparently the series started when the girl was approximately 9 years old, with the latest photographs showing her at about 12 years old. The members’ part of the site contains more explicit photos and the ability to make specific requests. While the front page of the website is not objectionable, the Group agreed that the whole purpose of the site is to exploit a child and the site can be added to the filter list.”

When asked for anything held regarding this website, and whether the website was considered a case of clearly illegal, objectionable images of child sexual abuse, the DIA replied:

“The website in question is divided into a public area and a member-only area. The public area contains images of a young girl dressed in a variety of outfits, which would not be classified as objectionable. The member-only area contains more sexualised images of the same girl that are objectionable. The Independent Reference Group (IRG) agreed that, as the purpose of the site was to sexually exploit a child, it should be added to the filter list.”

The IRG misses the point that if you tell the public you’re only going to filter images of children being sexually abused, you can’t turn around and filter other content too, even if that content is abhorrent.

The DIA say that the Russian child model website was not considered to be a borderline case.

Because a whole website is being filtered when only part of it contains material we were told was going to be filtered, I asked the following:

“I understand photographs of real life children being sexually abused, CGI and drawings of children being sexually abused, and the Russian child model website are being blocked. Are any URLs being blocked that don’t come under that list?”

I received this answer:

“Material being blocked by the filtering system complies with the Code of Practice, which states:

2.1 The scope of the DCEFS will be limited to preventing access to known websites that contain publications that promote or support, or tend to promote or support, the exploitation of children, or young persons, or both, for sexual purposes.

2.2 The DCEFS will focus on preventing access to known websites containing child sexual abuse images.”

Joshua asked:

“Has the filter list ever contained sites with solely written material?”

The DIA replied:

“No, there are no sites on the list that contain solely written material.”

Note that this only answers whether the list currently contains such sites, not whether it ever has.

From a 21 October 2008 email from Peter Pilley at the DIA to Graham Walmsley, Wholesale General Manager at Callplus:

“We have over the last 2 years built a system for the purposes of restricting access to sites that host child sexual abuse materials such as images, movies, stories etc…”

Independent Reference Group members

Here’s what Nathan Guy, Minister of Internal Affairs, sent someone regarding Steve O’Brien’s membership of the IRG (O’Brien is the manager of the Censorship Compliance Unit at the DIA).

“I am advised that the InternetNZ submission on the Department of Internal Affairs’ draft Code of Practice for the filtering system suggested that membership of the IRG include a wide range of interests, including law enforcement and government. As Mr O’Brien is highly experienced in the enforcement of censorship law and would be working closely with the IRG, he was considered an appropriate appointment to the Group.”

Also, if you’re curious, InternetNZ and ISPANZ weren’t asked to participate in the IRG.

IRG reviewing sites

The DIA have released conflicting information about whether the IRG will actually look at URLs on the list, you know, to make sure they contain material that should be blocked.

From the Common Questions and Answers page:

Will the IRG actually review/view the list of sites?
The IRG will be able to inspect the filter list and have access to the inspectors’ reports on any of the sites blocked. They will also be able to check from DIA premises any particular website on that list if they have concerns about it.

In response to an OIA request that was disclosed with Joshua’s OIA, the DIA say:

“The Department will not be subjecting the membership of the IRG to the content of the websites on the filter list. As stated in the Code of Practice, the members of the IRG will be provided with:

  • the officers’ reports (which identify each website by URL and describe the contents)
  • details of all appeal applications and the resulting action taken,
  • reports of any technical issues with the filter or connections to any ISP,
  • such other information that may lawfully be provided to assist the IRG in fulfilling its function.”

If you’re curious, the IRG has reviewed one URL (the Russian child model website). They have the opportunity to review URLs at their meetings, but haven’t taken any of those opportunities up (if you’re confused about how that squares with the first sentence of this paragraph, so am I).

The secret list

Here’s what I asked:

“I understand requests for a full copy of the filter list have been previously declined. Could you please send me a list of just the domains from the list of URLs that are blocked, unless the whole domain is blocked (my assumption is if the whole site is blocked it’s a site only for child sexual abuse material. I’m looking for the sites that have URLs blocked, but also have non-child sexual abuse content on them).

If [the above] is not possible, the domains of search engines, file sharing locker services, and social networks that have URLs blocked, and a copy of the list with the first domain name part removed, but TLD and the rest of the URL intact.”

Their reply:

“Possession of child sexual abuse material is an offence that carries a maximum penalty of 5 years imprisonment. As the release of part of the URL of the websites being filtered would facilitate a search for such material, the Department is withholding the information requested … in terms of section 6(c) of the Act (where the release of the information is likely to prejudice the maintenance of the law).”

This is a stretch, especially as a reason for refusing to disclose which search engines, file lockers, and social networks have blocked URLs.

Note that all other censorship decisions are released: by law, the Classification Office has to publish its decisions, and it does.

The DIA did, however, release a count of the top-level domains (TLDs) of the websites on the filter list as at 12 March 2012:

  • .com – 283
  • .ru – 93
  • .net – 65
  • .info – 23
  • .biz – 6
  • .in – 6
  • .us – 5
  • .org – 4
  • .me – 2
  • .tv – 1
  • .ir – 1
  • .su – 1
  • .ws – 1
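
For what it’s worth, a breakdown like this is trivial to produce from a URL list. Here’s a minimal sketch with made-up URLs, since the real list is secret:

```python
from collections import Counter
from urllib.parse import urlsplit

# Made-up stand-ins for filter-list entries.
filter_list = [
    "http://example-a.com/gallery/1",
    "http://example-b.ru/",
    "http://example-c.com/img",
]

def tld_counts(urls):
    """Tally the final label of each hostname (.com, .ru, ...)."""
    counts = Counter()
    for url in urls:
        host = urlsplit(url).hostname or ""
        counts["." + host.rsplit(".", 1)[-1]] += 1
    return counts

print(tld_counts(filter_list).most_common())  # [('.com', 2), ('.ru', 1)]
```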

The prevalence of .com domains on the list is extremely confusing to me. The US Government loves taking down .com domains, along with .net, .org and .tv domains.

If they can take down a .com domain for copyright infringement (see Megaupload’s homepage), why are 353 domains (.com, .net, .org and .tv combined) that the US Government should be happy to take care of still on the list? Copyright infringement vs. child porn. Priorities, people, priorities.

I asked whether abuse reports and takedown requests are sent to hosting companies, law enforcement, etc. when URLs are added to the filter.

The DIA responded with:

“The Department works with partner agencies in other jurisdictions to get international sites removed.”

The IRG’s December 2011 report (pdf) states that:

“Additionally 18% of the users originated from search engines such as google images”.

I asked whether Google was informed of those images:

“We have a very good relationship with Google and they have been made aware of any objectionable links available via their services. The statement in the December 2011 report used Google Images as an example of a type of service. It was not a statement that 18% of users originated from Google Images.”

I asked how long the DIA takes to make Google aware of those objectionable links available through their services. The DIA replied:

“Google is advised of objectionable links available via its services as soon as is practicable.”

Investigator reports

I asked for a copy of all investigator reports held. I received a sample investigator’s report (available in the response PDF linked in the introduction).

Here’s what the DIA removed:

“Information that would identify the site, including a screen capture of the webpage, has been removed in terms of section 6(c) of the Act. Information that would identify officers involved in the operation of the filter has been withheld in terms of section 9(2)(g)(ii) of the Act (to protect officers from improper pressure or harassment).”

The filter and privacy, Google Analytics

I asked for a copy of any contract the DIA has with companies that provide internet services to power the filter, including web and domain hosts for the http://dce.net.nz website (that’s the website people are redirected to when the filter blocks a URL).

The DIA say they have “no contracts with providers of internet services that relate to the filtering system.”

I asked what data is collected when someone tries to visit a blacklisted site, including log data collected by the http://dce.net.nz web host:

“The filter only records the service provider name, the resource requested and date and time. No user data is stored.”

In some of the DIA’s reports, statistics on device type are included. Device type isn’t listed in the Code of Practice as data that’s collected. I asked whether other data is collected in the course of the filtering process that isn’t listed in the Code of Practice. The DIA said that no other data is collected.

This is from 6.1 of the Code of Practice (pdf):

“During the course of the filtering process the filtering system will log data related to the website requested, the identity of the ISP that the request was directed from, and the requester’s IP address.”

The Code of Practice says both that the requester’s IP address is logged and that the system will anonymise it. The DIA have previously said that the system retains the IP address for up to 30 days. The DIA clarify(?):

“When a person requests a webpage that is blocked, the IP address of the requester will be presented to the service so that blocking page can be sent to them. IP addresses are anonymised by the system itself, no record is kept. The filtering system anonymises IP addresses using a tool developed by Netclean. By not logging the data, the system prevents anyone from reviewing source IP. All IP addresses appear as 0.0.0.0.”
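
Taking that description at face value, the anonymisation amounts to never writing the source address in the first place. Here’s a minimal sketch of what a log-time scrub like that could look like; the fields mirror what the DIA say is recorded, but everything else is my assumption, not NetClean’s actual implementation:

```python
def record_block(isp_name, requested_url, timestamp, source_ip):
    """Log a block event without retaining the requester's IP."""
    entry = {
        "isp": isp_name,       # service provider name
        "url": requested_url,  # resource requested
        "time": timestamp,     # date and time
        "ip": "0.0.0.0",       # the source IP is never written to the log
    }
    # source_ip is needed transiently to send the blocking page back to the
    # requester, then falls out of scope; nothing identifying the user is kept.
    return entry
```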

The DIA additionally stated that data from the filtering system has never been used in support of any investigation or enforcement activity and that no data from the filtering system has been shared with other departments.

I asked for anything held discussing the implementation of Google Analytics on the http://dce.net.nz website. I also asked what the data Google Analytics provides is used for, and whether any privacy issues were raised regarding the use of Google Analytics:

“Google Analytics is a free service offered by Google that generates statistics about the visitors to a website, in particular the referrers used. Google Analytics is used to confirm other statistics generated from the filter and to provide better reporting to the IRG and public. The Department does not consider that the use of Google Analytics raises any privacy concerns.”

I think it’s quite significant that information about New Zealanders is being sent overseas to Google.

I asked whether the DIA has a contract with Google:

“Google Analytics is free software. The terms and conditions for the use of Google Analytics are available at http://www.google.com/analytics/tos.html.”

Curiously, under the privacy section of the Terms and Conditions Google states:

“You must post a privacy policy and that policy must provide notice of your use of a cookie that collects anonymous traffic data.”

http://dce.net.nz doesn’t have a privacy policy.

Google’s privacy overview for Google Analytics states again that:

“All website owners using Google Analytics are required to have a privacy policy that fully discloses the use of Google Analytics.”

On behalf of the DIA, Google also logs whether the visitor has been to the site before. This isn’t disclosed in the Code of Practice (pdf). Google Analytics also collects IP addresses:

“Google Analytics collects the IP address of website visitors in order to provide website owners a sense of where in the world their visitors come from. This method is known as IP geolocation.”

The IP addresses are not passed to the website owner (the DIA), but it’s unclear whether Google stores them after the geolocation process has taken place.

From the IRG’s August 2011 minutes:

“Andrew Bowater asked whether the Censorship Compliance Unit can identify whether a person who is being prosecuted has been blocked by the filtering system. Using the hash value of the filtering system’s blocking page, Inspectors of Publications now check seized computers to see if they have been blocked by the filtering system. The Department has yet to come across an offender that has been blocked by the filter.”

I asked the DIA to explain what this meant:

“Every image, photograph, document or movie found on a computer can be run through a hashing process that will generate, using a mathematical algorithm, a unique hash value for that file. A hash value is a set of numbers and letters strung together and once assigned this hash value cannot be altered. If the same image is hashed twice, the hash value will remain consistent; however, if even 1 pixel of an image is altered that new image will be assigned a new hash value.

When the Department seizes a computer or storage device as the result of exercising a search warrant, as part of the forensic examination of that device, the Department is able to look to see whether the offender has been blocked by the filter by looking for the unique hash value generated by objects on the blocking page.

While this information plays no part in the prosecution of an individual, it is useful in understanding the behaviour of persons who access child sexual abuse material and the effectiveness of the filtering system.”

This probably doesn’t take into account the fact that some people have visited the http://dce.net.nz website without being redirected there by trying to access a blocked URL (like me, and if you’ve clicked the link above, you too).
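
For the curious, the check the DIA describe amounts to hashing every file on a seized drive and comparing the digests against known hashes of the blocking page’s objects (say, a copy of its logo cached by the browser). Here’s a minimal sketch, assuming MD5 and a placeholder hash value; neither the algorithm nor the real hashes are public:

```python
import hashlib
from pathlib import Path

# Placeholder digest for a blocking-page object; the real values aren't public.
BLOCKING_PAGE_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}

def file_digest(path, algorithm="md5"):
    """Hex digest of a file; alter even one pixel of an image and this
    value changes completely, as the DIA describe."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def was_blocked(seized_dir):
    """True if any file on the seized device matches a blocking-page object."""
    return any(
        file_digest(p) in BLOCKING_PAGE_HASHES
        for p in Path(seized_dir).rglob("*")
        if p.is_file()
    )
```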

Correspondence with ISPs

I asked the DIA to send me any correspondence, electronic, written or otherwise, with ISPs regarding them joining or leaving the filter.

Here’s what I received:

  • An email (14/7/08) from the CEO of TelstraClear to the Minister of Internal Affairs and the Minister’s reply (20/08/08).
  • Emails between the Department and Callplus (21/10/08 and 30/10/08). Technical information regarding the operation of the filter and information regarding its location has been withheld under section 6(c) of the Act. The telephone numbers of officers have been withheld under section 9(2)(g)(ii) of the Act.
  • A letter from the Department to Telecom (29/09/09).
  • Emails between the Department and Telecom (1/11/10) regarding a draft press release.
  • A letter from the Department to ISPs (list enclosed) explaining the filtering system and inviting them to contact the Department for more information. The telephone numbers of officers have been withheld under section 9(2)(g)(ii) of the Act.

Other correspondence with Telecom was withheld under:

  • 9(2)(ba)(i) of the Act (to protect information which is subject to an obligation of confidence where the making available of the information would be likely to prejudice the supply of similar information, or information from the same source, and it is in the public interest that such information continue to be supplied);
  • 9(2)(j) of the Act (to enable the Department to carry on, without prejudice or disadvantage, negotiation); and
  • 9(2)(h) of the Act (to maintain legal professional privilege).

Here’s Allan Freeth, TelstraClear CEO to Rick Barker, Minister of Internal Affairs:

“We will add a filter to all web browsing by Clearnet and Paradise customers that stops browsers from accessing known child sex abuse sites around the world. There are more than 7,000 such sites…” “While we believe the Internet is a wonderful source of information and that people have the right to determine what they view based on personal taste, there is nothing positive about content that reflects the suffering of children.”

And here’s Rick Barker’s reply:

“While participation by ISPs in the filtering programmes will remain on a voluntary basis, I expect that customer demand will mean that most ISPs will join the programme.”

He also requested that TelstraClear retain information on who has been using an IP address at a specific time for longer, because ISPs only store that information for as long as they require it (which is exactly what they should be doing). Note that he thinks ISPs shouldn’t place so much importance on what the Privacy Act says:

“The importance of Internet Protocol (IP) address data to DIA investigations has been recently drawn to my attention. The identification of individual computer addresses and the ability to correlate this information with the location of those computers is vital to catch offenders who distribute images of child sexual abuse. I am advised that ISPs consider that, in terms of the Privacy Act 1993, they are required to dispose of information related to IP addresses once this information is no longer necessary for the operation of their businesses. While it is up to each ISP to determine how long they keep this information, I was concerned to learn that some ISPs retain this information for only a very short time. I hope that we can continue to build on the successful partnership between government and business and that TelstraClear will continue to support my Department’s investigations by retaining IP address data for a longer period of time.”

Telecom was given as a reason why other ISPs should join the filter. In a 21 October 2008 email to Callplus, the DIA claimed that “Telecom is coming online very soon”. Telecom didn’t release a press release saying it was joining the filter until 3 November 2010, more than two years later.

In October 2008 Telecom still had doubts about the filter, including the legality of it.

Here’s a portion of a 29 September 2009 letter from Keith Manch, Deputy Secretary, Regulation and Compliance at the DIA, to Dean Schmidt, Telecom Senior Executive Government Relations, and Grant Fraser, Telecom Senior Solicitor.

“Telecom’s cautious approach to date is understandable. However, as you are aware there is a compelling case that any ISP’s participation in the website filtering system is lawful.

This case is based on the argument that redirecting a get request to the Whitebox and then to the Department’s server is not an interception. In addition, even if a get request is a communication, and we suggest that it is not, then it is certainly not a private communication, because there can be no reasonable expectation of privacy in respect of a request that is analogous to the address on an envelope. Finally, even if a get request is in fact a private communication, there might be an argument that the ISP is a party to that communication.

Telecom should feel reassured that making out any one of these four points would be enough to ensure that the prohibition in section 216B of the Crimes Act 1961 is not breached.

If Telecom has any residual concern that redirecting a get request into the website filtering system is an interception of a private communication, then we suggest it proactively obtain the express or implied consent of its users, through the use of on-line terms and conditions of use. This would ensure that Telecom is a party to the communication, and that the offence provision in section 216B would not apply.

The Department has considered whether to utilise the provision in the Crimes Act to make an Order in Council exempting an interception device from the provisions of Part 9A. The Department does not intend to do so as we consider this unnecessary in light of the points made above. We do not see the Whitebox software as an interception device, and as a result think it would be inappropriate and confusing to seek an Order in Council premised on it being such a device.

Finally, I note your concern that regardless of the strength of our view that what is occurring is entirely legal, someone may seek to challenge it. While I accept that the potential for challenge to arise cannot be completely discounted, I suggest that this risk is minor in comparison with the benefits of joining the website filtering system. Should a challenge emerge, to the extent the Department is able to assist to overcome those proceedings, we would do so.”

And Telecom did add the following to their terms and conditions for broadband, Xtra, mobile broadband, and mobile:

Department of Internal Affairs Digital Child Exploitation Filtering System
Telecom will intercept communications for the purposes of the Department of Internal Affairs’ Digital Child Exploitation Filtering System and in continuing to use Telecom’s services you acknowledge and consent to this.

You can also see the final changes to Telecom’s press release through emails sent between Telecom and the DIA’s PR staff.

Telecom Retail CEO Alan Gourdie’s quote was changed from:

“The abuse and exploitation of children is intolerable and this filter works to block access to the worst-of-the-worst child exploitation websites.”

to

“The abuse and exploitation of children is intolerable and this filter works to block access to known child exploitation websites.”

and

“The system will be applied in coming weeks.”

was added to the bottom of the release.

IRG minutes refer to detailed traffic reports and information about patterns that are given to ISPs.

In the March 2011 IRG meeting minutes:

“Officials noted that more detailed reports on traffic through the filtering system is being distributed to each ISP. ISPs use this data to assist in the management of their systems, including the operation of their internal filtering systems that they offer customers.”

And October 2010:

“Officials noted that the data obtained from the filter can demonstrate patterns of requests for blocked websites that may be of interest to ISPs. This information includes the 50 most blocked sites and the time of day that the filter is most active but cannot identify particular ISPs. The Group agreed that the DIA should draw any such patterns to the attention of ISPs.”

I asked for this information, but received this:

“The information has been withheld under section 9(2)(b)(ii) of the Act (would be likely unreasonably to prejudice the commercial position of the person who is the subject of the information).”

Integrity of the list

I asked whether a URL could be added to the filter list without the approval of three inspectors and without the knowledge of the IRG, and what the limitations are that would prevent that from happening:

“No. The addition of a URL to the filter list requires three inspectors of publications to agree that the website comes within the scope of the filter system. Once a change to the filter list is agreed, only one officer has the ability to edit the filter list. As the task of reviewing the filter list is shared between members of the Censorship Compliance Unit it is unlikely that the same three inspectors will be involved in the review of a website.”
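
The rule described boils down to a separation-of-duties check: three distinct inspectors must agree before the one officer with edit access changes the list. A trivial sketch of that constraint (the names are illustrative; the DIA’s actual tooling isn’t public):

```python
REQUIRED_APPROVALS = 3

def may_add_url(approving_inspectors):
    """A URL is eligible for the list only once three distinct
    inspectors of publications have signed off."""
    return len(set(approving_inspectors)) >= REQUIRED_APPROVALS

print(may_add_url(["insp-1", "insp-2", "insp-2"]))  # False: only two distinct
print(may_add_url(["insp-1", "insp-2", "insp-3"]))  # True
```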

What does the filter achieve?

I asked whether the DIA has any statistics or figures to back up what they say on the Common Questions and Answers page:

“In the long term, if it is made more difficult for persons with a sexual interest in children to access this material, the market will decline and fewer children will be exploited.”

The DIA replied:

“The Department firmly believes that if the market for child sexual abuse material is reduced, then fewer children will be abused to support that market. The problem is a global one, to which the Department’s website filtering system can only make a small contribution. The Department therefore has no statistics or figures to confirm that the filtering system has led to fewer children being exploited.”

Chief Censor

I asked whether the Chief Censor has been consulted over decisions relating to the filter.

I received this reply:

“Many of the publications blocked by the filter have been the subject of classification and are therefore on the online database of classified material that is accessible on the Office of Film and Literature Classification website.”

ISPs that were asked to participate/sent a letter about the Digital Child Exploitation Filtering System

If you want to know who was asked (pdf) but didn’t cave, here’s the list:

  • Actrix
  • ASC Data
  • Airstream Metworks [Networks?]
  • Airnet NZ
  • BorderNET
  • BorgWiFi
  • Compass Communications
  • Plain Communications
  • Cybermedia New Zealand
  • Enternet Online
  • Evolution Wireless Consultants
  • Teldave Communications
  • Farmside
  • Freenet
  • GetRheel
  • Go2 Internet
  • AGRE Enterprises
  • Helix Wireless Ltd
  • Internet Hawke’s Bay
  • ICONZ
  • Inspire Net
  • KC Internet
  • Kinect
  • Kiwi Online
  • KTSA Internet
  • NATCOM
  • Netsmart
  • Netspeed Data
  • NZNET Internet Services
  • NZWireless
  • Orcon Internet
  • PlaNet Internet
  • PrimoWireless
  • Slingshot
  • Snap Internet
  • TelstraClear
  • thepacifiicnet [thepacificnet?]
  • The Packing Shed
  • thinair Communications
  • Uber Networks
  • Vodafone New Zealand
  • Web World
  • WirelessWeb
  • WIZwireless
  • Woosh
  • WorldNet Services
  • Xnet
  • Xtreme Networks

Matt Taylor is a student who blogs about politics, privacy, and the internet, among other things. Follow him on Twitter @MattTaylor.

One thought on “Guest post: The operation of NZ’s internet censorship filter”

  1. “The presentation never made it to the DIA website, so I requested it. Sticking with the trend of being a complete mess in regards to keeping records, they have lost it.”

    This from the agency that is responsible for administering the Public Records Act? Embarrassing.
