The safe harbour provisions in the Harmful Digital Communications Bill are a serious threat to online freedom of speech in New Zealand.
How it works
Anyone can complain to an online content host (someone who has control over a website) that some material submitted by an external user on their site is unlawful, harmful or otherwise objectionable. The online content host must then make a choice:
- Remove the content and thereby qualify for immunity from civil or criminal action.
- Leave the content up and be exposed to civil or criminal liability.
The content host has to make its own determination about whether a given piece of content is unlawful (which may be very difficult when it comes to subjective issues such as defamation, and impossible when it concerns a legal suppression order), harmful or "otherwise objectionable".
Furthermore, there is:
- No oversight of the process from any judicial or other agency.
- No requirement for the content host to tell the person who originally posted the content that it has been deleted.
- No provision for any appeal by the content host or the person who originally posted the material.
- No penalty for people making false or unreasonable claims.
We can safely assume that most content hosts will tend to play it safe, especially if they're large corporates with risk-averse legal teams, and will take down material when requested. They have nothing to gain and plenty to lose by leaving complained-about material online.
Serious ramifications for freedom of speech
Don't like what someone has said about you online? Send in a complaint and wait for it to be taken down.
This applies to comments on blogs, forums on auction sites, user-supplied content on news media sites, etc, etc. These are exactly the places where a lot of important speech occurs, including discussions about politics and the issues of the day. The debates can often be heated, and some sites are well known for encouraging intemperate speech, but these discussions are becoming an increasingly important part of our national discourse.
This law will make it too easy for someone to stop arguing and start making complaints, thereby suppressing the freedom of expression of those they disagree with.
The jurisdiction problem
Of course, this will only apply to websites that are controlled by people who have a legal presence in New Zealand. Overseas websites will continue to maintain their own rules and ignore New Zealand law and standards of online behaviour.
As currently written, these safe harbour provisions are just a bad idea. They're too open to abuse and we believe they're more likely to be used to suppress acceptable speech than to eliminate harmful or "otherwise objectionable" speech. At a very minimum, the complaint should have to be approved by the Approved Agency referred to in the other parts of the Bill.
That said, the whole idea of removing "otherwise objectionable" speech is also quite worrying. The Harmful Digital Communications Bill already has an expansive set of rules about what sort of harmful speech shouldn't be allowed online, and this "otherwise objectionable" category seems to extend them even further. One of the principles we stand up for here is that civil liberties such as freedom of expression are as important online as they are offline, and this law goes far beyond anything in the offline world.
We hope to have more comment and analysis on other aspects of the Harmful Digital Communications Bill soon.
Update 1st August 2013
The DIA have now confirmed that they did filter some sites hosted by Google and that this caused problems for both the filter and some internet users.
Officials provided an oral briefing on the incident reported regarding a degradation of service noted by some users of certain services. The Filter Operations Team worked with the provider of the services in question. It was discovered that hentai and CGI-based child abuse sites hosted on the blogspot.com domain, a resource operated by Google Inc, were included in the list in error. These sites were then shown to the IRG. It was then explained that a list refresh removed the sites in question and subsequently resolved this issue.
The problem was further compounded by the severe congestion in the networks of one of the upstream providers used by the system. A review of the Filter’s failsafe systems was undertaken. Steps have been added to ensure that the IPs of large hosting providers are flagged and placed on a white list with a reporting mechanism for the removal of the content from the site. Additional resources were requested from the upstream provider in question to ensure traffic congestion can be avoided in the future.
Back in 2011 we spotted the first indications of how the Department of Internal Affairs Internet filter, used by 90% of all New Zealand Internet connections, actually operates. At the time, we noticed an address - 184.108.40.206 - appearing where it shouldn't in traceroutes to a site.
Now that same address has popped up in traces to Google addresses, specifically googlehosted.l.googleusercontent.com (220.127.116.11). As noted in this thread on Geekzone, some people have been experiencing performance problems reaching some Google services.
These performance problems could be caused by a Google-sized load of traffic to that IP being routed to the DIA's filtering server, which may not be coping with the volume. Note that the filter will only be blocking one web address (URL) at that IP and letting the rest of the traffic through.
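If you suspect your connection is being routed through the filter, one rough check is to look for a filter-associated address among the hops of a traceroute. Here is a minimal sketch in Python that parses captured traceroute output for addresses on a watch list; the sample trace is illustrative, not a real route, and the watch list simply uses the address reported in this article:

```python
import re

# Illustrative traceroute output; in practice you would run e.g.
# `traceroute googlehosted.l.googleusercontent.com` and capture its stdout.
SAMPLE_TRACE = """\
 1  192.168.1.1    1.2 ms
 2  10.0.0.1       5.4 ms
 3  184.108.40.206  18.9 ms
 4  220.127.116.11  19.3 ms
"""

# Addresses previously associated with the DIA filter infrastructure,
# as reported in this article -- treat this as an illustrative watch list.
SUSPECT_HOPS = {"184.108.40.206"}

def suspicious_hops(trace_output, watch_list):
    """Return the hop IPs from traceroute output that appear in the
    watch list of filter-associated addresses."""
    ips = re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", trace_output)
    return [ip for ip in ips if ip in watch_list]

print(suspicious_hops(SAMPLE_TRACE, SUSPECT_HOPS))  # ['184.108.40.206']
```

A match only shows that traffic passes through a suspect address, not why; congestion elsewhere can produce similar symptoms, which is why the article asks readers to submit full traceroutes.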
Of course this won't affect you if you are using an ISP that doesn't use the filter. Check the list of ISPs here.
Making the link
As noted back in 2011, the addresses that appear in traces where they shouldn't be are controlled by Fastcom, who list the Department of Internal Affairs as an important customer and host infrastructure for the department.
This was always one of the fears when the filter was introduced - that it would reduce the stability and performance of the New Zealand internet. It appears that this has now happened. Two questions:
- Will the DIA remove the entry for this IP now that they realise the problems it's causing?
- How will the DIA block web addresses hosted at high volume websites such as Google (or Wikipedia) when the filter can't cope?
Seeking more information
Have you been experiencing any issues accessing Google? Can you provide a traceroute for us? Post a comment below.
Rumours and hearsay
Thanks to the people who contacted us with more information, we just wish you were prepared to speak on the record. So far we have heard the following from people that we typically find to be reliable:
- That the DIA has denied filtering that IP address.
- That a senior ISP engineer says that the IP address was definitely filtered by the DIA filter and that they have seen the relevant BGP records.
- That the filtering of at least one Google IP address has been removed but that there might be more.
- That Google was greatly annoyed by the block and contacted the Minister to get it removed.
We'll update these rumours as we can confirm/deny them. Please email any information to firstname.lastname@example.org. We will do our best to keep your name confidential if requested, but suggest using an anonymous remailer for the best anonymity.
Please send any updates or other useful links and we'll incorporate them. Last updated: 10/9/2012.
- What's wrong with the Communications (New Media) Bill and can it be fixed?
- Law Commission - Harmful Digital Communications
- Powers of the Proposed Communications Tribunal
- Lawyer Steven Price
- Lawyer John Edwards
- Stephen Bell at Computerworld
- Mike O'Donnell from Trademe at Stuff
- David Farrar at Kiwiblog
- Chris Barton at NZ Herald
- Richard Boock at Stuff
- Negotiation is the new black - the "Approved Agency"
- Trolls provide motivation for greater regulation
- Workshops on the Communications Bill are worth attending
Police Minister Judith Collins
- The creation of a new criminal offence that targets digital communications which are "grossly offensive or of an indecent, obscene or menacing character and which cause harm". Harm is said to include physical fear, humiliation, mental and emotional distress.
- The establishment of an Agency (i.e. Netsafe) that will be able to assist and advise people suffering from unpleasant digital communications.
- The establishment of a Communications Tribunal that will be able to respond to complaints and provide "speedy, efficient and cheap access to remedies such as takedown orders and cease & desist notices."
- Amendments to the Harassment Act, Human Rights Act, Privacy Act and Crimes Act to ensure that the provisions of these laws can be applied to digital communications.
- New requirements for NZ schools to work harder at stopping bullying of all kinds.
While sympathetic to the aims, we have some serious questions about the law and the thinking that lies behind it. This article discusses some of the problems that we see, talks about ways to resolve them and asks whether the problems are too great for some parts to be worth pursuing. We have arranged our arguments thematically and finish with our conclusions and recommendations.
The Law Commission has proposed the creation of a Communications Tribunal that will be able to respond to complaints about internet speech and provide "speedy, efficient and cheap access to remedies such as takedown orders and cease & desist notices." The Tribunal would be made up of one of a number of selected District Court judges, with the optional assistance of a technical expert where required.
We were curious to see what powers the proposed Bill would give the Communications Tribunal and how they would compare to those of the other tribunals mentioned in the report.
A future article will discuss the types of complaints that the Tribunal will deal with and the principles they are to use when doing so.
What powers would this Communications Tribunal have?
Once a complaint has been made and accepted by the Tribunal, they have certain investigatory powers:
- require any person to provide any document, information or things
- require any person (including the defendant) to give evidence.
Once the Tribunal has made the decision ("...with as little formality and technicality, and as speedily as is permitted...") it can order one or more of the following:
- remove any material from any online media
- forbid anyone from republishing or encouraging others to republish the same or similar material
- demand a correction, an apology or the right of reply
- publicly identify the author of a particular communication.
Disobeying the demand to produce documents or give evidence, or any of these orders, would be punishable by up to 3 months in jail and/or a $5000 fine.
Compared to other tribunals
In the Ministerial Briefing, they compare the Communications Tribunal to other tribunals such as the Tenancy Tribunal, Human Rights Review Tribunal and the Disputes Tribunal.
Firstly, we note that there is a major difference between the Tenancy and Dispute Tribunals (where the tribunal is arbitrating an existing agreement between two parties) and the Communications and Human Rights Review Tribunals where there is no pre-existing agreement between the people involved. This means that we think the Human Rights Review Tribunal is a better subject for comparison.
Secondly, disobeying any orders from the other tribunals does not result in a jail sentence but rather fines of between $1500 and $5000. The ability to back its decisions with a threatened 3-month jail sentence is a major difference in the powers of the Communications Tribunal.
Thirdly, the laws for the other tribunals are much more detailed as to how they are to perform their work. There are procedures, clarifications of who can appear and when, oath-taking, rights of appearance and notification, etc, etc. The proposed Bill is either unfinished, or the Law Commission really does want hearings to be a quick and dirty affair, something that may not be appropriate when dealing with issues that have important Bill of Rights implications.
Fourthly, the other tribunals do have some powers to order evidence and testimony - but legally privileged information is protected and the Human Rights Review Tribunal is subject to the Evidence Act.
Is there any defence/appeal?
There is no requirement for the defendant to be heard or to have a chance to put their case forward. (Lawyer John Edwards counters this by saying that the Tribunal's requirement to comply with the principles of natural justice would require that affected parties be given an opportunity to be heard.)
The complainant can appeal a decision to an Appeal Tribunal (made up of two District Court judges).
The defendant has no opportunity to appeal any decision, nor do other possible targets of an order (the ISP, webhost or 'any other person').
The Communications Tribunal would have very broad powers over internet content. Breaching one of its orders would result in a serious fine of up to $5000 or up to three months in jail. This contrasts with the report's statement that the Tribunal would be "protective, rather than punitive" and would "not have powers to impose criminal sanctions". If you refuse to follow the orders (possibly because you believe they are unfair, breach your freedom of expression, or are technically impossible to comply with), you'll find that punitive criminal sanctions quickly follow.
The Law Commission repeatedly mentions that the Tribunal should be "speedy" and "efficient" with "little formality". The proposed Bill is very light on detail when it comes to the nitty-gritty of running a Tribunal - presumably with the thought that this would just slow them down. They seem to be of the view that the Tribunal must react in "internet time", without quite realising that a result in days or even hours probably won't be good enough to avoid harm to the complainant.
The cases coming before the Tribunal are not always going to be easy, with internet flamewars and inter-clique battles typically leading to bad behaviour from all of the parties that will need to be unpicked properly to make a fair decision.
This lack of process and protection for the rights of the defendant to a fair hearing (including the right to silence) will surely lead to bad decisions that fail to take into account the principles of natural justice.
Moreover, the Tribunal is dealing with a very serious matter, the right to freedom of expression as guaranteed by the NZ Bill of Rights. This is not some petty dispute over who pays for the repairs to a car or whether the oven was cleaned properly on vacating a flat. The level of formality and respect to the rights of the participants is very different between the Communications Tribunal and the more directly comparable Human Rights Review Tribunal.
We believe that, even before you consider the grounds for complaining to the Communications Tribunal and the principles it will follow to make decisions, there are some serious problems with the Tribunal as conceived by the Law Commission. The proposed remedies are too expansive, the penalties for disobeying too harsh and the unseemly haste that will go into making a decision is not appropriate.
This post has been corrected on 22/8/2012 to clarify that only the complainant, not the defendant, can appeal an order of the Tribunal.
The Law Commission has released Harmful Digital Communications (PDF) - the rushed report into the "adequacy of current sanctions and remedies". According to the summary they are proposing:
- The creation of a new criminal offence that targets digital communications which are "grossly offensive or of an indecent, obscene or menacing character and which cause harm". Harm is said to include physical fear, humiliation, mental and emotional distress.
- The establishment of a Communications Tribunal that will be able to respond to complaints and provide "speedy, efficient and cheap access to remedies such as takedown orders and cease & desist notices." It is also envisioned that Netsafe would take a larger role in being a first port of call for people seeking help.
- Amendments to the Harassment Act, Human Rights Act, Privacy Act and Crimes Act to ensure that the provisions of these laws can be applied to digital communications.
- New requirements for NZ schools to work harder at stopping bullying of all kinds.
The last two of these seem innocuous so our response will concentrate on the first two.
New "digital communications" offence
While it is undoubtedly true that the internet has allowed people to be nasty to each other on a wider scale than before, we are still not convinced that new laws are needed.
This is especially true when the Commission believes that the law should forbid offensive speech that has only got as far as causing someone "significant emotional distress", a rather low bar when adolescents or other excitable people are involved. (The Commission acknowledges that this goes beyond the current bounds of NZ criminal and civil law.)
We are also concerned when it is proposed to make something illegal on the internet that wouldn't be illegal if it was published in some other way. Does it really make sense that the same message might be legal on a billboard in the middle of Auckland but illegal if it was then posted to the Trademe Forums? As we say in our founding principles, "We believe that our civil liberties don't just disappear when using the internet."
It seems likely that the new law will mainly be used as just another threat/weapon by people already engaged in internet battles.
All in all, we view this proposed new law with suspicion and fear that it will limit freedom of expression and cause more problems than it solves.
Establishment of a Communications Tribunal
It is always a concern when a new body with the power to censor is created, especially when it is envisioned that it should be "speedy, efficient and cheap". When you realise that it's going to be tasked with censoring communications on the global internet, you have to wonder just what they were thinking.
Even reading the summary paper you get the feeling that the Law Commission doesn't think the Communications Tribunal is going to do much good, citing problems with identifying people and establishing jurisdiction overseas. Obviously it's only really going to have jurisdiction in New Zealand and this is just going to drive people's nastiness offshore.
Furthermore, the Tribunal will consist of one of a number of selected District Court judges, and they're going to have the power to order ISPs and web administrators to take down content. This can be significantly more difficult than it sounds and seems like a significant threat to freedom of expression, especially in those cases where the original author cannot be found and therefore cannot defend themselves.
The Communications Tribunal seems to be a "at least we tried" measure, doomed to failure in all but a very narrow range of cases. We question whether it is worth doing at all.
We look forward to reading the full report and the proposed legislation and giving a fuller response when this is available.
We recently received a complaint from a German tourist saying that when he tried to access a couple of innocuous German political sites using the free wireless at Te Papa, a page was displayed saying that his access to those sites was blocked. Te Papa had implemented internet filtering software to control what websites people could access.
The tourist complained to Te Papa. They initially tried to fob him off, but eventually he got through to someone and those sites were removed from the filter. A good outcome, right?
Not So Simple
This incident raises a number of questions:
- Why is Te Papa filtering what people see on the internet?
- What type of content is being blocked?
- Who chooses which types of content to block?
- Finally, why are they using software that flags a German political website as "Pornography (Japanese)"?
Why censor internet access?
We spoke to Te Papa but they couldn't tell us why they felt the need to censor their wireless. They did know that they blocked file sharing protocols to reduce internet traffic but couldn't tell us why they were blocking some websites. We'd understand if Te Papa wanted to use some censorware on internet terminals available to children, but their filter goes far beyond that.
Are they worried that people will somehow download banned material? It's not their responsibility and it's not like they're monitoring phone calls to make sure people don't have illegal conversations.
Are they worried that people will browse offensive material (pictures/video) in a public place and annoy others? An increasing number of their guests have smartphones and "bring their own internet" and someone could as easily watch a porn DVD on a portable player. In any of these cases, it would be a simple matter of asking them to stop.
We reject the idea that internet providers (for that is what Te Papa is doing by providing free wireless) are in any way responsible for what an internet user does with that connection, in the same way that they aren't responsible if someone uses Te Papa provided water or electricity.
Te Papa's Filter
Te Papa could tell us that they are using internet filtering supplied by their internet service provider, Telstra Clear, but they had very little idea about how it works.
- They don't know why they're blocking some types of content.
- They don't know what type of content is being blocked.
- They don't know who decides what to block and what criteria they use.
- They don't really want to find out, saying that they're "happy for them [Telstra Clear] to make the decisions".
Any museum or art gallery is surely aware of the issues around censorship and free speech; Te Papa itself has been involved in certain controversies about what should be shown and to whom. Why has Te Papa chosen to censor the internet with so little thought about why and how? As our visiting tourist put it:
Seeing this happen at Te Papa, a flagship of the capital, tells me something about democracy and the importance of free speech and human rights in NZ.
We tend to side with the visiting German tourist - it's inappropriate for a place like Te Papa to be censoring the internet.
We suggest that worries about people accessing "bad material" over public internet are overstated. Any inappropriate behaviour (e.g. viewing internet pornography in a public place) can be solved by asking them to stop.
If an organisation decides to press on with censorship anyway, it would seem at a minimum that they should:
- Be able to tell people what sort of material is blocked and why they're doing it.
- Have a process for deciding what to block.
- Provide an easy way to appeal any incorrect blocking.
- Not use software that is as badly written as that used by Te Papa and TelstraClear.
Of course, once you look at all that, doesn't it just seem easier to let people have unconstrained internet access in the first place?
The following is a guest post from Matt Taylor about the operation of the government's internet censorship in New Zealand.
- Very few people (only 9%) knew whether their ISP used the government filter. The ISPs using the filter represent more than 90% of the NZ internet market.
- Less than a quarter (23%) wanted the government choosing whether to filter their internet connection.
- Two-thirds want the filter to include other, non-specified, content.
Tech Liberty's Comment
We've always been opposed to the government's internet censorship system but support the right of people to choose filtering for themselves or their families. We're pleased to see that the people of New Zealand agree with us, rejecting the idea of letting the government impose centralised censorship.
Unfortunately we already have such a system. While it is voluntary at the ISP level, their users get no say in the matter, and this survey shows that most are unaware that they are covered by it. We also note that with Telecom, Vodafone and 2 Degrees all having implemented the filter, there are no major providers of censorship-free mobile data in New Zealand, further undermining any voluntary aspect to the current filter.
At the same time it also seems obvious that the internet has a lot of disturbing content that you might want to block other than just child pornography. Therefore it makes sense that someone wanting "cleaner internet" at their home would be looking for a more general purpose filter than the government's one. A number of ISPs do offer such a service (either free or as an add-on) and it seems that they should be promoting this further.
In conclusion, it seems that the survey shows that the current government internet filter is implemented the wrong way for the wrong purpose and by the wrong people.
Tech Liberty made a submission to the Media Regulation review run by the Law Commission. The summary of our submission is as follows:
We recognise that "big media" still has a lot of influence in New Zealand but that this influence is declining as the internet gives people the ability to:
- self-publish ("little media")
- share and distribute self-published articles
- publicly critique the work of big media.
This change can be seen in the way that online media such as blogs used to be very reactive to work published in newspapers and TV, but now newspapers and TV are increasingly picking up stories from blogs and other forms of social media.
Much of the rest of the review was about how the media should be regulated but we believe that the need for greater media regulation has not been established.
Defining news media
The review suggests that regulation could be a trade-off for official recognition of news media, and spends a lot of time discussing who would be included in the definition of "news media". We believe any definition would either be so broad as to be useless or so narrow that it would miss out many people and publications that arguably should be covered. This is especially true as journalism continues to develop and change in the internet age.
Special privileges for news media
The review suggests that we need a definition because some laws refer to the news media to bestow special privileges. Our preference is that these privileges should be extended to all citizens (e.g. replace the media "fair dealing" section in the Copyright Act with a more general "fair dealing/fair use" provision for all people) or should be available to all people when they are acting as a journalist.
Furthermore, any organisation that wishes to include or exclude "news media" can make its own determination as to who that is, rather than relying on a government-mandated definition.
We do not believe that there is a need for an external regulator. Indeed, as the internet gives people the means to publicly criticise the output of big media, the need for a regulator is reduced compared to the days when only a very limited number of media companies could get their views out (due to limited airwaves or the need to own a printing press).
Current regulation is also generally quite ineffectual. The original message still goes out and then any correction is ignored as the issue is no longer "news". Regulation tends to be after the fact score-keeping at best.
Any publishing company or journalist who wishes to be taken seriously has the ability to form a group and create their own code of ethics and regulator. The Press Council is an example of this and we do not see why other media groups who wish to be taken seriously could not do the same.
Finally, if there were to be a regulator, our view is that it should take the form of an Ombudsman with the ability to make morally rather than legally binding decisions.
Malicious speech online
The second part of the review was about harmful speech online.
We agreed that malicious speech online can be a problem, just as it is face to face. Furthermore, the nature of the internet means that malicious speech can both spread further and remain available longer.
We believe that the law is limited in what it can do about people being nasty to each other, either online or in person. Even if current law could deal with these issues, the international nature of the internet and the inevitable jurisdiction issues would mean that only a small proportion of problems could be resolved.
That said, many of the more contentious disputes will be between people who know each other well and probably even live in the same area. The law should be able to deal with issues of harassment using existing laws (possibly with the tweaks identified by the Commission to ensure that online communications are definitely covered).
We reject the idea that speech online should be held to a higher standard than any other form of speech.
We do support the creation of a new crime of "malicious online impersonation" with the caveat that it must be very careful not to include obvious cases of parody and other forms of non-serious impersonation.
No ISP responsibility
We oppose any attempt to make ISPs responsible for taking down or blocking information either hosted on their network or available through it. This is because ISPs typically have no visibility or control over the material that their customers might store on servers hosted with the ISP. Typically an ISP will only have one option - passing the request on to the publisher or turning off the entire site. Closing down an entire site would seem a gross over-reaction to the content of one offending post or comment.
It does seem appropriate to us that an ISP might have a responsibility to pass on a takedown message to the site owner (similar to the copyright legislation) or, upon presentation of a suitable court order, reveal the identity of the site owner so that legal action can be taken.