The Harmful Digital Communications Bill has been reported back and the select committee has made a few changes.
Significant changes
The Bill now incorporates the definition of IPAP (Internet Protocol Address Provider – roughly, an internet service provider) from section 122A(1) of the Copyright Act, and new section 17(2A) gives the District Court the power to order an IPAP to release the identity of an anonymous communicator to the court. Of course, this would reveal only the name of the person who owns the internet account that was used, not the name of the person who actually used it, so its utility will be limited.
The Approved Agency (still unnamed, still expected to be Netsafe) would be subject to the Ombudsmen Act, the Official Information Act and the Public Records Act in respect of the functions performed under the Bill. This is a welcome change, as it’s important that any agency performing state functions is covered by the statutes that provide proper oversight.
There have also been minor changes allowing the courts to vary orders made previously, clarifying which teachers can apply on behalf of pupils, and allowing threats to be treated as possible grounds for an order.
Safe harbour improvements
The major change has been to the section 20 Safe Harbour provisions of the Bill that were dumped into the previous version at the last minute.
The original proposal was terrible – content hosts (pretty well anyone who allows the public to submit comments, such as on a blog or forum) would be protected from legal action if they removed material immediately after receiving a complaint. It was obvious that this would be abused by those trying to silence people they disagreed with.
The good news is that some complaints will now be handled by “notice and notice” rather than “takedown on notice”. This means that upon receiving a complaint, the content host will forward it to the original author of the complained-about material (i.e. the person who wrote the comment). If the author agrees or doesn’t respond, the material will be taken down; if they dispute the complaint, the material will be left up – and the content host will still be protected from legal action under the safe harbour.
However, this does not apply when the original author cannot be identified, or when the author is unwilling or unable to respond within the 48-hour time limit. Indeed, the phrasing of the Bill reads as if content hosts must remove material, when in reality they need only do so if they wish to be protected by the safe harbour provisions.
Disturbingly, a number of other suggested improvements were not picked up by the select committee. In particular, we supported the ideas that complainants should have to make their complaints as sworn statements, and that complainants should have to have been harmed by the material themselves.
So while this is a significant improvement, we still fear that these provisions will be abused by serial complainers, internet busybodies and those who want to suppress their “online enemies” by any means possible.
What hasn’t changed
What’s more serious is what hasn’t changed. You can read our articles and submissions to see our full critique of the Bill but there are three points we wish to mention.
Firstly, the Bill sets a different standard for the content of speech online and offline. While we understand that online communications might require a different approach to available remedies, we firmly believe that the standard of speech should be the same. We note that the internet isn’t only for “nice” speech; it’s increasingly the place where we all exercise the freedom of expression guaranteed to us by the NZ Bill of Rights Act.
Secondly, rather than fixing the horribly broken section 19 – causing harm by posting a digital communication – the penalties have been increased. This section completely fails to recognise that some harmful communications have real value to society. For example, the idea that someone might be fined or jailed for harming a politician by posting online proof that the politician was corrupt is just horrendous. We honestly believed that the lack of a public interest or BORA test was a mistake, but it seems that the Select Committee really does want to criminalise all harmful online speech. This neutered and ineffectual internet is not one we wish to see. (Edit: this section is still subject to the BORA, as detailed in section 6(2).)
Thirdly, we worry that the bill will be ineffectual where it might be needed most while being most effective where it’s most problematic to civil liberties. Many of the example harms mentioned in the original Law Commission report would not be helped by this Bill – they happen overseas, or they happen too fast, or the people being harmed are just too scared to tell anyone anyway. The Approved Agency will be able to do a lot in the cases where anything can be done, but we’re not convinced of the need for the more coercive elements of the Bill.
Conclusion
There is no doubt that some people are being harmed by online communications. There is definitely a good argument to be made that the government could do something useful to help those people. We’re not convinced that the approach taken by the Law Commission and the Government is effective and we’re quite sure that it includes a number of unreasonable restrictions on the right to freedom of expression guaranteed to us all by the NZ Bill of Rights Act.
It seems inevitable that the Bill will be passed in its current form if there’s time before Parliament closes for the elections. We can but hope that a future government will repeal it and have another go.