Tech Liberty NZ
Defending civil liberties in the digital age

What’s wrong with the Communications (New Media) Bill and can it be fixed?

Posted on September 2, 2012

The Law Commission's proposed Communications (New Media) Bill (PDF) is the result of their report on Harmful Digital Communications. They are proposing:

  • The creation of a new criminal offence that targets digital communications which are "grossly offensive or of an indecent, obscene or menacing character and which cause harm". Harm is said to include physical fear, humiliation, mental and emotional distress.
  • The establishment of an Agency (i.e. Netsafe) that will be able to assist and advise people suffering from unpleasant digital communications.
  • The establishment of a Communications Tribunal that will be able to respond to complaints and provide "speedy, efficient and cheap access to remedies such as takedown orders and cease & desist notices."
  • Amendments to the Harassment Act, Human Rights Act, Privacy Act and Crimes Act to ensure that the provisions of these laws can be applied to digital communications.
  • New requirements for NZ schools to work harder at stopping bullying of all kinds.

While sympathetic to the aims, we have some serious questions about the law and the thinking that lies behind it. This article discusses some of the problems that we see, talks about ways to resolve them and asks whether the problems are too great for some parts to be worth pursuing. We have arranged our arguments thematically and finish with our conclusions and recommendations.

Redefining what is acceptable language

A major part of the Bill concerns the creation of principles and rules about what types of language are acceptable in electronic communications. These principles and rules go significantly beyond current New Zealand law.

The Bill sets up 10 communication principles to be used by the Agency and Tribunal, although they state them as universal truths of the form "A communication should not...". While some of these are derived from existing law (e.g. harassment, threats), some of them are new such as that a communication should not be grossly offensive or indecent.

There are potential problems with some of them. For example, the statement that a communication should not make a false allegation should surely read a "knowingly false allegation", and when it says that a communication should not contain a matter that is published in breach of confidence, what does that mean for whistleblowing and leaks?

Principle 10 expands the realms of unacceptable denigration: "A communication should not denigrate a person by reason of his or her colour, race, ethnic or national origins, religion, ethical belief, gender, sexual orientation, or disability." The Human Rights Act currently forbids the denigration of people on the grounds of colour, race, or ethnic or national origins. Principle 10 obviously expands this significantly but, interestingly, it omits some of the other obvious categories from the Human Rights Act such as age, marital status, employment status and family status.

The report gives no reasoning for this expansion of unacceptable denigration, nor does it explain why it chose some grounds but not others. It does say that the Human Rights Act should catch up but this seems to be putting the cart before the horse.

The question has to be asked: where did these principles come from? The Law Commission's report says they're a bit of a grab bag from existing criminal and civil law. Are they a set of principles that we can all agree on? Shouldn't there be some more public discussion around them? And if these principles are so obvious and correct, why should they only apply to electronic communications?

But the principles won't even be applied to all electronic communications. Major media organisations subject to the Press Council or Broadcasting Standards Authority will be exempt from the Tribunal, even though these two bodies have very different standards than those proposed by this law.

This means that there will now be three standards that a communication could be measured against:

  1. communications made online and offline by a major media organisation (subject to current law and the Press Council or Broadcasting Standards Authority)
  2. communications not made electronically (subject to current law)
  3. communications made electronically (subject to current law and this new law)

The legality of the communication and the recourse provided to someone harmed by it will therefore change as the communication moves between those three modes. What may be acceptable if written down on paper, may then be unacceptable if a photo of that paper is emailed to someone, but then may be acceptable again if that photo is published on a mass media website. This seems highly undesirable and will lead to confusion when it comes to enforcement.

The Law Commission argues that this is because of the unique nature of digital communications - the way that they can be copied and republished so quickly. However, while this is possible, it will not always be the case and many of the communications that would fall under the jurisdiction of the Tribunal might just be between two people, seen by no one else. At the same time, it is possible to use non-digital communications (e.g. a poster on every noticeboard in a school or workplace) to quickly and efficiently harm someone. It would seem better to look at the actual harm done rather than try to pre-determine which communication modes are most harmful.

An escalation of harm

As well as the principles overseen by the Tribunal, the bill also includes a new criminal offence of 'causing harm by means of communication device' and expands the definition of harm to include emotional distress. By including emotional distress, the Law Commission admits that they are pushing NZ criminal law to a new extreme: in the past, criminally harmful speech has tended to be speech that leads to fear of physical damage to person or property.

The main problem with including emotional distress in both the offence and the Tribunal's guidelines is that it will be necessarily subjective. While the sections of the bill concerning the Tribunal allow for them to decide that a reasonable person would not suffer significant emotional distress, this is missing from the definition of the criminal offence. Indeed, it is only necessary that the sender of the message intended to cause emotional distress, not that the recipient actually experienced it.

Another unusual element is that the offending communication does not have to be directed at the complainant. Rather, the prosecution need only prove that the defendant (a) knew that the communication would distress the complainant, and (b) that the complainant somehow saw the message. This means that if you write something nasty about someone to a friend and the message is forwarded to that someone, you could be found guilty of a criminal offence under this law.

What is odd is that this new definition of criminal harm will only apply to messages or other matters that are communicated electronically. If we accept that intentionally inflicting emotional distress on someone by sending offensive communications should be criminal, shouldn't this apply to communications sent on paper as well? Why do we see this double standard between offline and online communications?

Lack of justice

The Bill has very little detail about how the Tribunal should work, especially when you compare it to the laws enabling other tribunals such as the Human Rights Review Tribunal. This seems to be a deliberate feature as clause 15(2) shows: "A Tribunal must consider and determine a complaint with as little formality and technicality and as speedily as is permitted by the requirements of this Act; and a proper consideration of the complaint; and the principles of natural justice." The report also refers to the need for the Tribunal to be fast and efficient, and acting in "internet time".

This might make sense from a purely functional point of view - except that complainants also have to show that they have attempted to resolve the matter through other avenues, and complainants other than the Police will have to first take the complaint to the Agency. Surely if the matter has already been delayed this much, there is room for the Tribunal to concern itself with fairness and justice as well as speed. This is especially important as it will often be dealing with analysis of complex social situations and making orders concerning fundamental rights such as freedom of expression.

The Bill also has no mention of the rights of the defendant and the other possible targets of orders (such as ISPs and websites) to be heard before the Tribunal or to request a formal (rather than informal) hearing. While some might see this as being included in "...and the principles of natural justice", we believe that it should be much more explicit to ensure that justice is done for everyone involved. In particular the Bill should provide guidance about how the Tribunal should proceed when the defendant is either unavailable or their identity is unknown.

Defendants and others subject to orders come off even worse when it comes to appealing a decision. Simply put - the Bill only allows for complainants to appeal a decision. This is so obviously unreasonable that we can only assume that it is a mistake in the drafting of the Bill.

If you fail to obey an order of the Tribunal (either by choosing not to or by being unable to) you can be punished with a fine of up to $5000 and a jail term of three months. This seems to be a rather heavy penalty, and of the three tribunals mentioned by the report (Disputes, Tenancy and Human Rights Review) it is the only one with a jail term as a possible punishment.

Finally, we note that the Tribunal has the power to order the defendant to apologise. Until such time as the Tribunal also gains the power to make the defendant sorry for what they have done, such apologies are surely worthless. Forcing a person to make a false apology (and lie well enough to at least sound somewhat sincere) makes a travesty of the process.

Technical difficulties

There are a number of difficulties with the mechanics of how the Tribunal would work.

The first and most obvious is that most of the internet (websites and users) are overseas and not subject to NZ law. The Tribunal can make orders against New Zealand websites and, where they can be identified, NZ authors on overseas websites and expect them to be obeyed. But if the same content is on (or moved to) an overseas site the Tribunal can do no more than politely ask them to take it down (something that the Agency could have already done with just as much authority).

Related to this are the difficulties in identifying people online. Sometimes it will be easy (e.g. Facebook and its real names policy) but at other times it will rely on being able to compel a website to reveal a user's identity (if they even know it in the first place). Again, in many cases the websites will be overseas and the Tribunal's orders will be as effective or ineffective as the Agency's requests.

The report implies that orders to ISPs to reveal user details will be effective in working out the identity of defendants. But an ISP can only be useful when an IP address used by the defendant is known, and even then it can only give the name of the account holder, not the name of the person using that internet connection who made the communication. This is not helpful when people in flats, businesses or schools all share internet connections.

Because in some cases there will be many anonymous people repeating the offensive communications, it seems clear that the Tribunal will have to issue orders against "any person". Even assuming they are in New Zealand, how effective will this be when most of the people doing it won't even know of the existence of any order? We can't expect most NZ internet users to keep an eye on the Tribunal to make sure they're not exposing themselves to legal jeopardy by publishing something forbidden online.

Finally, where the author cannot be forced to take down a post (probably because they have not been identified), the Tribunal can issue orders to the website or ISP to take it down. This can be technically quite difficult if ordered at the wrong level - i.e. if an ISP is ordered to take down a post on a website they host for someone else, they will typically have no ability to modify the content of the website and their only option will be to take the entire website offline, a clear case of using a sledgehammer to crack a nut. This would be somewhat mitigated if the subjects of orders had some way to appeal but there is no provision for this in the Bill.

Practicalities and effectiveness

Even with all these flaws and worrying impositions on the Bill of Rights, does the Law Commission's proposal have any chance of actually being effective at reducing harm caused by malicious communications? This leads us to one of the most worrying aspects of the report - that there is no serious analysis of the different types of harmful communications and therefore there is no corresponding discussion of how the proposals would have helped in each type of case (or even whether they would have helped in the cases given as examples in the report). For example, there are obvious major differences between the following scenarios:

  1. the anonymous internet "pile-on" against someone
  2. an in-person argument that has moved to mutual abuse on Facebook
  3. a struggle between two cliques on an online forum
  4. someone harassing their ex-partner via text message

All of these have the potential to get unpleasant enough that they cross the threshold where the Tribunal could get involved. But can the Tribunal help? Will the powers suggested for it by the Law Commission actually help in these sorts of cases?

The first major problem is that in many cases the harm will already have been done before the Tribunal gets involved. In any scenario, the situation first has to get bad enough that the complainant looks for a solution. Then they have to make an initial complaint to the Agency, who will try to assist them to resolve it. Finally, if that does not help, it will be escalated to the Tribunal. The chances are that in a significant number of cases the harm will have already been done or the situation will be resolved.

As said by the victim in one of the more horrible stories in the report, "in less than a day after it happened, I started getting more abuse at school". At that point the information was out, people were already acting cruelly in person and the problem was far beyond the scope of any order the Tribunal could make.

Secondly, a major recurring element in the stories describing the harms done by this type of bullying is that the victim suffers in silence without telling anyone. In these cases it is obvious that the Agency and Tribunal cannot help.

Thirdly, many websites have their own sets of standards. The types of communications that could trigger Tribunal action will often already be in breach of these standards. In that case the website could be expected to take down the offending material upon request of the victim or the Agency and once again the Tribunal will be unnecessary.

Then we get to the range of orders that the Tribunal can make. Obviously takedown and correction notices will be ineffectual if the material is coming from uncooperative overseas websites. Takedown notices on local or cooperative websites may work, but in many cases the offending communication will have already been grabbed by internet archives, syndication services and search engines that keep copies of what they retrieve. Other orders forbidding people from repeating allegations will quite possibly be ignored because the people doing it won't know anything about the order. Even in the perfect case where the Tribunal does have jurisdiction, the efficacy of any takedown order will be limited.

So, after getting through all of those, what sort of percentage of cases will be left where the Tribunal will be able to make a difference? Sadly the Law Commission has done no analysis and wasn't prepared to even make a reasonable guess. We suspect that it will be very few.

Fears for the future

In the report the Law Commission says that they do not wish to set up an internet censor but that is exactly what they establish in the Bill. The Tribunal has clear powers to order takedowns of material it finds offensive and to pre-emptively forbid people to post things online.

For those who think we are being paranoid, the Bill as proposed has already gone past the limited scope of only being used to censor communications that are causing direct harm to a specific individual (i.e. cyber-bullying, harassment, etc). It also includes a provision to allow the Chief Coroner to request the Tribunal to order material to be taken down if it breaches the law around the reporting of suicides. These provisions are not for the protection of an individual, but for the protection of society as a whole (on the grounds that reporting suicides will encourage more suicides).

Once you've gone this far, what's to say that other laws should not be enforced by the Tribunal? The Law Commission itself has recommended the establishment of an internet censor before in their review of the laws around name suppression. We could also use it to remove information about how to breach copyright, how to commit euthanasia, and a range of other social harms.

Some may argue that this is a good thing and will finally lead to the taming of the "wild west of the internet". Unfortunately this is where we, once again, come up against the international nature of the internet. Just because we ban this information being published on New Zealand based websites doesn't mean that other countries will also ban it, and New Zealand users will be freely able to download it from overseas.

This is the point at which someone mentions the internet filter run by the Department of Internal Affairs. It is currently used only to block child pornography, but why couldn't the Tribunal enforce all these various bans by adding new rules to the filter? We fear that this would lead to the sort of overly controlled and filtered internet seen in countries such as China and Iran.

Conclusion and recommendations

We think the report from the Law Commission shows poor quality thinking and a refusal to grapple with the realities of the situation. It fails to properly define the problem and then fails to explain how their proposed solution will fix it.

The off-hand way in which it recommends a significant extension of what is unacceptable in electronic communication surely shows that the Commission is still stuck in the past where digital communications are of secondary importance, rather than rapidly becoming the primary method that modern people use to exercise their right to freedom of expression.

The recommended lack of consistency between the rules for electronic and other forms of communication is muddy thinking at its worst. While they are correct that the internet and mobile phones give people new ways to indulge in some very old-fashioned cruelty and viciousness, we believe that any new law should concentrate on the harms being done and not the medium used to commit them.

The lack of due process in the Communications Tribunal is unjust, particularly when it comes to the rights of the defendant and the subjects of any orders to be able to defend themselves. That only the complainant is allowed to appeal the decisions of the Communications Tribunal is unconscionable.

And finally, and most importantly for a civil liberties group, we think that the recommendations are, at best, careless of the rights contained in the NZ Bill of Rights.

The bits we'd keep

  • We support the extension to the law around publishing intimate visual recordings.
  • We support the updates to the Crimes Act, the Harassment Act, the Human Rights Act and the Privacy Act that make it clear that they apply to online behaviour as well.
  • We also support the recommendations of the report that further resources should be put into attempting to limit bullying (both online and offline) at schools.

Criminal offence of causing harm by means of a communication device

We are prepared to accept that a case can be made out for making the very worst of deliberately harmful speech illegal. However, we see no reason why this illegality should only be limited to electronic communications. Surely a poison-pen letter delivered to the letter box can be as harmful as an email or a text message on a phone.

We recommend the following changes:

  1. That the complainant must show that the communication actually caused them harm.
  2. That it apply to all forms of communication and not just electronic communications.
  3. That it must be established that the writer/creator intended for it to be seen by the victim.

The Agency

We believe that the report makes a good case for the establishment of an Agency (possibly an expansion of Netsafe's current role) that can help people resolve problems around harmful digital communications. We see that this would be of significant benefit and would go a long way to resolving most of the problems with harmful digital communications that can be resolved.

We further recommend that the Agency should be charged with collecting statistics about the types of problems that it encounters and how they are resolved (or not). This will be valuable if at some point in the future it is determined that something like the Tribunal is needed.

Communications Tribunal

We believe that the Law Commission's report does not make a compelling argument for the establishment of the Communications Tribunal and that, even if it did, the proposal is lacking in too many respects. It would be a major imposition on civil liberties, in particular the right to freedom of expression and the right to a fair judicial process, while at the same time there is no serious attempt to demonstrate how it would solve the problems talked about in the report.

We further believe that the Agency will be relatively effective and that there will be very few cases where the Communications Tribunal could add much to the process. The Agency also does not require the proposed new legal and enforceable definition of acceptable online speech; dropping that definition would remove many of our objections to the Bill.

We therefore recommend that the establishment of the Communications Tribunal should be abandoned and that the relevant sections of the Bill be removed. If in the future it becomes clear that New Zealand would benefit from some sort of tribunal we suggest that this process should start again based on the experiences and statistics collected by the Agency.

If the Tribunal is to be kept, we believe that it should be sent back to the Law Commission to have another attempt, taking into account the realities of enforcement online. Furthermore:

  1. There should be protection for the interests of defendants and those subject to orders, including the rights to ask for a formal hearing and to appeal decisions of the Tribunal.
  2. The power to order apologies should be removed.
  3. The communication principles should be re-examined and should reflect current NZ law, not go beyond it. An expansion of what is unacceptable is a subject worthy of a report of its own.
  4. Disobeying orders of the Tribunal should only be punishable by a fine and not a three-month prison sentence.
  5. The determining factors should be the harm caused, balanced against the acceptability of the communication, not the medium the communication is made on; i.e. the law should be made technology-independent and apply to all communications.
  6. The exception for mass media companies should be removed.
  7. The section concerning breaches of the Coroner's Act should be removed as it is at odds with the purpose of the Tribunal.

About Thomas Beagle

Co-founder and spokesperson for Tech Liberty
Comments
  1. The Human Rights Act currently forbids the denigration of people on the grounds of colour, race, or ethnic or national origins.

    Please point me to the section, because I’m not seeing it.

  2. While the medium should not be relevant and there must be actual provable harm, surely it's more relevant whether or not the sender intended harm rather than that they intended the victim to see the communication.

    If we accept that hurtful communication can cause real and significant harm then it can cause that harm when sent to acquaintances of the victim. Or is it Tech Liberty's position that such cases are already covered by libel or other existing laws?

    • That’s a good point.

      The Bill uses the wording:
      ——-
      (1) A person (person A) commits an offence if person A sends or causes to be sent to another person (person B) by means of any communication device a message or other matter that is (offensive, etc)

      (2) The prosecution must establish that—
      (a) person A either—
      (i) intended to cause person B substantial emotional distress; or
      (ii) knew that the message or other matter would cause person B substantial emotional distress; and
      (b) the message or other matter is one that would cause substantial emotional distress to
      someone in person B’s position; and
      (c) person B in fact saw the message or other matter in any electronic media.
      (3) It is not necessary for the prosecution to establish that the message or other matter was directed
      specifically at person B.
      ——

      The first part (1) seems to include that it must be sent to them (and not their family/friends) and (2)(c) that they must have seen it, but then (3) says it doesn't have to be specifically directed at them.

      So possibly they should add the idea you refer to that you can deliberately harm someone by sending it to other people.

  3. I’m hoping that ‘false allegation’/’knowingly false allegation’ and the definition of ‘indecent’ get cleared up at the select committee stage, but this is sloppy drafting.

    Likewise, I’d hope that whistleblowing protections get blended in here, but I think I may be optimistic on that one. Without amendment, I can see this turning into an MFAT-style witch-hunt for anyone who uses the seemingly anonymous environment of the Internet to highlight bad work practices, all made easier for the employer thanks to IP address tracking.
    Given the broad set of principles they’re trying to establish (blended from all over the place, they say, but I’d like to know exactly where each one comes from), and the language that needs tidying up, I’d be expecting a decent period of time in select committee to shop these principles around and get the bill into a workable format. Some of the inconsistencies make me wonder just how well the Justice department knows its own legislation.

    As far as jurisdictional difficulties go… unless we want to open up the Internet to some heavy international governance arrangements (and trying to come to consensus on international views of governance will make New Zealand discussions look so productive and straightforward in comparison), this is the same difficulty we face in child protection cases, spam enforcement and other cross-border issues. Certainly in this case it makes it all seem ineffectual, but companies such as Facebook typically do supply information and take action on user accounts if there is evidence the person is breaching local law. Granted, there are a few exceptions – protecting users during the Arab Spring etc – but as far as NZ is concerned these are usually in line with Western principles of freedom of speech etc.

    I guess what I’m trying to say is – yes, it’s difficult to enforce these things across borders, but the difference between trying to enforce them without domestic legislation in place and with domestic legislation in place, is that overseas websites usually look to domestic law for guidance in how they should act (at least, from what I’ve observed). It won’t allow anyone to place charges against someone offshore who they are being ‘harmed’ by, but if other countries are using similar laws, there also will be some level of cross-jurisdiction compliance.

    I’d prefer that the Bill stick to the ‘bit we’d keep’ list – make it clear that just because a behaviour is online, doesn’t mean you don’t have responsibility for it, and fix up the oddity of visual intimate photos not being provided the same protection as recordings (recordings are already covered under the Crimes (Intimate Covert Filming) Amendment Act 2006). However, I’m optimistic that some of the definition problems are just drafting faults.

  4. I don’t think it goes that far. I think calling someone a N!**3r denigrates them. But if that’s all you’re doing/saying, then I dispute that HRA s 61 criminalises this.

  5. Agree totally. Well done for sticking with this.

    My point, which is a bit hard to express, is that this is not the way legislation should be.

    It takes a wide range of expression, beyond that which is currently proscribed and then carves out exceptions. There is a view that this doesn’t matter, because we’ll get that really cool judge who was dealing with Kim Dotcom and they’ll issue their judgments on twitter, etc.

    Hardly a society ruled by laws. More of a playground society, where Teacher will try and deal with any bad behaviour on an ad-hoc basis.

  6. One section I dislike is the exclusion of:

    “the subject-matter of the complaint can be dealt with under the complaints procedure of the Broadcasting Standards Authority or the Press Council”

    I can’t see why the established media should get a free-pass here if nobody else does. Especially since the new system is much harsher.

  7. Lots of food for thought, and plenty I can agree with. A few quick points about things that I can’t agree with:

    — I think it’s really overstating things to say that the Law Commission proposes “a new legal and enforceable definition of acceptable online speech”. What it does (rather like the Privacy Act, but tighter) is set out a series of principles that might be used to obtain a limited remedy if they are breached AND you can show serious harm AND the remedy is demonstrably justified AND factors like truth and public interest don’t outweigh the harm AND an expert agency has tried to resolve the issues first. That’s a far cry from your description here.

    –Whistleblowing will surely be protected by the public interest factor and the free speech provision.

    –The Law Commission gives lots of reasons to treat online harms differently. Billboards, for example, are subject to the Advertising Standards Authority, have not proved problematic, are not available to everyone to publish, aren’t instantly published to the world in a searchable, shiftable format, and are hard to post anonymously.

    –A requirement that a reasonable person would suffer significant emotional distress is probably implicit in the proposed criminal offence: it must be shown that “the message or other matter is one that would cause substantial emotional distress to [the victim]”

    –You’re forgetting that there’s a fast track process whereby the agency can quickly refer the matter to the tribunal when it’s serious and urgent

    –Natural justice plainly includes the right of the defendant to be heard

    –You really think that the provisions allowing the Chief Coroner (not Chief Censor) to ask for an order to enforce reports that break the suicide reporting laws is the top of a slippery slope? Looks pretty ring-fenced to me.

  8. This “legislation” is almost purely written for the likes of this website here: http://www.mediawhores.co.nz . If you look at that website the question very quickly forms in your mind- is this legislation so obviously ‘heavy handed’ and being rushed through so as to limit the damage that could be done to our so called ‘democracy’ by the ease of sharing real information and news? We have all watched in recent years as our Government have torn so called ‘democracy’ and even our own laws to shreds and simply do as they please (or for those who really control them). Every effort should be made by any and every New Zealander to insist this legislation is better thought out and defined – within existing laws that have always served us well – and simply extended out to include online communications.

