
Original articles by Tech Liberty

Speech about RealMe, big data & power

Edited text of a speech given by Thomas Beagle at the launch of What If – “an education and action campaign working to stop data collection and sharing by the NZ State and private corporations for the purposes of social control and exploitation, and working for community control of information resources for the benefit of all”.


The technocrats have a utopian view of our data driven future. As the NZ Data Futures Forum puts it, they plan to “unlock the latent value of our data assets and position us as a world leader in the trusted and inclusive use of shared data to deliver a prosperous society.”

  • They promise that we’ll be healthier, with population wide tracking to predict and therefore prevent diseases.
  • They promise that government services will be both cheaper and more effective through better targeting of those who need them.
  • They promise that we’ll be wealthier, with businesses able to offer new and exciting products based on our individual needs.

Indeed, is there anything that government and business couldn’t do if they had enough data and some smart people to analyse it?

Now, this is going to require a lot of data. And when you’re collecting a lot of data you’ve got to make sure that it’s accurate.

One of the things that’s particularly important is making sure that we have the right person. There’s no point in targeting John Andrew Smith with a medical checkup when it’s actually John Adam Smith whose genetic analysis shows a predisposition to a particular condition.

Wouldn’t it be easier if everyone in the country had a single electronic identity, one that we could use as a digital key across all these systems to ensure that we had the right person?

 

RealMe

And this is where RealMe comes in. It’s a joint venture between the Department of Internal Affairs and NZ Post and, in their own words: “RealMe lets you easily and securely prove your identity online, plus access lots of online services with a single username and password.”

The sales pitch is aimed at making it easier for the citizen consumer. Get a RealMe account and access a wide range of critical services that require strong proof of identity such as govt agencies, the health system, banks, and so on.

It’s important to note that there are two sorts of RealMe accounts. You can get as many unverified accounts as you like – but if you want to use the more useful services you will need to get your account verified and your photo taken at an NZ Post shop. You’re only allowed one of these.

RealMe is of particular appeal to financial institutions because of their new responsibilities to identify their customers and report suspicious transactions to the government as a result of the Anti Money Laundering and Countering Financing of Terrorism Act. Kiwibank, the BNZ and TSB Bank are using RealMe, with more expected to follow, although uptake has been slower than expected.

RealMe itself doesn’t store any data about people, but it does enable two services that use it to share data if the person gives them permission. For example, if you apply for medical insurance, you can use RealMe to freely choose to give the insurer secure access to your medical records.

There’s not much more to RealMe, but there doesn’t have to be. It provides two vital components to enable data sharing on an ever larger scale – a key to identify a person, and a pipeline to share the data. It’s an important building block in the creation of our glorious shared data future.

 

Issues with RealMe

Sadly, utopia is not assured. Let’s look at some of the issues.

Firstly, data sharing. While the people who developed RealMe seem to have good intentions, I can’t help feeling that they seem rather naïve. It’s great that data sharing through the RealMe service is voluntary and done under the control of the user, but does anyone really believe that’s how it’s going to work?

If you want health insurance, you will be obliged to give them access to your medical records. Credit applications will demand access to your bank accounts. You could freely refuse – at the price of being turned down for what you’re applying for.

And at some point I can assure you that there will be a small law change allowing the IRD full access to whatever data they want through the RealMe service.

There are other agencies that also have the power to override our privacy choices. The Police, SIS and GCSB can all legally access the information in the systems that RealMe have so kindly linked together, and we’d never know that they’d done it.

Secondly, it seems that RealMe will inevitably evolve into a de facto digital identity card; the “papers please” of the internet age. As processes move online, everyone is going to need a RealMe account and opting out will not be an option.

But there is a deeper philosophical problem with having a single verified identity. Do we actually want to use the same identity for dealing with the government, banks, Trademe, and a variety of social media sites? Will there be increasing pressure to use our ‘official’ identity everywhere? I see important advantages in being able to present different faces to people – to the people we work with, our parents, our children, our friends, our various communities.

And, of course, RealMe has a big future. It’s going to be available whenever the government thinks up a new reason why it needs to track us and spy on us. We don’t just have to worry about what it’s being used for now, we have to worry about what will be built on it in the future.

To think of just one example, something that worries governments and businesses alike is the inability to conclusively identify who did what online. It seems possible to me that in ten years’ time we’ll be obliged to connect to the internet using our RealMe identity.

With everything you do online linked back to your RealMe ID, the internet truly will be the greatest surveillance machine ever built.

 

Dystopia

However, it’s when you add large scale data collection and analysis that you realise how this technocratic utopian vision can all too easily become a dystopia.

The same data that can be used to target assistance to those who need it, can be used to penalise those who transgress. Has an algorithm decided you’re feeding your children too much junk food? Did you spend time helping at the local community centre when you should have been looking for a job? Our data shows you were out in the car when you said you were sick last Tuesday. Just how sick were you?

Citizen, justify yourself!

 

Big Data

RealMe is just one more component of the big data transformation of our society.

I don’t think that the big data juggernaut can be stopped. Every day the technology to watch, collate and analyse data is getting cheaper and more powerful. It’s the price of the modern internet and computer driven society.

And personally, I’m still enough of a utopian that I’m not even sure that we want to stop it.

But we know that people react differently when they know they’re being watched. We know that people value their privacy and feel powerless when others know their secrets. Can freedom of expression survive in a surveillance state? Will dissent, so necessary in a democratic society, wither under the all seeing eye?

So while we can’t stop it, there is a very clear need to control it. To make sure that we get the benefits while not accidentally creating a society we don’t want to live in.

 

What can we do?

However I do believe that this is possible. We can’t control what foreign companies and governments do, but we can set limits on what our own government can do, and we can pass laws that control what New Zealand companies can do.

This isn’t going to be easy. We do have the Privacy Act, but the technocrats have the ear of government and they’ve already announced plans to repeal the Privacy Act and re-enact it in a form even more friendly towards data sharing. But even then, it’s not just privacy that we’re worried about, but power and control.

To stop this trend, to set up real protections, we’re going to have to persuade our fellow New Zealanders that we need them.

We have the power to decide what sort of country we want to live in. We can reject the surveillance society and the subsequent crushing of our democracy. I hope this meeting is another step on the way to doing so.

 

Problems with Customs having the power to force decryption

It seems obvious – when you enter the country Customs can force you to open a briefcase to look for illegal drugs, so why can’t they force you to decode an encrypted file on your computer so they can look for information about illegal drug smuggling?

Customs have issued a set of papers discussing a planned review of the Customs & Excise Act. In the Powers paper, they are asking for the power to force people to hand over the passwords for their electronic devices or face penalties.

Unfortunately the analogy breaks down when you consider what would actually happen in the real world.

  • If a person tries to enter New Zealand with a locked briefcase and refuses to open it on request, the Customs officer gets a hammer and chisel and forces it open.
  • If the person tries to enter New Zealand with a laptop containing a file that cannot be read and the person doesn’t hand over the key, the Customs officer can do nothing.

The important thing to note is that with a locked physical object there is always the option of literally forcing the issue. Any refusals are merely a delaying tactic.

The situation with encrypted files could be any of the following:

  1. The file is just random information used by an application (e.g. disk performance testing). In this case the person who owns the computer cannot provide the key to decrypt it because there isn’t one – but the Customs people can’t tell whether that’s the case (a properly encrypted file looks like random noise).
  2. The file was not put there by the owner of the laptop but was placed there by someone else – either part of the operating system and pre-loaded applications, or by a software install, or by malware, or by someone else who borrowed the computer for the weekend. In these cases the person who owns the computer can’t provide the key because they don’t know it.
  3. The file is an encrypted file containing illegal material that could see the person go to jail for a number of years. They refuse to provide the key and choose to pay the (theoretical) $500 fine instead.

In all these cases there is nothing that the Customs officer can do to overcome either the ignorance of the person or their unwillingness to comply. The issue cannot be forced because a modern encryption system can’t be cracked without the proper key.
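
To make the point concrete, here’s a minimal sketch in Python (purely illustrative, not anyone’s actual inspection tool) of the sort of statistical check you could run on a file. A properly encrypted file and a file of genuinely random test data both score close to the maximum of 8 bits of entropy per byte, so a check like this simply can’t tell them apart:

```python
# Minimal sketch: measure the byte entropy of a file. A properly encrypted file
# and a file of genuinely random data both come out near the 8 bits/byte maximum,
# so a test like this cannot distinguish ciphertext from meaningless noise.
import math
import sys
from collections import Counter

def byte_entropy(path):
    """Shannon entropy of the file's bytes, in bits per byte (maximum 8.0)."""
    with open(path, "rb") as f:
        data = f.read()
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

if __name__ == "__main__":
    print(f"{byte_entropy(sys.argv[1]):.3f} bits/byte")
```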

There’s also no easy way for the Customs officer to tell which situation they’re dealing with. Is that person saying they don’t know anything about any encrypted files on their laptop telling the truth or lying?

The worrying thing is that if you make the penalties extreme enough to intimidate someone who really does have illegal files into handing over the key, those same penalties will also fall on innocent people who either don’t have any encrypted files or don’t have the keys for them.

And, of course, someone who really was bringing in illegal files is much more likely to store the information online somewhere, enter the country with a completely clean laptop and download it once they’re here. Or they might use an encryption system that supports a “Police Key” and a “Real Key”, where handing over the “Police Key” just presents some fake innocuous files.
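
For illustration only, here’s a toy sketch of how such a decoy-key scheme could work, using the third-party Python cryptography package (this is not how any particular product implements it):

```python
# Toy sketch of a "decoy key" container: two payloads are encrypted with two
# different keys and stored together. Whichever key is supplied decrypts only
# its own payload; the other blob remains indistinguishable from random bytes.
from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

def make_container(real_data, decoy_data):
    real_key, decoy_key = Fernet.generate_key(), Fernet.generate_key()
    container = [Fernet(real_key).encrypt(real_data),
                 Fernet(decoy_key).encrypt(decoy_data)]
    return container, real_key, decoy_key

def open_container(container, key):
    f = Fernet(key)
    for blob in container:
        try:
            return f.decrypt(blob)   # only the matching payload will decrypt
        except InvalidToken:
            continue
    raise ValueError("this key does not open anything in the container")

container, real_key, decoy_key = make_container(b"sensitive files", b"holiday photos")
print(open_container(container, decoy_key))  # hand over this key: only decoys appear
```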

Conclusion

We haven’t even considered the civil liberties issues such as being able to protect your most personal files from government snoops, or that Customs has long been suspected of exceeding its powers to do searches on behalf of the Police.

Importantly, things that work in the physical domain don’t always transfer cleanly across to the digital domain. There are real issues with how any such power to force people to hand over keys would be used in practice.

Giving Customs this power might catch a few naive criminals but it’s not going to catch people who are even halfway serious about personal security – and we’re worried that too many blameless people might get caught up in the net, forced into the difficult task of trying to prove that they don’t know something.

The GCSB’s brake on innovation

It started with a Tweet from Steve Cotter, CEO of REANNZ:

Before we go any further let’s unpack some of those acronyms and add one more:

  • REANNZ – Research and Education Advanced Network New Zealand, the government owned company that operates the Advanced Network.
  • SDN – software defined networking, where the behaviour of the network is controlled by software rather than fixed hardware configuration.
  • NCSC – the National Cyber Security Centre, part of the GCSB.
  • GCSB – the Government Communications Security Bureau, New Zealand’s signals intelligence agency.
  • TICSA – the Telecommunications (Interception Capability and Security) Act 2013.

So this is a statement by the CEO of a government owned company whose purpose is to “establish and operate the Advanced Network in order to promote education, research and innovation for the benefit of New Zealand” saying that they can’t do the research and development work they need to do because the bureaucrats in the NCSC at the GCSB are holding them back.

Apparently the NCSC were willing to help, but the law was so inflexible that making any significant change – the kind of thing you might want to do quite frequently on an experimental network – was going to require the full notification and authorisation procedure. When an exemption was requested, the reply was that one would be extremely unlikely to be granted.

But wait, there’s more

Apparently Google has also been involved with research and development into SDN in New Zealand. We’ve been told by multiple sources that they were so annoyed by the TICSA’s requirements and the NCSC’s administration of them that they have closed the New Zealand section of this project and redeployed the hardware to Australia and the USA. This can only be seen as a loss to New Zealand.

This is a problem

We think it’s a real worry that companies like Google and REANNZ, who are both pushing the boundaries of network research, are giving up in New Zealand due to the constraints imposed by government legislation.

It’s exactly the sort of thing we worried about in our submission to the government about the TICS Bill:

It will introduce a layer of unnecessary bureaucracy and slow down development of services. It will lead to network operators making “safe” choices that they know will be accepted by the GCSB rather than making the best decisions.

Some people have suggested that these companies, REANNZ and Google, just needed to work harder to jump through the NCSC’s hoops. The reality is that they obviously thought that this was not worth the effort and they abandoned the work. How many other companies in New Zealand are experiencing these exact same problems and deciding to just give up… or spend their research dollars in countries with a friendlier environment?

We stand by our original position that a spy agency can’t intercept traffic on one hand and then provide security advice on the other. We don’t believe that New Zealand’s national security is enhanced by giving the GCSB more control of our telecommunications networks than any other spy agency has in any other comparable country. We don’t believe that network operators should have to answer to a layer of micro-managing government bureaucracy to run their businesses. We think that this is in direct contravention of the GCSB’s statutory objective of contributing to the economic well-being of New Zealand.

The TICS Act is proving to be a brake on innovation. It needs to be changed.


More on the story from Juha Saarinen at the NZ Herald.

Can the NZ Police search your phone if you’re arrested?

If the NZ Police arrest you they also have the power to search you. In light of recent decisions in Canada and the US amongst other countries, we had two questions:

  1. Can the Police also search your mobile phone or other smart device if you’re arrested?
  2. Can the Police force you to unlock it if it is secured by a password or fingerprint?

We asked the Police and while the answers aren’t as in-depth as we’d like, we thought we’d share what we got combined with our own analysis.

Firstly, if the Police can legally search you (they have a warrant, you’re in the vicinity of a legal search being executed, you’re suspected of being involved in certain classes of crime, etc), section 125(1)(l) of the Search & Surveillance Act explicitly allows them to search your phone or other data device.

Furthermore, section 130 of that Act can be used to compel assistance (i.e. you must unlock it) if they are doing a legal search. Note that the “no self incrimination” clause is generally understood to refer to the information used to unlock, not the information that is revealed by being unlocked.

The Police also have access to a range of tools used to access the information on such devices. In 2013 the Police Electronic Crime Group searched 1309 mobile phones and other devices. This number doesn’t include any searches at the District level (stats are not recorded) or by officers on the street persuading people to let them examine their phone.

Secondly, section 88 allows the Police to do a warrantless search of someone who has been arrested if they have reasonable grounds to believe that they have a thing that may be used to harm someone, be used to escape, or may contain “evidential material relating to the offence in respect of which the arrest is made”.

It would seem that this clause gives the Police a large amount of leeway to come up with some vaguely plausible explanation as to why they need to search your digital device if you’re arrested. e.g. they could claim they need the information on it to track your movements or to identify who you communicated with before you were arrested.

Conclusion

From our brief analysis, supported by the information from the Police, it seems that the NZ Police can upon arrest:

  1. Search your mobile phone or other electronic device if they can formulate a plausible reason to do so.
  2. Oblige you to unlock it.

Does anyone have a counter view?

Other questions

How long can the Police hold the data for?

Who can they share the data with?

What limits as to reasonableness will the judiciary impose when it comes up in court?

Update on automated number plate recognition (ANPR)

We recently obtained further documentation from the NZ Police about automated number plate recognition (ANPR). This includes a Police report from September 2013, the ANPR chapter from the Police internal manual and some responses to questions in our letter.

We noted the following points of interest:

  • The Police currently have 17 ANPR equipped vehicles, most of which are patrol cars that can use ANPR when mobile.
  • It costs approximately NZ$35,000 to add ANPR to a patrol car.
  • The ANPR systems are not doing live lookups against the Police databases. Rather, data about vehicles of interest is uploaded each morning from a USB flash drive. This is seen as a serious shortcoming.
  • Approximately 3-4% of the cars passing an ANPR unit are “vehicles of interest”.
  • Police did a trial with the Ministry of Justice to use ANPR to identify cars of people with outstanding fines.
  • The system is used to target the expected drivers of vehicles, not just the vehicles. e.g. a car registered to a known drunk driver might be stopped.
  • Originally Police were keeping ANPR data for four months, but after discussions with the Privacy Commissioner they reduced this to 48 hours. They note that there are not enough ANPR equipped cars to do vehicle tracking anyway.
  • However, the manuals do talk about using this 48 hours of records to detect the location of vehicles after the fact. They give the example of a constable checking the database to see if a newly stolen car passed by one of the ANPR equipped vehicles.
  • Police documentation gives examples of using ANPR equipped vehicles to do sweeps of car parks.
  • There have been problems with the cameras misreading plates, particularly confusion between O/Q and 1/I (see the sketch after this list).
  • Police documentation points out that Police do not have a blanket power to stop any vehicle (except for administering a compulsory breath test) and that the officer must be sure that they have a legal reason to stop a vehicle of interest.
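
To illustrate why the O/Q and 1/I misreads matter, here’s a small hypothetical sketch of a hotlist check (this is not Police software; the plates and names are invented). Normalising the confusable characters lets a misread plate still match a vehicle of interest, but it also means visually similar plates can collide:

```python
# Hypothetical sketch of a hotlist check with confusable characters normalised.
# A camera that reads "O" as "0" or "I" as "1" can still hit a genuine vehicle
# of interest, but the same normalisation also lets similar plates collide.
CONFUSABLE = str.maketrans({"O": "0", "Q": "0", "I": "1"})

def normalise(plate):
    return plate.upper().replace(" ", "").translate(CONFUSABLE)

def hotlist_match(read_plate, hotlist):
    """True if the camera's read matches any plate on the vehicles-of-interest list."""
    wanted = {normalise(p) for p in hotlist}
    return normalise(read_plate) in wanted

hotlist = {"GQI23"}                     # plate as recorded on the morning's list
print(hotlist_match("G0123", hotlist))  # a misread of Q as 0 and I as 1 still matches
```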

Comment

While we are not opposed to appropriate use of automated number plate recognition, we are concerned about using the system to target people and not vehicles. e.g. pulling over a vehicle because the registered owner has a drunk driving conviction. This risks unreasonable harassment of both the owner and of anyone else that they might lend the car to.

We are pleased that the Police are not using the system to set up a vehicle tracking database as we see this as a more worrying threat to civil liberties. We also note the Police statement that they believe they need a tracking warrant under the Search & Surveillance Act to use a device (such as an ANPR database) to track vehicles.

This provides an interesting contrast to recent information from Auckland Transport about the surveillance and tracking systems they are using. We note that we currently have an outstanding LGOIMA request lodged with Auckland Transport about their surveillance plans.

However, it seems that the Police are prepared to use the 48 hours of history that they are keeping to locate vehicles after the fact, and we wonder if this will be extended further in the future. This contradicts other statements and we will be asking for more information.

Report: Eyes on New Zealand

Global Information Society Watch has published a report on the state of communications surveillance in New Zealand.

Written by Joy Liddicoat (member of APC and Tech Liberty), this comprehensive and perceptive summary is well worth reading by anyone who wants to know how we got here – and where we need to go.

New Zealand is a small country, with a population of less than five million, situated in the far reaches of the southern hemisphere. But its physical remoteness belies a critical role in the powerful international intelligence alliance known as the “Five Eyes”, which has been at the heart of global controversy about mass surveillance. This report outlines the remarkable story of how an international police raid for alleged copyright infringement activities ultimately became a story of illegal spying on New Zealanders, and political deals on revised surveillance laws, while precipitating proposals for a Digital Rights and Freedoms Bill and resulting in the creation of a new political party. We outline how civil society has tried to respond, and suggest action points for the future, bearing in mind that this incredible story is not yet over.

Read the full report.

Is RealMe a threat to our liberty?

We’ve been watching the introduction of RealMe with some concern. While it appears that they have done some serious thinking around privacy, there are some real issues around unified online identities that have not been sufficiently discussed.

This introductory article talks about what RealMe is and then asks some questions about how it might be used.

 

What is RealMe?

RealMe is a government sponsored online identification service. In their own words: “RealMe lets you easily and securely prove your identity online, plus access lots of online services with a single username and password.”

It’s a renamed version of the iGovt scheme originally set up by the Department of Internal Affairs. It’s now run by a combination of the Department of Internal Affairs and NZ Post (a state-owned enterprise). The major enabling legislation for RealMe is the Electronic Identity Verification Act (2012).

The aim is that your verified RealMe identity will provide enough assurance that you are who you say you are that governments and commercial organisations will be able to provide products and services online that require the most stringent forms of identification such as passports, bank accounts, student loans and so on.

It’s of particular appeal to financial institutions because of their new responsibilities to identify who they’re dealing with after the passing of the Anti Money Laundering and Countering Financing of Terrorism Act. Both the BNZ and TSB Bank are now using RealMe with others expected to follow. Here’s the full list of organisations using it.

At the end of February 2013 there were 853,100 iGovt logins (although some people had more than one).

 

Implementing RealMe

We’ve heard that implementing RealMe within an organisation is both complex and expensive. There is a significant amount of software development that the organisation is required to do, plus RealMe does its own testing to ensure that standards have been met.

Ongoing costs are based on the number of transactions (typically new identifications; RealMe is not necessarily involved once a person’s identity has been established the first time). RealMe refused to release details of the pricing, claiming it is commercially sensitive.

 

Privacy and data management

There’s no doubt that the people who created the system did it with the best of intentions and it seems they’ve taken privacy needs into account. One important point is that two organisations using RealMe can’t share data about a person unless the person has explicitly given them permission to do so.
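
As a purely illustrative sketch of that permission model (this is not RealMe’s actual API; all the names here are invented), the arrangement amounts to a consent register keyed on the verified identity, which must be checked before any data flows between two organisations:

```python
# Illustrative sketch only: a consent register keyed on a verified identity.
# One organisation may fetch data about a person from another only if that
# person has explicitly recorded consent for that particular pairing.
class ConsentRegister:
    def __init__(self):
        self._grants = set()  # (verified_identity, data_holder, requester)

    def grant(self, identity, data_holder, requester):
        self._grants.add((identity, data_holder, requester))

    def allows(self, identity, data_holder, requester):
        return (identity, data_holder, requester) in self._grants

register = ConsentRegister()
register.grant("verified-id-123", "health-records", "insurer")

def share_records(identity, data_holder, requester):
    if not register.allows(identity, data_holder, requester):
        raise PermissionError("no consent recorded for this request")
    return {"identity": identity, "from": data_holder, "records": "..."}  # placeholder

print(share_records("verified-id-123", "health-records", "insurer"))
```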

However, we have to assume that this will not always be the case. It seems highly likely that at some point the IRD will get a law change to enforce access – we all want to make sure people aren’t cheating the tax system, right? And it makes sense that companies might start insisting on you sharing information, in the same way that health insurance companies currently demand access to your health records. You can refuse but then they won’t provide services to you.

It’s also easy enough for the Police, SIS and GCSB to be able to use the powers granted by their respective laws to access any person’s information across systems as well.

 

A digital identity card

It seems clear that RealMe is rapidly becoming a digital identity card. It’s already effectively compulsory for people who want to access some services, such as Studylink. As more government departments and commercial organisations start requiring it, having a verified RealMe identity will become unavoidable.

NZ and Australia both rejected the idea of a non-digital national identity card in the 1980s. There were significant public campaigns against them and the proposals were defeated. So far there’s been no outcry against this new form of digital identity card.

Of course, there were different attitudes then. In those days the very idea of government departments sharing data about people was highly contentious due to fears that the government might snoop too much or would abuse its power. Now data sharing between govt departments is commonplace and expected. RealMe is going to enable more and better data sharing, with increased confidence about the identity of the people they’re sharing information about.

 

Unified identity

But the bigger issue is – what does it mean to have one verified identity that’s used for everything?

Do we actually want to use the same identity for dealing with the government, your bank, Trademe and a variety of social media sites? Will there be increasing pressure to use your ‘official’ identity everywhere? We see advantages in being able to present different faces to people – to the people you work with, your parents, your children, your friends, your community. Is this under threat?

We already know that the world has problems with governments over-surveilling people on the internet. We fear that this surveillance already has a chilling effect on democratic dissent. Will making that surveillance even more effective, by forcing use of a single identity and further enabling data matching, be worth the gains?

 

The future

What does robust and pervasive online identification enable? How will these services be used in 5, 10 or 20 years’ time?

For example, one of the big problems with law on the internet is proving just who did something. You can trace a downloaded file to an IP address but you don’t know which person there actually did the copyright infringing download. Or maybe you want to find out who anonymously published the suppressed name of the accused in a trial.

A government of the future might look at these problems and decide that internet use should be keyed to your RealMe identity, thus undermining anonymity on the internet. It wouldn’t be a trivial task but it’s also not impossible and would enable the government of the day to track everything you do on the internet. We don’t believe that the government needs this power and we see this level of mass surveillance as a threat to our privacy and our democracy.

 

Conclusion

RealMe has some real advantages – verified identities will make it easier for people to access government and commercial services online, helping us realise some of the promises of the internet revolution. But we’re concerned about measures that increase government power over people and we fear that RealMe might be one of those measures.

Over the next few months we’re planning to explore some of the issues around RealMe. In particular, we want to answer the following two questions:

  • Is RealMe a threat to our liberty now or in the future?
  • If so, how can we mitigate it so that we get the benefits without the costs?

Your ideas and contributions would be welcome.

 

 

 

HDC Bill reported back by the Select Committee

The Harmful Digital Communications Bill has been reported back and the select committee has made a few changes.

Significant changes

The Bill has added the definition of IPAP (Internet Protocol Address Provider – roughly an internet service provider) from section 122A(1) of the Copyright Act and then in section 17(2A) gives the District Court the ability to order an IPAP to release the identity of an anonymous communicator to the court. Of course, this would only reveal the name of the person who owns the internet account that was used and not the name of the person who used it, so the utility of this will be limited.

The Approved Agency (still unnamed, still expected to be Netsafe) would be subject to the Ombudsmen Act, the Official Information Act and the Public Records Act in respect of the functions performed under the bill. This is a welcome change as it’s important that any agency performing state functions is covered by the legislation that provides proper oversight.

There have also been minor changes allowing the courts to vary orders made previously, clearing up which teachers can apply on behalf of pupils, and allowing threats to be treated as possible grounds for an order to be made.

Safe harbour improvements

The major change has been to the section 20 Safe Harbour provisions of the Bill that were dumped into the previous version at the last minute.

The original proposal was terrible – content hosts (pretty well anyone who allows the public to submit comments such as on a blog or forum) would be protected from legal action if they removed material immediately after receiving a complaint. It was obvious that this would be abused by those trying to silence people who they disagreed with.

The good news is that some complaints will be changed from “takedown on notice” to “notice and notice”. This means that upon receiving a complaint, the content host will forward it to the original author of the complained-about material (i.e. the person who wrote the comment). If the author agrees or doesn’t respond, the material will be taken down, but if they disagree with the complaint the material will be left up – and the content host will still be protected from legal action under the safe harbour.

However, this does not apply when the original author cannot be identified (or if the author either doesn’t want to respond or can’t respond within the 48-hour time limit). Indeed, the phrasing of the Bill reads as if content hosts must remove material, when in reality they only need to do so if they wish to be protected by the safe harbour provisions.
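
To show how this would play out for a content host, here’s a hedged sketch of the decision flow as we read the reported-back Bill (the function and parameter names are ours, not from the legislation):

```python
# Sketch of the "notice and notice" decision flow as we read the reported-back
# Bill: the host forwards the complaint, and what happens next depends on whether
# the author can be reached and how (or whether) they respond within 48 hours.
RESPONSE_WINDOW_HOURS = 48

def host_should_remove(author_reachable, author_response, hours_elapsed):
    """Return True if the host must take the material down to keep safe-harbour
    protection, False if it may leave the material up and stay protected."""
    if not author_reachable:
        return True                      # author can't be identified or contacted
    if author_response == "objects":
        return False                     # author disputes the complaint: leave it up
    if author_response == "consents":
        return True                      # author agrees to removal
    return hours_elapsed >= RESPONSE_WINDOW_HOURS  # silence past 48 hours: take down

print(host_should_remove(True, "objects", 12))   # False: material stays up
print(host_should_remove(True, None, 72))        # True: no reply inside the window
```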

Disturbingly, a number of other suggested improvements were not picked up by the select committee. In particular we supported the ideas that complainants should have to make their complaint a sworn statement and that complainants would have to have been harmed by the material themselves.

So while this is a significant improvement, we still fear that these provisions will be abused by serial complainers, internet busybodies and those who want to suppress their “online enemies” by any means possible.

What hasn’t changed

What’s more serious is what hasn’t changed. You can read our articles and submissions to see our full critique of the Bill but there are three points we wish to mention.

Firstly, the Bill sets a different standard for the content of speech online and offline. While we do understand that online communications might require a different approach in available remedies, we firmly believe that the standard of speech should be the same. We note that the internet isn’t only for “nice” speech, it’s increasingly the place where we all exercise the freedom of expression guaranteed to us by the NZ Bill of Rights Act.

Secondly, rather than fixing the horribly broken section 19 (causing harm by posting a digital communication), the select committee has increased its penalties. This section completely fails to recognise that some harmful communications have real value to society. For example, the idea that someone might be fined or jailed because they harmed a politician by posting online proof that the politician was corrupt is just horrendous. We honestly believed that the lack of a public interest or BORA test was a mistake but it seems that the Select Committee really does want to criminalise all harmful online speech. This neutered and ineffectual internet is not one we wish to see. (Edit: this section is still subject to the BORA as detailed in 6(2).)

Thirdly, we worry that the bill will be ineffectual where it might be needed most while being most effective where it’s most problematic to civil liberties. Many of the example harms mentioned in the original Law Commission report would not be helped by this Bill – they happen overseas, or they happen too fast, or the people being harmed are just too scared to tell anyone anyway. The Approved Agency will be able to do a lot in the cases where anything can be done, but we’re not convinced of the need for the more coercive elements of the Bill.

Conclusion

There is no doubt that some people are being harmed by online communications. There is definitely a good argument to be made that the government could do something useful to help those people. We’re not convinced that the approach taken by the Law Commission and the Government is effective and we’re quite sure that it includes a number of unreasonable restrictions on the right to freedom of expression guaranteed to us all by the NZ Bill of Rights Act.

It seems inevitable that the Bill will be passed in its current form if there’s time before Parliament closes for the elections. We can but hope that a future government will repeal it and have another go.

HDC Bill and criminalising free speech

[Updated to reflect the latest version of the Bill as at 23rd July 2015.]

As part of our ongoing look at elements of the Harmful Digital Communications Bill (general critique and safe harbours), we now turn to the new offence of causing harm by posting digital communication (section 19). This is a criminal offence and is not related to the rest of the bill with its 10 principles, Approved Agency and quick-fire District Court remedies. It’s quite simple:

(1) A person commits an offence if:

  1. the person posts a digital communication with the intention that it cause harm to a victim; and
  2. posting the communication would cause harm to an ordinary reasonable person in the position of the victim; and
  3. posting the communication causes harm to the victim.

“harm” is defined in the interpretation section as “serious emotional distress”.

Unfortunately this new offence is actually very wide and may well capture many communications that are of immense value to society – or at least shouldn’t be made illegal.

Let’s consider the case where someone takes a photo of a politician receiving a bribe and, shocked at their corruption, posts that photo to the internet in an attempt to get the politician to lose their position. This communication would:

  1. be posted with the intention of harming the victim (the prospect of facing criminal charges or being obliged to resign could be assumed to cause the victim distress).
  2. would cause harm to any reasonable person in the position of the victim (any reasonable person would not like having evidence of their criminal corruption exposed to the world).
  3. could be easily proved to have caused harm (serious emotional distress) to the victim.

The penalty? Up to 6 months in jail or a fine not exceeding $50,000. (Or up to $200,000 for a body corporate.)

In section 19(2) the judge gets some guidelines about how to assess whether the communication causes harm, but nowhere is there the idea that some communications that cause harm might actually have some societal value or would otherwise come under freedom of expression. There are no available defences such as that the communication may be in the public interest, counts as fair comment, or exposes criminal wrongdoing. All we have is the weak language in section 6(2) that the courts must act consistently with the Bill of Rights Act – which doesn’t mean much when the explicit wording of the Bill is against the principles of that Act.

This is obviously a terrible law and will have a detrimental effect on freedom of expression and public discourse in New Zealand. How will our journalists and citizen journalists be able to expose wrongdoing when broadcasting it on electronic media such as the internet, radio or TV is a criminal act if it hurts the wrongdoer’s feelings?

This law wouldn’t be acceptable if it applied to speech in a newspaper, and it’s not acceptable online.

Safe harbours in HDC Bill are a threat to freedom of expression

The safe harbour provisions in the Harmful Digital Communications Bill are a serious threat to online freedom of speech in New Zealand.

How it works

Anyone can complain to an online content host (someone who has control over a website) that some material submitted by an external user on their site is unlawful, harmful or otherwise objectionable. The online content host must then make a choice:

  1. Remove the content and thereby qualify for immunity from civil or criminal action.
  2. Leave the content up and be exposed to civil or criminal liability.

The content host has to make its own determination about whether a given piece of content is unlawful (which may be very difficult when it comes to subjective issues such as defamation, and impossible to determine when it concerns legal suppression), harmful or “otherwise objectionable”.

Furthermore, there is:

  • No oversight of the process from any judicial or other agency.
  • No requirement for the content host to tell the person who originally posted the content that it has been deleted.
  • No provision for any appeal by the content host or the person who originally posted the material.
  • No penalty for people making false or unreasonable claims.

We can safely assume that most content hosts will tend to play it safe, especially if they’re large corporates with risk-averse legal teams, and will take down material when requested. They have nothing to gain and plenty to lose by leaving complained about material online.

Serious ramifications for freedom of speech

Don’t like what someone has said about you online? Send in a complaint and wait for it to be taken down.

This applies to comments on blogs, forums on auction sites, user-supplied content on news media sites, etc, etc. These are exactly the places where a lot of important speech occurs, including discussions about politics and the issues of the day. The debates can often be heated, and some sites are well known for encouraging intemperate speech, but these discussions are becoming an increasingly important part of our national discourse.

This law will make it too easy for someone to stop arguing and start making complaints, thereby suppressing the freedom of expression of those they disagree with.

The jurisdiction problem

Of course, this will only apply to websites that are controlled by people who have a legal presence in New Zealand. Overseas websites will continue to maintain their own rules and ignore New Zealand law and standards of online behaviour.

Conclusion

As currently written, these safe harbour provisions are just a bad idea. They’re too open to abuse and we believe they’re more likely to be used to suppress acceptable speech than to eliminate harmful or “otherwise objectionable” speech. At a very minimum, the complaint should have to be approved by the Approved Agency referred to in the other parts of the Bill.

That said, the whole idea of removing “otherwise objectionable” speech is also quite worrying. The Harmful Digital Communications Bill already has an expansive set of rules about what sort of harmful speech shouldn’t be allowed online and this “otherwise objectionable” seems to extend it even further. One of the principles we stand up for here is that civil liberties such as freedom of expression are as important online as they are offline, and this law goes far beyond anything in the offline world.

We hope to have more comment and analysis on other aspects of the Harmful Digital Communications Bill soon.