We recently obtained further documentation from the NZ Police about automated number plate recognition (ANPR). This includes a Police report from September 2013, the ANPR chapter from the Police internal manual and some responses to questions in our letter.
We noted the following points of interest:
- The Police currently have 17 ANPR equipped vehicles, most of which are patrol cars that can use ANPR when mobile.
- It costs approximately NZ$35,000 to add ANPR to a patrol car.
- The ANPR systems are not doing live lookups against the Police databases. Rather, data about vehicles of interest is uploaded each morning from a USB flash drive. This is seen as a serious shortcoming.
- Approximately 3-4% of the cars passing an ANPR unit are "vehicles of interest".
- Police did a trial with the Ministry of Justice to use ANPR to identify cars of people with outstanding fines.
- The system is used to target the expected drivers of vehicles, not just the vehicles. e.g. a car registered to a known drunk driver might be stopped.
- Originally Police were keeping ANPR data for four months, but after discussions with the Privacy Commissioner dropped this down to 48 hours. They note that there are not enough ANPR equipped cars to do vehicle tracking anyway.
- However, the manuals do talk about using this 48 hours of records to detect the location of vehicles after the fact. They give the example of a constable checking the database to see if a newly stolen car passed by one of the ANPR equipped vehicles.
- Police documentation gives examples of using ANPR equipped vehicles to do sweeps of car parks.
- There have been problems with the cameras misreading plates, particularly with confusion of O/Q and 1/I.
- Police documentation points out that Police do not have a blanket power to stop any vehicle (except for administering a compulsory breath test) and that the officer must be sure that they have a legal reason to stop a vehicle of interest.
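The plate-misread problem mentioned above (O/Q and 1/I confusion) can be made concrete with a toy sketch. This is purely illustrative and not the Police system; the names (`CONFUSION_CLASSES`, `canonical`, `possible_hotlist_matches`) and the confusion sets are our own assumptions. It shows both why tolerating common OCR confusions helps catch genuine vehicles of interest, and why it can also flag innocent plates:

```python
# Illustrative sketch only -- a toy model of confusion-tolerant hotlist
# matching, not the actual Police ANPR software.

# Character sets that ANPR cameras commonly confuse with each other
# (assumed for illustration).
CONFUSION_CLASSES = [{"O", "Q", "0"}, {"1", "I"}]

def canonical(plate: str) -> str:
    """Map each character to a stable representative of its confusion class."""
    out = []
    for ch in plate.upper():
        for cls in CONFUSION_CLASSES:
            if ch in cls:
                ch = min(cls)  # pick a fixed representative for the class
                break
        out.append(ch)
    return "".join(out)

def possible_hotlist_matches(read_plate, hotlist):
    """Return hotlist plates that the camera reading could plausibly be."""
    key = canonical(read_plate)
    return sorted(p for p in hotlist if canonical(p) == key)

hotlist = {"ABQ123", "XYZ789", "AB1CDE"}
# A camera that misreads Q as O still flags the right vehicle of interest:
print(possible_hotlist_matches("ABO123", hotlist))  # ['ABQ123']
# But an innocent plate that canonicalises the same way is also flagged:
print(possible_hotlist_matches("ABICDE", hotlist))  # ['AB1CDE']
```

The trade-off is visible directly: widening the confusion classes catches more misreads but generates more false stops of innocent drivers.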
While we are not opposed to appropriate use of automated number plate recognition, we are concerned about using the system to target people and not vehicles. e.g. pulling over a vehicle because the registered owner has a drunk driving conviction. This risks unreasonable harassment of both the owner and of anyone else that they might lend the car to.
We are pleased that the Police are not using the system to set up a vehicle tracking database, as we see this as a more worrying threat to civil liberties. We also note the Police statement that they believe they need a tracking warrant under the Search & Surveillance Act to use a device (such as an ANPR database) to track vehicles.
This provides an interesting contrast to recent information from Auckland Transport about the surveillance and tracking systems they are using. We note that we currently have an outstanding LGOIMA request lodged with Auckland Transport about their surveillance plans.
However, it seems that the Police are prepared to use the 48 hours of history they keep to locate vehicles after the fact, and we wonder if this will be extended further in the future. This contradicts other statements they have made, and we will be asking for more information.
Written by Joy Liddicoat (member of APC and Tech Liberty), this comprehensive and perceptive summary is well worth reading by anyone who wants to know how we got here - and where we need to go.
New Zealand is a small country, with a population of less than five million, situated in the far reaches of the southern hemisphere. But its physical remoteness belies a critical role in the powerful international intelligence alliance known as the “Five Eyes”, which has been at the heart of global controversy about mass surveillance. This report outlines the remarkable story of how an international police raid for alleged copyright infringement activities ultimately became a story of illegal spying on New Zealanders, and political deals on revised surveillance laws, while precipitating proposals for a Digital Rights and Freedoms Bill and resulting in the creation of a new political party. We outline how civil society has tried to respond, and suggest action points for the future, bearing in mind that this incredible story is not yet over.
Edited version of Thomas Beagle's opening remarks at the Privacy Panel at NetHui in Auckland on 11th July 2014.
Privacy isn’t dead. Yesterday at NetHui we were told that it’s too late for privacy, that it’s over. But the fact we’re all here and talking about it is a sign of just how wrong this is.
There’s no doubt that technology is changing how we think about privacy but it’s not as simple as saying that people these days are just giving it up willy-nilly. People don’t always get it right, but most have an intense interest in keeping certain pieces of information away from certain people.
Privacy is multi-faceted
I think it’s important to note that information privacy is not simple. People have many relationships – work, family, friends, doctors, government - and they need to be able to control who sees what and when.
Just because we give a piece of personal information to one of those, or they take it without asking, doesn’t mean that we’ve lost our privacy interest in that information. I might tell my doctor about my drug use, but still need to keep it secret from my family, employer and government.
Privacy is also about security
Part of this control is that for many people the debate about privacy is also about security. If you’re a teen questioning your sexuality in a conservative town, that information leaking out might be enough to get you beaten up or worse.
And at the same time, have you ever felt that sick feeling when someone you don’t trust has damaging information about you? What if it’s the government and they’re the ones paying you a benefit that is keeping your family fed? Information is power.
The surveillance demands of national security, the desire to know everything we’re doing, actually leads to many people feeling less secure because they don’t know what the government knows about them and they don’t know how they’re going to use that information.
That said, I’m optimistic about privacy.
When it comes to our digital peers such as friends and family we generally already have the tools to protect ourselves, even if we don’t always get it right.
If we look at the rest of the privacy problem, I split it up into three categories. The biggest risk is your own government, because they’re the ones that can put you in jail or deny you basic services. The second is the local companies you deal with to buy your power, your food, and so on. The third is the foreign companies such as Google and Facebook.
The good news is that in a democracy like New Zealand, we can control the first two. We can set limits on what data they collect and how they can use it and how they can share it. Maybe two out of three is actually good enough to say that we can continue to maintain our privacy in the internet age.
Limiting information use
And we can set those limits however we like. Some people seem to believe that once something is published, either by ourselves or leaked by others, that it’s fair game. I’d argue that just because something is out there doesn’t mean that it should be available for use.
There’s ample precedent for this: You’re not allowed to use the electoral roll for anything not to do with elections. Juries are told to ignore any information they may have learnt outside of the trial.
If we decide as a society that we don’t want the Ministry of Social Development to spy on beneficiaries on social media, we can change the law so that they are not allowed to. If we don’t want the GCSB to be able to apply for wide-ranging access authorisations to spy on New Zealanders - for our own protection of course – we can change the law so that they can’t. It’s up to us.
Changes to the law
I believe we do need changes to privacy law in New Zealand. The Privacy Act is a great base for us to work from but it needs work – and not just the new powers for the Privacy Commissioner.
It’s obvious that privacy controlled by opt-in click-through contracts doesn’t really work. I believe that the solution is to further ratchet up the baseline protections provided by the Privacy Act – and to close the law enforcement loophole.
Sadly, I fear that the government’s promised repeal and re-enactment of the Privacy Act will be going in the wrong direction. Thank you.
We've been watching the introduction of RealMe with some concern. While it appears that they have done some serious thinking around privacy, there are some real issues around unified online identities that have not been sufficiently discussed.
This introductory article talks about what RealMe is and then asks some questions about how it might be used.
What is RealMe?
RealMe is a government sponsored online identification service. In their own words: "RealMe lets you easily and securely prove your identity online, plus access lots of online services with a single username and password."
It's a renamed version of the iGovt scheme originally set up by the Department of Internal Affairs. It's now run by a combination of the Department of Internal Affairs and NZ Post (a state owned enterprise). The major enabling legislation for RealMe is the Electronic Identity Verification Act (2012).
The aim is that your verified RealMe identity will provide enough assurance that you are who you say you are that governments and commercial organisations will be able to provide products and services online that require the most stringent forms of identification such as passports, bank accounts, student loans and so on.
It's of particular appeal to financial institutions because of their new responsibilities to identify who they're dealing with after the passing of the Anti-Money Laundering and Countering Financing of Terrorism Act. Both the BNZ and TSB Bank are now using RealMe with others expected to follow. Here's the full list of organisations using it.
At the end of February 2013 there were 853,100 iGovt logins (although some people had more than one).
We've heard that implementing RealMe within an organisation is both complex and expensive. There is a significant amount of software development that the organisation is required to do, plus RealMe does its own testing to ensure that standards have been met.
Ongoing costs are based on the number of transactions (typically new identifications; RealMe is not necessarily involved once a person's identity has been established the first time). RealMe refused to release details of the pricing, claiming it is commercially sensitive.
Privacy and data management
There's no doubt that the people who created the system did it with the best of intentions and it seems they've taken privacy needs into account. One important point is that two organisations using RealMe can't share data about a person unless the person has explicitly given them permission to do so.
However, we have to assume that this will not always be the case. It seems highly likely that at some point the IRD will get a law change to enforce access - we all want to make sure people aren't cheating the tax system, right? And it makes sense that companies might start insisting on you sharing information, in the same way that health insurance companies currently demand access to your health records. You can refuse but then they won't provide services to you.
It's also easy enough for the Police, SIS and GCSB to be able to use the powers granted by their respective laws to access any person's information across systems as well.
A digital identity card
It seems clear that RealMe is rapidly becoming a digital identity card. It's already effectively compulsory for people who want to access some services, such as Studylink. As more government departments and commercial organisations start requiring it, a verified RealMe identity is rapidly going to become unavoidable.
NZ and Australia both rejected the idea of a non-digital national identity card in the 1980s. There were significant public campaigns against them and the proposals were defeated. So far there's been no outcry against this new form of digital identity card.
Of course, there were different attitudes then. In those days the very idea of government departments sharing data about people was highly contentious due to fears that the government might snoop too much or would abuse its power. Now data sharing between govt departments is commonplace and expected. RealMe is going to enable more and better data sharing, with increased confidence about the identity of the people they're sharing information about.
But the bigger issue is - what does it mean to have one verified identity that's used for everything?
Do we actually want to use the same identity for dealing with the government, your bank, Trademe and a variety of social media sites? Will there be increasing pressure to use your 'official' identity everywhere? We see advantages in being able to present different faces to people - to the people you work with, your parents, your children, your friends, your community. Is this under threat?
We already know that the world has problems with governments over-surveilling people on the internet. We fear that this surveillance already has a chilling effect on democratic dissent. Will the gains be worth making that surveillance easier by forcing use of a single identity and further enabling data matching?
What does robust and pervasive online identification enable? How will these services be used in 5, 10 or 20 years time?
For example, one of the big problems with law on the internet is proving just who did something. You can trace a downloaded file to an IP address but you don't know which person there actually did the copyright infringing download. Or maybe you want to find out who anonymously published the suppressed name of the accused in a trial.
A government of the future might look at these problems and decide that internet use should be keyed to your RealMe identity, thus undermining anonymity on the internet. It wouldn't be a trivial task but it's also not impossible and would enable the government of the day to track everything you do on the internet. We don't believe that the government needs this power and we see this level of mass surveillance as a threat to our privacy and our democracy.
RealMe has some real advantages - verified identities will make it easier for people to access government and commercial services online, helping us realise some of the promises of the internet revolution. But we're concerned about measures that increase government power over people and we fear that RealMe might be one of those measures.
Over the next few months we're planning to explore some of the issues around RealMe. In particular, we want to answer the following two questions:
- Is RealMe a threat to our liberty now or in the future?
- If so, how can we mitigate it so that we get the benefits without the costs?
Your ideas and contributions would be welcome.
The Harmful Digital Communications Bill has been reported back and the select committee has made a few changes.
The Bill has added the definition of IPAP (Internet Protocol Address Provider - roughly an internet service provider) from section 122A(1) of the Copyright Act and then in section 17(2A) gives the District Court the ability to order an IPAP to release the identity of an anonymous communicator to the court. Of course, this would only reveal the name of the person who owns the internet account that was used and not the name of the person who used it, so the utility of this will be limited.
The Approved Agency (still unnamed, still expected to be Netsafe) would be subject to the Ombudsmen Act, the Official Information Act and the Public Records Act in respect of the functions performed under the bill. This is a welcome change as it's important that any agency performing state functions is covered by the legislation that helps provide proper oversight.
There have also been minor changes allowing the courts to vary orders made previously, clearing up which teachers can apply on behalf of pupils, and allowing threats to be treated as possible grounds for an order to be made.
Safe harbour improvements
The major change has been to the section 20 Safe Harbour provisions of the Bill that were dumped into the previous version at the last minute.
The original proposal was terrible - content hosts (pretty well anyone who allows the public to submit comments such as on a blog or forum) would be protected from legal action if they removed material immediately after receiving a complaint. It was obvious that this would be abused by those trying to silence people who they disagreed with.
The good news is that some complaints will be changed from "takedown on notice" to "notice and notice". This means that upon receiving a complaint, the content host will forward it to the original author of the complained about material (i.e. the person who wrote the comment). If the author agrees or doesn't respond, the material will be taken down, but if they disagree with the complaint the material will be left up - and the content host will still be protected from legal action under the safe harbour.
However, this does not apply when the original author cannot be identified, or when the author doesn't respond (or can't respond) within the 48-hour time limit. Indeed, the phrasing of the act reads as if content hosts must remove material, when in reality they only need to do so if they wish to be protected by the safe harbour provisions.
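The "notice and notice" flow described above can be sketched as a small decision function. This is a toy model of the process as we understand it, not a statement of the Bill's actual drafting; all names here (`AuthorResponse`, `decide_takedown`) are our own inventions:

```python
# Toy model of the "notice and notice" safe-harbour flow -- purely
# illustrative, with invented names; not the Bill's actual wording.

from enum import Enum
from typing import Optional

class AuthorResponse(Enum):
    AGREES = "agrees to removal"
    DISAGREES = "disagrees with the complaint"
    NO_RESPONSE = "no response within 48 hours"

def decide_takedown(author_identified: bool,
                    response: Optional[AuthorResponse]) -> bool:
    """Return True if the host removes the material (keeping safe harbour)."""
    if not author_identified:
        # Author can't be contacted: the host takes the material down.
        return True
    if response in (AuthorResponse.NO_RESPONSE, AuthorResponse.AGREES):
        # Author is silent or agrees: material comes down.
        return True
    # Author disagrees: material stays up, host still protected.
    return False

# Example: author identified but silent for 48 hours -> taken down anyway.
print(decide_takedown(True, AuthorResponse.NO_RESPONSE))  # True
```

Laid out this way, the asymmetry is easy to see: the only path that keeps material online requires an identifiable author who actively pushes back within the time limit.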
Disturbingly, a number of other suggested improvements were not picked up by the select committee. In particular we supported the ideas that complaints should have to be made as sworn statements and that complainants would have to have been harmed by the material themselves.
So while this is a significant improvement, we still fear that these provisions will be abused by serial complainers, internet busybodies and those who want to suppress their "online enemies" by any means possible.
What hasn't changed
What's more serious is what hasn't changed. You can read our articles and submissions to see our full critique of the Bill but there are three points we wish to mention.
Firstly, the Bill sets a different standard for the content of speech online and offline. While we do understand that online communications might require a different approach in available remedies, we firmly believe that the standard of speech should be the same. We note that the internet isn't only for "nice" speech, it's increasingly the place where we all exercise the freedom of expression guaranteed to us by the NZ Bill of Rights Act.
Secondly, rather than fixing the horribly broken section 19 - causing harm by posting digital communication - the penalties have been increased. This section completely fails to recognise that some harmful communications have real value to society. For example, the idea that someone might be fined or jailed because they harmed a politician by posting online proof that the politician was corrupt is just horrendous. We honestly believed that the lack of a public interest or BORA test was a mistake but it seems that the Select Committee really does want to criminalise all harmful online speech. This neutered and ineffectual internet is not one we wish to see.
Thirdly, we worry that the bill will be ineffectual where it might be needed most while being most effective where it's most problematic to civil liberties. Many of the example harms mentioned in the original Law Commission report would not be helped by this Bill - they happen overseas, or they happen too fast, or the people being harmed are just too scared to tell anyone anyway. The Approved Agency will be able to do a lot in the cases where anything can be done, but we're not convinced of the need for the more coercive elements of the Bill.
There is no doubt that some people are being harmed by online communications. There is definitely a good argument to be made that the government could do something useful to help those people. We're not convinced that the approach taken by the Law Commission and the Government is effective and we're quite sure that it includes a number of unreasonable restrictions on the right to freedom of expression guaranteed to us all by the NZ Bill of Rights Act.
It seems inevitable that the Bill will be passed in its current form if there's time before Parliament closes for the elections. We can but hope that a future government will repeal it and have another go.
This oral submission concentrated on two misconceptions that we see as underpinning the bill: that speech should never harm anyone, and that different rules should apply to speech online and offline.
We then discussed problems with the effectiveness of the bill - and how it might not be that useful for victims of digital harms but might be quite handy for people who want to suppress the views of others.
We believe that this Bill is based on false premises about the nature of freedom of expression and the differences between digital and non-digital speech. We see the Bill as being a well-meaning but misguided threat to the civil liberties of New Zealanders. We fear that the Bill will be ineffective in too many cases where it might be needed most, while being too effective in the cases which are most problematic to civil liberties.
We support the establishment of an agency to assist those harmed by harmful communications and believe that this will go a long way to resolving the types of situations that can be resolved.
We believe that the court proceedings are unfair and unlikely to be of much use. We support the discretion and guidelines given to the court in making a judgement, but believe that the procedures of the court need to better take into account the requirements for a fair trial.
The safe harbour provisions for online content hosts are unreasonable. While online content hosts do need protection from liability, the suggested mechanism amounts to a way that any person can get material taken down that they don’t like for any trivial reason. This section needs to be completely rethought in the context of overseas experiences to ensure that freedom of expression is properly protected.
The new offence of causing harm is poorly conceived and criminalises many communications that are of value to society. If not removed in its entirety, defences and an overriding Bill of Rights veto should be added.
We have also made comments on the changes to the Harassment and Crimes Acts.
As part of our ongoing look at elements of the Harmful Digital Communications Bill (general critique and safe harbours), we now turn to the new offence of causing harm by posting digital communication (section 19). This is a criminal offence and is not related to the rest of the bill with its 10 principles, Approved Agency and quick-fire District Court remedies. It's quite simple:
(1) A person commits an offence if:
- the person posts a digital communication with the intention that it cause harm to a victim; and
- posting the communication would cause harm to an ordinary reasonable person in the position of the victim; and
- posting the communication causes harm to the victim.
"harm" is defined in the interpretation section as "serious emotional distress".
Unfortunately this new offence is actually very wide and may well capture many communications that are of immense value to society - or at least shouldn't be made illegal.
Let's consider the case where someone takes a photo of a politician receiving a bribe and, shocked at their corruption, posts that photo to the internet. This communication would:
- be posted with the intention of harming the victim (the prospect of facing criminal charges or being obliged to resign could be assumed to cause the victim distress).
- would cause harm to any reasonable person in the position of the victim (any reasonable person would not like having evidence of their criminal corruption exposed to the world).
- could be easily proved to have caused harm (serious emotional distress) to the victim.
The penalty? Up to 3 months in jail or a fine not exceeding $2000.
In section 19(2) the judge gets some guidelines about how to assess whether the communication causes harm, but nowhere is there the idea that some communications that cause harm might actually have some societal value or would otherwise come under freedom of expression. There are no available defences such as that the communication may be in the public interest, counts as fair comment, or exposes criminal wrongdoing.
And just in case you thought that whether the communication is true or not should matter, section 19(4)(a) clarifies that "...or otherwise communicates by means of a digital communication any information, whether truthful or untruthful, about the victim;"
This is obviously a terrible law and will have a detrimental effect on freedom of expression and public discourse in New Zealand. How will our journalists and citizen journalists be able to expose wrong doing when broadcasting it on electronic media such as the internet, radio or TV is a criminal act if it hurts the wrong-doer's feelings?
This law wouldn't be acceptable if it applied to speech in a newspaper, and it's not acceptable online.
Section 19 isn't completely worthless - it also criminalises the communication of "intimate visual recordings" in an attempt to harm someone. This seems worth keeping, but the parts of section 19 concerning speech need to be either removed or significantly modified to protect freedom of expression.
The safe harbour provisions in the Harmful Digital Communications Bill are a serious threat to online freedom of speech in New Zealand.
How it works
Anyone can complain to an online content host (someone who has control over a website) that some material submitted by an external user on their site is unlawful, harmful or otherwise objectionable. The online content host must then make a choice:
- Remove the content and thereby qualify for immunity from civil or criminal action.
- Leave the content up and be exposed to civil or criminal liability.
The content host has to make its own determination about whether a piece of given content is unlawful (which may be very difficult when it comes to subjective issues such as defamation and impossible to determine when it concerns legal suppression), harmful or "otherwise objectionable".
Furthermore, there is:
- No oversight of the process from any judicial or other agency.
- No requirement for the content host to tell the person who originally posted the content that it has been deleted.
- No provision for any appeal by the content host or the person who originally posted the material.
- No penalty for people making false or unreasonable claims.
We can safely assume that most content hosts will tend to play it safe, especially if they're large corporates with risk-averse legal teams, and will take down material when requested. They have nothing to gain and plenty to lose by leaving complained about material online.
Serious ramifications for freedom of speech
Don't like what someone has said about you online? Send in a complaint and wait for it to be taken down.
This applies to comments on blogs, forums on auction sites, user-supplied content on news media sites, etc, etc. These are exactly the places where a lot of important speech occurs, including discussions about politics and the issues of the day. The debates can often be heated, and some sites are well known for encouraging intemperate speech, but these discussions are becoming an increasingly important part of our national discourse.
This law will make it too easy for someone to stop arguing and start making complaints, thereby suppressing the freedom of expression of those they disagree with.
The jurisdiction problem
Of course, this will only apply to websites that are controlled by people who have a legal presence in New Zealand. Overseas websites will continue to maintain their own rules and ignore New Zealand law and standards of online behaviour.
As currently written, these safe harbour provisions are just a bad idea. They're too open to abuse and we believe they're more likely to be used to suppress acceptable speech than to eliminate harmful or "otherwise objectionable" speech. As a very minimum, the complaint should have to be approved by the Approved Agency referred to in the other parts of the Bill.
That said, the whole idea of removing "otherwise objectionable" speech is also quite worrying. The Harmful Digital Communications Bill already has an expansive set of rules about what sort of harmful speech shouldn't be allowed online and this "otherwise objectionable" seems to extend it even further. One of the principles we stand up for here is that civil liberties such as freedom of expression are as important online as they are offline, and this law goes far beyond anything in the offline world.
We hope to have more comment and analysis on other aspects of the Harmful Digital Communications Bill soon.