
Guest Post: Hacking, Data, and You

The following is a guest post from someone who has established to us that they have good reason to remain anonymous.

Update: The NZITF has released some guidelines for coordinated disclosure in NZ.


“Deliberately hacking into a system like this is a criminal offence.”

Judith Collins is not alone in taking the view that any use of a computer that retrieves more information than it should is a criminal act. Each time another government agency is publicly mocked for yet another failure to handle information security competently, the outcry is always directed at the “evildoers” who found the hole and exploited it.

Information security is not a trivial matter. It is not easy, and it is very rare that any organisation actually has the in-house skills needed to deal with the multitude of new ways systems can be attacked. Worse, as the breach of MSD’s network illustrated, management do not pay attention to the possible damage even when the risks are plainly pointed out to them.

Hacking

It is worth noting that “hacking” is a term often thrown around in the media or by the public for acts which barely extend beyond the normal use of a system. “Hacking” is, if we believe the way the term is used, literally any unintended use of a system, no matter how trivial or obvious. A significant part of my job is to imagine how people can attack systems, and to weigh up the likelihood of those attacks being successful. By that definition I am, in part, a hacker.

Faced with any system, my first instinct is to poke at it and notice the details most people do not – it’s my job to notice and reason about those details. Most geeks will do the same somewhat instinctively, not because they’re “evil”, however much certain people want to make us out to be, but simply because it’s there and it’s interesting. Given an “open file” dialogue box, they’re going to see what else they can open, just as happened at MSD.

What is then done with the knowledge is where things get harder to define.

Whistleblowing

Whistleblowing is a dangerous business. The whistleblower becomes part of the story, with their motives and character questioned both in the media and by politicians and civil servants desperate to distract attention from their own failings. For some people it can be the end of their career.

It should not be taken lightly. You will note this story is published under a pseudonym; I won’t be putting my name out there to face the wrath of an embarrassed Minister. My objective as a whistleblower may have been to get a security hole fixed so that others can’t exploit it, but that won’t matter once it’s a media story.

Equally, if you are blowing the whistle, you had better be sure your own actions were honourable and can be demonstrated to be so. You should expect that any and all of your interactions with the organisation will be released or leaked for public consumption. But how should you disclose the vulnerability in such a way that it gets fixed and your name doesn’t get dragged through the mud?

What do we want?

We need to decide what the desired outcome is. Do we want information to be secure and for people who discover flaws to feel comfortable in disclosing them so security can be improved? Or do we want people to be too scared to speak up, so that those flaws live on to be discovered and traded on the black market?

It is in society’s interests that systems and information are well protected. We should expect that promises given to keep information secure are met, and that disclosures of holes aren’t responded to with yet another series of excuses and blame shifting. You might not feel that a breach of any given system affects you, but if breaches are covered up there is very little incentive to fix them.

Good disclosure

What can organisations do to encourage good disclosure? The first thing is to have the right attitude towards information security. Beyond that, there are simple steps any organisation can take to ensure vulnerabilities discovered by the public are handled properly:

  • Make it obvious where people should report any vulnerabilities they find. This is no different from any other emergency contact details or a feedback point on a website (see the example after this list).
  • Publish a clear, public policy on vulnerability disclosure: what steps will be taken when a vulnerability is reported, how any information obtained should be handled, and so forth. This is as much about ensuring you have internal processes as it is about making it safer for people to disclose to you.
  • Ensure vulnerability reports are reviewed by staff who are capable of giving them expert consideration. You don’t want a half-garbled explanation landing with people who lack the depth of experience to see the problem or to speak the same language as the person reporting it.
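
To illustrate the first two points, a small machine-readable contact file published at a well-known path on your website (the security.txt convention, usually served at /.well-known/security.txt) is enough to tell a researcher where to send a report and which policy applies. A minimal sketch might look like the following, where the address and URLs are placeholders of my own rather than any real agency’s details:

    Contact: mailto:security@example.govt.nz
    Policy: https://example.govt.nz/security/disclosure-policy
    Preferred-Languages: en
    Expires: 2026-01-01T00:00:00Z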

This, however, leads us to the thornier issue of what responsible disclosure and handling actually look like. What does ethical hacking, if there is such a thing, consist of? There are no hard and fast rules about what is acceptable.

Even within the IT security field there is significant debate on whether organisations should be notified privately or whether ‘full [public] disclosure’ is the only way to get real change in security practices. And if you do go the private route, how long do you persist with it before you give up and go public?

Unlike a theoretical exploit against a system, these are breaches which involve real data. That makes it much harder to draw up a set of ethical guidelines, because fundamentally accessing that data is a criminal act. And, as this post opened with, there is no end of people who will attempt to convict you for it. For that reason you had better have a lawyer, and I should note that none of this post is intended as legal advice.

Take too much data, or exploit the system too often, and your intent will be read as criminal. How much is “too much” is not easily identified either. Limiting the amount of information copied, and limiting how often the breach is exploited, may help.

“Responsible disclosure” means that, at a minimum, the organisation should be notified and given a chance to correct the problem before public or “full” disclosure takes place. The point is that organisations which value information security will have good policies and clear contact points for dealing with breaches, and those organisations should be rewarded for doing so. The outcome is what everyone wants: better information security.

Disclosing to journalists or competitors is much less ethical if the organisation itself has not been contacted. It is less of a problem if the organisation has been contacted and has dismissed the breach or failed to respond in a reasonable time. Again, there are no hard rules about how long that should be. But in either case, this is a path that is almost certainly going to result in questions about your intent.

Extending the Protected Disclosures Act?

This is not a new problem. The law already recognises that there are times when people have a duty to breach an obligation they may have, and it offers legal protection when they do so. The Protected Disclosures Act 2000 allows employees and other people inside an organisation to blow the whistle, provided they act in accordance with a specific set of rules.

Perhaps it is time we had an IT vulnerability disclosure law that applies to people who are not employees. It would outline rules to follow when disclosing a vulnerability, and would provide legal protection as long as those rules were followed. The outcome would be that more holes can be discovered and fixed, thus improving the security of all our information.

What outcome do we want? Do we want vulnerabilities fixed, or points to be scored? I want my information to be secure, and I don’t care how a hole is discovered. I just want it fixed, and for all organisations to take information security seriously.