Ongoing Encroachments Upon Privacy: An Interview With APF’s David Vaile


Over the last half decade, the Coalition government has been carrying out an ongoing attack on the basic right to privacy. A series of bills has passed through parliament, each serving to erode Australian privacy rights one step further.

Currently, the federal government is in the process of rolling out My Health Record: a national health data scheme that will automatically link all citizens’ private health information to a centralised system.

The scheme is being spruiked as a way of providing doctors with convenient access to patients’ medical histories. However, critics are warning of the privacy threats it poses, as the scheme allows for the secondary use of data, as well as potentially allowing law enforcement access.

Police to access smartphones

The government recently released the draft of a bill that is designed to provide law enforcement and security agencies access to encrypted messages at their end point. This means obtaining access to devices, such as phones, after messages have been decrypted.

If passed, the Telecommunications and other Legislation Amendment (Assistance and Access) Bill 2018 would require telecommunications providers to allow authorities access to information stored on citizens’ devices, and it would allow for this to be obtained in a covert manner.

While this legislation is being sold to the public as a necessary counterterrorism and national security measure, these powers can also be invoked to “protect the public revenue,” enforce criminal laws and in “the interests of Australia’s national economic well-being.”

Your life’s in a databank

In October 2015, the government’s metadata retention regime came into effect. It requires telcos and ISPs to store customers’ metadata for a period of two years. Currently, 21 law enforcement agencies, led by ASIO, are provided warrantless access.

Under this regime, law enforcement agencies can’t actually access the content of phone calls or emails. What they can access is metadata, which includes the time and date of calls, emails, texts and internet sessions.

Some may think there’s no problem with authorities snooping through this sort of information. However, experts warn that quite a profile can be built up about an individual by sifting through their metadata, including their political leanings, their associations and their medical issues.

Protecting your privacy

The Australian Privacy Foundation (APF) formed back in 1987 as part of the campaign against the Hawke government’s proposal to establish a national identity card. Since the defeat of the Australia Card, the APF has continued its fight against encroachments upon citizens’ privacy rights.

Today, David Vaile is the chair of the APF. The privacy expert is also co-convenor of UNSW’s Cyberspace Law and Policy Community, a research and policy group dealing with public interest issues arising in the digital realm.

Sydney Criminal Lawyers® spoke with Mr Vaile about the flaws he’s pinpointed in the My Health Record scheme, the implications of the new legislation that will allow law enforcement to look through citizens’ inboxes and the lack of privacy protections in this country.

Firstly, Mr Vaile, the Australian Privacy Foundation has warned of the privacy risks posed by My Health Record.

What are your main concerns about this scheme?

There’s a lot of them. That’s the first thing to note, that it’s not just one problem or issue. It’s on many different levels and dimensions. I’m interested in why big IT projects fail and the software development methodology.

As far as I understand it, and I’ve been following this for decades, you need a methodology that focuses on the risks: identifying risks, floating possible solutions for diverse risks, and then essentially addressing them in a spiral approach, a series of disposable iterative prototypes that has been popularised as agile methodology.

It means that you can’t do a linear process. You can’t accept the promises and assurances of the vendor or the designer. You can’t listen to the experts alone. You have to basically try to break it in every possible way.

And essentially have a bit of humility and recognise that it’s hard. Most big IT projects fail – something along the lines of 75 percent of them fail. It’s not a mature industry. There’s no successful routine business as usual that gets practiced. It’s actually extremely challenging.

And as I say, the only method I see is one that focuses on risk and, in particular, on identifying the worst risks as early as possible, and that uses a disposable iterative prototyping methodology to try to determine whether you’ve got the worst things under control before you go ahead.

My example is building a brick shithouse. A toilet out the back of a country house. The technology is understood. The materials are understood. The functions, the pricing and the performance are understood. And you can get a brickie and a plumber to give you a quote. They’d use the standard methodology and it would work.

To take that relatively standard method and linear approach, which suits mature technology, and apply it to building something like My Health Record is very hard.

In this question of methodology, the typical problem – one of the reasons why big IT projects fail, particularly government projects – is the level of project management sophistication and humility that’s necessary to actually succeed.

It has to be quite brutal. It has to be prepared to say we can’t do this thing or that’s too expensive or the other part is something that we haven’t solved yet. Essentially, to be able to throw things overboard if they don’t work.

Instead, what you often get with government is the idea that we can just mandate something. We can just say what we want, demand that someone do it, charge ahead in a relatively linear process and treat any obstacles as something to bypass or put off until later, because we know how to do it.

So, that’s the first observation. It has the characteristics of a classic, major, large government IT system failure. The other substantive issues are that it’s not useful enough and it’s not safe enough to use. And the opt-out process is not trustworthy.

It’s not useful enough because it is not actually a clinical record. Doctors can’t use it. It’s not designed to be accurate, complete, up-to-date, or relevant enough. And those are the statutory requirements in the Privacy Act – data integrity.

It’s not designed so that clinicians, doctors, nurses, or emergency personnel could use it with any degree of confidence or safety.

It’s an imitation of a medical record. It’s a fragmentary extract, where bits of random PBS information can be piled in. It hasn’t solved the core problems of clinical medical records, which include an interoperable data format.

So, its contents aren’t a known, understood, structured database, like you would need for a proper record. It’s just bits and pieces. And it also doesn’t solve the problem of interoperable security.

So, when your doctor promises that you can tell them about all of the terrible things that have happened – you’ve been using drugs, you’ve committed a crime, you’ve got psychiatric problems and now you want some medical treatment – the doctor cannot promise that they’ve got any control over it once it goes into My Health Record.

The federated data structure or network interoperability that you would need to follow through on all the promises about security hasn’t actually been designed into it. So, it’s not actually a medical record, and it doesn’t have the security side of it.

The other issue is that it’s not trustworthy enough, because you’ve got 900,000 users and you’ve got no logging of the names of individuals accessing it. If something goes wrong, you can’t find the person.

I asked the security architect what the security model was. And essentially, they can’t tell you. So, it’s an invisible framework.

Originally, My Health Record was going to allow individuals to personally control their data. And the scheme was opt-in, until the government passed legislation late last year making it compulsory for all citizens to be a part of it, unless they opt out by 15 November.

Why do you think the government decided on the opt-out model? And what do you think the implications of requiring people to opt out are?

The relationship between a doctor and a patient is a very sensitive one. It’s much stronger than any other relationship in terms of obligation of confidentiality, privacy and respect for the issues of the patient.

A fundamental flaw in the architecture of My Health Record is that you’re giving over control of the most sensitive personal information that the law recognises – medical records – to a third party with conflicting interests.

And they’re not proper medical records. They’re not treated by doctors or anyone else as the actual medical records, but they expose all of the confidentiality problems. And in a sense, you have a built-in design problem – a conflict of interest – in allowing that very demanding and legally capable third party control over what should be confidential.

If you look at the history of this, there was some pretence that it was patient-controlled, and it also started off using a similar sort of consent mechanism to the rest of medical practice – namely informed consent.

The problem for the designers and the government – the proponents – was that nobody wanted it. No one would sign up. The opt-in take up rates were very low.

Had they been practising user-centred design, which is the only form of design in IT that actually works, they would have said maybe we’re doing something wrong if people don’t want it. If they had commercial disciplines, they would have said the customer is always right: they don’t want what we’ve got to offer.

My Health Record is not useful enough for anyone to want, once you know it’s also not safe enough to trust.

Instead, what they’ve said is that people have given us the wrong answer when we asked them. So, we better not ask them.

Now, you’ve got a situation with the opt-out where rather than informed consent, you’ve got no consent. They’re not asking for this. It’s not a question being posed.

And there’s no information to inform that consent. They won’t set out the limitations of the functionality, which people could use to make their own decisions. And they won’t tell you about the risks.

The opt-out process is fundamentally untrustworthy. If the only way you can recruit users is by not telling them, that in itself disqualifies a system.

Last week, the government released the draft Assistance and Access Bill. If passed, this will mean telecommunications providers will be required to allow authorities to access information on citizens’ devices, such as smartphones.

What are the implications of allowing authorities to access citizens’ personal information in this way?

I was at a meeting organised by Internet Australia essentially discussing encryption. And one of the global IT security experts, Hal Abelson, was there. He pointed out that transparency and accountability are the core questions when you look at these sorts of proposals for breaching IT security and for compromising the effectiveness of encryption.

The concern here is that the ambiguity, uncertainty and lack of clarity in the legislation leaves it more or less open-ended as to what is actually going to be required.

One of the big problems is, even on a relatively good reading, it’s unclear what’s actually being proposed. The marketing of it and the second reading speech are full of assurances that it is not a backdoor and that they’re not trying to break encryption.

But, when you read the legislation itself, it also encompasses a much greater expansion of scope, for instance in the operation of the Surveillance Devices Act. Under this, you no longer need a surveillance device. It becomes an open-ended data collection act.

And the so-called assistance part of it leaves out all of the specific operational technologies and interventions that are involved. And it leaves it in the realm of abstraction.

Looking at whether this is good law: is it clear what it covers? Is it clear what the limitations and purpose are? The answer would be no.

How do you see the government applying these new powers?

It’s a massive grab for an expansion of the warrantless mass surveillance approach that we’ve seen with the metadata retention regime. But, it’s a more sophisticated argument being made for it.

The first metadata retention proposal was two lines in a report. And it was rejected by the Parliamentary Joint Committee on Intelligence and Security as quite inadequate in 2013.

The second attempt for the metadata retention came in 2015. And in that case, they actually got police to say that it would be good and it was needed.

If we are thinking of metadata retention, or oversight and assessment of intrusive communications surveillance mechanisms generally, there are two experiences that are the gold standard.

In the US, the Privacy and Civil Liberties Oversight Board (PCLOB) review of their phone metadata collection found that in effect there was no actual benefit or not enough benefit attributable.

They basically caught one terrorist sympathiser who donated a few thousand dollars to someone in Yemen. That was it for the billions of intercepted records. And they made it illegal afterwards. They removed the legal authorisations.

The European Court of Justice in 2014 reviewed the arguments and the evidence for the data retention directive there and found again that there was no evidence that the actual benefits outweighed the systemic and widespread intrusion and risk.

They said the data retention directive was invalid as a question of law, because the justification doesn’t run through.

So, where we’ve got to now with these very extensive expansions of surveillance and access requirements is that they do formally reject some of the complaints people made before there was any clarity about what was being proposed, by saying there is no backdoor and we are not proposing to break encryption.

But, unlike either the PCLOB or the European Court of Justice, they’re not actually saying that here’s the evidence that this would work and here’s the evidence of the risks and the side effects. In that sense, they’ve failed to give the necessary information and assess the risks of this.

One thing of interest is the heritage connection with section 313 of the Telecommunications Act, which was in two parts. The first one is something that I have been highly critical of and hasn’t been cleared up by the parliamentary inquiry.

Section 313(1) says telcos and ISPs have to “do their best” to prevent the commission of offences. So, it’s completely open-ended. It doesn’t say you’ve got to help anyone in particular. It is an open-ended obligation.

And in a sense, some of this new legislation is the worst parts of that writ large and now expanded to include other internet and information service providers, not just ISPs and telcos. It also has an extraterritorial operation, which the last one didn’t.

With the problem of immunity, it essentially tries to recruit contracted service providers, now on a much broader scale, not just ISPs and telcos, to become not the agents and servants of their customers under a contract, but the agents and servants of third parties, including various law enforcement agencies.

And lastly, Australian authorities already have access to a wealth of citizens’ personal information. And it seems legislation is continuously being introduced to provide them with more.

Where do you see this all heading? Do you foresee a time when the government will have the power to completely deny us our right to privacy?

Particularly at the federal level, both major political parties have shown entrenched hostility to giving enforceable legal rights to Australians to protect their privacy, their confidentiality and their information security.

There have been five reviews over the last thirty years. Former High Court Justice Michael Kirby carried out one of the first ones. And they’ve all said that there should be a law. And you should be able to sue for breach of privacy. We still haven’t got that.

I sat on the Law Reform Commission’s 2014 report. It repeated the conclusions of the 2008 report, which basically said the Privacy Commissioner model is broken. They take more than a year to respond to complaints. And they hardly ever make decisions.

You have ordinary individuals, but also journalists, human rights campaigners and dissidents, lawyers and advocates, who don’t have any substantial recognition of their security of communications and their rights to privacy.

And on the other hand, you have this endless encroachment of essentially trying to reconstruct the idea of the general warrant of the late Middle Ages. That was where the king could say, “I want everything. And my troops can go anywhere.”

The reaction to that inspired the US Fourth Amendment, which required specificity in terms of warrants.

In the US, there’s a lot of discussion about whether all the metadata surveillance and other sorts of mechanisms require a warrant. That same discussion here is weaker because we don’t have any sort of constitutional protections for privacy or limitations on warrants.

And on top of that, unlike the US and most other countries, we don’t have a right to sue when privacy is breached.

It’s not going to be a one-stop thing – all or nothing. It will continue to be the gradual, incremental erosion of expectations which, in Australia, aren’t legally protected.


Author

Paul Gregoire

Paul Gregoire is a Sydney-based journalist and writer. He's the winner of the 2021 NSW Council for Civil Liberties Award For Excellence In Civil Liberties Journalism. Prior to Sydney Criminal Lawyers®, Paul wrote for VICE and was the news editor at Sydney’s City Hub.
