Big Australian Retailers Sprung Collecting Customers’ Faceprints


In a move right out of former Liberal home affairs minister Peter Dutton’s playbook, a number of Australian retailers were recently found to be utilising biometric facial recognition technology to capture digital “faceprints” of customers that can be matched to other images.

In an investigation report published in late June, Australian consumer rights watchdog CHOICE outlines that of 25 leading retailers approached, three major outlets, Kmart, Bunnings and The Good Guys, were collecting customer “faceprint” data.

According to the retailers, which have since ceased using the technology, the data was being collected in order to curb shoplifting and antisocial behaviour. Customers were notified of the practice only via modest in-store signage and within online privacy policies.

CHOICE has since raised a complaint with the Office of the Australian Information Commissioner (OAIC), outlining that the practice is likely in breach of the Privacy Act 1988 (Cth) (the Act), and that it poses misidentification, discrimination, profiling and cybercrime risks.

The OAIC announced on 12 July that it’s investigating the retailers in relation to the use of facial recognition technology, which the federal Coalition government had wanted to incorporate into a nationwide surveillance system, despite evidence revealing that it’s highly flawed in application.

A “privacy-invasive” practice

CHOICE surveyed more than 1,000 Australians over March and April to gauge public awareness around retailers collecting customers’ faceprints, and it found that 76 percent didn’t know the practice was taking place, while 78 percent were concerned about the security issues involved.

In its complaint to the OAIC, CHOICE identified two main issues with the practice. These were the “lack of notice and consent” involved in “the collection of sensitive information”, as well as that “the stated business purpose” was “disproportionate to the privacy harms posed to individuals”.

“CHOICE asserts that facial images, and faceprints generated from them, are ‘about’ individuals, who are ‘reasonably identifiable’, under the definition of ‘personal information’ in the Act,” reads the complaint, which adds that the practice also constitutes the collection of “sensitive information”.

The consumer watchdog then explains how retailers using biometric facial technology are likely in breach of the Australian Privacy Principles contained in schedule 1 of the Act.

The first principle requires that personal information is managed “in an open and transparent way”, which CHOICE claims wasn’t happening, as the retailers’ privacy policies didn’t outline how faceprints would be “collected, held, used and destroyed”.

In terms of the third principle, which deals with the “collection of solicited personal information”, the watchdog considers the practice “is not reasonably necessary”, that there’s a lack of consent involved and that it was operating in an “unfair” manner.

The fifth principle deals with the “notification of the collection of personal information”. CHOICE maintains that privacy policies posted somewhere online, and small signs mentioning the technology discreetly placed around a store, don’t constitute proper notification of customers.

The creeping surveillance state

Former home affairs minister Peter Dutton had sought to establish a nationwide facial recognition system, known as the Capability, which would link up all federal and state citizen photo identification databases, so law enforcement could identify individuals in CCTV images in real time.

Dutton’s legislation in this regard was never voted through parliament. And various appraisals of the technology have found it is hopelessly flawed, especially when it comes to misidentifying people of colour and women. Indeed, UK police found it misidentified subjects 95 percent of the time.

It came to light early last year that facial recognition was also being trialled by the AFP and several state law enforcement agencies. These agencies were found to be using software produced by US-based company Clearview AI, which utilises a database of over 3 billion images captured from the internet.

The OAIC then conducted a joint investigation into Clearview AI with its UK counterpart, which last November found its system was in breach of the Act and ordered the company to cease its operations and to destroy its existing images and templates collected from Australia.

“Collecting sensitive information without consent is cause for concern – it’s not just theoretical – because that sort of information can be replicated and used for identity theft,” former NSW Council for Civil Liberties (NSWCCL) president Pauline Wright told Sydney Criminal Lawyers.

“We don’t have any control over that information once it has been scraped off the internet. We have no control over what happens with it, how securely it’s kept, the purposes it’s put to and whether any of those uses are legitimate,” she added, just after the OAIC released its determination.

Antiquated law

Australia’s privacy laws have long been considered weak. Incoming attorney general Mark Dreyfus told the AFR in late June that his government is about to review the Act, as it is “out of date and in need of reform for the digital age”.

In doing this, the AG is currently considering a plethora of submissions received via a review consultation process relating to the Act, which was launched in December 2019 by the chief lawmaker’s department.

Dreyfus suggests that up to 70 major amendments to the Act are needed, which will be implemented during this term of parliament.

The NSWCCL submission outlines that when the Act was being drafted “privacy regulation was seen by big business as an obstacle”, and the legislation sought to balance out the right to privacy of the individual with “the interests of entities in carrying out their functions or activities”.

The rights watchdog further set out that the “unassailable and unchecked integration of digital technology into daily life” has highlighted the problems with our privacy laws, especially in terms of the intrusive observance of the public and “the misuse of personal information”.

Strong regulations needed

Critics of the three major Australian retailers found using facial recognition technology have warned that such stored information opens the way for data breaches and identity theft, but there are also concerns about how this faceprint data could be used for financial gain.

Social media and online platforms are being used to enhance and personalise marketing and advertising practices, which consumers were not privy to when the technology was first rolled out.

So, privacy advocates are now concerned about the way that retail companies might apply the faceprint data being collected on customers, as it could be used to create personal profiles about individuals, which might then be exploited for marketing purposes.

CHOICE believes “that these retail businesses are disproportionate in their over collection of this information, which means that they may be in breach of the Privacy Act,” said the watchdog’s consumer data advocate Kate Bower in its report.

Bower further made clear that irrespective of any breaches the retailers might be party to, clearer and stronger regulations around how such companies are permitted to utilise biometric facial recognition on customers are long overdue.


Author

Paul Gregoire

Paul Gregoire is a Sydney-based journalist and writer. He's the winner of the 2021 NSW Council for Civil Liberties Award For Excellence In Civil Liberties Journalism. Prior to Sydney Criminal Lawyers®, Paul wrote for VICE and was the news editor at Sydney’s City Hub.
