The Sinister Side of AI: Scams, Sabotage and Sexualisation


We’ve been hearing about the ‘astounding’ benefits of generative Artificial Intelligence (AI) over the past couple of years, and how it has the potential to ‘revolutionise’ the world in which we live.

In the race to embrace the technology, little attention has been paid to its more sinister potential.

But here it is… 

Of course, AI has been around for a long time. This ‘machine-learning’ technology has been helpful and relatively harmless – for example, cars use AI in ‘assisted parking’ and ‘assisted braking’ to detect movement and space around the vehicle and ‘help’ the driver.

Use of AI to bully, sabotage and scam

But the new AI – ‘generative AI’ – is next level.

It uses algorithms to create content, including audio, code, images, text, simulations, and videos. 

While it offers huge potential to drive creativity and innovation in incredible new ways, there is also a downside – an increased ability for criminals to use AI in harmful ways, reducing the need for ‘human interaction’ in their operations and making them more efficient, sophisticated, and scalable.

This has enabled unscrupulous actors to develop and implement techniques to deceive and defraud members of the public, as well as to imitate companies and their personnel – sabotaging operations not only by distributing false material, but by engaging en masse with those to whom the material is distributed.

It has also enabled these nefarious elements to add further layers of anonymity, helping them to better evade detection.

And there’s even more to the potential sinister side of the technology, with Australia’s eSafety Commissioner announcing this week that reports have been received of children using generative AI to create sexually explicit images – or, as the law in New South Wales calls it, to ‘produce and disseminate child abuse material’ – in order to bully their peers.

As if parents don’t have enough to be concerned about when it comes to their children’s use of technology, here comes another wave of issues to stay on top of.

Where does the responsibility for online safety lie? 

While Australia has relatively strong cyber safety laws by world standards, technology is a rapidly moving beast, and it’s important that laws keep pace with the ways technology is being used.

Tech platforms, technology developers and companies, and the wider industry as a whole also have a responsibility when they launch new products. And now the eSafety Commissioner is calling on the industry to step up protections around the use of AI.

One of the suggestions put forward by the eSafety Commissioner is for companies to build in visible and invisible watermarks to prove content is AI-generated. That said, even though watermarks might be able to prove an image is fake, or that a message has been generated by something like ChatGPT, these solutions won’t actually help the victims who have already suffered as a result of messages or sexually explicit images being posted and shared.
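
To make the watermarking idea concrete, below is a minimal illustrative sketch in Python of how an ‘invisible’ watermark can be hidden inside an image – here by tucking a short marker into the least significant bits of each pixel’s red channel, using the Pillow imaging library. This is a toy example under assumed names (the TAG marker and the embed_watermark and read_watermark functions are hypothetical), not the scheme the eSafety Commissioner or any platform actually proposes; production provenance systems are designed to be far more robust.

# A toy ‘invisible’ watermark: hide a short marker in the red-channel
# least significant bits (LSBs) of an image. Illustrative only – real
# provenance schemes are built to survive compression and editing.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical marker text


def embed_watermark(path_in: str, path_out: str, tag: str = TAG) -> None:
    """Hide `tag` in the red-channel LSBs of the image's first pixels."""
    img = Image.open(path_in).convert("RGB")
    pixels = img.load()
    width, _ = img.size
    bits = "".join(f"{ord(c):08b}" for c in tag)  # 8 bits per character
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    img.save(path_out, "PNG")  # lossless format, so the hidden bits survive


def read_watermark(path: str, length: int = len(TAG)) -> str:
    """Recover `length` characters from the red-channel LSBs."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width, _ = img.size
    bits = [str(pixels[i % width, i // width][0] & 1) for i in range(length * 8)]
    return "".join(
        chr(int("".join(bits[i:i + 8]), 2)) for i in range(0, len(bits), 8)
    )

Changing a single bit per colour channel is imperceptible to the eye, which is exactly why such marks are called ‘invisible’ – and also why, as noted above, they identify fakes after the fact rather than preventing the harm in the first place.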

In relation to reports already received, Australia’s eSafety Commissioner Julie Inman Grant says that so far the incidents have been few and interconnected. However, she concedes the number is likely to grow, particularly as the technology becomes more sophisticated and widespread.

AI generated child sexual abuse material 

Criminals are already using AI to generate child sexual abuse material. These ‘synthetic’ or ‘doctored’ images make it much more difficult for authorities to find the actual children who are being exploited and abused. 

Further, ‘deepfake’ pornography is also being assisted by the use of AI. ‘Deepfakes’ have been around for a decade or so, beginning with tech-savvy users who employed graphic design tools to generate pornography from pictures of real people.

The advances in AI have simply enabled creators to step up their game. And, so far, according to authorities around the world, the main targets of this type of image-based abuse are women.

Does ‘ethical AI’ exist? 

And while there has been a lot of discussion about ‘ethical’ AI, the development of the internet has so far not really met expectations in this arena. The rapid pace of technological advancement doesn’t always give us, as a society, a chance to think carefully and methodically about how we want to use these tools and what regulations and protections we want to put around them.

Of course, too much regulation stifles innovation and entrepreneurship and can also risk encroaching on our democratic freedoms – a balance needs to be found. 

The role of ‘education’ in online safety 

A lot of the social discussion around the introduction of AI so far has also centred on ‘education’, but most parents and school teachers will likely attest that they already have their hands full dealing with the technology young people currently have access to, without even considering the implications of what’s coming in the next few years.

It’s also true in many households that the kids run rings around their parents when it comes to knowing what technology is available, how to access it, and how to use it, which makes it difficult for parents to feel confident having technology-related discussions with their kids. 

As parents, guardians and educators, the most we can do is ensure that young people know the difference between right and wrong, and aim to raise a generation which will use technology predominantly for good rather than evil.


Author

Sonia Hickey

Sonia Hickey is a freelance writer, magazine journalist, and owner of 'Woman with Words'. She has a strong interest in social justice and is a member of the Sydney Criminal Lawyers® content team. Sonia is the winner of the Mondaq Thought Leadership Awards, Spring 2022.
