Scams, Deepfake Porn And Romance Bots: AI Could Do Untold Damage

Last month, generative AI app Lensa came under fire for allowing its system to create fully nude and hyper-sexualised images from users’ headshots. Controversially, it also whitened the skin of women of colour and made their features more European.

The backlash was swift. But what’s relatively overlooked is the vast potential to use artistic generative AI in scams. At the far end of the spectrum, there are reports of these tools being able to fake fingerprints and facial scans (the method most of us use to lock our phones).

Criminals are quickly finding new ways to use generative AI to improve the frauds they already perpetrate. The lure of generative AI in scams comes from its ability to find patterns in large amounts of data.

Cybersecurity has seen a rise in bad bots: malicious automated programs that mimic human behaviour to conduct crime. Generative AI will make these even more sophisticated and difficult to detect.

Ever received a scam text from the tax office claiming you had a refund waiting? Or maybe you got a call claiming a warrant was out for your arrest?

In such scams, generative AI could be used to improve the quality of the texts or emails, making them much more believable. For example, in recent years, we’ve seen AI systems being used to impersonate important figures in voice spoofing attacks.

Then there are romance scams, where criminals pose as romantic interests and ask their targets for money to help them out of financial distress. These scams are already widespread and often lucrative. Training AI on actual messages between intimate partners could help create a scam chatbot that’s indistinguishable from a human.

Generative AI could also allow cybercriminals to more selectively target vulnerable people. For instance, training a system on information stolen from major companies, such as in the Optus or Medibank hacks last year, could help criminals target elderly people, people with disabilities, or people in financial hardship.

Further, these systems can be used to improve computer code, which some cybersecurity experts say will make malware and viruses easier to create and harder to detect for antivirus software.

The Technology Is Here, And We Aren’t Prepared

Australia’s and New Zealand’s governments have published frameworks relating to AI, but they aren’t binding rules. Both countries’ laws relating to privacy, transparency and freedom from discrimination aren’t up to the task, as far as AI’s impact is concerned. This puts them behind the rest of the world.

The US has had a legislated National Artificial Intelligence Initiative in place since 2021. And since 2019, it has been illegal in California for a bot to interact with users for commerce or electoral purposes without disclosing it’s not human.

The European Union is also well on the way to enacting the world’s first AI law. The AI Act bans certain types of AI program that pose unacceptable risk, such as those used in China’s social credit system, and imposes mandatory restrictions on high-risk systems.

Although asking ChatGPT to break the law results in warnings that planning or carrying out a serious crime can lead to severe legal consequences, the fact is there’s no requirement for these systems to have a moral code programmed into them.

There may be no limit to what they can be asked to do, and criminals will likely figure out workarounds for any rules intended to prevent their illegal use. Governments need to work closely with the cybersecurity industry to regulate generative AI without stifling innovation, such as by requiring ethical considerations for AI programs.

We also need to be more cautious about believing what we see online, and remember that humans are notoriously bad at detecting fraud.

Can You Spot A Scam?

As criminals add generative AI tools to their arsenal, spotting scams will only get trickier. The classic tips will still apply. But beyond those, we’ll learn a lot from assessing the ways in which these tools fall short.

Generative AI is bad at critical reasoning and conveying emotion. It can even be tricked into giving wrong answers. Knowing when and why this happens could help us develop effective methods to catch cybercriminals using AI for extortion.

There are also tools being developed to detect AI outputs from tools such as ChatGPT. These could go a long way towards preventing AI-based cybercrime if they prove to be effective.

(Published under Creative Commons from The Conversation)