Criminals Using AI to Impersonate Loved Ones

K.C. Sabreena Basheer · Last Updated: 24 Apr 2023 · 3 min read

Criminals have found a new way to scam people: using artificial intelligence (AI) to create convincing voice imitations of family members or friends. This emerging threat is known as "deepfake audio"; the term "deepfake" combines "deep learning" with "fake" and refers to highly realistic synthetic media, including voice clones capable of deceiving victims.

For just a few euros, numerous companies now offer high-quality voice cloning services that can easily fool unsuspecting targets. One notable example that gained widespread attention involved users of the online forum 4chan, who used Prime Voice, a voice cloning tool from start-up ElevenLabs, to generate a clip of British actress Emma Watson's voice reading Adolf Hitler's Mein Kampf. Another example demonstrated the technology's accuracy by reproducing actor Leonardo DiCaprio's speech at the United Nations.

Defining Deepfake Technology

Deepfake technology is a subset of artificial intelligence capable of generating synthetic audio, video, images, and virtual personas. It poses a significant risk to society, as it can be used to manipulate perceptions, spread disinformation, and enable cybercrime.

As deepfake technology becomes more sophisticated and accessible, the potential for its malicious use increases. Legislators, industry leaders, and the public must work together to develop comprehensive solutions that address the growing threats posed by audio deepfakes while ensuring that AI innovation continues to progress responsibly.

Increase in Deepfake-Enabled Scams

The rise of audio deepfake technology has alarmed experts who warn of its potential for misuse. One primary concern is the spread of misinformation, such as making it appear that a politician made a shocking statement they never actually uttered. Another fear is the exploitation of vulnerable individuals, particularly the elderly, through scams involving convincing voice impersonations.

In a recent case, a Vice journalist accessed his own bank account using an AI replica of his voice, calling into question the effectiveness of voice biometric security systems. ElevenLabs, whose tool was used in the Emma Watson audio deepfake, has since raised the price of its services and implemented manual verification for new accounts. A growing number of deepfake-enabled phishing incidents have been reported recently, underscoring the urgent need for safeguards against this threat. Other notable examples include:

  • A bank manager was scammed into initiating wire transfers worth $35 million using AI voice cloning technology.
  • A deepfake video of Elon Musk promoting a crypto scam went viral on social media.
  • During a Zoom call, an AI hologram impersonated a chief operating officer at one of the world’s largest crypto exchanges, scamming another exchange out of its liquid funds.
  • Adversaries have used deepfakes in job interviews to gain access to company systems.
  • A survey found that 66% of participants had witnessed a cyber incident in which deepfakes were used as an attack vector.

Our Say

The rapid advancement of audio deepfake technology has exposed the urgent need for effective legislation and safeguards to prevent its malicious use. Lawmakers must collaborate with industry experts to develop comprehensive regulations that balance fostering responsible AI innovation with protecting society from the dangers of deepfake-enabled scams.

Such legislation should include criminal penalties for malicious deepfake use, guidelines for responsible AI development and deployment, and support for research and development of detection and countermeasure technologies. Additionally, public awareness campaigns should be conducted to educate people about the risks associated with deepfake audio and other emerging AI threats, as well as the steps they can take to protect themselves.

Sabreena Basheer is an architect-turned-writer who's passionate about documenting anything that interests her. She's currently exploring the world of AI and Data Science as a Content Manager at Analytics Vidhya.
