AI is Fueling a Rise in Voice Scams

The cost of falling for an AI voice scam can be significant, with more than a third of people who’ve lost money saying it has cost them over $1,000

Everybody’s voice is unique, which is why hearing somebody speak is such a widely accepted way of establishing trust. However, with 53% of adults sharing their voice data online at least once a week (via social media, voice notes, and more) and 49% doing so up to 10 times a week, cloning the way somebody sounds is now a powerful tool in a cybercriminal’s arsenal.


With the rise in popularity and adoption of artificial intelligence tools, it is easier than ever to manipulate the voices of friends and family members. McAfee’s research reveals that scammers are using AI technology to clone voices and then send a fake voicemail or call the victim’s contacts pretending to be in distress. Messages that are most likely to elicit a response are those claiming that the sender has been involved in a car incident (48%), been robbed (47%), lost their phone or wallet (43%), or needs help while traveling abroad (41%).


The cost of falling for an AI voice scam can be significant: more than a third of people who’ve lost money say it cost them over $1,000, while 7% were duped out of between $5,000 and $15,000. “Artificial intelligence brings incredible opportunities, but with any technology, there is always the potential for it to be used maliciously in the wrong hands. This is what we’re seeing today with the access and ease of use of AI tools helping cybercriminals to scale their efforts in increasingly convincing ways,” says Steve Grobman, McAfee CTO.


McAfee researchers spent three weeks investigating the accessibility, ease of use, and efficacy of AI voice-cloning tools, finding more than a dozen freely available on the internet. Both free and paid tools exist, many requiring only a basic level of experience and expertise to use. In one instance, just three seconds of audio was enough to produce an 85% voice match, and with more investment and effort the accuracy can be increased further: by training the data models, the researchers achieved a 95% match from just a small number of audio files. The team also had no trouble replicating accents from around the world, whether from the US, UK, India, or Australia.
