
Countering Voice Fraud in the Age of AI

[Image: older woman on the phone. Source: Ian Allenden via Alamy]

COMMENTARY
Three seconds of audio is all it takes to clone a voice. Vishing, or voice fraud, has rapidly become a problem many of us know only too well, impacting 15% of the population. Over three-quarters of those targeted end up losing money, making this the most lucrative type of imposter scam on a per-person basis, according to the US Federal Trade Commission.

When caller ID spoofing is combined with AI-based deepfake technology, fraudsters can, at very little cost and at huge scale, disguise their real number and location and convincingly impersonate trusted organizations, such as a bank or local council, or even friends and family.

While AI presents all manner of new threats, the ability to falsify caller IDs remains the primary point of entry for sophisticated fraud, and it also poses serious challenges for authenticating genuine calls. Let's delve into the criminal world of caller ID spoofing.
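To see why spoofing is so straightforward, consider how a call's signaling actually carries the caller ID. In SIP, the protocol behind most modern voice traffic, the displayed name and number come from a From header that the originating side simply writes itself. The sketch below uses a made-up INVITE message purely for illustration; absent an attestation scheme, nothing in the header proves who set it.

```python
# A minimal sketch of why legacy caller ID is easy to forge: in SIP
# signaling, the displayed number comes from the From header, a plain
# text field the originating side fills in itself. The INVITE below is
# a hypothetical example, not a real call.
import re

invite = (
    "INVITE sip:victim@carrier.example SIP/2.0\r\n"
    "Via: SIP/2.0/UDP attacker.example;branch=z9hG4bK776\r\n"
    'From: "Your Bank" <sip:+18005551234@carrier.example>;tag=49583\r\n'
    "To: <sip:victim@carrier.example>\r\n"
    "Call-ID: 84b4c76e66710\r\n"
    "CSeq: 1 INVITE\r\n\r\n"
)

def displayed_caller_id(sip_message: str) -> tuple[str, str]:
    """Extract the display name and number a handset would show.

    The key point: without STIR/SHAKEN attestation, these values are
    whatever the sender chose to write -- there is no built-in proof.
    """
    match = re.search(r'From:\s*"([^"]*)"\s*<sip:([^@>]+)@', sip_message)
    if not match:
        raise ValueError("no From header found")
    return match.group(1), match.group(2)

name, number = displayed_caller_id(invite)
print(f"Phone would display: {name} ({number})")  # "Your Bank" (+18005551234)
```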

What’s Behind the Rise in Voice Fraud?

The democratization of spoofing technology, such as spoofing apps, has made it easy for malicious actors to impersonate legitimate caller IDs, driving an increase in fraud conducted over voice calls. One journalist, who described herself as rational and meticulous by nature, fell victim to a sophisticated scam that exploited her fear for her family's safety. Initially contacted through a spoofed call that appeared to come from Amazon, she was transferred to someone posing as an FTC investigator, who convincingly presented a fabricated story involving identity theft, money laundering, and threats to her safety.

These stories are becoming increasingly common. Individuals are primed to be skeptical of a withheld, international, or unknown number, but if the name of a legitimate company flashes up on their phone, they are far more likely to answer the call and trust whoever is on the line.

As well as spoofing, we are seeing a rise in AI-generated audio deepfakes. In Canada in 2023, criminals scammed senior citizens out of more than $200,000 by using AI to mimic the voices of loved ones in trouble. In the same year, a mother in the US state of Arizona received a desperate call, which turned out to be AI-generated, from someone impersonating her 15-year-old daughter and claiming the girl had been kidnapped. Combined with caller ID spoofing, such deepfakes are almost impossible for the average person to catch.

As generative AI and AI-based tools become more accessible, this kind of fraud is becoming more common. Cybercriminals don’t necessarily need to make direct contact to replicate a voice, because over half of people willingly share their voice in some form at least once a week on social media, according to McAfee. Nor do they need exceptional digital skills, since apps do the hard work of cloning the voice based on a short audio clip, as highlighted recently by high-profile deepfakes of President Biden and Taylor Swift.

Entire organizations can fall prey to voice fraud, not just individuals. All it takes is one threat actor convincing one employee to share a seemingly insignificant detail about the business over the phone; pieced together with other information, that detail can give a cybercriminal access to sensitive data. It's a particularly worrying trend in industries where voice communication is a key component of customer interaction, such as banking, healthcare, and government services. Many businesses rely on voice calls to verify identities and authorize transactions, and as such are particularly vulnerable to AI-generated voice fraud.

What We Can Do About It

Regulators, industry bodies, and businesses increasingly recognize the need for collective action against voice fraud. This could include sharing intelligence to better understand scam patterns across regions and industries, developing industrywide standards to improve voice call security, and tightening the regulations governing reporting for network operators.
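One such industrywide standard, already mandated in the US and Canada, is STIR/SHAKEN, which attaches a cryptographically signed token (a PASSporT, defined in RFC 8225) to each call so that downstream carriers can check whether the originating provider vouched for the calling number. As a rough illustration of the idea, here is a simplified verification sketch, assuming the PyJWT library and a certificate already fetched and validated from the token's "x5u" header; real deployments involve considerably more, including certificate chain and freshness checks.

```python
# A simplified sketch of how a terminating carrier might check a
# STIR/SHAKEN PASSporT (RFC 8225/8588) -- the signed token that vouches
# for a caller ID. Assumes PyJWT ("pip install pyjwt[crypto]") and that
# the originating carrier's certificate has already been fetched from
# the token's "x5u" header URL and validated; a real verifier also
# checks the certificate chain and the token's freshness ("iat").
import jwt  # PyJWT

def verify_passport(token: str, carrier_public_key) -> str:
    """Return the attestation level if the signature checks out."""
    claims = jwt.decode(
        token,
        key=carrier_public_key,
        algorithms=["ES256"],  # PASSporTs are signed with ES256
    )
    # "attest" is A (full), B (partial), or C (gateway). Only level A
    # means the originating carrier verified the caller's right to use
    # the number it is displaying.
    attestation = claims.get("attest", "C")
    calling_number = claims["orig"]["tn"]
    print(f"Number {calling_number} attested at level {attestation}")
    return attestation
```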

Regulators around the world are now tightening the rules around AI-based voice fraud. For instance, the Federal Communications Commission (FCC) in the US has ruled that robocalls using AI-generated voices count as "artificial" under existing law, making them illegal just as prerecorded robocalls already were. In Finland, the government has imposed new obligations on telecommunications operators to guard against caller ID spoofing and the transfer of scam calls to recipients. The EU is investigating similar measures, driven primarily by banks and other financial institutions that want to keep their customers safe. In all instances, efforts are underway to close the door on caller ID spoofing and smishing, which often serve as the entry point for more sophisticated, AI-based tactics.

Many promising detection tools in development could, in theory, drastically reduce voice fraud: voice biometrics, deepfake detectors, AI-based anomaly detection, blockchain, signaling firewalls, and so on. However, cybercriminals are adept at outpacing and outwitting technological leaps, so only time will tell which will work best.
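To make one of those approaches concrete, the toy sketch below applies AI-based anomaly detection to call metadata using scikit-learn's IsolationForest. The features and numbers are hypothetical stand-ins for what a carrier or enterprise might actually log.

```python
# A toy sketch of AI-based anomaly detection over call records, one of
# the approaches mentioned above. Assumes scikit-learn; the features
# and data are hypothetical stand-ins for what a carrier might log
# (call duration, calls per hour from the number, mismatch between the
# claimed caller ID region and the network entry point, etc.).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: [duration_sec, calls_per_hour, route_mismatch (0/1)]
normal_calls = np.column_stack([
    rng.normal(180, 60, 500).clip(5, None),  # typical call lengths
    rng.poisson(2, 500),                     # modest calling rates
    np.zeros(500),                           # route matches caller ID
])
suspect_calls = np.array([
    [12, 400, 1],  # seconds-long calls, hundreds per hour, bad route
    [8, 350, 1],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_calls)

# -1 flags an outlier; a real system would route these for deeper checks
print(model.predict(suspect_calls))  # expected: [-1 -1]
```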

For businesses of all sizes and sectors, cybersecurity capabilities will become an increasingly important part of telecom services. Beyond protections at the network level, businesses should establish clear policies and processes, such as multifactor authentication that combines a variety of verification methods.
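As a concrete illustration of that kind of policy, the hypothetical sketch below, using the pyotp library, enforces the rule that a voice request alone never authorizes a sensitive action: a time-based one-time code from an out-of-band authenticator must accompany it. The workflow and function names are illustrative, not a drop-in design.

```python
# A minimal sketch of the policy "a voice call alone never authorizes a
# sensitive action": the request must also carry a time-based one-time
# code from an out-of-band authenticator. Assumes the pyotp library
# ("pip install pyotp"); function names and thresholds are hypothetical.
import pyotp

# Enrolled once, out of band (e.g., scanned into an authenticator app)
employee_secret = pyotp.random_base32()
totp = pyotp.TOTP(employee_secret)

def authorize_transfer(request_via_phone: dict, otp_code: str) -> bool:
    """Approve only if the caller's request carries a valid second factor.

    Even a perfect voice clone fails here: the fraudster would also
    need the victim's enrolled device to produce a current code.
    """
    if not totp.verify(otp_code, valid_window=1):  # allow 30s of clock skew
        return False
    return request_via_phone.get("amount", 0) <= 10_000  # plus policy checks

# The legitimate employee reads the code from their own device:
print(authorize_transfer({"amount": 5_000}, totp.now()))  # True
print(authorize_transfer({"amount": 5_000}, "000000"))    # almost surely False
```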

Companies should also raise awareness of the most common fraud tactics. Regular training for employees should focus on recognizing and responding to scams, while customers should be encouraged to report suspicious calls.

At the consumer level, the scale of the problem is stark: the UK's communications regulator, Ofcom, found that more than 41 million people were targeted by suspicious calls or texts over a three-month period in 2022. Although brands and governments have long reiterated that legitimate businesses will never ask for money or sensitive information over the phone, continued vigilance is clearly necessary.

The easy availability of cloning tools and spiraling crime levels have experts like the Electronic Frontier Foundation suggesting that people should agree on a family password to combat AI-based fraud attempts. It’s a surprisingly low-fi solution to a high-tech challenge.

https://www.darkreading.com/vulnerabilities-threats/countering-voice-fraud-in-the-age-of-ai