Trust Stamp face biometrics layer addresses voice vulnerability to deepfakes

Trust Stamp has launched a program that aims to help financial institutions fast-track their deepfake detection capabilities with multi-factor biometric authentication. A press release from the face biometrics firm positions Trust Stamp’s biometric face authentication product as “an alternative for, or supplement to, voice-based systems” that are vulnerable to deepfake voice attacks.

Audio and video deepfakes have become a pressing problem, as fraudsters exploit cheap and easily accessible voice cloning and generative AI software. Political figures and celebrities have had their likenesses cloned, leading to a spike in the market for biometrics software, verification tools and preventative ID authentication.

For businesses, however, perhaps no case has rattled the nerves as much as the infamous deepfake fraud story out of Hong Kong.

“In February of this year we saw a widely publicized example where a finance worker in Hong Kong paid out $25,000,000 based on a video call that included a deep fake representation of his company’s CFO,” says Trust Stamp President Andrew Gowasack, describing what he calls “CEO Fraud.”

“Although there should be significant focus on attacks on the interaction between the customer and the financial institution, deep fake technology can also be used for attacks within the customer enterprise resulting in the financial institution receiving instructions that have every appearance of being legitimate, having been initiated based upon a fraudulent communication within the enterprise,” says Gowasack. “Fraud of this type is typically commissioned by email via a spear phishing attack, but with voice and video deepfakes it can now be used for instructions given by Zoom or other video technologies.”

Trust Stamp says banks, credit unions and other enterprises that use voice instructions or voice-based authentication are particularly vulnerable to synthetic audio deepfakes, and should consider integrating the company’s multi-factor biometric authentication tools for a robust defense.

“We have never offered voice-based authentication because it appeared probable that it would be spoofed by fast advancing AI-technology,” Gowasack says. “Although OpenAI have stated that they are not currently releasing their Voice Engine for public use there are many alternative generative AI engines available including open-source models. Our multimodal authentication tool using facial authentication with proof of life, paired with optional device authentication, can quickly be integrated into current authentication systems as an alternative for, or supplement to, voice-based systems and can also be initiated as a stand alone service for high-risk transactions.”
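Trust Stamp has not published implementation details, but a layered check of the kind Gowasack describes can be pictured as a single decision that combines a face-match score, a proof-of-life (liveness) result, and an optional device factor. The sketch below is purely illustrative: the `AuthSignals` fields, the threshold, and the `authorize` helper are hypothetical stand-ins, not Trust Stamp’s API.

```python
from dataclasses import dataclass

# Illustrative threshold; real systems tune this per risk tier.
FACE_MATCH_THRESHOLD = 0.90

@dataclass
class AuthSignals:
    face_score: float      # similarity of live capture vs. enrolled template, 0..1
    liveness_passed: bool  # proof-of-life check rejects replayed photos/videos
    device_trusted: bool   # optional factor: a previously enrolled device

def authorize(signals: AuthSignals, high_risk: bool) -> bool:
    """Combine face match, liveness, and device factors into one decision."""
    if not signals.liveness_passed:
        return False  # a deepfake video or printed photo should fail here
    if signals.face_score < FACE_MATCH_THRESHOLD:
        return False
    # For high-risk transactions, also require the optional device factor.
    if high_risk and not signals.device_trusted:
        return False
    return True

# Example: a high-value wire transfer demands all three factors.
signals = AuthSignals(face_score=0.97, liveness_passed=True, device_trusted=False)
print(authorize(signals, high_risk=True))  # False: untrusted device blocks it
```

The point of the layering is that a voice clone alone defeats none of these factors: a fraudster would also need to beat a liveness-checked face match and, for high-risk transactions, hold an enrolled device.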

NPR tests audio deepfake detection providers

The social and political implications of AI-generated deepfakes are becoming clearer. In an article for NPR, Sarah Barrington, an AI and forensics researcher at the University of California, Berkeley, describes how convincing deepfakes can rattle foundational trust.

“If we label a real audio as fake, let’s say, in a political context, what does that mean for the world?” Barrington says. “We lose trust in everything. And if we label fake audio as real, then the same thing applies. We can get anyone to do or say anything and completely distort the discourse of what the truth is.”

In an informal experiment, Wisconsin Public Radio, an NPR member station, tested its reporters’ cloned voices against audio deepfake detection tools from three providers: Pindrop Security, AI or Not and AI Voice Detector.

A summary says that “NPR submitted 84 clips of five to eight seconds to each provider. About half of the clips were snippets of real radio stories from three NPR reporters.” NPR generated the rest with technology from the company PlayHT, cloning the voices of the same reporters saying the same words as in the real clips.

Of the three, only Pindrop, which is available to businesses but not individuals, performed well, detecting all but three deepfake samples.
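For readers who want to reproduce this kind of informal benchmark, the evaluation itself is simple: run each labeled clip through a detector and tally errors separately for real and cloned audio, since Barrington notes both error directions are harmful. The sketch below uses a hypothetical `detect_is_fake` placeholder; it does not represent the API of any provider named above.

```python
def detect_is_fake(clip_path):
    # Hypothetical placeholder: wire this to a real detection provider.
    # This stub always answers "real", so every deepfake slips through.
    return False

def evaluate(clips):
    """clips: (path, is_fake_label) pairs. Returns both error counts."""
    missed_fakes = 0   # deepfakes labeled real (the dangerous direction)
    false_alarms = 0   # real audio labeled fake (erodes trust in genuine clips)
    for path, is_fake in clips:
        predicted_fake = detect_is_fake(path)
        if is_fake and not predicted_fake:
            missed_fakes += 1
        if not is_fake and predicted_fake:
            false_alarms += 1
    return missed_fakes, false_alarms

# Example with dummy labels: two real clips, two cloned clips.
sample = [("real1.wav", False), ("real2.wav", False),
          ("fake1.wav", True), ("fake2.wav", True)]
print(evaluate(sample))  # (2, 0) with the always-"real" placeholder
```

On a split like NPR’s (84 clips, roughly half real), Pindrop’s reported result would correspond to a `missed_fakes` count of three.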

Barrington says this shows the need to keep developing novel approaches to deepfake detection, as fraud threatens not just global politics and finance but also everyday interpersonal interactions, through tactics like scam phone calls. Machine learning detection algorithms are unlikely to have been trained on the voices of someone’s non-famous family members, so sussing out whether cash-strapped Uncle Jimmy’s call is really coming from a fraudster is more complex than detecting a deepfake of a Joe Biden or Taylor Swift – which is already difficult enough.

But it’s worth the effort, when the stakes are nothing less than our fundamental trust in reality.

https://www.biometricupdate.com/202404/trust-stamp-face-biometrics-layer-addresses-voice-vulnerability-to-deepfakes