Deepfakes are here, and they’re coming for investors’ identities.
So-called deepfakes (fabricated images, sound and video generated with artificial intelligence) make knowing your client increasingly difficult and pose a growing threat to the financial industry.
Advances in generative artificial intelligence (AI) allow aspiring fraudsters to dupe financial industry firms by either impersonating clients or creating fictitious ones — and to do so easily and cheaply. This development is opening new channels of criminal activity, intensifying challenges for the financial industry, and generating operational and credit risks.
“Deepfakes are a serious concern,” said Scott Gilbert, vice-president, risk monitoring, member supervision, with the U.S. Financial Industry Regulatory Authority (FINRA), at FINRA’s annual conference in May.
“You just need about 15 seconds of someone’s voice now to create a full lexicon — a deepfake — of that person’s voice,” Gilbert said. “Voice authentication as a mechanism of identifying customers, and ensuring the integrity of transactions, is going to be very challenged, if not impossible, in the near future.”
That’s already proving to be the case. Earlier this year, a company in Hong Kong was defrauded of US$25 million in a scheme that used deepfakes of the company’s senior executives for a video call with a junior employee, who made a series of bank transfers from the company’s accounts to facilitate a supposedly confidential deal.
The threat is becoming more serious as deepfake technology grows widely available and no longer requires much skill to use.
“In the past few years, advancements in the field of [generative AI], especially the public arrival of affordable [generative AI] tools, have significantly reduced the barrier to entry for creating deepfakes,” stated a recent report from Moody’s Investors Service Inc.
“You don’t need to know much coding anymore to do all of these things; you have to have a mindset of ‘How do I use, or misuse, these capabilities in a very dysfunctional manner?’” Mario Schlener, managing partner and co-lead of Ernst & Young LLP Canada’s global risk transformation team, said at the Ontario Securities Commission’s (OSC) annual conference in May.
The criminal possibilities are endless. For example, a fake photo of a purported terrorist incident was used to touch off a temporary drop in U.S. equities markets. Other malicious uses include extortion, phishing attacks, insurance fraud and malware distribution.
“Deepfakes could also usher in a new era of cyber threats,” Moody’s warned. “Through their ability to fake identities, evade detection by human senses, and carry out sophisticated social engineering, deepfakes challenge our traditional mechanisms of trust and authenticity in the digital space.”
Deepfakes could evade KYC checks and traditional identity validation procedures, such as face- and voice-matching, allowing criminals to take control of existing accounts or to set up fake accounts for fraud and money laundering.
To combat the threat, KYC procedures will likely have to evolve. “The KYC paradigm may need to move beyond a point-in-time, one-off verification and include an ongoing tracking of transactions and account holder behaviour,” the Moody’s report said.
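As a rough sketch of what that kind of ongoing behavioural monitoring could look like, the Python fragment below flags transactions that break with an account's history. The `Transaction` type, the z-score threshold and the specific checks are illustrative assumptions for this article, not any firm's actual controls.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    amount: float
    payee: str

def flag_anomalies(history: list[Transaction], new_tx: Transaction,
                   z_threshold: float = 3.0) -> list[str]:
    """Return reasons the new transaction deviates from this account's history."""
    reasons = []
    amounts = [t.amount for t in history]
    # Flag amounts far outside the account's historical range (simple z-score).
    if len(amounts) >= 2 and stdev(amounts) > 0:
        z = (new_tx.amount - mean(amounts)) / stdev(amounts)
        if z > z_threshold:
            reasons.append(f"amount {new_tx.amount:,.2f} is {z:.1f} std devs above the norm")
    # Flag transfers to a payee this account has never paid before.
    if new_tx.payee not in {t.payee for t in history}:
        reasons.append(f"first-ever transfer to payee '{new_tx.payee}'")
    return reasons

# Example: a large transfer to an unseen payee triggers both checks.
history = [Transaction(500, "Broker A"), Transaction(650, "Broker A"),
           Transaction(480, "Utility Co")]
print(flag_anomalies(history, Transaction(25_000, "Offshore LLC")))
```

A production system would layer far more signals, such as device fingerprints and login patterns, on top of simple statistical checks like these.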
There is work underway on other technical solutions to help spot deepfakes, Moody’s noted, including scrutinizing digital files’ metadata for evidence of tampering, using lip-sync analysis to detect video manipulations and potentially using blockchain technology to verify the authenticity of media files.
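To make the metadata idea concrete, here is a minimal Python sketch (using the Pillow imaging library) that looks for common signs of re-saving in an image file's EXIF data. The editor signatures and the `tampering_hints` helper are hypothetical examples; real forensic tools go much deeper, examining compression artifacts and sensor noise.

```python
from PIL import Image  # pip install Pillow

# Illustrative (assumed) signatures that editing tools may leave in the Software tag.
EDITOR_SIGNATURES = ("photoshop", "gimp", "ffmpeg")

def tampering_hints(path: str) -> list[str]:
    """Return simple metadata-based hints that an image may have been altered."""
    hints = []
    exif = Image.open(path).getexif()
    software = str(exif.get(0x0131, "")).lower()   # 0x0131 = Software tag
    if any(sig in software for sig in EDITOR_SIGNATURES):
        hints.append(f"re-saved by editing software: {exif[0x0131]}")
    modified = exif.get(0x0132)                    # 0x0132 = DateTime (last modified)
    captured = exif.get_ifd(0x8769).get(0x9003)    # 0x9003 = DateTimeOriginal
    if modified and captured and modified != captured:
        hints.append("last-modified timestamp differs from capture timestamp")
    if len(exif) == 0:
        hints.append("no metadata at all (often a sign it was stripped)")
    return hints
```

Absent metadata is only weakly suspicious on its own, since many legitimate platforms strip EXIF data on upload, which is why such techniques serve as aids rather than definitive tests.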
But technological solutions are unlikely to fully neutralize the threat.
“As deepfakes evolve, researchers and technology companies are constantly involved in developing newer approaches and solutions to detect them with a fair amount of accuracy,” Moody’s said. “However, there is no perfect solution, and as with most technology-related security challenges, it will remain a case of constant catching up.”
“While there are ways to identify [deepfakes] through different safety [and] security protocols … the only way to manage them is through human intervention and very strong associated control steps,” Schlener suggested at the OSC’s conference.
Policymakers and regulators are trying to develop frameworks to curb the harm from deepfakes. China was one of the first countries to bring in regulations targeting deepfakes, and legislative efforts are underway in the U.S., at both the state and federal levels, as well as in Europe and the U.K., Moody's noted.
However, the rating agency said these efforts generally lack consistency and remain undeveloped, “with most jurisdictions still deliberating whether to enact new legislation or if existing laws are sufficient.”
Even in the jurisdictions adopting regulations to target deepfakes, these efforts are an incomplete solution, Moody’s suggested, “because of the dynamic nature of deepfake technology and because free speech concerns can constrain governments’ power to place limits on the technology.”
As the Moody’s report said, “Effectively reducing political and economic risks from deepfakes will likely require an industry-led effort that fosters cooperation between technology companies and social media platform operators.”