
The Growing Threat of AI-Driven Fraud and Deepfakes

BY MATT JONES, Compliance Advisor, Compliance Hub

In the evolving world of financial crime, few developments have emerged as swiftly, or as alarmingly, as deepfakes. Deepfakes are synthetic media, generally video, audio or images, created with AI to replicate a person’s appearance or voice, or even to fabricate supporting identification documents. AI allows fraudsters to produce remarkably convincing impersonations for identity fraud, social engineering and bypassing identity verification systems, ushering in a new era of fraud in which bad actors can impersonate individuals and manipulate financial systems more easily than ever. Deepfakes pose a unique threat to identity verification and fraud detection, and banks must modernize their control environments to keep pace.

In a speech delivered on April 17, 2025, Federal Reserve Governor Michael S. Barr highlighted the escalating threat that generative AI (Gen AI) poses to the financial sector, particularly through the proliferation of deepfakes. He noted a staggering “twentyfold increase over the last three years” in deepfake-related attacks. Barr underscored the stark contrast between the low-cost, rapidly deployed synthetic media used by fraudsters and the resource-intensive, slow-to-implement controls required of financial institutions: while synthetic media can be created and circulated with minimal cost and effort, financial institutions must invest in careful review, rigorous testing and layered controls. Barr also acknowledged the challenges smaller institutions face and emphasized the need for banks to adopt scalable, thoughtful steps that can meaningfully reduce exposure to AI-driven fraud.

To address this growing risk, banks should begin by evaluating and enhancing their existing controls in a manner proportionate to their size and complexity. Scalable solutions do not necessarily require high-end technology. Training front-line staff to identify red flags of synthetic identity misuse (such as unnatural movements in video calls or inconsistencies in submitted documentation) can go a long way toward mitigating risk. Adding out-of-band verification (e.g., call-back procedures) for high-risk transactions, reinforcing manual identity reviews during new-customer onboarding, and implementing dual authorization for account changes can also serve as practical, low-cost defenses. Some vendors now offer
