
The Growing Threat of AI-Driven Fraud and Deepfakes

Matt Jones, Compliance Advisor, Compliance Hub

In the evolving world of financial crime, few developments have emerged as swiftly or as alarmingly as deepfakes. Deepfakes are synthetic media, generally video, audio or images, digitally created with AI to replicate a person’s appearance, voice or even supporting identification documents. AI allows fraudsters to produce highly convincing impersonations for identity fraud, social engineering and bypassing identity verification systems. AI-driven deepfakes have ushered in a new era of fraud, making it easier than ever for bad actors to impersonate individuals and manipulate financial systems. Deepfakes pose a unique threat to identity verification and fraud detection, requiring banks to modernize their control environments to keep pace.

In a speech delivered on April 17, 2025, Federal Reserve Governor Michael S. Barr highlighted the escalating threat that generative AI (Gen AI) poses to the financial sector, particularly through the proliferation of deepfakes. He noted a staggering “twentyfold increase over the last three years” in deepfake-related attacks. Barr underscored the stark contrast between the low-cost, rapidly deployed synthetic media used by fraudsters and the resource-intensive, slow-to-implement controls required of financial institutions. While synthetic media can be created and circulated with minimal cost and effort, financial institutions must invest in careful review, rigorous testing and layered controls. Barr also acknowledged the challenges smaller institutions face and emphasized the need for banks to adopt scalable, thoughtful steps that can meaningfully reduce exposure to AI-driven fraud.
