
affordable, modular fraud detection tools, including basic liveness detection or media forensics capabilities, which can be used to supplement traditional customer due diligence.

In addition to internal controls, a key risk area lies in the oversight of third-party relationships. As banks increasingly partner with vendors and fintechs to deliver services, it is essential to evaluate not only the vendor's performance but also how AI is used in the services they provide. Does the vendor rely on AI models for customer verification, risk scoring or fraud detection? If so, what guardrails are in place to detect misuse, synthetic identities or deepfakes?

Banks must remember that they remain ultimately responsible for the actions and outputs of their third-party vendors, even when those services are outsourced. This includes ensuring vendors operate within the bank's risk appetite and regulatory expectations. To meet this obligation, banks should enhance their third-party risk management programs to include specific due diligence around AI model governance, data integrity and fraud control capabilities. Periodic reviews, contract clauses that require transparency, and reporting on AI performance and fraud detection effectiveness are all steps a bank may consider taking to maintain oversight of these third parties.

The risks highlighted by Gov. Barr certainly aren't new to the regulatory landscape. In November 2024, FinCEN issued an alert (FIN-2024-ALERT004) to help financial institutions identify fraud schemes involving deepfake media created with generative AI. The alert is part of the U.S. Department of the Treasury's initiative to address the challenges posed by AI in the financial sector. It offers foundational awareness of the deepfake threat and serves as guidance for banks to review and update their risk-based procedures to address the specific challenges deepfakes pose.

The alert also provides specific red flags to help institutions identify potential deepfakes, including but not limited to: anomalies in submitted images or videos; discrepancies between known customer data and new applications; and unusual transaction behavior following new account openings. It further provides SAR filing guidance, directing institutions to use the key term "FIN-2024-DEEPFAKEFRAUD" when reporting suspected activity. Banks should incorporate these indicators into their fraud programs and consider whether their current systems are sufficient to capture synthetic identity activity in a timely manner. A simplified illustration of such a red-flag screen appears at the end of this article.

As banks increasingly rely on AI to combat fraud, it is crucial to also recognize and manage the new risks associated with generative AI. A robust strategy involves more than implementing protective technologies; it requires a shift in culture and operations to effectively handle the rising sophistication of synthetic identities, the potential misuse of deepfakes to circumvent security measures and the vulnerabilities that may arise from third-party vendors utilizing AI tools. Establishing strong AI governance, designing scalable controls and ensuring proper oversight of third-party partners are essential steps in mitigating these threats. Although the danger posed by deepfakes is significant and escalating, with careful planning and adaptation, even smaller community banks can substantially lower their risk and bolster their resilience in this evolving AI-driven landscape.
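To make one of the FinCEN red flags concrete, below is a minimal sketch of a rule-based screen for unusual transaction activity shortly after a new account opens. This is an illustrative assumption, not the FinCEN alert's method or any vendor's product: the account structure, field names, window and thresholds are all hypothetical placeholders that a bank would calibrate to its own data and risk appetite.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical, simplified account record; field names are illustrative only.
@dataclass
class Account:
    account_id: str
    opened_on: date
    transactions: list = field(default_factory=list)  # (date, amount) tuples

NEW_ACCOUNT_WINDOW = timedelta(days=14)  # assumed post-opening review window
VELOCITY_THRESHOLD = 10                  # assumed transaction-count trigger
AMOUNT_THRESHOLD = 25_000.00             # assumed aggregate-dollar trigger

def flag_new_account_velocity(account: Account) -> bool:
    """Flag unusually rapid activity following a new account opening,
    one of the red-flag patterns described in FIN-2024-ALERT004.
    Thresholds here are placeholders, not values from the alert."""
    cutoff = account.opened_on + NEW_ACCOUNT_WINDOW
    early = [amt for txn_date, amt in account.transactions if txn_date <= cutoff]
    return len(early) >= VELOCITY_THRESHOLD or sum(early) >= AMOUNT_THRESHOLD

# Example usage with fabricated sample data:
acct = Account(
    account_id="12345",
    opened_on=date(2025, 3, 1),
    transactions=[(date(2025, 3, 2), 4_000.00)] * 8,  # $32,000 in two days
)
if flag_new_account_velocity(acct):
    print(f"Account {acct.account_id}: review for potential deepfake-enabled "
          "fraud; use key term FIN-2024-DEEPFAKEFRAUD in any SAR filing.")
```

A flagged account would feed a human review queue rather than trigger an automatic SAR; the alert's red flags are indicators to investigate, and the screen above is only one of several (image anomalies and application-data discrepancies would require separate checks).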

RkJQdWJsaXNoZXIy MTg3NDExNQ==