The data behind AI is equally as critical as the technology itself. For AI to be beneficial and effective, the data must be accurate, well organized, sufficient in quantity, and prepared for integration. For financial institutions that rely on a core processor, this means coordinating between the core system and their AI technologies, an undertaking that is often complex and financially burdensome, especially for small financial institutions and community banks. Integration can become even more complicated if an institution's core processor and its AI solutions vendor are competitors offering the same or similar products and services, a situation that often drives up integration fees and costs. Even when financial institutions work out all the kinks of system integration, there remains the challenge of finding staff with the expertise to build and deploy AI solutions. The rapid advancement and adoption of AI technologies has created a shortage of skilled AI experts in the broader labor force. While this challenge is expected to ease in the future, at present it leaves financial institutions competing with large tech companies such as Apple or IBM when recruiting AI talent. An even more challenging area for financial institutions is meeting compliance expectations for technologies surrounded by so much regulatory uncertainty. Financial institutions are expected to identify and manage all risks related to AI and how it is used within the organization. It is not enough to simply deploy AI technologies; institutions are expected to understand the data and inputs that drive the outcomes, and to ensure that all data used across the various branches of AI aligns with regulatory compliance requirements.
For example, if the machine learning branch of AI is used in credit decision-making, the bank should understand, and be prepared to explain, the contributing factors the AI system relied on to make that decision (i.e., what data was input to reach that outcome). It is critical that financial institutions not only understand and can explain this process, but also ensure that all data used within the AI system meets regulatory requirements. This means confirming that the AI system is not using information that may violate consumer protection or fair lending laws. Financial institutions that utilize AI should have processes in place for identifying risk, both new and emerging, as well as controls for managing it. Because AI technologies evolve so rapidly, risk levels can shift, and previously unidentified risks can develop at any time. Financial institutions need to be prepared to rise to these regulatory and risk challenges, whether through more frequent monitoring and review of established controls or by contracting with external vendors to conduct robust third-party risk management. The use of AI technologies within financial institutions has captured the interest of regulators and policymakers alike. Two perennial concerns are the safety and soundness of financial institutions and consumer protection. While AI is constantly growing and advancing, many of the banking laws and regulations currently on the books remain a step behind, leaving areas of regulatory uncertainty. Nevertheless, regulators acknowledge the benefits of AI and support responsible innovation by financial institutions.
In 2021, the agencies (the Consumer Financial Protection Bureau, the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and the Federal Reserve Board) issued requests for information (RFIs) on the use of artificial intelligence by financial institutions. In 2022, the OCC issued supervisory expectations for how banks should manage risks associated with AI. Most recently, in April 2023, the agencies issued a joint statement on enforcement efforts against discrimination and bias in automated systems. The 2023 statement outlines some of the challenges of AI and serves as a reminder that financial institutions must embrace responsible innovation.

CONCLUSION

For financial institutions to thrive in the industry and remain relevant in the market, they must continue to be forward-thinking and responsible in their innovation efforts. Artificial intelligence is an ever-evolving technology and a convenience of the world in which we live. Financial institutions must engage in the balancing act of supporting new and innovative technologies for their consumers while acknowledging the risks and challenges that come with such growth. It is imperative that we fully understand the technologies our institutions rely on for their operations and that we remain abreast of arising issues in the regulatory world. Artificial intelligence is the future, and it is filled with both risks and rewards.

Julia A. Gutierrez serves as Compliance Alliance's Director of Education, developing curriculum and presentations as well as presenting at various schools and seminars, both live and in a livestream/hybrid format. Julia has over 20 years of financial industry experience with the Compliance Alliance team.

Utah Banker