reasonable, after implementation. Certain aspects of a model can be validated soon after implementation, such as mapping, parameters and rules. Other aspects, such as backtesting and alert generation, cannot be validated until sufficient data has been input and run through the model.

Institutions should focus on two types of model validation: 1. a third-party validation of the model itself and 2. a validation of the institution’s specific use of the model.

A third-party model validation should be obtained and reviewed when performing due diligence during model selection and then annually, or as completed, thereafter. The institution is not responsible for contracting this type of validation; rather, it is the model provider’s responsibility to hire a third party to validate the software. The results of this validation should be available to model users so they are aware of any model capabilities that are not functioning properly and of any known limitations. Depending on the type of model, a model certificate may also be available, which certifies the model’s ability to process data and provide the necessary outputs. A certificate should not take the place of a model validation, but it does provide a determination of whether the technical aspects of the model are functioning properly.

The second type of model validation addresses the institution’s specific use of the model. It is the institution’s responsibility to contract a third party or an independent individual to complete these validations on a periodic basis. The current guidance does not explicitly state how frequently validations are to be performed, except to say that they should occur periodically, considering the complexity of the model and the institution’s risk profile. We typically see validations performed annually for complex, higher-risk institutions and every three years for less complex institutions.
Significant updates or changes to the model should trigger consideration of whether a validation is needed on a more frequent basis. The model validation should focus on the model capabilities in use to determine whether the necessary information is being input to produce the expected outputs. Typically, this validation would not include verification of mathematical accuracy, algorithms or any other proprietary information. Both types of validation are equally important, covering the model from the vendor and user perspectives, and it is imperative that management integrate model validations into its model risk management program.

WHAT TO EXPECT DURING A MODEL VALIDATION

The primary source of formal regulatory guidance on model governance is the “Supervisory Guidance on Model Risk Management,” issued in 2011 by the Office of the Comptroller of the Currency (OCC) jointly with the Board of Governors of the Federal Reserve and later adopted by the Federal Deposit Insurance Corporation in 2017. Model validations should focus on three major components: information input, processing and model outcomes.

• The Information Input Component: how data is delivered to the model.
• The Processing Component: how data is transformed for monitoring.
• The Model Outcomes Component: how data is translated into results, reports, alerts and other useful information.

The validation process varies depending on the specifics of the model and the model type. The validation section of the guidance referenced above should be followed for each model type; however, the data analyzed, the objectives, the usage of the model results and so on will vary among BSA/AML, ALM/IRR and CECL models. Assuming no issues are identified in the third-party validation completed for the model provider or during vendor management reviews, the validation will focus on the information controlled by the institution: the quality of the data, controls over the data, manual manipulation, assumptions and estimates made by management, parameters, etc.
We frequently identify issues specific to the information input component when completing model validations. It is a best practice to understand, at implementation, how information is mapped from source systems to the model. Another best practice relates to default settings and parameters. Typically, model providers activate default settings based on information gathered from peers or other available data. It is important that management review any defaults and either accept them, with an explanation supporting why they fit the institution’s risk profile, or adjust them, with an explanation supporting why the change was made.

The results of the validation should be remediated in a timely manner to maximize the model’s usefulness in managing risk. The key takeaway is that model validations are essential when institutions rely on models as a tool to manage risk.

Kaitlyn E. Gasper, CAMS, CFE, vice president, principal, risk advisory at S.R. Snodgrass, brings extensive experience working with financial institutions, including exceptional expertise with Bank Secrecy Act (BSA) model validations. She also works closely with both public and privately held corporations, nonprofit organizations, partnerships, limited liability corporations and S corporations.

Founded in 1946, S.R. Snodgrass is a privately held, multi-faceted public accounting and consulting firm known for innovative tax, assurance, technology and financial advisory services for financial institutions, nonprofits and businesses of all kinds. The firm has worked with more than 175 financial institutions in 16 states and employs more than 90 professionals. The firm is ranked among the country’s top 300 public accounting firms on Inside Public Accounting’s 2024 list.

REFERENCE

Supervisory Guidance on Model Risk Management. (2011). Board of Governors of the Federal Reserve System. https://www.federalreserve.gov/supervisionreg/srletters/sr1107a1.pdf

22 WEST VIRGINIA BANKER