Pub 4 2023 Issue 6

AI IN LENDING DECISIONING AND UNINTENDED DISCRIMINATION
BY SHELLI J. CLARKSTON, SPENCER FANE, LLP

With the advancements in artificial intelligence (AI) technology, businesses around the world are considering how they can use AI to improve efficiency and advance business goals. Financial institutions are no exception. While AI can bring many efficiencies and advancements to the way business is conducted, financial institutions operate in a highly regulated industry and must address many considerations before putting AI to use. In the lending context, many credit decisioning technology platforms are advertised as improving, automating and eliminating bias in credit decisioning. However, the issue of bias is not so straightforward, and regulatory agencies are not backing away from it. The Consumer Financial Protection Bureau (CFPB) has stated, “Tech marketed as ‘artificial intelligence’ and as taking bias out of decision-making has the potential to produce outcomes that result in unlawful discrimination.”1

On April 25, 2023, the CFPB and other federal agencies released a joint statement regarding the use of advanced technologies, including AI.2 CFPB Director Rohit Chopra stated, “Today’s joint statement makes it clear that the CFPB will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision-making.”

The Equal Credit Opportunity Act (ECOA) of 1974, which is implemented by Regulation B, applies to all lenders. The statute prohibits financial institutions and other firms engaged in the extension of credit from discriminating against a borrower on the basis of sex, marital status, race, color, religion, national origin, or age (provided the applicant has the capacity to contract); because all or part of the applicant’s income derives from any public assistance program; or because the applicant has, in good faith, exercised any right under the Consumer Credit Protection Act.
So how could AI, which is designed to create efficiencies, promote fairness and improve the lending process, run afoul of the ECOA? To answer this question, we must consider the data being used to make lending decisions. These technology platforms rely on voluminous datasets to power their algorithmic decision-making. We have all heard the adage “garbage in, garbage out.” In other words, incorrect data input creates bad results. Algorithmic bias describes errors in a technology system that create unintentional unfair outcomes. As applied to lending, algorithmic bias could result in one group of applicants receiving some advantage or disadvantage when compared to other applicants, even where there is no relevant difference between the two groups. This bias is created by erroneous assumptions in the machine-learning process. When algorithmic bias results in different treatment or impacts disfavoring applicants based on characteristics protected by the ECOA, the result is algorithmic discrimination, which, even if generated by a technology platform, still violates the ECOA.
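One way compliance teams screen automated decisioning for the kind of disparate impact described above is to compare approval rates across applicant groups, for example using the “four-fifths rule” heuristic drawn from fair-lending and employment-testing analysis. The sketch below is a minimal, hypothetical illustration — the data, function names and 0.8 threshold are assumptions for demonstration, not a compliance standard or any particular platform’s method:

```python
# Hypothetical illustration: screening model outputs for disparate impact
# using the "four-fifths rule" heuristic. All data below is invented.

def approval_rate(decisions):
    """Fraction of applications approved (True = approved, False = denied)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected_group, control_group):
    """Ratio of the protected group's approval rate to the control group's.
    Values below 0.8 are a common red flag warranting further review."""
    return approval_rate(protected_group) / approval_rate(control_group)

# Toy decision outputs from a hypothetical credit model:
group_a = [True, True, True, True, False]    # 80% approved
group_b = [True, True, False, False, False]  # 40% approved

ratio = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {ratio:.2f}")  # prints 0.50
if ratio < 0.8:
    print("Potential disparate impact - review the model and its training data")
```

Even a simple check like this can surface outcomes that differ across groups despite no relevant difference between them, which is the fact pattern that draws ECOA scrutiny.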
