In its January 17, 2025 special edition of Supervisory Highlights, the Consumer Financial Protection Bureau reiterated its concerns regarding fair lending risks related to new technologies. In this case, the focus is on complex automated underwriting systems, particularly those that use artificial intelligence (AI) or machine learning (ML).
The guidance highlights two major concerns. The first is whether lenders are testing their credit scoring models to determine whether the models use prohibited factors, or proxies for prohibited factors, in their decisioning, or whether the scoring model results in disparate impacts. The concern about prohibited factors is fairly clear: lending decisions should not be based on prohibited factors. That should, at least in theory, be something that AI/ML systems can simply be programmed not to do.
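As a purely illustrative sketch of the first kind of check, a lender might verify that no prohibited-basis variable appears directly among a model's inputs. The feature names and prohibited-basis list below are hypothetical, and a direct-match check of this sort catches only explicit use of a prohibited factor; proxies require the kind of statistical testing discussed further below.

```python
# Illustrative pre-deployment check that a scoring model's inputs exclude
# prohibited-basis variables. The PROHIBITED set and feature names are
# hypothetical examples, not a complete or authoritative list.

PROHIBITED = {"race", "color", "religion", "national_origin", "sex",
              "marital_status", "age"}

def check_features(feature_names: list[str]) -> list[str]:
    """Return any model inputs that directly match a prohibited basis."""
    return [f for f in feature_names if f.lower() in PROHIBITED]

model_inputs = ["income", "debt_to_income", "utilization", "age"]  # hypothetical
violations = check_features(model_inputs)
if violations:
    print(f"Prohibited factors present in model inputs: {violations}")
```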
Disparate impact is a bit more complicated. As stated in regulatory guidance on fair lending, disparate impacts, unlike prohibited factors, are not absolutely prohibited; but where a credit decisioning method results in disparate impacts, a lender should be able to demonstrate a business justification for the method and show that it is employing a model that minimizes disparate impacts while meeting legitimate business needs.
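The guidance does not prescribe a particular metric for measuring disparate impact, but one common measure is the adverse impact ratio: the approval rate for a protected group divided by the approval rate for a control group. The sketch below uses hypothetical portfolio numbers and the conventional four-fifths benchmark, which is illustrative rather than a regulatory safe harbor.

```python
# Adverse impact ratio: approval rate of a protected group divided by that
# of a control group. The 0.8 cutoff is the conventional "four-fifths"
# benchmark, shown for illustration only.

def adverse_impact_ratio(approved_protected: int, total_protected: int,
                         approved_control: int, total_control: int) -> float:
    protected_rate = approved_protected / total_protected
    control_rate = approved_control / total_control
    return protected_rate / control_rate

# Hypothetical portfolio numbers:
air = adverse_impact_ratio(approved_protected=120, total_protected=400,
                           approved_control=200, total_control=500)
print(f"Adverse impact ratio: {air:.2f}")  # 0.75 -> below the 0.8 benchmark
```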
The guidance therefore indicates that lenders should test for fair lending concerns when using AI/ML models and, where disparate impacts are found, document the model selection process. This should include documenting the business needs the model serves, including the specific standards used to evaluate whether the model meets those needs. In addition, the lender should be able to demonstrate that it has reviewed other models to determine whether the identified business needs can be met by a model with less discriminatory effects. Without a documented model selection process in which candidate models are compared on their ability to further business needs while reducing disparate impacts, a bank may not be able to demonstrate that it has adhered to fair lending requirements.
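A minimal sketch of what such a documented comparison might look like appears below: each candidate model is scored against a pre-set business-need threshold (here, predictive accuracy) and a fairness metric (here, the adverse impact ratio), and the least discriminatory model that still meets the business need is selected. All model names and figures are hypothetical.

```python
# Hypothetical documented model comparison: among candidates meeting a
# pre-set business-need threshold, select the least discriminatory one.

from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    accuracy: float            # proxy for "meets legitimate business needs"
    adverse_impact_ratio: float

candidates = [
    CandidateModel("full_model", accuracy=0.82, adverse_impact_ratio=0.71),
    CandidateModel("reduced_model", accuracy=0.81, adverse_impact_ratio=0.88),
    CandidateModel("baseline_model", accuracy=0.74, adverse_impact_ratio=0.95),
]

MIN_ACCURACY = 0.80  # hypothetical business-need threshold, set in advance

viable = [m for m in candidates if m.accuracy >= MIN_ACCURACY]
selected = max(viable, key=lambda m: m.adverse_impact_ratio)
print(f"Selected: {selected.name} "
      f"(accuracy={selected.accuracy}, AIR={selected.adverse_impact_ratio})")
```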
The second concern discussed in the guidance is the use of "alternative data," which may include hundreds of different variables, in more complex credit scoring models. Variables that are not clearly related to consumers' finances may invite scrutiny from a fair lending perspective, both in terms of whether the variable is truly related to creditworthiness and whether it may be a proxy for a prohibited factor. Lenders should therefore be able to demonstrate the relevance of variables that are entered into automated underwriting systems, as the agencies have discussed more specifically in other guidance.
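One way a lender might screen an alternative data variable for proxy risk, sketched below with simulated data, is to measure how strongly the variable predicts a protected characteristic; a high association flags the variable for closer review. Real-world proxy testing would use actual applicant data and more rigorous methods, and the threshold shown is hypothetical.

```python
# Illustrative proxy screen using simulated data: measure the association
# between a candidate "alternative data" variable and a protected
# characteristic. A strong association suggests possible proxy effects.

import numpy as np

rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1000)            # hypothetical group flag
# Simulate a variable correlated with group membership:
variable = protected * 0.8 + rng.normal(size=1000)

corr = np.corrcoef(variable, protected)[0, 1]
print(f"Correlation with protected characteristic: {corr:.2f}")
if abs(corr) > 0.3:  # hypothetical review threshold
    print("Variable warrants proxy review before use in underwriting.")
```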
Additionally, the guidance notes that the use of complex AI/ML credit scoring models with dozens or hundreds of variables does not relieve a lender of its responsibility to identify the specific reasons for adverse actions. Banks must therefore be able to demonstrate that the reasons stated on adverse action forms are, in fact, the principal reasons the adverse action was taken. The guidance states that lenders should be able to demonstrate through testing that the methods used to identify those principal reasons are reliable and accurate.
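For a traditional linear scorecard, one standard way to identify the principal reasons is to rank features by how much each one pulled the applicant's score below a reference value, as sketched below with hypothetical weights and values. For more complex ML models, feature attribution methods (such as SHAP values) serve a similar role, and, as the guidance indicates, the chosen method should itself be tested for reliability and accuracy.

```python
# Hypothetical linear-scorecard reason codes: rank features by their
# negative contribution relative to a reference profile. All weights,
# reference values, and applicant values are illustrative.

weights = {"payment_history": 2.0, "utilization": -1.5, "inquiries": -0.8}
reference = {"payment_history": 1.0, "utilization": 0.3, "inquiries": 1.0}
applicant = {"payment_history": 0.6, "utilization": 0.9, "inquiries": 4.0}

# Negative contribution = the feature lowered the score vs. the reference.
contributions = {
    f: weights[f] * (applicant[f] - reference[f]) for f in weights
}
primary_reasons = sorted(contributions, key=contributions.get)[:2]
print("Principal reasons for adverse action:", primary_reasons)
```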