A Check on Credit Checks: CFPB Proposes Tightening Definition of Permissible Purpose
The Consumer Financial Protection Bureau (CFPB) recently issued a Notice of Proposed Rulemaking (NPRM) that would modify Regulation V, which implements the Fair Credit Reporting Act (FCRA), by adding restrictions to what is considered “permissible purpose.” Comments on the proposed rule are due March 3, 2025.
The NPRM proposes to update or add definitions for a variety of terms and to impose new restrictions on the provision and use of credit reports. Those most affected by the proposed changes would probably be data aggregators and data brokers, but there may be some concerns for banks as well.
One change that is likely to affect banks concerns the requirements for written permission from the consumer as a permissible purpose for a credit pull. Even where banks and other lenders have at least one other permissible purpose for a credit pull, it is common to obtain written permission as an additional basis for permissible purpose as part of a belt-and-suspenders approach to FCRA compliance.
For permissible purpose based on consumer consent, the NPRM would require a written disclosure stating:
- which CRA the information would be pulled from;
- who will receive the report (which may not be more than one entity);
- the product or service for which the report will be furnished, including limitations on the scope of such use; and
- instructions for revoking consent, which may not be more onerous than the process for granting consent.
This disclosure would have to be provided segregated from other material, likely on a separate page, and therefore probably could not be included in the boilerplate language on application forms. Additionally, the written consent would be effective for no more than one year after the signature date.
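To make the moving parts concrete, the minimal sketch below models a consent record carrying these disclosure elements, the one-year expiration, and the revocation requirement. All field names, the step-count comparison used as a proxy for how onerous revocation is, and the 365-day approximation of one year are our own illustrative assumptions, not terms of the proposed rule.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ConsumerConsent:
    """Hypothetical record of a written consent under the proposed rule."""
    cra_name: str            # which CRA the information would be pulled from
    recipient: str           # the single entity that will receive the report
    product_or_service: str  # product or service the report is furnished for
    scope_limitations: str   # limitations on the scope of that use
    grant_steps: int         # effort to grant consent (illustrative proxy)
    revocation_steps: int    # effort to revoke (may not exceed grant_steps)
    signature_date: date
    revoked: bool = False

    def is_valid(self, on: date) -> bool:
        """True if consent is unrevoked, within one year of signature, and
        revocation is no more onerous than granting (proxied by step count)."""
        within_one_year = on <= self.signature_date + timedelta(days=365)
        revocation_ok = self.revocation_steps <= self.grant_steps
        return within_one_year and revocation_ok and not self.revoked

consent = ConsumerConsent(
    cra_name="Example CRA", recipient="Example Bank",
    product_or_service="Auto loan underwriting",
    scope_limitations="Underwriting only",
    grant_steps=2, revocation_steps=1,
    signature_date=date(2025, 6, 1))
print(consent.is_valid(on=date(2026, 7, 1)))  # False: past the one-year window
```

A compliance system built on the final rule would, of course, track the actual disclosure content rather than these stand-in fields.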
Because written consents would have to be revocable, some risk may arise for banks that use written consent in addition to another permissible purpose. For example, if a bank pulls a credit report regularly to ensure that the customer still qualifies for a product but, in an abundance of caution, also obtains written consent, there could be UDAAP risk if the customer is informed in disclosures that they have a right to revoke consent.
Because the bank would have permissible purpose even without written consent, it could continue to pull credit even if the customer revokes, which may lead to allegations that the disclosure was misleading or deceptive. Additionally, because creditors would be prohibited from charging fees or imposing penalties based on a customer’s decision to revoke, a bank would likely not be able to make the consumer’s consent a requirement for a particular product.
The NPRM also makes a clearly articulated effort to close loopholes that data aggregators and data brokers use to sell data for marketing purposes without following the procedures set out in Regulation V. It would provide, for example, that a data aggregator that does not provide information to a lender, but instead sends consumers marketing material advertising the creditor’s products based on the creditor’s specifications, such as income or credit use, would be considered a CRA even though the creditor never receives the information.
The NPRM would similarly require permissible purpose for the sale of “credit header” information, such as name and address, and for the sale of de-identified data. The NPRM proposes several alternatives for de-identified data, ranging from the strictest, which treats de-identification as irrelevant to whether the data constitutes a consumer report, to the most lenient, which treats de-identified data as a consumer report if it is linked or reasonably linkable to a consumer, is used to inform a business decision about that consumer, or is used by a recipient who identifies the consumer. If the NPRM is implemented without substantial changes, the closure of these loopholes would probably significantly curtail the availability of curated marketing leads based on consumer behavior or credit profiles.
As always, please reach out to our Hotline staff with any questions or concerns you may have about current or future requirements under Regulation V, the FCRA, or other compliance matters.
CFPB Report: Strengths and Weaknesses in Fair Lending
The Consumer Financial Protection Bureau’s November “Matched-Pair Testing in Small Business Lending Markets” report gives some hints as to how the new 1071 small business lending data could be used. The Bureau conducted 100 “secret shopper” tests at different lenders to assess racial discrimination and disparate treatment. The tests included subjective and objective criteria measuring each of four domains (a simplified scoring sketch follows the list):
- The level of encouragement or discouragement to apply for financing;
- Information provided about products and steering to specific products;
- Overall customer service; and
- Business and credit information requested from the applicant.
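As a rough illustration of how matched-pair results can be summarized, the sketch below computes the mean paired difference in scores for each domain. The domain names, rating scale, and sample values are invented for illustration and are not figures from the report.

```python
from statistics import mean

# Each matched pair holds (Black tester score, White tester score) for the
# same lender, rated on a common 1-5 scale. Domain names, the scale, and the
# sample values below are illustrative, not data from the CFPB report.
DOMAINS = ["encouragement", "product_info", "customer_service", "info_requested"]

pairs = [
    {"encouragement": (3, 4), "product_info": (2, 4),
     "customer_service": (4, 4), "info_requested": (4, 4)},
    {"encouragement": (2, 4), "product_info": (3, 5),
     "customer_service": (5, 5), "info_requested": (3, 3)},
]

# Mean paired difference per domain (White minus Black): values near zero
# suggest similar treatment; persistently positive values suggest disparity.
for domain in DOMAINS:
    diff = mean(white - black for black, white in (p[domain] for p in pairs))
    print(f"{domain}: mean paired difference = {diff:+.2f}")
```

A real analysis would cover far more pairs and apply significance testing before drawing conclusions from nonzero differences.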
While the results demonstrate that testers were treated similarly in terms of objective criteria for customer service (e.g., being greeted and thanked for coming in) and the information requested from the applicant, differences were identified in both the level of encouragement to apply and the extent to which lenders discussed non-requested products.
Black testers were encouraged to apply for products about which they had not inquired, such as credit cards or home equity loans, at a higher rate than White testers with similar profiles. The report also found that although testers were encouraged to apply for loans regardless of race, the objective and subjective measures of the level of encouragement were higher for White testers. Perhaps most concerning, the report recounts instances in which Black testers were told they would not qualify for products even though the same bank representative encouraged White testers with similar profiles to apply.
The Bureau’s conclusion encourages financial institutions to ensure lending staff adhere to fair lending requirements; it does not specifically call for banks to engage in secret shopper testing. It would not be surprising, however, if regulators focused on the identified areas of disparate treatment.
The overall results may present additional insights about the effectiveness of banks’ fair lending programs. The CFPB report showed that testers were treated similarly in areas that are often addressed directly in training and procedures (information required for an application and standard customer service procedures), but that race-based differences existed in areas where staff exercise more discretion, such as the degree of encouragement to apply and the types of products discussed.
This may indicate that discriminatory bias is not originating at the institutional or management level, but rather that disparate treatment emerges where bank representatives exercise more discretion. Because the more closely managed aspects showed less evidence of discrimination, there is reason to conclude that training and documented procedures are effective in mitigating fair lending risk. Banks may therefore want to consider adding training and procedures covering less structured aspects of customer engagement, guided by the report’s findings.
Based on the report, it appears that a particular emphasis on product recommendations is appropriate. The Bureau expressed concern that the credit card or HELOC products recommended disproportionately to Black testers may present more risk to a small business owner than a business loan. Steering an applicant who may qualify for a small business loan into a product that is riskier for the applicant may raise UDAAP as well as fair lending concerns. Steering a customer toward a product in which they have not expressed interest may, of course, also result in a loss of business for the bank if it gives the impression that the desired product is not available.
Banks looking to improve fair lending training and procedures may find our Fair Lending Toolkit helpful and, as always, our Hotline team is available to assist with any questions you may have.
Detecting Digital Deception: FinCEN Guidance on Generative Artificial Intelligence
“Deepfake media” or “Generative AI” schemes are a newer, but rapidly developing, fraud method discussed in a recent FinCEN alert. Deepfake media fraud involves the use of generative artificial intelligence (“GenAI”) to produce synthetic photographs, documents, audio, and even videos that look real. As FinCEN notes, although GenAI developers may attempt to build the software in a way that mitigates the risk of malicious use, as with any tool available to the general public at fairly low cost, the risk of misuse remains significant.
In a recent report on the increasing threats posed by deepfake identities, the Department of Homeland Security outlined how this technology could be used, for example, to pass through a financial institution’s voice recognition software and access an account. Deepfake schemes are used in a variety of criminal activities, including identity theft or the creation of synthetic identities, money laundering, online scams, and various types of payment fraud. The FinCEN alert asks banks reporting any suspicious activity involving deepfake materials to include “FIN-2024-DEEPFAKEFRAUD” in the SAR narrative.
As the FinCEN guidance indicates, banks may want to consider training their frontline staff on how to detect AI-generated documents, photographs, and videos. This may require a higher level of scrutiny than staff are accustomed to, but any inconsistency or irregularity may be an indicator of fraud. The inconsistencies may, moreover, not be visible on the face of the document but appear only when checked against information provided directly by the customer, in other documents, or from other sources, such as credit reports, or data collected by the bank, such as the user’s IP address. These inconsistencies, several of which are illustrated in the sketch after this list, may include:
- Identity document shows a birthdate that gives an age much older or younger than the associated photo would suggest;
- Customer uses third-party webcam plugins during a live verification check or evades a live verification check with claims of technological issues;
- Customer declines to use multifactor authentication (“MFA”) to verify identity;
- Reverse image search for an identity photo yields results in an online gallery of GenAI-created images;
- Data on the customerâs location, including IP address, is inconsistent with the identity documents; or
- High volumes of payments or chargebacks occur on a new account or an account with low prior transaction history, particularly involving risky payees such as gambling sites or digital asset exchanges.
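The sketch below turns several of these red flags into a simple rule-based screen. Every field name and threshold is a placeholder we invented for illustration; the FinCEN alert does not prescribe specific rules, and a production system would tune such checks against its own identity, device, and transaction data.

```python
def deepfake_risk_flags(profile: dict) -> list[str]:
    """Flag red flags from the list above. All field names and thresholds
    are illustrative assumptions, not FinCEN requirements."""
    flags = []
    # Birthdate on the identity document vs. age suggested by the photo
    if abs(profile["doc_age"] - profile["photo_estimated_age"]) > 15:
        flags.append("birthdate inconsistent with photo")
    # Evasion of live verification checks
    if profile["used_webcam_plugin"] or profile["claimed_tech_issues"]:
        flags.append("possible live-check evasion")
    # Refusal of multifactor authentication
    if profile["declined_mfa"]:
        flags.append("declined multifactor authentication")
    # Reverse image search hit in a gallery of GenAI-created images
    if profile["reverse_image_genai_hit"]:
        flags.append("ID photo found in GenAI image gallery")
    # Geolocation data inconsistent with the identity documents
    if profile["ip_country"] != profile["doc_country"]:
        flags.append("IP location inconsistent with identity documents")
    # Payment velocity on a new account with little history
    if profile["account_age_days"] < 30 and profile["payments_last_7d"] > 20:
        flags.append("high payment volume on new account")
    return flags

applicant = {
    "doc_age": 62, "photo_estimated_age": 25,
    "used_webcam_plugin": False, "claimed_tech_issues": True,
    "declined_mfa": True, "reverse_image_genai_hit": False,
    "ip_country": "RO", "doc_country": "US",
    "account_age_days": 10, "payments_last_7d": 35,
}
for flag in deepfake_risk_flags(applicant):
    print("flag:", flag)
```

Flags from a screen like this would feed manual review and, where warranted, SAR filing, rather than triggering automatic denials.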
FinCEN notes additional resources that provide more specific guidance on how to recognize particular types of deepfake and other fraud, including fraudulent U.S. passport cards, authorized push payment fraud, mortgage loan fraud, mail-related check fraud, and virtual currency investment scams.
The guidance recommends that banks enhance their identity verification procedures both at initial account opening, particularly online account opening, and at each login to the account. Software is available that is designed to detect deepfakes in the media submitted to verify identity. DHS has, furthermore, started a program to test and validate the effectiveness of AI-detection software, providing banks and other users with some confirmation that a particular tool is effective.
As always, Compliance Alliance offers a variety of tools to assist members in developing AML/CFT and third-party risk management programs. Our Hotline team is also available to answer additional questions you may have.