Detecting Digital Deception: FinCEN Guidance on Generative Artificial Intelligence

“Deepfake media” or “generative AI” schemes are a newer but rapidly developing fraud method discussed in a recent FinCEN alert. Deepfake media fraud involves the use of generative artificial intelligence (“GenAI”) to produce synthetic photographs, documents, audio, and even video that look authentic. As FinCEN notes, although GenAI developers may attempt to build safeguards against malicious use into their software, as with any tool available to the general public at fairly low cost, the risk of misuse remains significant.

In a recent report on the increasing threats posed by deepfake identities, the Department of Homeland Security outlined how this technology could be used, for example, to defeat a financial institution’s voice recognition software and access an account. Deepfake schemes appear in a variety of criminal activities, including identity theft or the creation of synthetic identities, money laundering, online scams, and various types of payment fraud. The FinCEN alert asks that banks reporting any suspicious activity involving deepfake media include the key term “FIN-2024-DEEPFAKEFRAUD” in the SAR narrative.

As the FinCEN guidance indicates, banks may want to consider training their frontline staff on how to detect AI-generated documents, photographs, and videos. This may require a higher level of scrutiny than staff are accustomed to, but any inconsistency or irregularity may be an indicator of fraud. Moreover, the inconsistencies may not be visible on the face of the document itself; they may surface only when the document is checked against information provided directly by the customer, against other documents, or against other sources, such as credit reports or data the bank collects itself, including the user’s IP address. These inconsistencies, several of which are sketched in the illustrative example following the list, may include:

  • Identity document shows a birthdate that gives an age much older or younger than the associated photo would suggest;
  • Customer uses third-party webcam plugins during a live verification check or evades a live verification check with claims of technological issues;
  • Customer declines to use multifactor authentication (“MFA”) to verify identity;
  • Reverse image search of an identity photo returns matches in an online gallery of GenAI-created images;
  • Data on the customer’s location, including IP address, is inconsistent with the identity documents; or
  • High volumes of payments or chargebacks occur on a new account or an account with low prior transaction history, particularly involving risky payees such as gambling sites or digital asset exchanges.
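For illustration only, the sketch below shows how several of these indicators might be encoded as automated screening rules. Everything in it is an assumption rather than a FinCEN requirement: the signal names, the thresholds (a 15-year age gap, three chargebacks in 30 days on an account less than 90 days old), and the premise that upstream systems supply a photo-based age estimate and a reverse-image-search result. A real program would tune such rules to its own risk appetite and route hits to trained reviewers rather than auto-decline.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OnboardingSignals:
    """Hypothetical bundle of signals collected during online account opening."""
    document_birthdate: date       # birthdate printed on the identity document
    photo_age_estimate: int        # age estimated from the ID photo by an upstream model
    document_country: str          # issuing country on the identity document
    ip_country: str                # country geolocated from the applicant's IP address
    declined_mfa: bool             # applicant refused multifactor authentication
    used_webcam_plugin: bool      # third-party webcam plugin detected during live check
    reverse_image_genai_hit: bool  # ID photo matched a known gallery of GenAI images
    account_age_days: int          # age of the account at the time of review
    chargeback_count_30d: int      # chargebacks in the trailing 30 days

def red_flags(s: OnboardingSignals, today: date) -> list[str]:
    """Return human-readable red flags; any hit should trigger enhanced review."""
    flags = []
    stated_age = (today - s.document_birthdate).days // 365
    # Large gap between the age implied by the document and the photo-based estimate.
    if abs(stated_age - s.photo_age_estimate) > 15:  # illustrative threshold
        flags.append(f"document age {stated_age} vs. photo estimate {s.photo_age_estimate}")
    # Geolocation inconsistent with the identity document.
    if s.ip_country != s.document_country:
        flags.append(f"IP country ({s.ip_country}) differs from document country ({s.document_country})")
    if s.declined_mfa:
        flags.append("customer declined multifactor authentication")
    if s.used_webcam_plugin:
        flags.append("third-party webcam plugin used during live verification")
    if s.reverse_image_genai_hit:
        flags.append("ID photo matched an online gallery of GenAI-created images")
    # Heavy chargeback activity on a new or thin-history account (illustrative thresholds).
    if s.account_age_days < 90 and s.chargeback_count_30d >= 3:
        flags.append(f"{s.chargeback_count_30d} chargebacks on a {s.account_age_days}-day-old account")
    return flags
```

Any flags raised by checks like these would feed into the bank’s case management process and, where warranted, a SAR that references the key term noted above.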

FinCEN also points to additional resources offering more specific guidance on recognizing particular fraud typologies, such as counterfeit U.S. passport cards, authorized push payment fraud, mortgage loan fraud, mail-related check fraud, and virtual currency investment scams.

The guidance recommends that banks enhance their identity verification procedures both at initial account opening, particularly online account opening, and at each subsequent login to the account. Commercial software is available that is designed to detect deepfakes in the media customers submit to verify their identity. DHS has, furthermore, started a program to test and validate the effectiveness of AI-detection software, giving banks and other users some assurance that a particular tool is effective.
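As a rough sketch of how such detection software might slot into a verification workflow, the example below assumes a vendor SDK that exposes a single scoring call. The `DeepfakeDetector` interface, the `synthetic_probability` method, and both thresholds are invented for illustration; a real integration would follow the vendor’s actual API and validation guidance, such as the DHS testing program mentioned above. Because detection scores are probabilistic, ambiguous results are routed to human review rather than automatically declined.

```python
from typing import Protocol

class DeepfakeDetector(Protocol):
    """Interface a hypothetical vendor SDK might expose; substitute your tool's real API."""
    def synthetic_probability(self, media_bytes: bytes) -> float:
        """Return an estimated probability (0.0 to 1.0) that the media is synthetic."""
        ...

REVIEW_THRESHOLD = 0.5  # illustrative: scores at or above this go to manual review
REJECT_THRESHOLD = 0.9  # illustrative: scores at or above this block the automated flow

def verify_submission(detector: DeepfakeDetector, media_bytes: bytes) -> str:
    """Score submitted ID media at account opening or at login, then route accordingly."""
    score = detector.synthetic_probability(media_bytes)
    if score >= REJECT_THRESHOLD:
        return "reject"         # likely synthetic; decline the automated flow
    if score >= REVIEW_THRESHOLD:
        return "manual_review"  # ambiguous; escalate to a trained analyst
    return "proceed"            # no detection signal; continue normal verification
```

The same routing could run both at online account opening and at subsequent logins, consistent with the guidance’s recommendation to verify identity at each stage.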

As always, Compliance Alliance offers a variety of tools to assist members in developing AML/CFT and third-party risk management programs. Our Hotline team is also available to answer additional questions you may have.