NASCUS Summary: FinCEN Alert on Fraud Schemes Involving Deepfake Media
FinCEN Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions
FIN-2024-Alert004
NASCUS Legislative and Regulatory Affairs Department
December 2, 2024
The Financial Crimes Enforcement Network (FinCEN) recently issued FIN-2024-Alert004, Fraud Schemes Involving Deepfake Media Targeting Financial Institutions. The alert is intended as a resource to assist financial institutions (FIs) in identifying fraud schemes that use deepfake media[1] created with generative artificial intelligence (GenAI) tools.
The alert details typologies associated with these schemes, outlines red flag indicators to assist in identifying and reporting suspicious activity, and reminds financial institutions of the reporting requirements under the Bank Secrecy Act (BSA).
Since 2023, FinCEN has seen an increase in SAR filings describing the suspected use of deepfake media in fraud schemes targeting FIs and their customers/members. Criminals alter or create fraudulent identity documents to circumvent identity verification and authentication methods.
Criminal Uses
FinCEN’s analysis indicates that criminals have used GenAI to circumvent FI customer identification and verification through the creation of:
- Falsified documents;
- Photographs; and
- Videos.
Schemes identified as using deepfake media include, but are not limited to:
- Online scams; and
- Consumer fraud, including:
  - Check fraud;
  - Credit card fraud;
  - Authorized push payment fraud;
  - Loan fraud; and
  - Unemployment fraud.
Detection, Mitigation, and Red Flag Indicators
FinCEN has identified detection and risk-mitigation best practices and red flag indicators that may assist FIs in reducing their vulnerability to deepfake identity documents and deepfake identity fraud schemes.
The alert provides red flag indicators that FIs may use to detect, prevent, and report potentially suspicious activity related to the use of GenAI tools, including the following:
- An individual’s photo is internally inconsistent (e.g., shows visual tells of being altered) or is inconsistent with other identifying information (e.g., date of birth indicates they are much older or younger than the photo would suggest).
- An individual presents multiple identity documents that are inconsistent with each other.
- An individual uses a third-party webcam plugin during a live verification check. Alternatively, an individual attempts to change communication methods during a live verification check due to excessive or suspicious technological glitches during remote verification of their identity.
- A reverse-image lookup or open-source search of an identity photo matches an image in an online gallery of GenAI-produced faces.
- An individual’s photo or video is flagged by commercial or open-source deepfake detection software.
- GenAI-detection software flags the potential use of GenAI text in a customer’s profile or responses to prompts.
- An individual’s geographic or device data is inconsistent with the customer’s identity documents.
- A newly opened account or an account with little prior transaction history has a pattern of rapid transactions; high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges; or high volumes of chargebacks or rejected payments.
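To illustrate the last indicator above, the following is a minimal sketch of a rule-based screen for a newly opened account showing rapid transactions or heavy payment volume to potentially risky payees. All thresholds, category names, and the transaction-record layout are hypothetical assumptions for illustration only; they are not part of the FinCEN alert, and real transaction monitoring would be far more sophisticated.

```python
from datetime import datetime, timedelta

# Hypothetical payee categories treated as higher risk in this toy rule
# (the alert mentions gambling websites and digital asset exchanges).
RISKY_CATEGORIES = {"gambling", "digital_asset_exchange"}

def flag_new_account_activity(account_opened, transactions,
                              new_account_days=30,   # assumed "new account" window
                              txn_threshold=10,      # assumed rapid-transaction count
                              window_hours=24,       # assumed look-back window
                              risky_volume_limit=5000):  # assumed volume cap
    """Return True if a recently opened account shows a burst of
    transactions, or heavy payment volume to risky payee categories."""
    now = max(t["time"] for t in transactions)
    if now - account_opened > timedelta(days=new_account_days):
        return False  # account is not "new" under this toy rule

    window_start = now - timedelta(hours=window_hours)
    recent = [t for t in transactions if t["time"] >= window_start]
    risky_volume = sum(t["amount"] for t in recent
                       if t.get("payee_category") in RISKY_CATEGORIES)
    return len(recent) >= txn_threshold or risky_volume > risky_volume_limit

# Example: a day-old account sending six rapid payments to a gambling site
opened = datetime(2024, 12, 1)
txns = [{"time": datetime(2024, 12, 2, 9, i), "amount": 1200,
         "payee_category": "gambling"} for i in range(6)]
print(flag_new_account_activity(opened, txns))  # True: risky volume 7200 > 5000
```

In practice an FI's monitoring system would combine signals like these with identity-verification and device-data checks rather than rely on any single rule.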
FinCEN notes that FIs should reference this alert by including the key term "FIN-2024-DEEPFAKEFRAUD" in SAR field 2 ("Filing Institution Note to FinCEN") and in the narrative to indicate a connection between the reported suspicious activity and this alert.
NASCUS has received requests for a list of FinCEN's SAR Advisory Key Terms. The list, which pairs each key term with its associated advisory or alert, is available on FinCEN's website.
[1] Deepfake media, or "deepfakes," are a type of synthetic content that uses artificial intelligence/machine learning to create realistic but inauthentic videos, pictures, audio, and text.