FinCEN Highlights Illicit Generative AI, Deepfakes as Primary Fraud Concern

As the adoption of new and potent financial technologies continues to revolutionize the workplace for financial service providers, particularly those operating in the compliance realm, the unfolding age of artificial intelligence (AI) has arguably presented an equal number of new opportunities to fraudsters seeking to evade detection and facilitate financial crime. Both top government officials and regulatory authorities are now warning of the increasing risk that these developing technologies pose to the integrity of the global financial system, with one prominent American regulator highlighting new and highly successful fraud schemes centered on “generative AI” that bad actors are deploying against American interests with respect to the current Anti-Money Laundering and Countering the Financing of Terrorism National Priorities.

Generative artificial intelligence is a form of machine learning that allows systems to be trained to create new data or content (e.g., text, images, videos) based on user prompts. Generative AI systems use neural networks to analyze vast amounts of data, allowing them to recognize patterns and relationships that can then be used to generate new content, rather than simply producing a prediction about a specific dataset, as with the more traditional forms of AI used in the financial sector in years past. Platforms such as ChatGPT have gained mainstream recognition for the “human-like” quality of the content they generate, which remains adaptable and relevant based on user feedback, while other GenAI platforms such as Google Gemini have begun to streamline the daily workflows of organizations across a number of industries, supporting major facets of business including research, sales, marketing, writing, and coding. Other applications of these AI developments include “deepfakes,” forms of synthetic media created using machine-learning techniques to mimic or replace a person’s likeness (including appearance and voice) in an image or video. While deepfake technology offers revolutionary content-creation possibilities, it has also created unique challenges to media integrity, data privacy, and even financial security in recent years.

Given the growing sophistication and perceived legitimacy of these tools, there has been an alarming rise in the use of deepfake AI in bank fraud and identity theft cases over the last several years. Last month, the U.S. Treasury Department’s Financial Crimes Enforcement Network (FinCEN) issued an alert to help financial institutions identify fraud schemes associated with the use of deepfake media created with generative AI tools. The alert explains various typologies associated with these schemes, provides red-flag indicators to assist with identifying and reporting related suspicious activity, and reminds financial institutions of the importance of remaining consistent with their reporting requirements under the Bank Secrecy Act (BSA). FinCEN has found that these schemes often involve criminals altering or creating fraudulent identity documents to circumvent the identity verification and authentication methods financial institutions commonly employ before granting access to a client’s accounts. In analyzing BSA data, including suspicious activity report (SAR) filings from the past year, the regulator has discovered that fraudsters are using GenAI for multiple purposes, including opening accounts to funnel money and perpetrate fraud schemes such as check fraud, credit card fraud, authorized push payment fraud, loan fraud, and unemployment fraud.2 These individuals often facilitate these crimes by altering or generating images used for identification documents, such as driver’s licenses or passport cards and books, with some going as far as to combine GenAI images with stolen personally identifiable information to create synthetic identities, which are later used to gain access to additional accounts held by a target individual.
Unfortunately for the victims of these crimes, once account access is gained, criminals are free to wreak havoc on their funds and any additional identifying information held within their accounts, with the true account holders often not realizing their money has vanished until it is too late. BSA data also reveals that, in other cases, malicious actors have successfully opened accounts using fraudulent identities suspected to have been produced with GenAI and used those accounts to receive and launder the proceeds of other fraud schemes.1

Further contributing to the potential for abuse is the lower cost and reduced effort needed to exploit banks’ verification processes using AI, which leads analysts to believe that this problem will not be going away any time soon. Accordingly, FinCEN has identified best practices for detecting and mitigating deepfake identity documents in order to reduce banks’ vulnerability to these pervasive threats. These include the use of multifactor authentication (MFA), including phishing-resistant programs, and live verification checks in which a customer must confirm their identity through real-time audio or video; live checks can act as a primary deterrent for bad actors who view the risk of detection as greater than the potential reward. FinCEN also published a lengthy list of red flags that can further help financial institutions identify potentially suspicious activity in this regard. The Treasury Department’s alert can be read in its entirety here.
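To make the detection side of these best practices concrete, the sketch below shows how a compliance team might screen an account-opening event against a handful of deepfake-related signals of the kind FinCEN’s red-flag list describes. This is a minimal, hypothetical illustration: the `OnboardingEvent` fields, flag descriptions, and thresholds are assumptions for demonstration, not FinCEN’s actual criteria or any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class OnboardingEvent:
    """Hypothetical signals a verification pipeline might collect at onboarding."""
    selfie_matches_id_photo: bool   # face-match between live selfie and ID photo
    passed_liveness_check: bool     # real-time audio/video verification result
    id_photo_reuse_count: int       # same ID photo seen on other applications
    metadata_flags_ai_tool: bool    # image metadata references a GenAI tool
    geolocation_matches_id: bool    # device location consistent with ID address

def red_flags(event: OnboardingEvent) -> list[str]:
    """Return triggered red-flag descriptions for analyst review (illustrative only)."""
    flags = []
    if not event.selfie_matches_id_photo:
        flags.append("Selfie does not match identity-document photo")
    if not event.passed_liveness_check:
        flags.append("Customer failed or avoided a live verification check")
    if event.id_photo_reuse_count > 0:
        flags.append("Identity photo reused across multiple applications")
    if event.metadata_flags_ai_tool:
        flags.append("Document metadata references a generative AI tool")
    if not event.geolocation_matches_id:
        flags.append("Device geolocation inconsistent with stated address")
    return flags

# Example: an applicant whose document passes face-match but fails liveness,
# whose ID photo appears on three other applications, and whose image
# metadata references a GenAI tool.
event = OnboardingEvent(
    selfie_matches_id_photo=True,
    passed_liveness_check=False,
    id_photo_reuse_count=3,
    metadata_flags_ai_tool=True,
    geolocation_matches_id=True,
)
for flag in red_flags(event):
    print("RED FLAG:", flag)
```

In practice, any single flag would typically escalate the application to enhanced due diligence or a live re-verification step rather than an automatic decision, consistent with the risk-based approach the alert emphasizes.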

Citations

  1. “FinCEN Issues Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions.” FinCEN.gov, 13 Nov. 2024.
  2. Larson, Kristen E. “FinCEN Alert: Fraud Schemes Using Generative Artificial Intelligence to Circumvent Financial Institution’s Identity Verification, Authentication, and Due Diligence Controls.” Money Laundering Watch, Ballard Spahr LLP, 19 Nov. 2024. 
