ChatGPT for Banks? As AI Grows, Security Risks Remain for Financial Sector

The trend towards heavier investment and a renewed commitment of resources to the development of machine learning and artificial intelligence-backed technologies has created a modern-day gold rush across the tech sector. Given the potential of these innovations, dozens of domestic and global industries have also begun to explore the possibilities these unique tools offer for streamlining daily practices and creating operational efficiency, while possibly cutting costs in the process. The financial sector has been at the forefront of AI adoption, due in part to the heavy regulatory constraints placed upon it by state and federal watchdogs. American banks have successfully integrated artificial intelligence into their anti-money laundering protocols, receiving a much-needed boost to data analysis practices as well as to trend identification and risk management. More recently, firms like Global RADAR have built on these initial tools to cultivate their own artificial intelligence platforms for real-time AML screening, which has allowed financial firms small and large to avoid compliance mishaps, and the subsequent financial penalties and/or sanctions, stemming from what were once relatively common errors. Analysts expect these efforts to grow substantially in the decade to come as technological solutions continue to evolve and mature, and as the United States continues to build partnerships with its major international counterparts to push the fight against financial crime. Yet while the early results of the shift towards machine learning have been tangible, there remain concerns that widespread implementation could also create risks of its own.

Throughout 2023, OpenAI's artificial intelligence-centered chatbot ChatGPT has dominated international headlines and social media circles, for good and bad reasons, to become a popular culture staple. ChatGPT is a natural language processing tool driven by AI technology and fine-tuned by developers with the help of user feedback to allow for human-like interactions, discussions, and more. The language-based AI can compose entire works of poetry, essays, emails, and other short forms of writing, while also answering a wide array of questions thanks to its ability to harvest and retain information found across the web, delivering responses in a conversational manner. Arguably the first AI solution with real-world utility for users of all ages (the platform is already being used by school-aged children), ChatGPT has spurred significant development across this budding sector, already boasting over 100 million users and more than 1.8 billion visits per month. Given its allure, many have begun to wonder whether such a product could hold additional value within the financial sector. Even though ChatGPT is language-based, experts believe it could be used for multiple purposes, including streamlining transactional assistance, account management, and other service-related tasks at banks, which would free staff to focus on providing tailored advice to clients based on the tool's data analytics and research capabilities.2 However, the discussion of whether the adoption of this technology in particular holds value for the AML/CFT crusade remains in its relative infancy.

There is, however, an undeniable upside to AI technology with respect to AML. Using a manual system for monitoring accounts and transactions is slow and prone to error, and manual data entry and risk assessment are time-consuming. Many financial institutions are already spending a fortune on bloated compliance departments in an attempt to keep up with ever-mounting regulations. Artificial intelligence can go beyond human capabilities to take in huge sets of data, learn patterns, and retain all of the collected information at a much faster and more efficient clip than even the most robust compliance war room. The hope is that this technology will eventually be trusted to make intelligent and appropriate decisions based on the information it takes in. Even so, the financial services industry continues to approach AI with caution, likely because the technology itself remains widely misunderstood. Clearly, though, the United States government sees value in its adoption across this industry, as regulators and the Financial Action Task Force (FATF) have continued to push banks to consider using these new tools to strengthen their established AML programs.2
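
To make the pattern-learning point concrete, below is a minimal sketch of ML-assisted transaction screening using an unsupervised anomaly detector (scikit-learn's IsolationForest) over a few invented transaction features. The features, thresholds, and data are illustrative assumptions, not a depiction of any vendor's actual AML model.

```python
# Minimal sketch: unsupervised anomaly detection for transaction screening.
# All features and figures below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic history of routine activity: [amount in USD,
# transactions in the past 24h, hour of day].
normal_activity = np.column_stack([
    rng.lognormal(mean=5.0, sigma=1.0, size=10_000),  # typical amounts
    rng.poisson(lam=3, size=10_000),                  # typical daily volume
    rng.integers(8, 20, size=10_000),                 # business hours
])

# Fit the detector on historical activity; "contamination" is the
# assumed share of anomalies the model should expect to see.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# Score new activity: a burst of large transfers at 3 a.m. versus a
# routine midday payment. predict() returns -1 for anomalies, 1 otherwise.
suspicious = np.array([[9_500.0, 40, 3]])
routine = np.array([[150.0, 2, 14]])
print(detector.predict(suspicious))  # expected: [-1] -> route to human review
print(detector.predict(routine))     # expected: [1]  -> passes screening
```

Note that in a sketch like this the model only triages: anything it flags would still land with a human analyst, which matches the caution the industry has shown so far.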

As such, the question becomes: what is the downside of the shift to AI-backed compliance? The main argument is that this technology still comes down to the human beings behind the controls, both developers and users. For AI to improve and learn appropriately, it has to be managed appropriately. Unlike humans, AI systems still lack the capability to understand the wider context of a situation and to exercise judgment, and it is usually not possible to train AI ahead of time for every situation it might encounter, something that would be necessary for an organization potentially handling billions of dollars' worth of assets on a day-to-day basis to rely on it completely. At this point, providing comprehensive rules for the AI system's boundaries is of the utmost importance, and failing to do so could create security risks and/or compliance failures. Banks would also have to guard these rules very carefully: if they were to fall into the wrong hands, criminals could use them to exploit the AI system or probe deeply enough to find security weaknesses. This could also create loopholes allowing financial criminals to slip through the cracks and avoid detection despite committing serious financial crimes.
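
As a rough illustration of what such boundary rules might look like in code, here is a hypothetical sketch in which hard, deterministic compliance rules always override the model's score, so the AI can escalate cases but never quietly clear activity a rule would have caught. The thresholds, jurisdiction codes, and risk score below are all invented for illustration.

```python
# Hypothetical sketch of hard "boundary rules" around an AI risk score.
# Every threshold and code below is invented for illustration.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount_usd: float
    country: str
    model_risk_score: float  # output of the ML system, 0.0-1.0

SANCTIONED = {"XX", "YY"}    # placeholder jurisdiction codes
HARD_REPORT_LINE = 10_000.0  # hypothetical mandatory-reporting threshold

def decide(txn: Transaction) -> str:
    # Deterministic rules come first and cannot be overridden by the model.
    if txn.country in SANCTIONED:
        return "block"
    if txn.amount_usd >= HARD_REPORT_LINE:
        return "report"
    # The model may only add scrutiny on top of the rules, never remove it.
    if txn.model_risk_score >= 0.8:
        return "review"
    return "allow"

print(decide(Transaction(12_000.0, "US", 0.1)))  # "report": rule fires
print(decide(Transaction(500.0, "XX", 0.0)))     # "block": rule fires
print(decide(Transaction(200.0, "US", 0.9)))     # "review": model escalates
```

The same structure also shows why such rules must be guarded: anyone who learns the exact thresholds can structure activity to sit just beneath them.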

To date, multiple significant AI-related security risks have been identified in systems already in use. Privacy attacks known as "model inversion" attacks see an attacker attempt to infer personal information about a data subject by exploiting the outputs of a machine-learning model. In these circumstances the attacker trains a separate machine learning model, known as an inversion model, on the output of the target model (in this case a bank's AI system), with the inversion model tasked to predict the input data, that is, the original dataset of the target model.1 This effectively allows the attacker to learn information about a data subject (the bank itself or a customer of the firm) by analyzing the inversion model's predictions. Machine learning models have also been found to be vulnerable to a separate form of privacy attack known as a membership inference attack (MIA). MIAs aim to infer whether or not a specific data record was used to train a target model. This can lead to a privacy breach: once the training data set is identified, attackers can effectively access the information used to shape the rules by which the machine structures its "thinking."
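
To ground both attacks, below is a minimal sketch using scikit-learn models and the public digits dataset as a stand-in for sensitive records; the model choices, data split, and confidence threshold are illustrative assumptions, not a depiction of any production system.

```python
# Minimal sketch of model inversion and membership inference, under
# illustrative assumptions (public digits data standing in for
# sensitive records; simple scikit-learn models).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = load_digits(return_X_y=True)
# The victim's private training records vs. auxiliary data the attacker
# controls and can use to query the target model.
X_member, X_aux, y_member, y_aux = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Target model: the system the attacker can only query for outputs.
target = LogisticRegression(max_iter=5_000).fit(X_member, y_member)

# --- Model inversion: fit an "inversion model" on (output -> input)
# pairs built from the attacker's own auxiliary records, then use it to
# reconstruct an approximate input from an observed output alone.
inversion = MLPRegressor(hidden_layer_sizes=(256,), max_iter=2_000,
                         random_state=0)
inversion.fit(target.predict_proba(X_aux), X_aux)
reconstructed = inversion.predict(target.predict_proba(X_member[:1]))
mse = np.mean((reconstructed - X_member[0]) ** 2)
print(f"inversion reconstruction error (MSE): {mse:.1f}")

# --- Membership inference: models tend to be unusually confident on
# records they were trained on, so even a simple confidence threshold
# can separate members from non-members better than chance.
conf_member = target.predict_proba(X_member).max(axis=1)
conf_nonmember = target.predict_proba(X_aux).max(axis=1)
threshold = 0.99  # illustrative cutoff
accuracy = ((conf_member >= threshold).mean()
            + (conf_nonmember < threshold).mean()) / 2
print(f"membership inference accuracy: {accuracy:.2f}")
```

How well either attack works in practice depends heavily on how much the target model overfits and on how much of its output is exposed, for example full probability vectors versus top labels only.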

Given the varying risks facing numerous industries around the world with respect to the adoption of these new technologies, several international government bodies have advised caution and due diligence on the part of individual organizations seeking to ride this wave. Last week, the UN Security Council announced plans to hold its first-ever meeting on the threats artificial intelligence poses to international peace and security, organized by the United Kingdom, which sees tremendous potential in AI but also major risks, for example in its possible use in autonomous weapons or in the control of nuclear weapons.3 The meeting will be held on July 18th, 2023. Furthermore, speaking at an industry conference last month, Acting U.S. Comptroller of the Currency Michael Hsu discussed the rapid pace of development of artificial intelligence and corporate adoption of the technology since OpenAI's release of ChatGPT last November, while also expanding on several of the risks associated with AI.

All told, the fostering of potent technologies that bolster financial security is undoubtedly a good development for the integrity of the global financial system. Nevertheless, financial institutions must weigh the pros and cons of relying on AI technology moving forward. The potential for efficiency gains and cost savings is certainly attractive, but careless reliance on machines to make decisions could have terrible consequences.

Citations

  1. Adams, Nathan-Ross. “Model Inversion Attacks: A New AI Security Risk.” Michalsons, 23 Mar. 2023. 
  2. Faridzadeh, Leily. “Legal Brief: Impact of AI, ChatGPT on Banks, Financial Crime Risk Mitigation.” ACAMS Money Laundering, 26 June 2023. 
  3. Lederer, Edith. “UN Council to Hold First Meeting on Potential Threats of Artificial Intelligence to Global Peace.” AP News, 3 July 2023. 
