As the world moves through a period of rapid technological advancement, the financial services industry's adoption of artificial intelligence (AI) is already in full swing. The compliance landscape has changed dramatically over the last several years as financial institutions have invested in and developed powerful machine learning tools to streamline workflows and improve organizational efficiency. The use of AI in the financial sector has transcended the standard data management protocols for which it was initially developed and now extends to more intricate processes, including client onboarding, transaction monitoring, and enhanced customer due diligence (CDD). With the growth and widespread use of these tools, however, come questions about data safety, privacy, and other potential issues that must be addressed before mainstream adoption across the global financial sector can occur. Several of the world's top financial hubs have already begun to establish guidelines and regulatory frameworks intended to bring more clarity to these persistent concerns.
Early last week, the European Union announced a groundbreaking proposal for new regulations on the use of artificial intelligence across a wide range of industries, including finance. The proposed framework specifically addresses the risks associated with AI, with the ultimate goal of making Europe a trailblazer for worldwide implementation of similar measures, an uphill climb to say the least. Covering these developments, The Verge writes that the published framework addresses "a wide range of applications, from software in self-driving cars to algorithms used to vet job candidates," and that the rules "arrive at a time when countries around the world are struggling with the ethical ramifications of artificial intelligence." The article continues: "Similar to the EU's data privacy law, GDPR, the regulation gives the bloc the ability to fine companies that infringe its rules up to 6 percent of their global revenues."2 During a recent press conference on the proposals, European Commissioner Margrethe Vestager reiterated her broader goal of making Europe a world-class leader in the development and use of secure, trustworthy, and human-centered artificial intelligence.
The regulations would effectively ban several specific uses (or misuses) of AI across the EU, including the use of real-time facial recognition software in public spaces by law enforcement, arguably the most contentious issue in this space. Reports indicate that the framework would still allow law enforcement agencies across Europe to use AI surveillance tools, but doing so would require judicial review and approval before being carried out. Thus far, regulators, financial professionals, and citizens alike have lauded these landmark developments, though certain details will need to be ironed out before the new legislation is officially rolled out. "Big tech" companies are likely sweating the announcement, however: should these proposals ultimately come to pass, the billions of dollars of artificial intelligence research and data being gathered by names such as Google, Facebook, Amazon, and Microsoft could come under new scrutiny.
Across the pond in the United States, federal regulators are currently focused on the financial implications of artificial intelligence. The Federal Reserve, Consumer Financial Protection Bureau, Federal Deposit Insurance Corp., National Credit Union Administration, and Office of the Comptroller of the Currency recently issued a request for information (RFI) on the use of AI and machine learning tools by banks and other financial service providers, including financial technology (FinTech) firms. In the RFI, the regulators laid out their understanding of how financial institutions use these technologies and sought more detailed information on the inner workings of institutions' AI/ML tools, how well those tools meet their targeted goals, and plans for future development. This is part of a broader effort to understand when, where, and how significant the scope of potential new regulations may be moving forward. As touched on earlier, machine learning can already analyze large amounts of data in real time far more efficiently than human teams, surfacing patterns, trends, and potential illicit activity that analysts would either miss altogether or take months, if not years, to identify, all while significantly cutting costs for financial institutions large and small. While we have only begun to scratch the surface of what machine learning can bring to the financial sector, regulators are more concerned about possible biases baked into the infrastructure of these AI technologies, as well as a potential lack of transparency around the use of unregulated or unreported programs.
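To make that kind of pattern detection concrete, here is a minimal, hypothetical sketch of unsupervised transaction screening in Python. It is not any regulator's or institution's actual system; the features, data, and contamination rate are invented for illustration, and it assumes scikit-learn is available.

```python
# Illustrative only: flag anomalous transactions with an unsupervised model.
# The features and the 1% contamination rate are hypothetical assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transaction features: log amount, hour of day,
# and number of transfers in the prior 24 hours.
normal = rng.normal(loc=[4.0, 13.0, 2.0], scale=[0.5, 3.0, 1.0], size=(1000, 3))
unusual = rng.normal(loc=[7.5, 3.0, 15.0], scale=[0.3, 1.0, 2.0], size=(5, 3))
transactions = np.vstack([normal, unusual])

# Isolation forests isolate outliers cheaply, which is why this family of
# model is often cited for screening large transaction volumes in real time.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 marks a flagged transaction

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} of {len(transactions)} transactions flagged for analyst review")
```

A system like this only surfaces candidates; as the RFI discussion below makes clear, regulators still expect human analysts to review whatever the model flags.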
The RFI acknowledges that the aforementioned regulators are aware of financial institutions using AI for account flagging and alert generation, as well as for customer service, credit checks, and various cybersecurity processes. Regulators appear concerned, however, with how American financial firms using AI to fulfill their BSA/AML requirements are aligning that usage with sound risk management practices. If financial institutions are trusting machines to analyze potential cases of money laundering or terror financing, regulators want assurance that those systems are both reliable and secure. The RFI highlights three specific areas where regulators are seeking increased transparency. The first is "explainability": financial institutions using AI need to understand the ins and outs of the products they deploy, as well as how those products arrive at their results.1 The second is that the results of data analysis should be measured empirically, with continual human input. Compliance Week writes that the primary concern of regulators in this regard is that "AI and ML tools may not respond appropriately when the data sets they are analyzing undergo a rapid change, or that pre-existing biases in the financial industry may be baked into the algorithms being used to analyze the data, which would taint the results."1 The third is "dynamic updating": the capability of AI tools to update their algorithms on their own in response to changes in the data, sometimes without human interaction. The constant across all three concerns is continued human oversight of the tools in use, to ensure they function effectively and create no loopholes to be exploited.
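The "dynamic updating" and "explainability" concerns can be illustrated with a short, hypothetical sketch: an online model that is capable of updating itself incrementally on new data, but only behind a human sign-off gate, and whose learned weights can be inspected after each update. The gating function and names here are assumptions for illustration, not a description of any real compliance product.

```python
# Hypothetical human-in-the-loop gate around "dynamic updating".
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = benign, 1 = suspicious

def update_model(batch_X, batch_y, approved_by_analyst: bool) -> None:
    """Apply an incremental model update only if a human has signed off."""
    if not approved_by_analyst:
        print("Update deferred: batch awaiting human review.")
        return
    model.partial_fit(batch_X, batch_y, classes=classes)
    # A nod to "explainability": with a linear model, the learned weights
    # show which features drive the suspicious/benign decision.
    print("Current feature weights:", model.coef_.round(3))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)

update_model(X, y, approved_by_analyst=False)  # blocked until reviewed
update_model(X, y, approved_by_analyst=True)   # applied, weights printed
```

The design point is the one the regulators raise: because the model can update itself, oversight has to be engineered around it rather than assumed.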
The continued growth and use of AI and machine learning across the financial sector are inevitable, and U.S. regulators are taking a collaborative, proactive approach to better understand and manage these complex processes with the help of the institutions they govern. Time will tell whether financial institutions will take it upon themselves to provide the feedback and suggestions that could shape new regulation in the United States moving forward.
Citations
- Nicodemus, Aaron. “Regulators Want Answers from Financial Services on AI/ML Tools.” Compliance Week, 21 Apr. 2021.
- Vincent, James. “EU Outlines Wide-Ranging AI Regulation, but Leaves the Door Open for Police Surveillance.” The Verge, 21 Apr. 2021.