Trending: AI POSING SIGNIFICANT CYBER CRIME THREATS

As the 2020s continue to unfold, the role of artificial intelligence (AI) has become further cemented in the financial realm. New technologies are reshaping the work of financial service providers, as well as the experience of consumers, in a multitude of ways. Across the financial sector, firms already rely heavily on applications that provide new insights in data retrieval and analytics, performance measurement, financial forecasting, and even customer service. As potent new technologies arrive on a regular basis, the daily operations of financial institutions large and small have been streamlined, creating an unprecedented level of operational efficiency and strengthening the case that these tools deliver a strong return on investment. Yet while we appear to be only scratching the surface of what machine learning and AI may ultimately hold, those operating within this space have also raised concerns about the risks of fully immersing themselves in these technologies, most notably the risks of sharing sensitive financial information, and the personal information of their customers, with these platforms. As such, both top government officials and regulatory authorities are now warning of the increasing risk that AI poses to the health of the global financial system.

While machine learning applications have enhanced network security, anti-malware, and fraud-detection capabilities by zeroing in on potential threats faster than their human counterparts, the misuse of AI has also created new cybersecurity risks. In recent years, the risk of hacking attacks has grown exponentially due to accessibility alone. Hacking was previously a skill that required significant intelligence-gathering, countless hours of training, and technical know-how in writing code. With artificial intelligence applications at their disposal, however, many would-be criminals no longer face that barrier to entry. Delivering remarks at the 10th International Conference on Cyber Security earlier this month, Rob Joyce, director of cybersecurity at the National Security Agency (NSA), said that less capable people are using AI to guide hacking operations they otherwise would not have had the skill to carry out themselves. Joyce noted that hackers are often foreign operatives backed by governments that wish to do harm to the United States and other major world powers with significant assets at their disposal. Couple this with the fact that AI tools are becoming cheaper, and therefore far more accessible to the masses, with each passing year, and it is easy to see why many across the industry predict a sharp increase in the prevalence of cyber-attacks both domestically and abroad in the years to come. The initial effects of AI tools becoming mainstream are already being felt in the United States. Reuters, citing James Smith, assistant director in charge of the FBI’s New York field office, notes there has already been a year-over-year increase in cyber intrusions due to AI lowering the technical barriers to carrying them out.1
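
To make the defensive side of this equation concrete, the sketch below shows how an anomaly-based fraud detector of the kind referenced above might flag suspicious transactions. It is a minimal example built on scikit-learn's IsolationForest; the four transaction features, the simulated data, and the parameters are illustrative assumptions, not any institution's actual model.

```python
# A minimal sketch of anomaly-based fraud detection, assuming scikit-learn.
# The four transaction features (amount in USD, hour of day, miles from the
# customer's home, transactions in the past 24h) and all parameters are
# illustrative assumptions, not any institution's production model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate historical "normal" activity: [amount_usd, hour, miles_from_home, tx_last_24h]
history = rng.normal(loc=[60, 14, 5, 3], scale=[40, 4, 10, 2], size=(5000, 4))

# Train an unsupervised outlier detector on past behavior; roughly 1% of
# training points are treated as anomalies when setting the decision threshold.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

incoming = np.array([
    [75, 13, 2, 4],       # routine purchase near home
    [9500, 3, 4200, 18],  # huge amount, 3 a.m., far from home, burst of activity
])
print(model.predict(incoming))            # 1 = looks normal, -1 = flag for review
print(model.decision_function(incoming))  # lower score = more anomalous
```

Unsupervised detectors of this kind are a common starting point in fraud monitoring because confirmed fraud labels are scarce relative to legitimate activity; flagged transactions would still be routed to human analysts for review.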

Sadly, it appears that many of the powerful AI tools originally designed to improve daily personal and business workflows are being used maliciously to commit fraud, facilitate scams, and carry out other cybercrimes. Arguably the most rudimentary way AI is assisting foreign hackers is in simply translating the English language. “One of the first things [bad actors are] doing is they’re just generating better English language outreach to their victims [using AI]—whether it’s phishing emails or something more elaborative,” Joyce said. “The second thing we’re starting to see is … less capable people use artificial intelligence to guide their hacking operations to make them better at the technical aspect of a hack.”2 Another way hackers are using AI is to generate “deepfake” images and videos that can sometimes fool the facial recognition software banks use to verify their customers. When a hacker presents these images, the bank’s technology may not be able to tell the difference, concealing the hacker’s identity while granting access to the customer’s account. Of course, the most significant threat comes in the form of large-scale attacks optimized through these avenues. Analysts have found that cyber-attackers can use generative AI and large language models to scale attacks with an unprecedented level of speed and complexity, creating potential regional impacts, especially in times of geopolitical conflict such as the ongoing war in Ukraine.
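
One common mitigation pattern against the deepfake scenario described above is defense in depth: requiring a liveness check and a randomized challenge in addition to a face-match score, so that a replayed synthetic image alone is not enough. The sketch below illustrates the idea; every helper function in it (face_match_score, passive_liveness_score, challenge_response_passed) is a hypothetical placeholder rather than a real vendor API, and the thresholds are illustrative.

```python
# A hypothetical sketch of defense-in-depth for facial verification. All three
# helpers are assumed placeholders (no real vendor API is referenced), and the
# 0.90 / 0.80 thresholds are illustrative only.

def face_match_score(frame, enrolled_template) -> float:
    """Assumed helper: similarity (0-1) between the live frame and the enrolled face."""
    raise NotImplementedError("vendor-specific face matcher")

def passive_liveness_score(frame) -> float:
    """Assumed helper: likelihood (0-1) the frame is a live capture, not a replay or synthetic image."""
    raise NotImplementedError("vendor-specific liveness detector")

def challenge_response_passed(session) -> bool:
    """Assumed helper: did the user complete a randomized prompt (blink, turn head)?"""
    raise NotImplementedError("vendor-specific challenge flow")

def verify_customer(frame, enrolled_template, session) -> bool:
    # A face-match score alone can be satisfied by a convincing deepfake, so
    # liveness and an interactive challenge are also required, forcing an
    # attacker to defeat three independent checks in real time.
    return (
        face_match_score(frame, enrolled_template) >= 0.90
        and passive_liveness_score(frame) >= 0.80
        and challenge_response_passed(session)
    )
```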

Many believe that the only way to combat AI-powered cybercrime is to fight fire with fire: deploying AI-powered financial technologies and data security processes of one's own. Considering the potential ramifications of a successful cyber-attack on an institution, substantial losses often followed by reputational harm and major regulatory penalties, the stakes for the modern financial institution are high. Modern problems require modern solutions, and many firms have successfully turned to cloud-based, AI-powered compliance solutions, such as the suite offered by Global RADAR, that place a premium on detecting and mitigating threats and on securing a firm’s most pivotal financial data. Ironically, however, those choosing to deploy these tools for regulatory compliance may themselves face additional scrutiny moving forward. The Financial Industry Regulatory Authority (FINRA), Wall Street’s self-regulator, has echoed the NSA’s statements about the “emerging risk” of artificial intelligence in its annual regulatory report, claiming that AI technology has the potential to affect broker-dealer operations from top to bottom. FINRA said firms looking to deploy such technologies, whether in-house or through third parties, should focus on the regulatory implications of doing so, particularly in areas such as anti-money laundering, public communication, and cybersecurity, as well as in model risk management, including testing, data integrity, and governance.3

In discussing the findings of the regulator’s annual report, Ornella Bergeron, a senior vice president in member supervision with FINRA’s risk monitoring program, stated, “In risk monitoring, we have been actively engaging with firms to better understand their current initiatives, as well as their future plans related to generative AI and large language models, and honestly, from what we’ve been hearing so far, firms are being very cautious and they’re being very thoughtful when considering the use of AI tools as well as before deploying these technologies.”3 The report also highlights the growing cybersecurity threats facing the financial sector, listing phishing scams, ransomware attacks, and insider threats as chief concerns in 2024. FINRA’s report further emphasized the importance of adequate supervisory controls for managing and monitoring technology in areas such as vendor management, system change management, and business continuity.3

Citations

  1. Cohen, Luc. “AI Advances Risk Facilitating Cyber Crime, Top US Officials Say.” Reuters, 9 Jan. 2024. 
  2. Kultys, Kelly. “Hackers Use AI to Improve English, Says NSA Official.” Fordham Newsroom, 10 Jan. 2024. 
  3. Smagalla, David. “FINRA Calls AI ‘Emerging Risk’ in Annual Regulatory Report.” Wall Street Journal, 9 Jan. 2024.
