Advisor Blog Rick

Advances in technology have had a significant impact on the financial services industry. For the most part, the impact has been positive. Technology has allowed us to bank from anywhere in the world and initiate payments remotely without having to write a check or go into a branch. Advancements in payments alone have changed how we shop, how we pay for meals at restaurants, and how we split the bar tab after a round of after-work drinks. In general, these advances have improved speed, convenience, access, security, and the overall customer experience.

What has allowed this transformation in financial services is the digitization of data and the digitalization of processes. Banking has been around for thousands of years, and the branch-based, analog business model of banking remained relatively unchallenged for hundreds of years. Things began to change in the 1960s with the advent of digital banking and the introduction of cards and ATMs. In the 1980s, digital networks connected merchants with banks, allowing for a more real-time banking experience. However, digital banking didn't really take off until the 1990s, with increased digitization of data, the creation of online banking products, and the rise of the Internet.

In 1994, Banque Direct and ING Direct launched as the first digital banks, 13 years before the iPhone was introduced. This represented a turning point in the industry. For the first time, customers could access their accounts remotely from a computer with an Internet connection and conduct most of their financial transactions without ever setting foot in a branch. Since then, digital banking and neobanking have become the fastest-growing segment of financial services, and technology has continued to transform the industry.

Today, we can interface with our banks through chatbots, our banks alert us to unusual activity to thwart potential fraud, and loan decisions are made within minutes (even seconds) as opposed to days. This is all due to the aforementioned digitization of data and digitalization of processes, with an added layer of artificial intelligence (AI). AI has created efficiencies through automation, cost savings, and a reduction in errors. However, the risks associated with financial services transactions are not removed simply by introducing an algorithm. Consumers are supportive of these new capabilities when there is a benefit, but there is concern about the growing use of AI in general, and not just among consumers. Regulators have taken note and have started having discussions around potentially regulating such activity.

There are three key players in the AI regulatory space: China, the European Union, and the United States, all of which have proposed frameworks for AI at a very general level. Currently, none of these frameworks creates direct regulatory obligations for financial services firms, but the fact that frameworks have been developed means regulations are under consideration. To remain competitive, financial services companies will continue to explore the use of AI. Given this, how can you prepare for the eventual regulatory oversight of this activity?

First, ensure your firm is maintaining compliance with existing requirements through a thorough mapping of regulatory obligations to your critical processes. Mitigate the risk of noncompliance by implementing key controls, and monitor changes to regulatory requirements, since those changes may require you to update your controls.
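As a rough illustration of what such a mapping can look like in practice, here is a minimal sketch. The obligation, process, and control names are hypothetical and purely illustrative, not drawn from any particular rulebook or product.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str
    description: str
    owner: str

@dataclass
class Obligation:
    obligation_id: str
    citation: str                                   # regulation and section the obligation comes from
    summary: str
    processes: list = field(default_factory=list)   # critical processes the obligation touches
    controls: list = field(default_factory=list)    # controls that mitigate noncompliance

# Hypothetical example entry
obligations = [
    Obligation(
        obligation_id="OB-001",
        citation="ECOA / Regulation B",
        summary="Credit decisions must not discriminate on a prohibited basis.",
        processes=["consumer_lending_decisioning"],
        controls=[Control("C-101", "Quarterly fair-lending review of model outcomes", "Model Risk")],
    ),
]

def controls_affected_by(citation_fragment):
    """When a regulatory change touches a citation, list the controls that may need review."""
    return [
        control
        for obligation in obligations
        if citation_fragment.lower() in obligation.citation.lower()
        for control in obligation.controls
    ]

print([c.control_id for c in controls_affected_by("Regulation B")])  # -> ['C-101']
```

The point of keeping these links explicit is that when a rule changes, the affected controls (and their owners) can be surfaced immediately rather than rediscovered by hand.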

There are many existing laws that could have an impact on your implementation or use of AI and that represent requirements that must be taken into consideration. For example:

  • CCPA and GDPR
    • If the underlying data driving predictive analytics is a consumer's personally identifiable information, its use could be restricted.
    • Underlying data critical to AI models could be deleted by a customer if the data falls within CCPA or GDPR protections. This could impact the effectiveness or accuracy of those models over time.
  • ECOA
    • Automated credit decisioning using AI must still conform with Equal Credit Opportunity Act requirements not only at launch but as the model learns and evolves. Thus, ongoing monitoring is key.
  • EEOA
    • There have been notable cases of unintended discrimination in hiring practices that used AI models. Again, the introduction of algorithms does not remove the risk associated with a process, and all laws and regulations still apply.

Second, make sure there is a process in place to assess the risk of new activities, such as the introduction of AI technology to a critical process or system. Depending on the size of your organization, this process could be formal or informal, but it should include representation from key stakeholders and subject matter experts who can appropriately assess the risk of the new activity to the organization and who can collectively determine which controls to put in place to manage that risk. This should include testing of any models for bias or any other unintended outcome.
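One simple, widely used screening check for bias is the adverse impact ratio, which compares each group's approval rate to that of the highest-approving group and flags ratios below a rule-of-thumb threshold (often four-fifths). The sketch below is illustrative only; the sample data and threshold are hypothetical, and a real review would involve multiple metrics plus legal and compliance input.

```python
from collections import defaultdict

def adverse_impact_ratio(decisions, threshold=0.8):
    """
    decisions: iterable of (group_label, approved_bool) pairs.
    Computes each group's approval rate, compares it to the highest-rate group,
    and flags ratios below the threshold. This is a screening check to surface
    potential disparities for review, not a legal determination.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1

    rates = {group: approved / total for group, (approved, total) in counts.items()}
    best = max(rates.values())
    return {
        group: {"approval_rate": rate, "ratio": rate / best, "flag": rate / best < threshold}
        for group, rate in rates.items()
    }

# Hypothetical decision log from a pre-launch model test
sample = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)
print(adverse_impact_ratio(sample))  # group_b's ratio of ~0.69 would be flagged for review
```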

Third, governance, governance, governance. Once new technology is introduced, it is important to monitor execution, perform periodic testing and audits, and provide transparency into performance for critical stakeholders and oversight groups or committees. It is one thing to catch an error prior to launch, but what will really come back to bite you are errors in production, especially if they are not caught and addressed quickly. Therefore, ongoing monitoring and oversight are critical.
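In the same spirit, ongoing monitoring can start as simply as comparing a model's live behavior against the baseline observed at launch and alerting when it drifts too far. The metric, tolerance, and data below are hypothetical; a production program would track several metrics per model and route alerts to the appropriate oversight committee.

```python
def monitor_approval_rate(baseline_rate, recent_decisions, tolerance=0.10):
    """
    Compare the live approval rate over a recent window against the rate
    observed at launch. Drift beyond the tolerance triggers an alert for
    investigation; the single metric and fixed tolerance are illustrative.
    """
    if not recent_decisions:
        return {"status": "no_data"}
    recent_rate = sum(recent_decisions) / len(recent_decisions)
    drift = recent_rate - baseline_rate
    return {
        "baseline_rate": baseline_rate,
        "recent_rate": round(recent_rate, 3),
        "drift": round(drift, 3),
        "status": "alert" if abs(drift) > tolerance else "ok",
    }

# Hypothetical: 62% approval at launch, recent window shows 40%
print(monitor_approval_rate(0.62, [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]))  # -> status 'alert'
```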

As you have probably determined by now, managing the risks associated with the introduction and use of AI is not so different from managing the risks associated with the introduction of any new technology, or from ongoing risk management once the technology is in production. The unknowns for now are the specific regulatory requirements around AI above and beyond existing laws and regulations. For this, remain up to date with respect to emerging regulations in this space. Subscribe to a legal and regulatory change management solution such as Compliance.ai to automate this activity for you.

