
This interactive webinar with our experts explores AI and risk management. The panelists discuss how AI serves as a tool: it can be used to manage risk, while also posing some risks of its own.

Our speakers dive into how AI can be used to analyze large amounts of structured and unstructured financial data, global AI regulations, operational risk in AI models, data poisoning, and preventing deepfakes. The webinar also covers best practices in risk and compliance management, and complementary tools and technologies that can be used to counter new and emerging risks.

*Read the full webinar transcript below.

Integrated Risk Management in the Age of AI: How It Reduces and Introduces Risk

Speakers: Kayvan Alikhani (Compliance.ai), Rick Dupree (Riskliance), David Van Bruwaene (Fairly.ai), Loren Johnson (MetricStream)

Introduction

Hello and welcome, everybody, to Risk Management in the Age of AI with Compliance.ai and MetricStream. Today, the CEO and founder of Compliance.ai, Kayvan Alikhani, will start us off and introduce you to the other panelists. And just a quick note: we will be doing a Q&A session at the end, so feel free to put your questions in the Q&A box, and we'll get those answered at the very end as we finish up the webinar. Kayvan, I will leave it to you.

Kayvan Alikhani (Kayvan)

Thank you, Ronjini, and thank you to the panelists here: Rick Dupree, coming from a great depth of experience in risk and operational risk management; David, with both an academic background and commercial experience in AI model development; and Loren, director of product marketing at MetricStream.

Today we will discuss the emergence of AI, specifically in the financial services space. What are the pros and cons? What risks should we be watching out for, and what is a little bit of the science behind it? We'll cover risk management and some best practices around AI, while also, of course, talking about the risks that are associated with AI-based solutions.

Before we start, let's level set on where you are as an organization in terms of your development and implementation of an AI-based strategy.

I think you have the wrong questions, Ronjini. This is ESG related. I apologize; the questions don't necessarily match. But did you want to redo the poll?

But essentially, we're trying to see where you are in your progression in terms of implementation and adoption of an AI-based strategy within the organization: whether you're already there, thinking about it, skeptical, or questioning what the heck it is.

Host

I apologize. Give me one second, and I will get the poll. No problem. We’ll get back to that poll. When you’re ready, just let me know. Okay.

Kayvan

And so, really coming at it from the perspective of myth versus reality. David, can you give us a level set on the science behind AI? What is it that we're talking about when we say AI?

David Van Bruwaene (David) 

Absolutely. There are probably a lot of myths about what AI is that can be cleared up very quickly with a few categories. Artificial intelligence, broadly speaking, in its broadest historical form, comprises a lot of things that we're used to on any kind of computer system. It's computation that performs a function similar to what we would think of as human intelligence or animal intelligence.

The oldest form of artificial intelligence, one that can seem freakishly human, is rules-based. It's artificial intelligence built on the application of logic as understood by people since Aristotle: if this, then that, or some categories inherit from other categories. When this gets complex enough, it can have quite a human-like effect.
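
To make the rules-based category concrete, here is a minimal sketch; the rules, thresholds, and category names are invented for illustration, not drawn from any real system:

```python
# Minimal sketch of rules-based "AI": hand-written if-this-then-that logic.
# The rules, thresholds, and country codes below are invented examples.

def classify_transaction(amount: float, country: str, is_new_customer: bool) -> str:
    """Apply fixed if/then rules to route a transaction."""
    if country in {"sanctioned_country_a", "sanctioned_country_b"}:
        return "block"      # the "sanctioned" category inherits a blocking rule
    if amount > 10_000:
        return "review"     # large transfers always get human review
    if is_new_customer and amount > 1_000:
        return "review"     # stricter rule for unknown customers
    return "approve"

print(classify_transaction(12_500, "nl", False))  # -> "review"
```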

More recently, when people think about artificial intelligence, many are thinking about machine learning. Machine learning derives its intelligence from the data it's been trained on. So we have the concept of an algorithm. This algorithm can come in many different forms, but ultimately we're dealing with two kinds in most applications. One is a supervised algorithm, where, as it learns, it's actually told what the correct answer is. An unsupervised algorithm is one in which a machine learning model is trained on data and learns patterns all on its own, based on its algorithm. So you may find no need to actually give the correct answer to a machine learning algorithm that learns, say, to distinguish between different meanings of words: given a large enough set of language, it can learn first that "bear" is a word, but also that there are multiple uses of it and how they operate in different contexts.
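
As a sketch of the supervised/unsupervised distinction David draws, assuming scikit-learn and synthetic data (the numbers here are invented for the example):

```python
# Sketch contrasting supervised vs. unsupervised learning with scikit-learn.
# Data is synthetic; the point is only the presence or absence of labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)   # labels: the "correct answers"

# Supervised: the model is told the correct answer for each example.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[4.2, 3.9]]))    # -> [1]

# Unsupervised: no labels; the model finds the two clusters on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:5], km.labels_[-5:])  # cluster ids it discovered itself
```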

All of this is done on very large datasets requiring heavy computation. And that, ultimately, is what's happened in recent years: the algorithms that have existed since, say, the '60s and '70s finally had enough data, and we had enough processing power, that we were able to see exponential growth in their use, and in the power of that data.

Kayvan

Good, so cheap computation leads to good things. And Rick, while we're at it, can you help demystify some of the concepts or misconceptions regarding AI?

Richard “Rick” Dupree (Rick)

Yeah, well, I think, first and foremost, The Terminator is not representative of the future of AI. And to David's point, it's important to think about the evolution of AI. We're at a point today where we can utilize that data because of the processing power we've gained and because of digitization, especially in the financial services industry. So it's really just another tool, another form of technology to leverage to grow your business and improve customer experience.

It's also a way to upskill your employees, moving them from mundane, repetitive tasks to higher-level tasks. And so I think the common myth is that it's brand new. It's not, as David mentioned. The Terminator is not representative of the future of AI; it's really just another new technology for financial services to use.

Kayvan 

New technology. So, David, you mentioned computation has made the data crunching cheap, affordable, and quick. That's where we are today. Where are we headed?

David

There is a move toward democratization of AI, so that people who don't have PhDs, with light programming skills, and sometimes even just by pressing buttons with access to data, are able to train machine learning models to do things that are really quite remarkable. And also, new forms of artificial intelligence have been created and are now being used.

One is generative: tasks where we're not just, say, classifying something, but actually producing content. That could be text, that could be images, that could be any number of different simulations. Related to this, and quite important as we go forward, is the training and release of large language models, models that are trained on very large amounts of data in ways that Meta and OpenAI are able to afford, and no one else can.

But these models are able to do very generalized tasks. So one AI model can produce poetry, but also write computer code, and a whole host of other things. And that's an effort toward generalized intelligence.

Kayvan 

You know, we worked for a while at Compliance.ai on NLG and narrative-style interpretation of a regulatory change. And we eventually... I don't want to say gave up on it. This was over the past two, three years. We finally reverted to extractive content, specific extracts from the letter of the law, as opposed to any type of interpretation.

We left the interpretation to the lawyers and to counsel, or maybe to consultants hired by the organization. It's one thing to advocate for that, or to see it in the form of poetry or art. Do you see the technology having evolved to the point that it can be used for legally worded, legally binding content, or do you see that more as an evolution into the future? {To David}

David

I don't know how much legally binding content will be generated without also having a human scan it. Certainly there's quite a lot of boilerplate that can be produced and tailored automatically. And then lookup and retrieval of information is a large one in a legal context.

Kayvan

Right, and obviously this has positive and negative connotations. So can you and Rick, and Loren as well, chime in as you see fit on the pros and cons, with an emphasis specifically on what we've seen so far? Maybe start with the cons and then go to the pros of the impact it's had today. Rick, do you want to start with the cons, so we can surely end on a high note?

Rick

Yeah, so on the next slide, I think we have The Good, the Bad, and the Ugly. So I'll start with the ugly. One of the cons of AI, and this is also a big risk, is that if you simply apply machine learning and AI to an existing process, and there are deficiencies within that process, there's bias.

If there's a bias within that data set, so that everything's not weighted equally with respect to certain factors, AI will just make it way more pronounced: exponentially more risk to the organization. So that's where, and I think we'll get into it later, it's so important to manage the development, implementation, and administration or maintenance of AI just like any other technology, just like any other process.

Another thing about AI is that it requires a lot of data, as David mentioned, and we're only really at the point in the technology's evolution where we can start to utilize AI; we now have enough data digitized that AI can actually be of value to an organization.

Some of the others, I think you'll see on the slide here. It introduces new risks to the company, such as black-box decision making. If you have AI that actually makes decisions, and you don't oversee and monitor those decisions, with quality control or quality assurance around them and ongoing management of the AI models, you could have decisions being made in a black box that no one has any insight into until after the decisions have been made.

The good, of course, is that it helps drive business outcomes, improves efficiencies, and reduces errors. Again, the underlying process and the model have to be error free in order for those fat-finger types of errors to be avoided going forward. But once the model is in place, you have much less risk of human error than from doing the same process over and over again manually.

Also, you know, getting the model right with proper oversight, you can automate decisions. I mean, think about all the decisions we make in a financial services organization on a daily basis. A lot of those decisions can be automated, and you can use AI with respect to previous decisions, and the results of those decisions, to inform additional decisions going forward. So that's kind of the pros and cons, in my opinion.

Kayvan

Yeah. Loren, we were talking about this offline; you had a great example of its use as it related to COVID-based modeling. Can you talk a little bit more about that?

Loren Johnson (Loren)

I don't recall that, but okay. I think that in this space, and in GRC in particular, there's an awful lot of application for AI that hopefully is on the good side. There are definitely limitations, based on how it actually works and what David has talked about. You have to have the right data in the right place to be able to aggregate it all and pull out the data you really need to make these augmented, intelligent decisions quickly, often in a risk environment that's rapidly moving.

The COVID element, the war in Eastern Europe and the associated supply chain issues, inflation, and other things happening around the world: this is part of what risk managers are living with every day. They also have risk issues that are much closer to their own organization. You may have issues with partners churning, or customers churning, or IT issues, or otherwise, all in this world of risk management that's coming toward people working in the risk space and GRC.

I think in the old days, and not-so-long-ago old days, you could do risk assessments occasionally; you didn't have to do them all the time. Same thing with compliance: you didn't have to process incoming regulations all the time. But now we see 250-some new regulations every day coming in from around the world. You cannot get off the system and pause. Risk doesn't really play by your own calendar or your own schedule, right? You have to be on all the time, and AI helps a lot with that kind of thing. As Rick was talking about, processing things, getting more efficient, and closing gaps and loops where you might lose something otherwise. So there's a lot of power in that potential to do good.

Kayvan

Ultimately, you can make rapid model adjustments.

Loren

Right. Right, and the reality is you don't live on an island anymore. None of your divisions lives on an island. Everybody is affected when risk happens to an organization. Cyber risk or otherwise, it can be holistic to the whole organization, and you need to have a holistic response that's always on, and AI helps a lot with that.

Kayvan

I just want to clarify that black-box decision making is not an umbrella situation with AI solutions. Yes, there are certain AI-based implementations and approaches that do look like black boxes, and many implementations that do not. So these are some of the pitfalls you can fall into.

David, would you like to add to the good, the bad and the ugly? What are your thoughts on these aspects? Where can we go right, what can go wrong?

David

Well, I think the easiest thing to think about is that machine learning is learning from data. If it's historical data that has flaws in it, and it's a large enough data set that no human could go through and correct those flaws, then machine learning will be learning the flaws. So if we're talking about prejudice, or about overlooking certain categories or features because humans hadn't recorded that information in the past, this will be perpetuated.
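
A hedged illustration of the point: before training on historical decisions, a simple audit can reveal the skew a model would inherit. The column names and numbers below are invented for the example:

```python
# Sketch: a quick audit of historical data before training on it.
# If past decisions were skewed against a group, a model trained on
# them will learn that skew. Column names and values are invented.
import pandas as pd

hist = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b", "b"],
    "approved": [1,    1,   1,   0,   0,   1,   0,   0],
})

# Approval rate per group: a large gap is a red flag worth investigating
# before this data ever reaches a training pipeline.
rates = hist.groupby("group")["approved"].mean()
print(rates)                      # group a: 1.00, group b: 0.20
print(rates.max() - rates.min())  # the disparity a model would inherit
```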

On the positive side, it's just all the cool stuff you can do with it. It's a very powerful tool. And sometimes you don't want humans wading through large amounts of data; they would go crazy, or they simply never could.

Plenty of positives I think we’ve all seen in our lives.

Kayvan

And Rick, we saw an example with HMDA data and the models that were built. Do you want to talk about that, and how it ended up skewing results in the wrong direction?

Rick

Yeah, there are a lot of examples of this. I think David brings up a great point: there are times when you have the data, you just don't have enough people to literally go through all of it to recognize where there's a flaw. So sometimes you just have to test it. You test the model; you don't put it straight into production.

I think Amazon had built an automated resume screener. I believe this was a few years back, and it was a pilot or a test, so it didn't go into production. But they found that it was filtering out female candidates as part of the process. Obviously, you don't want that to happen. A more recent example is Zillow, and this went right into production. They had a whole promotional campaign around it, and they went live. I don't know what they did behind the scenes with respect to testing it, if at all, but they didn't do it effectively. They introduced a new feature last year called Zestimate, which uses AI to make cash offers for homes in almost real time.

It (Zestimate) was utilizing HMDA (Home Mortgage Disclosure Act) data, data that's collected by the federal government under HMDA. They were using what are called public data sets, and there's a lot more risk in public data sets than in your own data set. It ended up making thousands of above-market offers based on this biased data.

And I believe, as a result, Zestimate was closed as a feature and as a division, and Zillow's stock was greatly impacted by that.

Kayvan

Absolutely, and that's a great example. And I think, Ronjini, if you're ready for the poll, we can take it. Here we go, let's try again.

Where are you in your adoption and implementation of an AI strategy?

We'll leave that up for a couple of minutes. Meanwhile, I also want to look at an audience question. (Audience Q&A) Can you speak about the pros and cons of using AI in the banking sector, the possible risks associated with using AI in banking, and what kind of oversight can and should be implemented to lessen the risk? I think we'll have some coverage for that in the upcoming slides, but we'll keep it on the back burner to answer as well, unless one of the panelists wants to take it right now.

Rick

Like any technology, AI introduces new risks to the organization. We've spoken to some of them, some examples of very public AI mistakes that you want to absolutely avoid in your organization.

Kayvan

Yeah. 11% are questioning: what is AI? It's not AL, it's AI. So that's one clarification.

And let's see: some have deployed for efficiency, some are exploring strategy. So curiosity is the leader here.

Rick

The "What's AI?" at 11%, that doesn't surprise me, because even though AI has been around for a while, people still get confused. What's machine learning vs. AI? What's automation vs. AI? They're all different. Going back to managing it from a risk perspective: just think of it as new technology that you need to manage within your organization. It's a technology that, depending on how it's implemented, could be a process within your organization.

Anytime decisions are made in an organization, even just by a human, there's risk involved. Are they making the right decision? Are they following the policy? Do they have the experience in the space to maybe go off script with respect to the procedure or the policy? Those are risks that exist in organizations today. I would recommend you manage them within your existing GRC and ERM frameworks. There's no reason to create a whole new risk discipline or program around this. A lot of it is model risk management, but it's not just model risk management; it's beyond that.

Loren

The other thing is that we know that people who work in GRC are generally looking at it as a human thing. And there are a lot of human elements to it, not just processing things: How do you interpret regulations? How do you express them through policies in your company? How do you enforce them internally? How do you actually build programs that work? There's always going to be a human element to GRC.

The issue with BFSI and a couple of other industries is that they're highly regulated; you have thousands of regulations. So when these regulations and updates come in, there's that element of being able to find things that are alike in an existing library of regulations versus a new thing, being able to pull that forward and say: this is what's changed, this is what you need to be alert to, this is how you need to process it through the authorities in your own organization. That's a huge time saver. There are a lot of people in GRC who want to do things manually, because they feel like they need to touch it, they need to have that kind of connection to what's happening, because it's so instrumental to how the business runs. But this element of getting rid of some of those redundancies, making things much more efficient, streamlining them, and surfacing the things that matter, not only in regulations but in risk elements as well, is a very, very big benefit from AI.

I like to look at it as augmented intelligence more than artificial intelligence, in a sense, for GRC. Because people want to feel: I'm involved in this; it's not making decisions for me, it's making recommendations for me, and then I can look at that and say this is the best course forward. It helps you make those kinds of quicker, in-the-moment decisions that often have to happen in this space.

Rick

I can think of examples where not implementing AI is actually a risk in itself. In particular, and I've seen a lot more of this in the last couple of years, especially the last six months or so with what's going on in Ukraine and other areas: sanctions monitoring and transaction monitoring. The volume of that has just exponentially increased.

I've seen, not only in smaller startup fintechs but also in larger banks, where the process has become so unmanageable that they've just had to throw bodies at it, having people work till nine o'clock at night.

And they're decisioning pretty important potential sanctions violations, right? So this is where the current process is riskier than a potential implementation of a solution.

Loren

Totally true. There are a couple of things that happen there, though. What we've seen, and I think it's been common in this space for years, is that there are separations of GRC processes and teams. You may have your third-party risk team over here, your risk management team here, compliance in here. And the data systems are different; the data connections don't happen because the data is collected differently, the systems don't speak to each other, and the data doesn't get converted into something, structured or unstructured, that will actually allow it to communicate across them all. And so what happens is you lose things; you create gaps unintentionally.

But if you can centralize the management of data and make it more consistent in a single system, and have leadership in the C-suite that actually manages and sees all of this, you have a capacity to link things together that you would not otherwise be able to do. So when you use AI to identify patterns and risks, when you use AI to identify the character of a risk no matter where it's coming from, whatever division, whatever location, and notice that, hey, this is something similar to something we saw somewhere else, you can combine those. Then these risks get surfaced, and you can be more attentive to things that are potentially harmful to the organization that a human would have missed.

You can do predictive analysis, and other kinds of things that allow you to be much more assertive with risk management. And what we always want to say is that risk is not a dirty word; risk is not necessarily a bad word. We want people to look at risk as an opportunity, as well as something you need to be alert to and defend against. You can use risk to your advantage in many, many cases, as long as you're alert to it and understand what it is: scale, scope, severity.

Kayvan

To that point of many, many cases: Rick, can you talk about specific use cases where you've seen, or would recommend, attempting to use AI for risk management?

Rick

Yes, and a lot of these speak to the potential of AI. I gave one example in the broader risk management space, including compliance and sanctions monitoring. There's a lot of risk right now in the industry; it's grown exponentially. And so you see a lot of solution providers, which is great.

But the manual processes and decisioning of potential sanction hits are so risky. Some other real potential use cases are, say, the implementation of risk appetite: you establish a risk appetite at the entity level, you have supporting key risk indicators or risk metrics, and then you conduct risk assessments to inform the overall current profile of the organization, or of the business lines, within that overall aggregate risk appetite that's been established.

If anyone's done that manually, and I have, it's a bear. And that's another use of the word "bear," to your point, David. That's an excellent opportunity for AI: you can automate a lot of the risk assessment. There's a lot of opportunity in risk, and in the risk assessment space, with respect to automating parts, if not most, of the risk assessment process. To Loren's point, risk assessments we used to do once every one to three years, and now you're doing them all the time.

The other one is reading risk descriptions written by risk managers and/or risk owners to classify them according to frequency and impact, once you have the data classification and governance in place that Loren was speaking to.

You can then use AI to keep it in place, rationalizing across different risk descriptions, for example. And then you could combine that information with loss data to potentially have more accurate predictions of future losses. So AI could then be used to articulate not only where you've been but where you're potentially heading.
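
As a rough sketch of the risk-description use case Rick describes, assuming scikit-learn; the descriptions, labels, and buckets below are invented, and a real system would need a curated, labeled corpus:

```python
# Sketch: classifying free-text risk descriptions into impact buckets.
# Training texts and labels are invented for illustration only.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "minor data entry error corrected same day",
    "duplicate invoice caught by reconciliation",
    "wire sent to wrong counterparty, funds recovered",
    "sanctions screening backlog exceeding SLA",
    "core banking outage during business hours",
    "regulatory breach reported to supervisor",
]
impact = ["low", "low", "medium", "medium", "high", "high"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, impact)

# Classify a new risk description into one of the learned buckets.
print(model.predict(["settlement system outage affecting clients"]))
```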

Kayvan

Oh, my God, we have a great question coming in.

(Audience Q&A) My question is: how do you determine the person to blame if a black-box strategy makes wrong decisions? Can you blame it on the robot, especially in real-time processing where there is no review flow?

I think the idea of a completely unsupervised system, without any oversight, would, in my opinion, with 10-plus years of experience with modeling and ML-based systems, be dead on arrival.

David, why don’t you comment on that?

David

Well, I think it's difficult to figure out in all cases where to assign blame. But you're not sending an algorithm to jail. You've got to find the people who are creating the algorithm, testing the algorithm, and making the decision to use it in a specific line of business.

They have to have gone through the proper processes, accepting that there is always a margin for error in this space; we'll never have perfection. But the responsibility really lies with the user: the user of a gun, the user of an algorithm. In this case, that means having enough investment within a company to do proper model validation, to have experts who are capable of really putting the models through their paces, generating alternate scenarios, stress testing them. It's a lot of work, a lot of expense, a lot of time, but still necessary, and where those processes are not invested in, I see some culpability.

Kayvan

Yeah, we completely agree in our approach. In fact, when you hear "expert-in-the-loop" from Compliance.ai, that means not surfacing results or decisions that have not been reviewed or assessed by human beings, even though the process is automated. You verify and test the accuracy of the information, which basically combines the power of ML-based modeling with that of expert human beings.

We've heard that over and over again from our clients and from our partners: the idea that unsupervised learning creates a lack of culpability, and that you cannot present that to an auditor, because you can't point to the machine as the decision maker.

So speaking of risks, David, what other risks do we see with AI-based modeling, and what is emerging in the form of risks associated with taking advantage of these models?

David

I think there’s risks that go beyond models just getting things wrong.

Emerging risks that are not always talked about, but are now being investigated at an academic level, and also at a security level, include what we have listed here. The first is data poisoning.

You can train the model to do something you want it to do, which may not be the intended use, simply by giving the model enough examples that have been custom tailored to your particular aim.

You could have an employee, or a bad actor who has created a vendor model, for example, who has intentionally manipulated the data to do bad things. This is what we call data poisoning: the model is trained on data that has intentionally bad examples peppered throughout, so it does a task that you don't want it to do.
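
A minimal sketch of the idea, assuming scikit-learn and synthetic data; the flip here is crude and illustrative, while real poisoning attacks are typically far subtler:

```python
# Sketch of data poisoning via targeted label flipping: an attacker who
# can touch the training set flips labels in one region, and the model
# learns the wrong boundary. Data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)
print("clean accuracy:   ", clean.score(X_te, y_te))

# Poison: flip every training label in one slice of the feature space,
# "peppering" the data with examples tailored to mislead the model.
y_bad = y_tr.copy()
y_bad[X_tr[:, 0] > 3.0] = 0
poisoned = LogisticRegression().fit(X_tr, y_bad)
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```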

The second one is model extraction. This is ultimately an IP issue, but it could also be a larger issue in certain cases, where models get into the hands of people who are best not given those models.

Very simply, I'll do it by example. Let's say we have a model that says whether something is a hot dog or not a hot dog. The show Silicon Valley had a whole thing on this: hot dog/not hot dog, a very simple task. You show it a whole lot of pictures of hot dogs, and a lot of pictures of not hot dogs, and it learns to tell the difference.

Now, if you had access to that model and ran 10,000 to 100,000 pictures through it, you could get labels on all of your pictures: hot dog/not hot dog. If the model is any good, most of those will be good labels. Now you have training data, and you could actually train a model yourself to do the same thing as the original model. So that's a classic case where you can reverse engineer a model simply by using it and recording the results.
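
Here is a hedged sketch of that reverse-engineering loop, with a synthetic "victim" model standing in for the deployed one:

```python
# Sketch of model extraction by querying: label your own inputs with the
# victim model, then train a surrogate on those labels. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(0, 2, (1000, 4))
true_y = (X[:, 0] + X[:, 1] > 0).astype(int)

# The deployed model the attacker can query but cannot see inside.
victim = RandomForestClassifier(random_state=0).fit(X, true_y)

# Attacker: send 10,000 queries, record the answers, train a copy.
queries = rng.normal(0, 2, (10_000, 4))
stolen_labels = victim.predict(queries)      # "hot dog / not hot dog"
surrogate = LogisticRegression().fit(queries, stolen_labels)

agree = (surrogate.predict(queries) == stolen_labels).mean()
print(f"surrogate agrees with victim on {agree:.1%} of queries")
```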

Third, deepfakes. I mentioned generative models, models that produce images, text, etc., when you have enough training data. With enough pictures of presidents or celebrities, for example, it is very feasible to train a generative model that produces what looks very much like the real thing. As it becomes more and more difficult to tell the difference between the true thing and the fake, your imagination can go wild on how this can be misused.

Kayvan

Yeah. So on one side you have these challenges, and now you have regulatory bodies that would come in, hopefully to save the day. I know, Rick, that we look at regulations themselves as risk, but what do we see in terms of the regulatory landscape as relevant to AI? What do you see happening, and what is already in place?

Rick

Yeah, so like other emerging technologies, such as crypto, AI is emerging, because again, we're at a point where we can start. Not just financial services, but any industry can start using data sets to create models and build AI-enabled processes. So it's emerging in that sense. But there are three key players in the AI regulatory space: China, the EU, and the US.

That's also the order of where they are in the process. China has rolled some rules out, and a lot of them look quite restrictive; we'll see how they play out. The EU is looking at a framework for potential AI regulation.

The US, from what I understand, is very much looking at a similar approach to the EU's. But what's interesting is, it's not just about AI-specific regulation. More than likely, and David, from your perspective you may have more insight or opinions on this, it's probably going to be more of a framework than an actual set of regs. But there are existing laws and regulations in place that could impact AI today: CCPA, GDPR, anything in the privacy space could potentially impact AI.

One example: say the underlying data that's critical to an AI model can be deleted by a customer because that data falls within CCPA or GDPR protections. Does that impact the effectiveness or the accuracy of the model? At some point, if that data has been extracted for 10% of your customers, does that impact your model?

And if so, that would come through maybe a KRI for that model. Another example is the FDA. The Food & Drug Administration recently put a framework into place to promote the development of safe and effective medical devices that use advanced AI algorithms, but it only allows the use of locked algorithms. David can speak to what that is, but it basically doesn't learn: based on the data set and some initial machine learning, it just executes. It's beyond automation, but it's a locked algorithm. That's the regulatory landscape: the discussions in these three jurisdictions around frameworks, but also taking into account existing laws.

David

I might add that recently, as in this month, the Equal Employment Opportunity Commission came in and gave some guidance on discrimination by algorithm against persons with disabilities.

The Fair Lending Act is now widely enforced for models used in banking, for example, whether they're machine learning based or not. The move comes in two different directions. One is horizontal: large efforts globally to put together frameworks leading to standards and, ultimately, regulation, which I think is really going to come faster than people think, especially out of the EU and Asia.

If we have the horizontal, we also need to connect it to the specific use cases, the vertical. And that's the connection that's still being sought. People are working from both sides to interpret existing laws and existing applications and connect them to the broader horizontal approach.

Kayvan

(Audience Q&A) And another question has come in: What are these regulations going to regulate, for what reason, and to what end result?

I think you just provided one example, and I wanted to clarify. Regulators are typically concerned less about the "how" of a software solution and more about the outcome, and ultimately that a human being is responsible for what is being presented and what's been identified. Now, there may be privacy laws on the books that an AI model is circumventing, like the one Rick mentioned.

Imagine you have, you know, GDPR and CCPA. GDPR grants the right to be forgotten, and users ask for that, but you've built federated, tokenized, anonymized models based on their data. There's a gray area now: are you in violation of GDPR if you continue to use those models, even though the user has left, is no longer subscribing to your service, is no longer a customer? And similarly for recognition, or data that you've collected on behalf of users who are no longer using your service.

So it's more about the application, or the circumvention, of existing rules and regulations by using AI in ways that would not have been possible otherwise. That's what's at stake.

I think the common thread we're seeing in the US and EU specifically is a more forward-leaning attitude by regulators, suggesting and recommending that organizations use any and all opportunities for automation, including the use of advanced technologies like AI. But of course, we're seeing some of the pitfalls and challenges in terms of the risks associated with it. So buyer beware going into it.

Rick

Can I just add something really quick on that, with respect to the regulatory side? I've been involved with some exams in the last year or so where manual processes that haven't been automated, and this isn't just AI, actually come back as criticism from the examiner. So maybe start small with digitization and automation, then move up to AI.

Kayvan (Poll #2)

So how has AI figured specifically into your governance, risk, and compliance strategy?

  • We pray to the gods.
  • We leverage it, but not for decision making.
  • Started our journey, and it's not factoring in any way in our decision making.

Keep the questions coming. By the way, these are great questions we're seeing in the Q&A.

Host

I’m just waiting for these answers to slow down. We’re getting a lot of participation. So thanks to everyone for weighing in here.

Kayvan

All right. It seems like a lot of people haven't started on the journey yet, 20% are on their way, and 13% have drunk the Kool-Aid already. So a pretty interesting spread there. Thank you so much for that poll.

You know, going into regulatory change management: you saw in an earlier slide that we listed regulatory change management as one of the aspects of overall governance, risk, and compliance where AI could apply.

Responsible innovation, leveraging it within regtech solutions, and where do you start?

Loren, what are your thoughts on an organization that wants to really take advantage of regtech solutions? I'm going to go into a deep dive on the impact it can have on a step-by-step basis, but at a high level, what do you see as organizations' ability to leverage regtech solutions within their compliance practices?

Loren

I think in the first survey, the highest number was about curiosity. And even in this one we just did, people are not quite ready to go yet. I think some of that is based on the need to build some trust into these systems.

We already use AI to some degree, and augmented intelligence, in our day-to-day lives. It's in our recommendation engines, like Amazon's; it's in the sentence completions in Outlook, things like that, where it has this kind of basis of understanding about what's out there and what you have already, and helps you make decisions faster and more accurately. So when people look into it, we're not talking about AI coming in and doing 90% of your job in GRC.

We're looking at 10%, you know, 15%, something like that. It's not a deep penetration that will change your life; it's hopefully changing your life for the better. But I think there's some hesitation because people see it as a Pandora's box.

It's kind of the opposite of the view where there's a lot of fear and only one or two good things you can get out of it; that's not really the case here. I think there's too much of, as the question mentioned, "how do you blame the robot?" This isn't a robot, right? This is an augmentation. This is intelligence to help you do your job better.

So I see, in regtech solutions, and especially in regulatory change management, a very straightforward application that can be measurably impactful to how you actually process this high volume of regulations coming into your organization, especially in finance.

You know, risk is not slowing down. It’s not going away. The GRC industry is getting busier all the time, the scope of what people in GRC have to cover is getting increasingly large.

There is some urgency for this kind of automation of decision making, well, of augmenting decision making. That needs to be put front and center. People need to get ready for this and start implementing it where it makes the most impact. And I think that for RCM, regulatory change management, it's really there, really in identifying alike risks. In an earlier slide, we talked about measuring risk culture and similar things. There are many, many elements in GRC that provide data that you can link together to find insights and take actions that you would not find at a personal scale.

Kayvan

Yeah, let's double-click on what you said as it relates to regulatory change management. I thought it'd be useful to talk about the status quo, and then about what the overlay of an AI-driven approach could do to that status quo.

You know, regulatory change first starts with knowing what those regulatory changes are. Reading, scanning, horizon scanning is today a highly manual process: did you miss anything? What about the different locations I have to go to? Once you receive the changes, you throw them into some funnel of classification, relevance analysis, analysis, and summarization. We'll talk about this specific item in a little more detail. Ultimately they're routed to the right person, oftentimes in a time-consuming and error-prone way.

And then ultimately comes the assessment of risk, the analysis of the impact, and finally the change to the business process that results from the regulatory change, and rinse and repeat: every time there's a new regulatory change, you go through that. Specifically as it relates to analysis and summarization: typically, you get the document, find out how it's changed from the prior version, analyze it line by line, and assess applicability. Is it applicable and relevant to me and my role in the organization? What are the obligations I need to consider as a result of this change? Then you start taking action, map it, and analyze the next steps. All of this happens today, in well over 85% of organizations I would say, on a manual, homegrown basis, taking advantage of human capital to get there.

Now let's look at what impact AI could have. Obviously, auto-aggregation and establishing a taxonomy that's standard across regulatory documents is completely possible and doable, and we have done that with purpose-built ML models that standardize the understanding of: What is a paragraph? What's a sentence? What's a header? What's a footer? What's an obligation? This, obviously, saves you time; it's all about return on investment here. Whatever it is you were doing to collect this content and normalize it, you don't need to do that anymore. That's savings in terms of time, and it increases consistency. And then, of course, classification: text-based documents lend themselves quite nicely to modeling if you've done adequate training. Now you have a much better way to classify changes that are relevant to anti-money laundering versus changes related to privacy versus those related to cybersecurity.
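
As a generic illustration of this kind of topic classification (this is a hedged sketch, not Compliance.ai's production models; the documents and labels below are invented):

```python
# Sketch: topic classification of regulatory updates into AML vs.
# privacy vs. cybersecurity buckets. Texts and labels are invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = [
    "suspicious activity report filing thresholds updated",
    "customer due diligence and beneficial ownership rules",
    "consumer right to deletion of personal information",
    "data subject access request response deadlines",
    "incident reporting for ransomware attacks on banks",
    "multi-factor authentication requirements for remote access",
]
topic = ["aml", "aml", "privacy", "privacy", "cyber", "cyber"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(docs, topic)

# Route a new regulatory update to the most likely topic.
print(clf.predict(["breach notification timelines for covered entities"]))
```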

There's even the auto-extraction of key information from the documents. Assessing risk and impact analysis can very much be modeled, and then, something a little more controversial, there's auto-assessment of risk and auto-assignment, but also the association of regulatory changes with policy changes: this regulatory change impacts these four policies within your organization, so be aware of that, and be able to automate it. When you look at the status quo and compare it with an AI-impacted approach to overall regulatory change management, you see that the savings from an ROI perspective and the impact are quite positive.

You're really setting the company, the organization, up for a level of growth.

I'm going to go through the questions we got.

(Audience Q&A)

Have you seen or expect a big move back to the de-identification of personal data, so that attributes are still valid and usable?

Absolutely, scrub the data of any reference to information about an organization. But still, the organizations we deal with, and I'm sure other solution providers see this too, David, you can speak to that, are very, very paranoid and skeptical about any use of their data outside of their own organization, and they don't want their data used for modeling that then benefits one of their competitors.

So it's less about regulatory risk in that case, and more about preferences from a competitive perspective. And I should add that this typically comes from much larger organizations with a data set that, oftentimes, could be revealing and is very specific to their organization. David, have you seen efforts in terms of the de-identification of personal data, so it can be used without privacy risk?

David

There are a lot of techniques, and the fact is none of them are perfect, because there is ambiguity. In some cases, with structured data, it's a lot easier than in others. So there's always going to be a concern. Proper data handling can sometimes allow for data to be processed in a safe environment as a way around it, and sometimes the imperfection of anonymization techniques isn't such a big deal. But it's always a work in progress.
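
A minimal sketch of one such technique, rule-based scrubbing of obvious identifiers; the patterns below are simplified for illustration, and real pipelines typically add NER-based detection on top:

```python
# Sketch of rule-based de-identification: scrub obvious identifiers while
# keeping the rest of the record usable. Patterns are simplified examples.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
SSN   = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def deidentify(text: str) -> str:
    """Replace emails, SSNs, and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = SSN.sub("[SSN]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

note = "Contact jane.doe@example.com or 415-555-0100; SSN 123-45-6789."
print(deidentify(note))
# -> Contact [EMAIL] or [PHONE]; SSN [SSN].
```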

Kayvan 

(Audience Q&A) Another question we had was: Are there tools outside of GRC that can help identify the risks or bias, and potentially help with risk and compliance management? It seems to me that one must combat AI risk with AI tools in order to create some efficiency.

So tools outside of GRC.

Rick

I think I haven't seen anything, quote unquote, AI specific. But you've spoken to regulatory change management, and I think GRC solutions in general. With all of this, you have to make sure you're classifying your data correctly. There are tools out there that can assist with respect to the management of the risk associated with AI, or the change management of that risk. But I haven't seen anything specific to the AI space. Maybe Loren can speak more to positioning current GRC solutions as potential AI solutions, but I haven't seen anything specific to date.

Loren

Yes, I think we're really pretty early in terms of actual application, but it has an awful lot of promise. GRC is getting increasingly data driven, and as I mentioned before, if you can get your data to work together across those different divisions of GRC, then you have a goldmine there that you can tap into.

There's some progression. I think that because of the risks we've seen in the last few years, there's more urgency than there was before. And GRC professionals don't need to be told that. They're getting stretched, they have limited resources, and something like this really could answer a lot of the needs they have.

Kayvan

There was another question I think we answered. (Audience Q&A) Generally speaking, is there any low-hanging-fruit ROI, by segment, for AI projects or automation?

So think about BFSI, segmented out: bank, credit union, asset management, fund manager, fintech, insurance company, service provider. The application of it could be quite different or generalized across these. What are your thoughts in terms of differentiation from a segmentation perspective? Have you seen different appetites for this?

Loren, when you're talking about this or discussing this with your clients, are you seeing this across the board, independent of segment, that this would result in the right type of ROI?

Loren

I mean, there are definitely GRC teams that are busier than others. As we've talked about before, the finance space, healthcare, insurance, and others have more regulatory elements than other industries may have, especially if they're global organizations.

I think, on the segmentation of these: if you're not using AI already, and you want to get these processes moving faster and more cleanly, like you were showing with your charts earlier, you can take some steps yourself initially to get them to work better, and then initiate an AI kind of program to enable them to work more seamlessly and more accurately. There are many steps to getting into the elements here.

To me, regulatory change management is an obvious step. Linking your risks together is an obvious step.

It’s understandable, but takes a little bit of time to get there. 

Kayvan (closing)

With that, Ronjini, you wanted to bring up something that's coming soon. Go ahead, please.

Host

Yes. So we are going to be hosting our second annual Expert-in-the-Loop Forum, or EITL Forum; we've talked a little bit about expert-in-the-loop in today's webinar. It'll be on September 7th and 8th, so please mark your calendars. The event brings together risk and compliance management executives and leaders from regulatory agencies, focusing on banking, insurance, and fintechs specifically, and we will be hosting sessions on best practices and discussing regulatory directions and trends. So please mark your calendar, or you can visit our website to put it on your calendar and register early.

Kayvan

We did have one last question about whether this event has been recorded and whether the recording would be available. 

Host

We will be sending out the recording with the transcript later this week; all registered participants should get it. If you don't get the email for whatever reason, it will also be on our website in the blog section. And if you are interested in a demo, as Carol requested, you can see the link right there in the chat: Request a Demo.

Kayvan

Well, I want to thank the panelists. I think we went just one minute over time, so Rick, I kept my promise. Great talking to you, David, and Loren as well. Thank you, Ronjini. Thank you so much!
