Automatically monitor regulatory updates to map to your internal policies, procedures and controls.
- 1,558 Enforcement Actions in the U.S. over the past 30 days
- FTC enforcements decreased 55% over the past 30 days
- SEC issued enforcements: $37,812,859 over the past 30 days
- 50 Final Rules go into effect in the next 7 days
- 49 Mortgage Lending docs published in the last 7 days
- 1,670 docs with extracted obligations from the last 7 days
- New Proposed and Final Rules were published in the past 7 days
- 11,906 new docs in pro.compliance.ai within the last 7 days
- Considering RCM Solutions? Here's an RFP to get started.

Why It's Dangerous for AI to Regulate Itself

In this Forbes article, Compliance.ai's CEO and Co-Founder, Kayvan Alikhani, explains why it is imperative that companies and regulatory agencies work together to create a framework for regulating AI.

Here are some of the key takeaways:

  • Fears of AI making mistakes are overblown. Humans are error-prone too.
    “The key difference is that when an expert makes a mistake, the human expert is responsible for it. As of now, the same liability does not apply to AI.”
  • AI is moving faster than regulatory agencies can keep up with.
    “Due to the novelty of the technology and the pace of change, however, progress has felt slow and ad-hoc, with many regulatory bodies apparently taking a wait-and-see approach.”
  • Regulating AI starts with transparency, auditability, and accountability.
    “Business leaders should at a minimum be able to show how AI is used by experts to make critical decisions (transparency), while also demonstrating that they’re able to work back from the final decision through the processes, technologies, and data that were used to arrive at those decisions (auditability).”

Read the full post on the Forbes Technology Council.

Other Resources:
