M&G Group Services
AI Compliance · 8 min read · February 28, 2026

The NIST AI Risk Management Framework: A Plain-English Guide for Business Leaders

The federal government published a framework for governing AI risk. Most businesses have never heard of it. Here's what it says, why it matters, and what you should be doing about it.

In January 2023, the National Institute of Standards and Technology published the AI Risk Management Framework — a voluntary guidance document for organizations developing, deploying, or using AI systems. If you haven't read it, you're not alone. Most business leaders outside the federal contracting space haven't.

That's changing fast.

Why the AI RMF Matters Even If You're Not a Government Contractor

The NIST AI RMF is becoming a reference standard for AI governance the same way the NIST Cybersecurity Framework became a reference standard for general cybersecurity — starting as voluntary guidance and gradually becoming the basis for regulatory expectations, insurance requirements, and client due diligence assessments.

If your organization:

  • Processes personal data of EU residents (GDPR, EU AI Act)
  • Operates in a regulated industry (healthcare, finance, insurance)
  • Works with enterprise clients that require security questionnaires
  • Plans to pursue SOC 2 certification

...then AI governance frameworks will be part of your compliance posture within the next two years. Getting ahead of it now means avoiding the scramble later.

The Framework's Core Structure: GOVERN, MAP, MEASURE, MANAGE

The NIST AI RMF organizes AI risk management into four functions. Think of it as a cycle, not a checklist.

GOVERN: Build the Foundation

The GOVERN function is about establishing the policies, processes, and culture that allow your organization to manage AI risk consistently. This includes:

  • Defining who is accountable for AI decisions (not just technically, but legally)
  • Documenting your organization's risk tolerance for different AI use cases
  • Training employees on responsible AI use
  • Establishing processes for raising and escalating AI-related concerns

Most organizations skip governance entirely in the rush to adopt AI tools. This creates a situation where no one owns the risk — and everyone is surprised when something goes wrong.
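One way to make that accountability concrete is a lightweight register of AI use cases. Here's a minimal sketch in Python — the entries, contacts, and field names are illustrative, not prescriptive:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One row in a hypothetical AI accountability register."""
    name: str
    business_owner: str      # accountable for outcomes, not just uptime
    risk_tolerance: str      # documented tolerance for this use case: "low", "medium", "high"
    escalation_contact: str  # where AI-related concerns get raised

# Example entries (invented for illustration).
register = [
    AIUseCase("support-chatbot", "Head of Customer Success", "medium", "risk@example.com"),
    AIUseCase("resume-screening", "VP of People", "low", "legal@example.com"),
]

# Any use case without a named owner is a governance gap worth surfacing.
gaps = [u.name for u in register if not u.business_owner]
```

Even a spreadsheet with these four columns gets you most of the value; the point is that ownership and tolerance are written down before something goes wrong.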

MAP: Understand What You're Working With

The MAP function involves identifying and categorizing the AI systems in your environment. For most businesses, this is eye-opening. You may have:

  • AI models embedded in your CRM or marketing automation tools
  • Third-party AI APIs you've integrated for content generation or classification
  • AI-assisted features in your cloud infrastructure (anomaly detection, cost optimization)
  • AI decision support tools used in hiring, lending, or healthcare

Mapping means understanding: what does each system do, what data does it use, what decisions does it influence, and what are the consequences if it fails or is manipulated?
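An inventory doesn't require special tooling — a structured record that answers those four questions for each system is enough to start. A hedged sketch, with system names and failure descriptions invented for illustration:

```python
# Each entry answers the MAP questions: what does it do, what data does it use,
# what decisions does it influence, and what happens if it fails or is manipulated?
inventory = [
    {
        "system": "crm-lead-scoring",           # hypothetical embedded AI feature
        "purpose": "ranks inbound sales leads",
        "data_used": ["contact details", "email engagement history"],
        "decisions_influenced": "which leads sales contacts first",
        "failure_impact": "missed revenue; low direct harm to individuals",
    },
    {
        "system": "loan-decision-support",      # hypothetical third-party API
        "purpose": "recommends approve/deny on credit applications",
        "data_used": ["income", "credit history"],
        "decisions_influenced": "consequential decisions about individuals",
        "failure_impact": "regulatory exposure; real harm to applicants",
    },
]
```

Comparing the `failure_impact` fields already tells you where to focus: the second system deserves far more scrutiny than the first.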

MEASURE: Quantify the Risks

Once you know what AI systems you have and what they do, you can assess the risks. The AI RMF suggests evaluating AI systems across multiple dimensions:

  • **Accuracy and reliability**: Does the system perform as intended? Under what conditions does it fail?
  • **Fairness and bias**: Does the system produce systematically different outcomes for different groups?
  • **Robustness**: How does the system behave under adversarial conditions or distribution shift?
  • **Privacy**: Does the system expose, infer, or leak sensitive personal information?
  • **Security**: Is the system vulnerable to manipulation, extraction, or evasion?
  • **Explainability**: Can you explain why the system produced a given output?

You don't need to score every dimension for every system. The depth of assessment should match the risk level — a customer service chatbot requires less scrutiny than an AI system making credit decisions.
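A first-pass screen across those dimensions can be crude and still useful. Here's a sketch — the 1–5 ratings and the "unknowns count as 3" rule are assumptions of ours, not something the framework prescribes:

```python
# The six dimensions listed above, screened with illustrative 1-5 ratings
# (5 = highest concern). A dimension no one has rated yet counts as unknown (3).
DIMENSIONS = ["accuracy", "fairness", "robustness", "privacy", "security", "explainability"]

def screening_score(ratings: dict) -> int:
    """Sum per-dimension concern ratings; higher total means a deeper assessment is warranted."""
    return sum(ratings.get(d, 3) for d in DIMENSIONS)

# Hypothetical ratings for the two example systems from the text.
chatbot = {"accuracy": 2, "privacy": 2, "security": 2}
credit_model = {"accuracy": 4, "fairness": 5, "privacy": 4, "explainability": 5}
```

A higher screening score should trigger a deeper assessment, not an automatic rejection — the goal is to spend scrutiny where the stakes are.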

MANAGE: Treat and Track

The MANAGE function is about responding to the risks you've identified. This includes:

  • Implementing controls to reduce high-priority risks
  • Defining thresholds that trigger human review or override
  • Monitoring deployed systems for drift, anomalies, and incidents
  • Establishing incident response procedures specific to AI failures

A key principle: AI risk management is ongoing, not one-time. Models drift as the world changes. Threat actors develop new attack techniques. Regulatory requirements evolve. Your management practices need to keep pace.
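The "thresholds that trigger human review" control can be surprisingly small in code. A minimal sketch — the 0.85 cutoff is purely illustrative, and in practice the threshold should be derived from your documented risk tolerance, not hard-coded:

```python
def route_decision(model_confidence: float, threshold: float = 0.85) -> str:
    """Gate a model output: below the threshold, a human reviews instead of auto-acting.

    The default threshold is an assumption for illustration, not a framework requirement.
    """
    if model_confidence < threshold:
        return "human_review"
    return "auto_approve"

# Low-confidence outputs escalate to a person; high-confidence ones proceed,
# subject to the monitoring described above.
```

The same pattern generalizes: any measurable signal (confidence, drift metric, anomaly score) can feed a gate that routes edge cases to a human before harm occurs.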

Practical First Steps for Business Leaders

You don't need a dedicated AI ethics team to start. Here's a realistic sequence:

Week 1–2: Take inventory. List every AI tool or service your organization uses, including tools embedded in products you've purchased. Note who owns each one and what it does.

Week 3–4: Identify your highest-risk systems. Which systems make consequential decisions? Which process sensitive personal data? Which have the broadest access to your infrastructure? Start there.

Month 2: Build a governance baseline. Establish a simple policy: before deploying a new AI system, it must be reviewed for data access, output validation, and logging. Assign an owner. Document the decision.
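That Month-2 baseline can be encoded as a simple required-checks gate. A minimal sketch, assuming the three review areas named above plus an assigned owner — the field names are our invention:

```python
# Pre-deployment checklist derived from the baseline policy: data access,
# output validation, logging, and a named owner. Field names are illustrative.
REQUIRED_CHECKS = [
    "data_access_reviewed",
    "output_validation_defined",
    "logging_enabled",
    "owner_assigned",
]

def ready_to_deploy(review: dict) -> bool:
    """A new AI system passes review only when every required check is affirmed."""
    return all(review.get(check) for check in REQUIRED_CHECKS)
```

Keeping the completed `review` dict alongside the deployment record also satisfies the "document the decision" step with no extra work.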

Month 3+: Assess and remediate. Work through your highest-risk systems methodically. For each one, document what you know about its risks, what controls are in place, and what gaps remain.

The Compliance Connection

The NIST AI RMF does not replace HIPAA, PCI-DSS, or SOC 2. It complements them. As auditors and compliance frameworks begin incorporating AI-specific controls — and they are — organizations that have already built an AI risk management practice will be able to demonstrate compliance much more easily than those starting from zero.

The time to build that foundation is now, before the audit asks for it.

Ready to apply this to your business?

Our team can assess your current security posture and show you exactly what to prioritize — at no cost.

Get a Free Security Audit