Statutes are local. Frameworks travel. When a Colorado court asks what a reasonable deployer would have done, when a Monetary Authority of Singapore examiner reviews a bank's model governance, when a procurement officer writes an AI clause into a federal contract, they tend to reach for the same small set of references. This is our reading of that set.
Published January 2023 by the National Institute of Standards and Technology. Voluntary by design. Increasingly mandatory in effect.
The AI RMF is organised around four functions: Govern, Map, Measure, and Manage. Each is decomposed into categories and subcategories, producing a taxonomy of roughly seventy outcomes an organisation is expected to demonstrate. Unlike a compliance checklist, the framework is iterative. It asks operators to run the cycle continuously over the AI lifecycle, to document decisions, and to calibrate risk tolerance to context.
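The function → category → subcategory taxonomy lends itself to plain data. A minimal sketch of how an operator might track demonstrated outcomes across the four functions; the subcategory identifiers and evidence names here are illustrative placeholders, not the official NIST text:

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    subcategory: str  # e.g. "GOVERN 1.1" (illustrative identifier)
    evidence: list = field(default_factory=list)  # links to records

    @property
    def demonstrated(self) -> bool:
        # An outcome counts as demonstrated only if evidence is attached.
        return bool(self.evidence)

FUNCTIONS = ["Govern", "Map", "Measure", "Manage"]

def coverage(outcomes: dict[str, list[Outcome]]) -> dict[str, float]:
    """Fraction of outcomes with evidence, per function."""
    return {
        fn: sum(o.demonstrated for o in outs) / len(outs) if outs else 0.0
        for fn, outs in outcomes.items()
    }

register = {fn: [] for fn in FUNCTIONS}
register["Govern"].append(Outcome("GOVERN 1.1", evidence=["policy-v2.pdf"]))
register["Govern"].append(Outcome("GOVERN 1.2"))  # not yet evidenced
print(coverage(register)["Govern"])  # 0.5
```

The point of the structure is the one the framework makes: outcomes are demonstrated, not asserted, and the register is revisited each cycle rather than filled in once.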
What makes AI RMF consequential is not its legal status but its diffusion. The Colorado AI Act references risk management aligned with a nationally recognised framework as a safe harbour. Federal procurement clauses require it. ISO/IEC 42001 maps to it. Insurers underwrite to it. When a plaintiff in a US negligence action asks whether a defendant acted reasonably, the RMF is the document closest to hand.
NIST released the Generative AI Profile (NIST AI 600-1) in July 2024, extending the framework to foundation models and agentic systems. It introduces twelve specific risks, including confabulation, human-AI configuration, and value chain and component integration, that operators of autonomous agents should map and manage before deployment.
Published December 2023 by the International Organization for Standardization and the International Electrotechnical Commission. The first certifiable management system standard for artificial intelligence.
ISO/IEC 42001 occupies a different position from the NIST framework. It is a certifiable standard. An organisation can be audited against it by an accredited certification body and hold a certificate recognised across its supply chain. For operators doing business in jurisdictions where statutory obligations are ambiguous, certification provides a defensible baseline and a document that procurement teams can point to.
The standard adopts the familiar management-system architecture used in ISO 27001 and ISO 9001. Context of the organisation. Leadership. Planning. Support. Operation. Performance evaluation. Improvement. Annex A lists 38 controls specific to AI, covering governance, lifecycle, impact assessment, data quality, system documentation, and human oversight. Annex B provides guidance; Annex C maps AI-related organisational objectives; Annex D links to sector-specific considerations.
The practical value of 42001 is the documentation scaffold. An organisation that runs a real AI management system produces the records an EU operator needs for Article 26, a Colorado deployer needs for SB 24-205, a Singapore bank needs under MAS FEAT, and an insurer needs for underwriting. One system, multiple jurisdictions.
Developed by the Infocomm Media Development Authority and the AI Verify Foundation. Practical testing over prescriptive rules.
Singapore's approach began with the Model AI Governance Framework in 2019 and evolved into AI Verify, an open-source testing toolkit that operationalises eleven internationally recognised governance principles. Rather than codify duties in statute, Singapore offers a combination of principles, self-assessment instruments, and a software toolkit that produces a testing report.
The Model AI Governance Framework for Generative AI (2024) adds specific guidance for generative AI. Nine dimensions are proposed: accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment research, and AI for public good. Deployers are expected to treat these as an operational agenda.
For cross-border operators the signal is strategic. Singapore's framework is narrower than the EU Act and less prescriptive than the Colorado statute. But because financial, healthcare, and platform regulators in the region reference it, compliance in Singapore typically satisfies the minimum expected in much of Southeast Asia, and meaningfully supports a global posture.
Adopted 2019, updated 2024. The foundation text most national frameworks cite when they explain themselves.
The OECD AI Principles are not law. They are the vocabulary the G7 Hiroshima Code, the Council of Europe Framework Convention on AI (2024), the UNESCO Recommendation on AI Ethics, and most national strategies borrow from. Five values: inclusive growth, human-centred values, transparency, robustness, and accountability. Five policy recommendations for governments. They were updated in May 2024 to address general-purpose AI and risks to information integrity.
The Council of Europe Framework Convention on Artificial Intelligence, opened for signature September 2024, translates several of these principles into binding obligations among signatory states. It is the first international treaty on AI. Its operative articles impose duties around equality, privacy, accountability, safe development, and remedies. Ratification is ongoing.
For a global deployer, these instruments matter less for immediate compliance than for interpretation. When a Colorado court or a Singapore regulator reaches for the normative content behind a statutory term, they are likely to find it here.
A reduced mapping between the core duties under the EU AI Act (for high-risk deployers) and the parallel outcomes in NIST AI RMF, ISO/IEC 42001, AI Verify, and the OECD Principles. Read as reference, not legal equivalence.
| EU AI Act duty | NIST AI RMF | ISO/IEC 42001 | AI Verify / GenAI | OECD |
|---|---|---|---|---|
| Risk management (Art. 9) | Map, Measure, Manage | A.5 Impact assessment | Testing & assurance | Robustness |
| Data governance (Art. 10) | Map 2.3 | A.7 Data for AI | Data | Transparency |
| Transparency (Art. 13) | Map 5 / Manage 4 | A.8 Interested parties | Content provenance | Transparency |
| Human oversight (Art. 14) | Govern 4 / Measure 2 | A.6.2 Lifecycle | Accountability | Accountability |
| Operator logs (Art. 26(6)) | Measure 4 | A.6.2.8 Logging | Incident reporting | Accountability |
| FRIA (Art. 27) | Map 1.1 / 3.1 | A.5.2 Impact | AI Verify test report | Human-centred values |
The mapping is analytical. Equivalence is jurisdictional and does not substitute for local legal review.
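For tooling purposes, the crosswalk can be encoded as plain data and queried per duty. A minimal sketch carrying three rows of the table above; the labels mirror the table, and they remain analytical pointers, not assertions of legal equivalence:

```python
# Duty-to-framework crosswalk, keyed by EU AI Act duty. Entries are
# copied from the mapping table; values are pointers, not equivalences.
CROSSWALK = {
    "Art. 9 risk management": {
        "nist": "Map, Measure, Manage",
        "iso42001": "A.5 Impact assessment",
        "ai_verify": "Testing & assurance",
        "oecd": "Robustness",
    },
    "Art. 10 data governance": {
        "nist": "Map 2.3",
        "iso42001": "A.7 Data for AI",
        "ai_verify": "Data",
        "oecd": "Transparency",
    },
    "Art. 26(6) operator logs": {
        "nist": "Measure 4",
        "iso42001": "A.6.2.8 Logging",
        "ai_verify": "Incident reporting",
        "oecd": "Accountability",
    },
}

def parallels(duty: str) -> dict:
    """Return the mapped framework outcomes for one EU AI Act duty."""
    return CROSSWALK[duty]

print(parallels("Art. 10 data governance")["iso42001"])  # A.7 Data for AI
```

A structure like this lets one compliance artefact be tagged once and surfaced under each regime's vocabulary, which is the practical sense in which "one system, multiple jurisdictions" cashes out.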
The Global Desk complements the EU-focused regulatory desk and the certification, insurance, and operator-guide sites.