Future Proof · The Authority Stack
Agent Liability · Independent Cross-Jurisdictional Review
Jurisdictions · Global Desk · Updated April 2026
Section 02 · Jurisdictions

Ten jurisdictions are codifying operator duty at once.

This is a reference page, not an argument. For each jurisdiction we record the primary instrument, the legal standard it imposes, the activation window, and the first authority a deployer should contact or read. Edits are dated. Each entry links to the operative text.

01

European Union

Horizontal regulation

Regulation (EU) 2024/1689, the AI Act, is the first horizontal statute on AI. It classifies systems by risk, places the main operational duties on deployers of high-risk systems, and activates in waves. Operator provisions (Article 26) enter application on 2 August 2026. The revised Product Liability Directive (2024/2853) treats AI software as a product subject to strict liability, with rebuttable presumptions of defect where an AI system was non-compliant with the Act. National supervisors will enforce alongside the European AI Office.

For agents: deployers in the Union, or deployers outside the Union whose outputs are used in the Union, carry the full operator duty set. Extraterritorial reach is explicit in Article 2.

Instrument
Regulation (EU) 2024/1689
Companion
Directive (EU) 2024/2853 (PLD)
Activation
2 August 2026 · 9 December 2026
Authority
European AI Office, national supervisors
Further reading
agentliability.eu
02

United States · Federal

Framework + sectoral

There is no comprehensive federal AI statute. The operative federal instruments are the NIST AI Risk Management Framework 1.0, the Generative AI Profile (NIST AI 600-1), the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, OMB guidance for federal procurement, and sector-specific rules issued by the FTC, CFPB, SEC, EEOC, FDA, and HHS. Existing bodies of law (tort, consumer protection, products liability, civil rights, securities) apply to AI systems as they would to any other operational decision.

For agents: the legal risk profile is defined by the sector, the use case, and the standard of care courts will adopt. NIST AI RMF is the reference most frequently cited.

Instruments
NIST AI RMF 1.0 · EO 14110 · sectoral rules
Activation
Continuous
Authority
NIST, FTC, CFPB, SEC, sectoral regulators
Further reading
NIST AI RMF article
03

United States · Colorado

Comprehensive statute

The Colorado AI Act (SB 24-205) is the first comprehensive US state AI statute. It imposes a duty of care on developers and deployers of high-risk AI systems to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. It requires risk management programmes, impact assessments, public disclosures, and consumer notifications. It creates a rebuttable presumption that a deployer used reasonable care if it adopts a nationally recognised risk management framework (NIST AI RMF is the leading candidate).

For agents: deployers doing business in Colorado or whose outputs affect Colorado consumers are covered. The Attorney General holds exclusive enforcement authority. Activation 1 February 2026.

Instrument
SB 24-205 · C.R.S. § 6-1-1701 et seq.
Activation
1 February 2026
Authority
Colorado Attorney General
Further reading
Colorado AI Act article
04

United Kingdom

Sectoral

The UK government published its AI Regulation White Paper in 2023 and committed to a sectoral approach led by existing regulators (the ICO, FCA, CMA, Ofcom, MHRA, and others). Five cross-sector principles frame regulator activity: safety and robustness, transparency and explainability, fairness, accountability and governance, contestability and redress. The AI Safety Institute focuses on frontier models; the ICO has published detailed guidance on AI and data protection.

For agents: there is no single statute. Duties are found in existing law (data protection, consumer rights, equality, financial conduct, online safety) as applied by the sectoral regulator. The Labour government's 2024-2025 consultations signalled a narrower statutory intervention focused on frontier AI; agentic systems fall under existing regulator remits.

Instruments
White Paper (2023) · regulator guidance
Activation
Rolling
Authority
ICO, FCA, CMA, Ofcom, MHRA, AISI
05

Singapore

Framework

Singapore operates through IMDA's Model AI Governance Framework (2019, revised 2024 with a GenAI edition), the AI Verify testing toolkit, and sectoral rules from the Monetary Authority of Singapore (FEAT principles) and the Personal Data Protection Commission. The government's posture is cooperative rather than prescriptive. AI Verify provides a testing report that operators, auditors, and regulators can read in common.

For agents: compliance in Singapore is substantially about demonstrable governance, testing evidence, and incident readiness. Regulators are empowered under existing statutes (PDPA, Banking Act, MAS regulations); no horizontal AI statute is in force.

Instruments
Model AI Governance Framework · AI Verify
Activation
Continuous
Authority
IMDA, MAS, PDPC
06

Japan

Framework + statute

Japan passed its Basic Act on the Promotion of AI in 2024. The act is promotional in character, requiring the government to set a basic plan and establish an AI Strategy Council. METI and MIC have published operator guidelines. Japan's approach emphasises voluntary commitments from developers and deployers, consistent with the Hiroshima AI Process and G7 Code of Conduct.

For agents: primary operational duties remain in existing law (civil code on torts, Personal Information Protection Act, consumer contracts, and sectoral financial rules). The voluntary code is de facto mandatory for large actors.

Instruments
AI Promotion Act (2024) · METI guidelines
Activation
2025 rolling
Authority
METI, MIC, PPC
07

Republic of Korea

Statute

The AI Basic Act (2024) is Asia's first comprehensive AI statute. It introduces a tiered classification of high-impact AI, mandates notification to users, and requires risk management measures by deployers. Extraterritorial reach applies to systems whose effects reach Korea.

For agents: the Korean statute sits closer to the EU model than the US sectoral approach. Implementation decrees and Ministry of Science and ICT guidance will carry much of the operative detail through 2026.

Instrument
AI Basic Act (2024)
Activation
January 2026 with phased implementation
Authority
Ministry of Science and ICT
08

Canada

Statute pending

The Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, has been through extensive consultation and redrafting. The 2025 reframing narrowed scope toward high-impact systems and clarified operator obligations around mitigation and documentation. At the time of writing the bill is moving through final parliamentary stages. Existing federal privacy law (PIPEDA) and provincial consumer protection statutes apply in the interim.

Instrument
AIDA (Bill C-27, amended)
Activation
Expected 2026 to 2027
Authority
ISED, proposed AI and Data Commissioner
09

Brazil

Statute pending

The Marco Legal da InteligĂȘncia Artificial (PL 2338/2023) is under deliberation in the Brazilian Congress. The text draws from both the EU AI Act and the OECD Principles. It proposes a risk-tiered regime, rights for affected individuals, and supervisory authority allocated among existing regulators including the ANPD (data protection). Approval and activation dates remain uncertain.

Instrument
PL 2338/2023 · Marco Legal da IA
Activation
Under deliberation
Authority
ANPD and sectoral regulators
10

China

Regulation in force

China's regulatory stack includes the Interim Measures for the Administration of Generative AI Services (2023), the Algorithmic Recommendation Regulations (2022), and the Deep Synthesis Regulations (2023), administered by the Cyberspace Administration of China alongside other authorities. Large-model providers register with the CAC; deployers face content, labelling, and security obligations. A comprehensive AI Law has been under drafting and is expected to consolidate the existing measures over the 2026 to 2027 window.

Instruments
GenAI Measures · Algorithm Rules · Deep Synthesis Rules
Activation
In force
Authority
Cyberspace Administration of China