The Asia-Pacific region is often described as lighter on AI regulation than the EU or the US. The description is outdated. By 2026 the region includes Asia's first comprehensive AI statute, the world's most operationalised voluntary framework, and a set of sectoral regulators whose AI expectations in financial services and health are among the most specific anywhere. For an operator deploying autonomous agents regionally, the relevant question is not whether regulation exists. It is how each jurisdiction has chosen to write it.
Key takeaways
- Korea's AI Basic Act (2024) is Asia's first comprehensive AI statute. Effective January 2026 with phased implementation through 2027.
- Japan's AI Promotion Act (2025) takes a promotional approach. Operator duties flow from existing law and sectoral guidance, with METI and MIC issuing detailed operator playbooks.
- Singapore's Model AI Governance Framework, now in its GenAI edition, plus the AI Verify testing toolkit, creates the most operationalised voluntary regime in the region.
- China's regulatory stack (GenAI Measures, Algorithm Rules, Deep Synthesis Rules) is in force and enforced. A comprehensive AI law is expected to consolidate the measures in 2026-2027.
- Regional sectoral regulators (MAS, HKMA, APRA, FSA, FSS) apply AI expectations to financial services with sector-specific detail.
Korea. The first comprehensive statute in Asia.
The Republic of Korea passed the AI Basic Act in December 2024. It entered into force in January 2026, with phased implementation through 2027. The statute defines high-impact AI, imposes duties on developers and deployers, requires user notification when individuals interact with AI systems, and establishes a national AI policy framework coordinated by the Ministry of Science and ICT (MSIT). The extraterritorial provision reaches foreign providers and deployers whose systems have effects on individuals in Korea.
The structural resemblance to the EU AI Act is deliberate. Korea's drafting consulted the EU text and adopted the risk-tiered model. Differences are in scope (Korea's high-impact categories are narrower than the EU's Annex III), enforcement (Korea operates through MSIT and sectoral regulators rather than an AI-specific enforcement body), and detail (subsidiary decrees are still being issued through 2026). For a deployer, the operational duties are recognisable from the EU regime: risk management, user notification, impact mitigation, documentation. The implementation detail lives in subsidiary regulations and MSIT guidelines.
Penalties under the Act are administrative. Heavier fines apply to specific violations (failure to implement safety measures, failure to notify users). The Korea Internet and Security Agency (KISA) and the Personal Information Protection Commission play adjacent roles on security and data-protection dimensions.
Japan. Promotional framework, sectoral operation.
Japan passed the Basic Act on the Promotion of AI in 2025. The act is promotional in character. It requires the government to set a basic plan, establishes an AI Strategy Council, and directs public research support. It does not impose substantive operator duties. Those flow from existing law (the Civil Code's tort provisions, the Personal Information Protection Act, consumer contracts, financial regulations) and from sectoral guidance issued by METI, MIC, and industry-specific regulators.
METI's AI Guidelines for Business (2024) consolidate the operator expectations across the principal use cases. The guidelines draw heavily from the Hiroshima AI Process and the G7 Code of Conduct, both initiatives led in substantial part from Tokyo. For financial services, the FSA has published its own operator expectations, focused on governance, validation, and explainability. For healthcare, the Ministry of Health, Labour and Welfare has issued guidance on AI-assisted medical devices.
Japan's approach assumes that large corporate actors will self-regulate to the published expectations and that sectoral regulators will use existing authorities to enforce where they do not. The assumption has held for most consumer-facing deployments. The pressure points are in financial services, healthcare, and employment, where sectoral enforcement is active even without a horizontal AI statute.
Singapore. Voluntary, operationalised, influential.
Singapore's approach centres on two instruments. The Model AI Governance Framework (MAGF), published in 2019 and revised substantially in 2024 with a Generative AI edition, sets the governance and operational expectations. AI Verify, published by the AI Verify Foundation in 2023 and continuously extended, operationalises those expectations into a testing toolkit. The 2024 revision of MAGF introduced nine dimensions for responsible generative AI: accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment R&D, and AI for public good.
MAGF is not a statute. Its operational force comes from the sectoral regulators that reference it. The Monetary Authority of Singapore's FEAT (Fairness, Ethics, Accountability, Transparency) principles were an early sectoral expression of the same vocabulary. MAS has since issued detailed guidance on AI in financial services (the Veritas Initiative), explicitly referencing the MAGF as the higher-level governance standard. The Personal Data Protection Commission's AI guidance treats MAGF alignment as the expected baseline for personal-data processing involving AI.
For cross-border operators, the strategic significance is this: compliance in Singapore typically signals readiness for much of Southeast Asia. Several ASEAN member states have adopted the MAGF vocabulary for their own frameworks. Thailand's Royal Decree on Electronic Transactions for AI, Malaysia's AI governance principles, and Indonesia's AI ethics guidance all borrow from the Singapore text. Building to MAGF is both a Singapore compliance strategy and a regional deployment strategy.
China. Layered administrative regulation.
China operates through a layered stack of administrative measures issued by the Cyberspace Administration of China (CAC) and other authorities. The Algorithmic Recommendation Regulations (2022) set expectations for recommender systems. The Deep Synthesis Regulations (2023) regulate deepfake technologies. The Interim Measures for the Administration of Generative AI Services (2023) impose duties on providers and users of generative services, including large-model registration, content-safety review, labelling, and data-source requirements.
The combined stack is one of the most detailed in the world. Large-model providers must register with the CAC, submit to a security assessment, and maintain ongoing compliance with content rules. Foreign providers offering generative services to Chinese users must comply with the same framework. Enforcement has been active, including service suspensions, registration revocations, and public penalty notices.
A comprehensive AI Law has been in drafting since 2023 and is expected to consolidate the existing measures. Drafts circulated through 2024 and 2025 indicate a structure similar to the EU AI Act, with a risk-tiered classification, mandatory registration for high-risk and foundation systems, and enhanced duties for providers and deployers. Enactment is expected in the 2026-2027 window.
Regional sectoral layer.
Beneath the national frameworks sits a regional sectoral layer that in many cases carries the operational weight for autonomous agents. The Monetary Authority of Singapore (MAS), the Hong Kong Monetary Authority (HKMA), the Australian Prudential Regulation Authority (APRA), the Japan Financial Services Agency (FSA), and the Korean Financial Services Commission (FSC) all issue AI-relevant guidance for banks, insurers, and capital markets. The common elements are governance, explainability, validation, model inventory, incident reporting, and operational resilience. A financial-services deployer regulated by any of these will meet most of the horizontal AI framework requirements in the region as a by-product of sectoral compliance.
Healthcare, employment, and critical infrastructure have parallel sectoral structures. Japan's Ministry of Health has issued AI-specific guidance on medical devices. Singapore's Ministry of Health treats AI-enabled clinical decision support under the Health Products Regulations. Korea's Ministry of Food and Drug Safety regulates AI-based medical software. Australia's Therapeutic Goods Administration has published guidance on software as a medical device. Each extends AI operator duties to sector-specific deployers.
Implications for cross-border operators.
Three patterns define cross-border deployment in the region.
First, start with Singapore. For organisations entering multiple APAC markets, Singapore provides the cleanest compliance starting point. The Model AI Governance Framework is well documented, AI Verify gives a testable evidence base, and MAS sectoral guidance is detailed. A deployment that passes Singapore scrutiny is typically prepared for entry into neighbouring markets.
Second, Korea requires specific attention. The AI Basic Act is live, user notification obligations apply, and the extraterritorial reach is explicit. Operators entering Korean markets should expect to register systems falling within the high-impact category and maintain documentation responsive to MSIT requests.
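The notification duty can be wired directly into an agent's response path. The sketch below is illustrative only: the function names, the `DeploymentContext` fields, the disclosure wording, and the idea of classifying high impact as a boolean are all assumptions for the example, not requirements drawn from the statute's text.

```python
# Hypothetical sketch: gating agent responses on a user-notification duty
# of the kind the Korean AI Basic Act imposes. Wording and field names are
# illustrative assumptions, not statutory language.
from dataclasses import dataclass, replace

KR_NOTICE = "You are interacting with an AI system."  # illustrative wording


@dataclass(frozen=True)
class DeploymentContext:
    jurisdiction: str        # country code of the end user, e.g. "KR"
    high_impact: bool        # result of the operator's own classification
    notified: bool = False   # has the AI-interaction disclosure been shown?


def prepare_response(ctx: DeploymentContext, answer: str) -> tuple[str, DeploymentContext]:
    """Prepend the AI-interaction disclosure on first contact with a Korean user."""
    if ctx.jurisdiction == "KR" and not ctx.notified:
        answer = f"{KR_NOTICE}\n\n{answer}"
        ctx = replace(ctx, notified=True)
    return answer, ctx
```

The returned context records that the disclosure was delivered, which doubles as the documentation trail an MSIT request would look for.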
Third, China requires market-specific architecture. Content, data-source, and registration requirements are not satisfiable through a global compliance programme. A deployer serving Chinese users needs a compliant-in-China approach, typically involving a local operator entity and a localised technical stack. Retrofitting a global deployment to Chinese rules is expensive and often incomplete.
What to hold on file in the region.
For a deployer operating across Singapore, Korea, Japan, and Australia, the defensible baseline file includes:
- a risk-management programme description aligned with either NIST AI RMF or ISO/IEC 42001 as the organising reference;
- an inventory of AI systems in use, with risk classification under each national framework;
- an AI Verify testing report (or equivalent) for each high-impact system;
- a user-notification workflow aligned with Korean AI Basic Act requirements where applicable; and
- sectoral documentation specific to the regulator in each market.
Adding China requires a separate, market-specific file.
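One way to keep that file auditable is to index it as structured records per system and per market. Everything in the sketch below is a hypothetical illustration: the artifact labels, the per-market requirement sets, and the `ComplianceFile` schema are assumptions for the example, not a prescribed format from any of the regulators named above.

```python
# Hypothetical sketch of a per-system, per-market compliance file index.
# Artifact labels and the per-market requirement sets are illustrative
# assumptions, not any regulator's official checklist.
from dataclasses import dataclass, field

REQUIRED_ARTIFACTS = {
    "SG": {"risk_programme", "system_inventory", "ai_verify_report"},
    "KR": {"risk_programme", "system_inventory", "user_notification_workflow"},
    "JP": {"risk_programme", "system_inventory"},
    "AU": {"risk_programme", "system_inventory"},
}


@dataclass
class ComplianceFile:
    system_name: str
    # market code -> artifacts actually held on file for that market
    artifacts: dict[str, set[str]] = field(default_factory=dict)

    def gaps(self, market: str) -> set[str]:
        """Artifacts still missing before the file is complete for a market."""
        return REQUIRED_ARTIFACTS[market] - self.artifacts.get(market, set())
```

A quick `gaps("KR")` on a system that holds only the risk programme and inventory would surface the missing notification workflow before an MSIT inquiry does.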
The regional picture is not lighter than the US or EU. It is differently organised. An operator that approaches Asia-Pacific as a single compliance surface will be surprised by the detail. An operator that treats each jurisdiction separately and documents accordingly will be in position to deploy across the region with confidence.
Related reading
For the regional framework detail, see our reading of AI Verify and the Model AI Governance Framework. For the jurisdiction-by-jurisdiction map, see the jurisdictions page. For the cross-regional comparison, see US, EU, UK: three approaches to the same question.
Frequently asked questions
Does Asia-Pacific have comprehensive AI statutes?
Korea does (AI Basic Act, 2024). Japan has a promotional framework, not substantive operator duties. Singapore uses voluntary frameworks plus sectoral regulators. China uses layered administrative measures with a comprehensive law expected in 2026-2027.
What is AI Verify?
An open-source testing toolkit operationalising eleven governance principles into technical and process tests. The output is a testing report readable by operators, auditors, and regulators.
Does Singapore's framework have legal force?
Indirectly, through sectoral regulators that reference it. MAS FEAT, the PDPC's AI guidance, and the Ministry of Health's frameworks all incorporate the MAGF vocabulary.
What is the Korean AI Basic Act?
Asia's first comprehensive AI statute. Tiered high-impact classification, user notification, risk management duties, extraterritorial reach. Effective January 2026 with phased implementation.
Can a single compliance programme work across the region?
Substantially yes for Singapore, Japan, Korea, and Australia. China requires a market-specific architecture.
References
- AI Basic Act (Republic of Korea, 2024).
- Basic Act on the Promotion of AI (Japan, 2025).
- Model AI Governance Framework (Singapore, IMDA, 2024 edition).
- AI Verify Foundation, AI Verify Toolkit.
- MAS FEAT Principles and Veritas Initiative.
- Interim Measures for the Administration of Generative AI Services (China, 2023).
- Algorithmic Recommendation Regulations (China, 2022).
- Deep Synthesis Regulations (China, 2023).
- METI AI Guidelines for Business (Japan, 2024).