On 1 February 2026, the Colorado AI Act took effect, making Colorado the first US state with a comprehensive AI statute in force. Senate Bill 24-205 imposes a duty of care on developers and deployers of high-risk AI systems, introduces a risk-management and impact-assessment regime, and gives the Colorado Attorney General exclusive enforcement authority. This article examines the deployer provisions of the Act as they now operate.

Key takeaways

  • SB 24-205 is the first comprehensive US state AI statute. It applies to developers and deployers of high-risk AI systems doing business in Colorado.
  • The core duty is a reasonable-care standard aimed at protecting consumers from algorithmic discrimination in consequential decisions.
  • A risk management programme aligned with NIST AI RMF or an equivalent nationally recognised framework creates a rebuttable presumption of reasonable care.
  • Deployers must complete an impact assessment annually and on any intentional material modification, maintain disclosures to consumers, and notify the Attorney General of discovered algorithmic discrimination within 90 days.
  • Enforcement sits exclusively with the Colorado Attorney General under the Colorado Consumer Protection Act. There is no private right of action.

Scope. What the statute covers.

The Colorado AI Act is codified at Colorado Revised Statutes section 6-1-1701 and following. It applies to two categories of actor. Developers are persons doing business in Colorado that develop or intentionally and substantially modify a high-risk AI system. Deployers are persons doing business in Colorado that deploy a high-risk AI system. A person does not need to be headquartered in Colorado to fall within the statute; doing business in the state is the threshold, following a standard familiar from Colorado's consumer-protection jurisprudence.

The term that does most of the work is high-risk AI system. Section 6-1-1701(10) defines it as any AI system that, when deployed, makes or is a substantial factor in making a consequential decision. Consequential decisions are defined in section 6-1-1701(3) and cover education enrolment and opportunity, employment and employment opportunity, financial or lending services, essential government services, healthcare services, housing, insurance, and legal services. The domain list is narrower than the EU Act's Annex III but overlaps heavily in the areas that matter most to deployers: hiring, credit, insurance, healthcare, and essential services.

A number of carve-outs apply. Systems intended to detect decision-making patterns rather than drive decisions, anti-fraud technology, anti-malware, basic productivity tools, calculator-class functions, cybersecurity applications, databases and data storage, internet search technology, and spam filters are excluded. The carve-outs are narrower than they appear. A generative model used to screen applicants or classify customer complaints does not fall into any of these categories and will typically be a high-risk system within the statutory meaning.
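
To make the triage concrete, the sketch below encodes the statutory test as a first-pass filter. The domain and carve-out labels paraphrase the statute as summarised above; the SystemProfile fields and the is_high_risk helper are illustrative assumptions, not statutory language, and any "not high-risk" result would still warrant legal review.

```python
from dataclasses import dataclass

# Consequential-decision domains paraphrased from C.R.S. § 6-1-1701(3).
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending", "essential_government",
    "healthcare", "housing", "insurance", "legal_services",
}

# Carve-out categories paraphrased from the statute; a system escapes the
# high-risk definition only if one of these is all it does.
CARVED_OUT = {
    "pattern_detection_only", "anti_fraud", "anti_malware", "productivity",
    "calculator", "cybersecurity", "database_or_storage", "internet_search",
    "spam_filter",
}

@dataclass
class SystemProfile:
    name: str
    decision_domain: str | None   # None if it feeds no consequential decision
    substantial_factor: bool      # does its output substantially drive the decision?
    sole_function: str | None     # a CARVED_OUT category, if that is all it does

def is_high_risk(s: SystemProfile) -> bool:
    """Rough first-pass triage against the statutory definition."""
    if s.sole_function in CARVED_OUT:
        return False
    return s.decision_domain in CONSEQUENTIAL_DOMAINS and s.substantial_factor

# The generative screening model from the text: no carve-out covers it, and it
# substantially drives an employment decision, so it triages as high-risk.
screener = SystemProfile("cv-screener", "employment", True, None)
assert is_high_risk(screener)
```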

The duty of care.

Section 6-1-1702 places the central obligation on developers. They must use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from intended and contracted uses of the high-risk system. Developers must supply deployers with documentation describing the system, the data used to develop it, the evaluation methods used, the known limitations, and how the system should be used and monitored.

Section 6-1-1703 places a parallel duty on deployers. A deployer of a high-risk system has the same obligation to use reasonable care in deployment to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. The duty is ongoing. It does not attach only at procurement and fall away after. It runs across the deployment lifecycle.

The definition of algorithmic discrimination in section 6-1-1701(1) captures any condition where the use of an AI system results in unlawful differential treatment or impact that disfavours individuals or groups on the basis of actual or perceived age, colour, disability, ethnicity, genetic information, limited English proficiency, national origin, race, religion, reproductive health, sex, veteran status, or any other classification protected by Colorado or federal law. The definition is protected-class-based and effects-based. A system that produces disparate outcomes on a protected axis, even without intent to discriminate, can trigger the duty.
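
The statute does not prescribe a disparate-impact metric. As one illustration of the kind of effects-based screen the definition invites, the sketch below applies the four-fifths rule familiar from US employment-selection guidance; the 0.8 threshold and the group figures are illustrative assumptions, not requirements of the Act.

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group receiving the favourable outcome."""
    return selected / total if total else 0.0

def adverse_impact_ratio(protected: tuple[int, int],
                         reference: tuple[int, int]) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(*protected) / selection_rate(*reference)

# Illustrative screening outcomes, given as (selected, total) per group:
# 48/120 = 0.40 for the protected group, 90/150 = 0.60 for the reference group.
ratio = adverse_impact_ratio(protected=(48, 120), reference=(90, 150))

# 0.40 / 0.60 = 0.667, below the conventional 0.8 screen, so the outcome
# would warrant investigation under an effects-based reading of the duty.
print(f"adverse impact ratio: {ratio:.3f}, flag: {ratio < 0.8}")
```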

The rebuttable presumption.

Section 6-1-1706 sets out the safe-harbour structure that has attracted most of the legal analysis. A deployer is presumed to have used reasonable care if it adopts and maintains a risk-management programme that satisfies the statutory criteria. The programme must be an iterative process, cover the entire lifecycle of the system, and be reasonable in light of guidance and standards issued by recognised bodies including the National Institute of Standards and Technology and the International Organization for Standardization. NIST's AI RMF 1.0 is the obvious candidate, with ISO/IEC 42001:2023 as a complementary or alternative reference.

The presumption is rebuttable. If the Attorney General can show that the programme was not actually implemented, or that a specific decision fell outside the programme's scope, the presumption falls away and the underlying duty of care is examined directly. For most deployers, the practical effect is to make the existence of a documented, maintained programme the first question a regulator will ask.

Reasonable care and the NIST framework. The statute does not mandate the NIST AI RMF by name; it requires a programme aligned with nationally recognised standards. NIST is the default choice because its framework is free, actively maintained, and widely referenced in federal guidance. A deployer that implements the framework in substance will satisfy the statutory threshold even if the programme is branded differently internally.
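
A minimal sketch of what a programme record keyed to the framework's four functions (Govern, Map, Measure, Manage) might look like in practice; the RiskProgramme structure and its artefact lists are illustrative assumptions, not a NIST or statutory schema.

```python
from dataclasses import dataclass, field

@dataclass
class RmfFunction:
    """One of the four NIST AI RMF 1.0 functions, with its evidence trail."""
    name: str
    artefacts: list[str] = field(default_factory=list)

@dataclass
class RiskProgramme:
    """Sketch of a programme record a deployer might maintain per system."""
    system: str
    functions: dict[str, RmfFunction]
    last_reviewed: str  # ISO date; the statute expects an iterative process

    def gaps(self) -> list[str]:
        """Functions with no documented evidence yet."""
        return [f.name for f in self.functions.values() if not f.artefacts]

programme = RiskProgramme(
    system="cv-screener",
    functions={n: RmfFunction(n) for n in ("Govern", "Map", "Measure", "Manage")},
    last_reviewed="2026-02-01",
)
programme.functions["Govern"].artefacts.append("AI governance policy v2")
print(programme.gaps())  # ['Map', 'Measure', 'Manage']: still undocumented
```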

Impact assessments.

Section 6-1-1703(3) requires deployers to complete an impact assessment for each high-risk AI system. The assessment is annual, and an additional assessment is required within 90 days of an intentional and substantial modification. The content mirrors the EU's fundamental rights impact assessment. It must include a statement of the purpose, the intended benefits and uses, the categories of data processed, the metrics used to evaluate performance, the known limitations, a description of transparency measures, and the post-deployment monitoring and safeguards.
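
A sketch of how a deployer might track the statutory content and the two refresh triggers; the field names and the next_due helper are illustrative assumptions, not the statute's wording.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImpactAssessment:
    """Fields tracking the statutory content summarised above."""
    system: str
    purpose: str
    intended_benefits_and_uses: str
    data_categories: list[str]
    performance_metrics: list[str]
    known_limitations: str
    transparency_measures: str
    post_deployment_monitoring: str
    completed_on: date

def next_due(last: ImpactAssessment, modified_on: date | None = None) -> date:
    """Annual refresh, or 90 days after an intentional and substantial
    modification, whichever comes first."""
    annual = last.completed_on + timedelta(days=365)
    if modified_on is not None:
        return min(annual, modified_on + timedelta(days=90))
    return annual

ia = ImpactAssessment("cv-screener", "screen applicants", "faster triage",
                      ["CV text"], ["selection-rate parity"],
                      "English-only training corpus", "candidate notice",
                      "quarterly outcome audit", completed_on=date(2026, 2, 1))
# A substantial modification on 1 June starts the shorter 90-day clock.
print(next_due(ia, modified_on=date(2026, 6, 1)))  # 2026-08-30
```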

Assessments must be retained for at least three years and made available to the Attorney General on request. A deployer that outsources the assessment to a third party remains responsible for its accuracy. The statute does not require publication, but the Attorney General's office has signalled that requests for assessments will be a routine first step in any investigation.

Disclosures and consumer rights.

Section 6-1-1703(4) requires deployers to make disclosures to consumers who are subject to a consequential decision made by or with substantial reliance on a high-risk system. The disclosure must identify that an AI system is in use, describe the purpose and nature of the consequential decision, and provide contact information. If the decision is adverse to the consumer, the deployer must explain the principal reasons for the adverse decision, describe the data processed in reaching it, and explain the opportunity to correct inaccurate data and to appeal.
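
A minimal sketch of the disclosure assembly, assuming a simple internal decision record; the Decision fields and the notice text are illustrative, and real notices would be drafted to counsel's specification.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    consumer: str
    purpose: str           # nature of the consequential decision
    adverse: bool
    principal_reasons: list[str]
    data_used: list[str]

def disclosure(d: Decision, contact: str) -> str:
    """Assemble the consumer-facing notice from the statutory elements."""
    lines = [
        "An AI system was used in this decision.",
        f"Purpose: {d.purpose}",
        f"Contact: {contact}",
    ]
    if d.adverse:  # adverse decisions carry the extra explanation duties
        lines += [
            "Principal reasons: " + "; ".join(d.principal_reasons),
            "Data considered: " + ", ".join(d.data_used),
            "You may correct inaccurate data and appeal this decision.",
        ]
    return "\n".join(lines)

print(disclosure(
    Decision("applicant-17", "employment screening", True,
             ["insufficient recent experience"], ["CV text", "assessment score"]),
    contact="ai-queries@example.com",
))
```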

The disclosure requirements read as a practical adaptation of rights made familiar by a decade of data protection law, now attached to AI-specific decisions rather than general data processing. Deployers already subject to the Colorado Privacy Act or the Gramm-Leach-Bliley Act will find much of the infrastructure in place. The gap is usually the explanation of principal reasons: most existing adverse-action procedures do not produce explanations at the granularity an AI-driven decision requires.

The algorithmic discrimination notification.

Section 6-1-1703(7) creates a duty to notify. Where a deployer discovers that a high-risk system has caused algorithmic discrimination, it must notify the Attorney General within 90 days. The notification is not voluntary. It is a statutory duty that attaches on discovery and runs regardless of whether the discrimination was intentional.

Discovery is a factual question. The statute expects deployers to have a monitoring process capable of detecting the discrimination in the first place. A deployer that chooses not to monitor is not insulated from the duty; it is simply in a weaker position when the Attorney General asks how the discovery was made. The interaction of the 90-day duty with the reasonable-care standard is the most consequential enforcement mechanism in the statute. Deployers that maintain documented monitoring, identify issues quickly, notify promptly, and correct efficiently will be in substantially better positions than those that do not.
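
The deadline arithmetic is simple but worth wiring into the monitoring process so that an alert starts the clock automatically. A sketch, assuming discovery dates are logged; the helper names are illustrative.

```python
from datetime import date, timedelta

NOTIFICATION_WINDOW = timedelta(days=90)  # section 6-1-1703(7)

def notification_deadline(discovered_on: date) -> date:
    """The duty attaches on discovery; the clock runs from that date."""
    return discovered_on + NOTIFICATION_WINDOW

def days_remaining(discovered_on: date, today: date) -> int:
    """Days left before the Attorney General must be notified."""
    return (notification_deadline(discovered_on) - today).days

# A monitoring alert confirmed on 1 March 2026 sets a deadline of 30 May 2026.
print(notification_deadline(date(2026, 3, 1)))        # 2026-05-30
print(days_remaining(date(2026, 3, 1), date(2026, 4, 1)))  # 59
```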

Enforcement.

The Attorney General holds exclusive enforcement authority under section 6-1-1708. There is no private right of action. Violations are treated as unfair trade practices under the Colorado Consumer Protection Act and carry the civil penalties attached to that regime, up to USD 20,000 per violation and up to USD 50,000 for violations committed against elderly persons. The absence of a private right has limited the near-term litigation exposure but has concentrated attention on how the Attorney General's office intends to sequence investigations and settlements.

The Attorney General's office has indicated that it will use its guidance authority to translate the statutory criteria into operational expectations. Early indications suggest that the office will prioritise clear cases of algorithmic discrimination in protected-class contexts, consumer complaints with demonstrable adverse impact, and sectors where existing federal enforcement is absent.

Comparison with the EU AI Act.

A deployer subject to both statutes faces two partially overlapping regimes. The Colorado Act is shorter, narrower in domain scope, and focused on algorithmic discrimination in consequential decisions. The EU AI Act is longer, broader, imposes more detailed procedural duties on deployers, and covers a wider range of use cases. Both require risk management, impact assessments, transparency to affected persons, monitoring, and incident response. A deployer that builds to the EU operator file will satisfy most Colorado requirements. The reverse is not true. Colorado's file is necessary but not sufficient for EU compliance.

For organisations deploying across both jurisdictions, the rational strategy is to design a single programme that meets the higher standard and map outputs to each regime's specific documentation. The core documents (risk record, oversight register, impact assessment, monitoring plan, incident protocol) are common. The jurisdiction-specific layers are the disclosures, the notification duties, and the regulator-facing submissions.

Practical preparation.

For a deployer active in Colorado after 1 February 2026, five documents form the defensible baseline. First, a risk-management programme statement aligned with the NIST AI RMF, documenting the four functions (Govern, Map, Measure, Manage). Second, an inventory of high-risk AI systems in use, with a brief description of each consequential decision they support; a minimal sketch of such an inventory follows below. Third, impact assessments for each inventoried system, on file and refreshed annually. Fourth, a consumer-disclosure procedure integrated with the adverse-decision workflow. Fifth, a monitoring and notification playbook that identifies the triggers for the 90-day duty and the evidence-preservation steps to follow.
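
As promised above, a sketch of the inventory (document two); the entry fields are assumptions about what a regulator-ready register might record, not statutory requirements.

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """One row of the high-risk system inventory."""
    system: str
    consequential_decision: str      # the decision the system supports
    owner: str                       # accountable business function
    impact_assessment_on_file: bool
    last_assessment: str             # ISO date, empty if none yet

inventory = [
    InventoryEntry("cv-screener", "employment screening", "HR", True, "2026-01-15"),
    InventoryEntry("credit-scorer", "lending decision", "Risk", False, ""),
]

# Entries without a current assessment are the first gap to close.
print([e.system for e in inventory if not e.impact_assessment_on_file])
```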

These documents do not constitute compliance on their own. They are the file the Attorney General will request first. A deployer holding them can show that it engaged with the statute in substance. A deployer without them is in a procedurally weaker position from the opening of any inquiry.

Related reading

For the federal framework most likely to satisfy the Colorado safe harbour, see NIST AI RMF and the emerging US standard of reasonable care. For the cross-jurisdictional comparison, see US, EU, UK: three approaches to the same question. For the EU counterpart operator duty set, see the operator provisions of the EU AI Act.

Frequently asked questions

When does the Colorado AI Act take effect?

The Act takes effect on 1 February 2026. Developers and deployers of high-risk AI systems must be in substantial compliance on that date.

Who counts as a deployer?

A person doing business in Colorado that deploys a high-risk AI system. Small businesses with fewer than 50 full-time-equivalent employees receive partial accommodations but are not exempt.

What is the duty of care?

A reasonable-care duty to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination in consequential decisions.

Is there a private right of action?

No. The Colorado Attorney General has exclusive enforcement authority under the Colorado Consumer Protection Act.

How does the statute interact with the EU AI Act?

A deployer subject to both faces overlapping regimes. The EU file is broader and generally sufficient for Colorado; the Colorado file alone does not meet EU requirements.

References

  1. SB 24-205, Colorado Consumer Protections for Artificial Intelligence (2024).
  2. C.R.S. § 6-1-1701 et seq., Artificial Intelligence Article.
  3. C.R.S. § 6-1-1701(3), Consequential decision.
  4. C.R.S. § 6-1-1701(10), High-risk artificial intelligence system.
  5. C.R.S. § 6-1-1702, Developer duty.
  6. C.R.S. § 6-1-1703, Deployer duty.
  7. C.R.S. § 6-1-1706, Presumption of reasonable care.
  8. C.R.S. § 6-1-1708, Enforcement.
  9. NIST AI Risk Management Framework 1.0 (January 2023).
  10. ISO/IEC 42001:2023, Artificial intelligence management system.