For the last few years, states have been busy writing and amending privacy and AI laws. This year, for some states, those laws switch from theory to execution. Colorado’s high‑risk AI regime is coming online, Texas’ Responsible AI Governance Act is in effect, and New York’s synthetic performer disclosure rules kick in this summer.

At the same time, neural privacy is moving from law reviews to compliance obligations, and the federal government is still trying to slow states down with Trump‑era deregulatory moves. The result is a messy but very real regulatory environment that can’t be ignored if you build, buy, or deploy AI.


Colorado: High‑Risk AI Rules Turn On

Colorado’s AI Act (SB 24‑205) is the clearest example of policy decisions turning into obligations. The law takes a risk‑based approach similar to the EU AI Act and focuses on “high‑risk AI systems” used to make consequential decisions about people’s lives—employment, housing, credit, education, healthcare, and access to essential services.

Key milestones and effects:

  • The Act’s enforcement date has been adjusted, but the core obligations for high‑risk AI developers and deployers take hold in 2026, with duties tied to “reasonable care” to prevent algorithmic discrimination.
  • Developers must publish public statements describing the high‑risk systems they make available and how they manage known or reasonably foreseeable algorithmic risks.
  • Both developers and deployers must implement risk‑management programs, conduct impact assessments, notify consumers, and report to the Colorado Attorney General if a high‑risk system causes algorithmic discrimination.
  • Violations are treated as unfair trade practices under the Colorado Consumer Protection Act, with penalties that can reach $20,000 per violation, plus the usual reputational fallout.

If your AI touches Colorado residents in these high‑impact areas, you are in scope regardless of your physical location.
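To make those duties concrete, here’s a minimal sketch, in Python, of the inventory record a team might keep for each in‑scope system. The field names and gap checks are illustrative scaffolding, not language from the statute:

```python
from dataclasses import dataclass, field
from datetime import date

# Consequential-decision domains summarized above (illustrative labels)
CONSEQUENTIAL_DOMAINS = {
    "employment", "housing", "credit",
    "education", "healthcare", "essential_services",
}

@dataclass
class HighRiskAISystem:
    name: str
    role: str                                # "developer" or "deployer"
    decision_domain: str                     # which consequential decision it affects
    impact_assessment_date: date | None = None
    public_statement_url: str | None = None  # developer disclosure duty
    known_risks: list[str] = field(default_factory=list)

    def in_scope(self) -> bool:
        """In scope if the system helps make a consequential decision."""
        return self.decision_domain in CONSEQUENTIAL_DOMAINS

    def compliance_gaps(self) -> list[str]:
        """Rough gap check against the duties summarized above."""
        gaps = []
        if self.impact_assessment_date is None:
            gaps.append("no impact assessment on file")
        if self.role == "developer" and self.public_statement_url is None:
            gaps.append("no public statement describing the system")
        return gaps

screener = HighRiskAISystem("resume-screener-v2", role="deployer",
                            decision_domain="employment")
print(screener.in_scope(), screener.compliance_gaps())
# True ['no impact assessment on file']
```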


Texas: TRAIGA and SB 2610

Texas’ Responsible Artificial Intelligence Governance Act (TRAIGA) went into effect on January 1, 2026. Texas is now the second state, after Colorado, with a broad AI framework, and it combines two things: a governance structure and hard restrictions on certain uses.

What TRAIGA does:

  • TRAIGA establishes a baseline framework for AI development, deployment, and oversight in Texas, with obligations on government use and a list of prohibited private uses.
  • It bans development or deployment of AI systems for specific purposes, including behavioral manipulation, unlawful discrimination, generating or distributing child sexual abuse material and certain unlawful deepfakes, and uses that infringe constitutional rights (a screening sketch follows this list).
  • The law creates a Texas Artificial Intelligence Council and a regulatory sandbox that gives approved participants up to 36 months to test AI systems under relaxed enforcement, in exchange for detailed applications, impact assessments, and mitigation plans.
  • Enforcement sits with the Texas Attorney General, with meaningful civil penalties and structured opportunities to cure non‑compliance.
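To make that list operational, here’s a hedged sketch of a pre‑deployment screen a governance team might run against declared use cases. The category labels are paraphrases of the summary above, not statutory text:

```python
# Illustrative pre-deployment screen against TRAIGA's prohibited purposes.
PROHIBITED_PURPOSES = {
    "behavioral_manipulation",
    "unlawful_discrimination",
    "csam_generation_or_distribution",
    "unlawful_deepfakes",
    "constitutional_rights_infringement",
}

def screen_use_case(declared_purposes: set[str]) -> list[str]:
    """Return any declared purposes that fall into a banned category."""
    return sorted(declared_purposes & PROHIBITED_PURPOSES)

# Example: a marketing personalization system flagged for review
flags = screen_use_case({"ad_personalization", "behavioral_manipulation"})
if flags:
    print(f"Blocked pending legal review: {flags}")
```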

Texas SB 2610: Cybersecurity Safe Harbor for Small Businesses

Alongside TRAIGA, Texas added another piece to the stack that matters for AI‑heavy environments: SB 2610, a cybersecurity safe harbor for small businesses that took effect on September 1, 2025 and is fully in play in 2026.

SB 2610 does not regulate AI directly. It targets breach liability, but it creates strong incentives for smaller orgs to formalize security and governance around the data that AI systems consume and generate.

What SB 2610 actually does:

  • Who it covers
    SB 2610 applies to Texas businesses with fewer than 250 employees that own or license computerized data containing “sensitive personal information” under Texas law: things like SSNs, government IDs, financial account data with access codes, and certain health and medical payment information.
  • The safe harbor
    If a covered small business maintains a documented cybersecurity program that meets recognized industry standards (for example, NIST CSF, ISO/IEC 27001, or CIS Controls), the business is shielded from exemplary (punitive) damages in civil lawsuits arising from a breach of system security.
    It can still be hit for actual damages, but plaintiffs cannot stack punitive damages on top if the company can prove it had a qualifying program in place.
  • What the program must look like
    The law requires administrative, technical, and physical safeguards designed to:
    • Protect the security and integrity of personal identifying information and sensitive personal information.
    • Protect against threats or hazards to that information.
    • Protect against unauthorized access or acquisition that creates a material risk of identity theft or fraud.
    Requirements scale by size: micro‑businesses can meet simplified controls, mid‑size firms must align with CIS Controls IG1, and firms with 100–249 employees must fully conform to a recognized framework (a rough mapping is sketched below).
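Here’s a rough sketch of that tiering in code. The 100‑employee boundary comes from the tiers described above; the lower cutoff is an assumption for illustration, so check the statute before relying on it:

```python
# Illustrative mapping of SB 2610's size-scaled expectations; the 20-employee
# cutoff is an assumption for this sketch, not quoted from the statute.
def required_program_tier(employee_count: int) -> str:
    if employee_count >= 250:
        return "out of scope for the safe harbor"
    if employee_count >= 100:
        return "full conformance with a recognized framework (e.g., NIST CSF)"
    if employee_count >= 20:  # assumed mid-size cutoff
        return "alignment with CIS Controls Implementation Group 1"
    return "simplified baseline controls"

for n in (5, 50, 150, 300):
    print(n, "->", required_program_tier(n))
```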

Why is this important?

  • Most AI systems in small and mid‑sized Texas organizations are trained on or integrated with exactly the kinds of “sensitive personal information” SB 2610 is meant to protect.
  • SB 2610 pushes these orgs to implement real security governance (policies, technical controls, training, and framework alignment) that naturally extends to AI systems and data pipelines. An AI system that leaks or exposes PII will be evaluated against whether the company had an SB‑2610‑compliant program in place; a toy example of that kind of control follows this list.
  • In practice, it pairs with TRAIGA: TRAIGA tells you what AI behaviors you can’t engage in; SB 2610 tells small businesses how to harden their environment and what they get in return if something still goes wrong.
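As a toy illustration of the kind of control this pushes toward, here’s a minimal scrubber that redacts two of SB 2610’s sensitive data categories before text reaches a model or its logs. The patterns are deliberately simplistic; a qualifying program involves far more than regex filtering:

```python
import re

# Simplistic, illustrative patterns for two "sensitive personal information"
# categories (SSNs, financial account numbers). Real programs need much more.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{10,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely sensitive values before text is logged or sent to a model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact("Customer 123-45-6789 paid from account 4111111111111111."))
# Customer [REDACTED:ssn] paid from account [REDACTED:account_number].
```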

Because of Texas’ size and extraterritorial pull, TRAIGA’s behavior‑based bans, especially around biometrics, manipulation, and illegal content, are already influencing how vendors design and market AI systems nationwide. Meanwhile, SB 2610’s cybersecurity safe harbor for businesses under 250 employees is pushing those same organizations to formalize security and compliance programs around the personal data their AI systems process, in exchange for protection from punitive damages after a breach.


New York: Synthetic Performer Disclosures Start June 2026

New York spent last year passing AI rules aimed at what people actually see and feel: AI companions and synthetic performers. This year, those rules start to matter for advertisers and media producers.

2026 activation:

  • New York’s “synthetic performer” disclosure law amends General Business Law 396‑b and takes effect on June 9, 2026.
  • Anyone who creates or produces advertisements that use AI‑generated synthetic performers (digital entities made or altered by generative AI to simulate human performances) must clearly disclose that fact.
  • Failure to disclose can trigger civil penalties of $1,000 for a first violation and $5,000 for subsequent ones.

Combined with New York’s earlier moves on AI companions and digital replicas, this is turning “AI transparency” from a nice‑to‑have into a legally enforceable label requirement for commercial content.
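For teams shipping ad creative, one practical approach is to treat the disclosure as metadata that travels with the asset. A minimal sketch, with hypothetical field names (the statute prescribes the disclosure, not this schema):

```python
from dataclasses import dataclass

# Hypothetical asset metadata for the synthetic performer disclosure duty.
@dataclass
class AdAsset:
    title: str
    uses_synthetic_performer: bool
    disclosure_text: str = ""

    def ready_for_ny_distribution(self) -> bool:
        """An asset with an AI-generated performer needs a clear disclosure."""
        return (not self.uses_synthetic_performer) or bool(self.disclosure_text.strip())

spot = AdAsset("Spring promo", uses_synthetic_performer=True)
assert not spot.ready_for_ny_distribution()   # blocked: no disclosure yet
spot.disclosure_text = "This ad features an AI-generated performer."
assert spot.ready_for_ny_distribution()
```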


The Trump Executive Order and Ongoing Federal–State Conflict

Remember the failed 10‑year ban on state AI laws and Trump’s late‑2025 Executive Order trying to deter state AI regulation? By 2026, that conflict is no longer theoretical.

Here’s what it looks like now:

  • The 10‑year ban died in Congress, but its logic reappears in ongoing efforts to push a single national AI framework that would preempt “dozens of state AI laws.”
  • Trump’s Executive Order doesn’t have the power to erase state statutes, but it is being cited by opponents of state AI laws as a policy signal favoring minimal regulation and as support for preemption arguments in litigation and lobbying.
  • The White House rhetoric frames state AI rules as a drag on innovation and argues for one federal standard; states are ignoring that and refining their own laws anyway, as Colorado’s 2026 rulemaking and Texas’ activation of TRAIGA show.

This year, organizations have to plan for a world where the federal government is pushing deregulation while states like Colorado, Texas, and New York continue to harden their own frameworks. Betting on federal preemption to save you from state requirements is not realistic.


Operational Shifts for AI and Privacy Programs

Together, these changes highlight a shift that leaders in privacy, security, and AI governance now have to address directly.

  1. “High‑Risk AI” Is Now a Concrete Regulatory Category
    Colorado’s law and Texas’ broad framework treat high‑risk and consequential AI systems as something you must inventory, assess, and govern separately from generic analytics. The days of hiding decisioning models under “internal tools” are over.
  2. Behavior‑Focused AI Bans Are Now Real
    Texas now has explicit bans on specific AI behaviors—social scoring, unlawful biometric identification, manipulation, child sexual abuse material, and certain deepfakes. That’s a different enforcement posture than just “do a DPIA.”
  3. Disclosure Is Becoming a Default Expectation
    Colorado demands public statements for high‑risk systems; New York requires synthetic performer disclosures and AI‑interaction transparency in certain contexts. In 2026, failing to tell people when AI is involved is increasingly a legal risk.
  4. CPO/DPO Roles Now Implicitly Include AI Governance
    CPO and DPO responsibilities have expanded to cover AI, without corresponding budget or headcount. 2026 enforcement of Colorado, Texas, and New York‑style rules just adds to that pressure.

Hope you found this post helpful and informative. Thanks for stopping by!
