New State Laws Impact AI Governance, Risk, and Compliance

3 Minute Read

New York has started a movement to reshape the AI compliance landscape for companies doing business in the state. Other states are following suit, making AI governance and compliance an increasingly critical endeavor.

What’s Happening?

In June 2025, New York’s legislature passed the Responsible AI Safety & Education (RAISE) Act, which targets “frontier” AI models with new transparency and safety requirements. This is not theoretical regulation; it is enforceable law that will affect how companies of all sizes use AI tools.

The RAISE Act mandates detailed safety documentation, public transparency about AI capabilities and limitations, and comprehensive developer safety plans for high-risk systems. Several companion bills add requirements for visibility into training data, developer disclosures, and impact assessments for workforce and consumer-facing applications.

Once signed into law, the RAISE Act would make New York one of the first U.S. states to establish enforceable obligations around advanced AI, including oversight mechanisms, audit requirements, and civil liability provisions.

New York is not alone; more states are joining in. The AI regulatory environment will change dramatically in 2026, and companies large and small need to understand what’s coming.

All 50 States Are Moving on AI

According to the National Conference of State Legislatures, every state, plus Washington, D.C., Puerto Rico, and the U.S. Virgin Islands, introduced AI-related legislation in the 2025 session. This is not just talk. States are moving beyond advisory task forces and passing laws with real enforcement power.

Texas enacted the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) on June 22, 2025. While it focuses on government use, it establishes frameworks that private-sector vendors will need to meet. California and Colorado are passing similar measures that create obligations around disclosures, guardrails, and transparency for private companies.

So, will the federal government override the states? Probably not. In a critical move, the U.S. Senate recently removed a proposed 10-year moratorium on state and local AI regulation from a major spending bill. This means states retain broad authority to regulate AI, and the federal government will likely not pre-empt them. The patchwork of state laws may soon be the long-term reality, which will be a headache for compliance teams.

Meanwhile, the federal government is accelerating its own efforts. America’s AI Action Plan, released in July 2025, outlines more than 90 policy actions across innovation, infrastructure, export controls, and national security. Executive Orders now emphasize procurement standards, bias controls, ideological neutrality, and AI infrastructure development.

Companies working with large models, global markets, or federal contracts should expect layered obligations at both state and federal levels.

Additionally, regulation is becoming more targeted. New York’s proposed “AI Companion” bill addresses systems that simulate human-like relationships, requiring them to detect self-harm risks and clearly identify themselves as AI. Health-related AI systems are also facing new rules at the state level as regulators move faster than federal agencies like the FDA.

Why Companies Should Care

AI governance is not just for companies that build AI systems. It affects every company that uses them, through vendor relationships, overlapping compliance obligations across states, and insurance requirements.

If your vendors use AI in services they provide to you, you may need to collect disclosures, maintain logs, and verify safety practices, especially in states with new regulations. This means procurement contracts and vendor due diligence processes must be updated now.
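
To make that concrete, here is a minimal sketch of what a structured vendor AI-disclosure record might look like if you track it in code rather than a spreadsheet. The field names are illustrative assumptions, not language from the RAISE Act or any other statute, so adapt them with your legal and procurement teams.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative sketch of a vendor AI-disclosure record for procurement files.
# Field names are assumptions for discussion, not statutory language.
@dataclass
class VendorAIDisclosure:
    vendor_name: str
    service: str                       # what the vendor provides to you
    uses_ai: bool                      # does AI appear anywhere in that service?
    model_description: str             # e.g., "hosted LLM that drafts reply suggestions"
    data_shared_with_vendor: List[str] = field(default_factory=list)
    safety_testing_summary: str = ""   # vendor's own description of safety/bias testing
    incident_reporting_contact: str = ""
    last_reviewed: Optional[date] = None

# Example record captured during a vendor review (all values are fictional).
acme = VendorAIDisclosure(
    vendor_name="Acme Support Co.",
    service="customer support platform",
    uses_ai=True,
    model_description="hosted LLM used to draft reply suggestions",
    data_shared_with_vendor=["customer emails", "ticket metadata"],
    safety_testing_summary="vendor attests to quarterly red-team reviews",
    incident_reporting_contact="security@acme.example",
    last_reviewed=date(2025, 11, 1),
)
```

Even a simple record like this gives you something to attach to contract renewals and to hand to auditors when disclosure questions arrive.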

Companies with customers or employees in multiple states will face different obligations depending on location. Each state may require unique disclosures, audits, or vendor standards.

Because there is no federal pre-emption, state laws will multiply faster than federal regulation. Businesses must plan for a complex, overlapping compliance environment.

Insurers are already asking about AI governance during renewals, and plaintiff attorneys are preparing AI-related claims. The RAISE Act’s emphasis on safety and transparency sends a clear message: companies without documented AI governance may face higher premiums or denied coverage.

Governance Investment Is Cheaper Than Crisis Cost

Establishing AI inventories, assessments, and documentation today costs far less than retrofitting them after an incident or regulatory audit. For low-maturity or low-budget organizations, starting small provides a competitive advantage and prepares them for inevitable compliance demands.
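
For teams starting from zero, a minimal inventory can be as simple as a structured file. The sketch below assumes a plain JSON file and made-up field names; it is a starting point under those assumptions, not a compliance checklist drawn from any particular law.

```python
import json
from datetime import date

# A minimal internal AI-use inventory kept as a plain JSON file, so small teams
# can start without buying new tooling. Fields are illustrative assumptions.
inventory = [
    {
        "system": "marketing-copy-assistant",          # internal name (fictional)
        "owner": "marketing",                          # accountable team
        "purpose": "drafts campaign copy for human review",
        "data_categories": ["product descriptions"],
        "vendor_or_internal": "vendor",
        "risk_notes": "no personal data; a human approves all output",
        "last_assessed": date(2025, 12, 1).isoformat(),
    },
]

with open("ai_inventory.json", "w") as f:
    json.dump(inventory, f, indent=2)
```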

If your company operates across multiple states, here’s what to expect:

  • Documentation requirements will grow. Regulators and auditors will request proof of how AI is used and managed.
  • Vendor diligence must include clauses requiring transparency about data sources, safety testing, and incident reporting.
  • Frontier models will face additional oversight for explainability, safeguards, and audits.
  • Operational controls will become essential. You’ll need logs, access controls, and data inventories to show accountability (see the sketch after this list).
  • Federal obligations may add layers, especially for organizations tied to government contracts or exports.
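
As one example of what “logs to show accountability” can mean in practice, the sketch below appends a structured audit event for each AI interaction. The event fields, logger name, and file name are assumptions for illustration, not a standard or a statutory requirement.

```python
import json
import logging
from datetime import datetime, timezone
from typing import List

# One possible shape for an auditable AI-usage log: each interaction becomes a
# structured event recording who used which system, for what, and on what data.
logging.basicConfig(filename="ai_usage_audit.log", level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_usage_audit")

def log_ai_usage(user: str, system: str, purpose: str, data_categories: List[str]) -> None:
    """Append a single structured audit event for one AI interaction."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "system": system,
        "purpose": purpose,
        "data_categories": data_categories,
    }
    logger.info(json.dumps(event))

# Example call (fictional values).
log_ai_usage(
    user="jdoe",
    system="contract-review-assistant",
    purpose="summarize a vendor MSA for legal review",
    data_categories=["contract text"],
)
```

An append-only log like this gives auditors and insurers something concrete to review, and it can later feed a proper logging pipeline as your program matures.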

The Board conversation is simple: “We must implement an auditable AI governance framework to meet emerging state and federal requirements. This is an operational control issue that protects valuation and enables innovation. The cost to start is minimal, but the cost to retrofit later or face enforcement could be substantial.”

Proper AI governance supports innovation. It does not have to slow it.

When employees know the rules, they can use AI tools with confidence for automation, reporting, and knowledge capture. Governance reduces risks from privacy breaches and non-compliance while improving accountability.

Well-governed companies also gain competitive benefits such as lower insurance hurdles, smoother vendor onboarding, and stronger due diligence in M&A or investment reviews.

The Bottom Line

AI governance is no longer optional. It is the foundation for safe, scalable innovation. New York’s RAISE Act is just the start. With every state (and many international efforts) introducing AI legislation and federal actions underway, the compliance patchwork is here to stay. Companies that act now will operate with confidence as regulations expand. Those that wait will scramble to retrofit, renegotiate, and defend their practices under pressure.

The organizations that will thrive are not the ones that avoid AI. They are the ones that embrace it responsibly, with governance that grows alongside their innovation. The time to start was yesterday, and getting started is easier than you think. How behind are you?


Source: https://levelblue.com/blogs/levelblue-blog/new-state-laws-impact-ai-governance-risk-and-compliance/