For years, California’s privacy regime—anchored in the California Consumer Privacy Act (CCPA), Cal. Civ. Code § 1798.100 et seq., and expanded by the California Privacy Rights Act (CPRA)—was understood as the most aggressive consumer privacy framework in the United States. It gave individuals rights over their data and imposed obligations on businesses to disclose how that data was used.
On January 1, 2026, it became something more consequential.
It is now a comprehensive governance regime for data-driven systems, blending privacy, cybersecurity, and artificial intelligence into a single regulatory architecture.
The relevant regulations—formally adopted by the California Privacy Protection Agency (CPPA) in September 2025—include the cybersecurity audit, risk assessment, and automated decision-making provisions, and the full regulatory text is published on the CPPA's website.
What the California Law Has Become
The CCPA/CPRA framework still exists. It still provides consumer rights—access, deletion, correction, and opt-out—and applies to businesses meeting statutory thresholds (generally $25 million in annual revenue or large-scale data processing).
But the 2026 regulations fundamentally change the character of the law.
They introduce three interlocking requirements:
1. Formal risk assessments for high-risk data processing.
2. Regulation of automated decision-making technology (ADMT).
3. Mandatory cybersecurity audits.
These regulations took effect January 1, 2026.
They do not merely add obligations. They redefine compliance. As one analysis put it, they represent a shift away from “notice-and-choice privacy toward… operationalized governance.”
What the Law Now Does
The central function of the California regime is no longer disclosure. It is risk justification.
Beginning January 1, 2026, businesses must conduct risk assessments when processing activities present a “significant risk” to consumers. These assessments must evaluate the benefits of the processing against the risks to consumers and must be documented, maintained, and ultimately attested to regulators.
At the same time, the law regulates automated decision-making. If a company uses algorithms or AI to make “significant decisions”—for example, in employment, credit, healthcare, or housing—it must provide notice, transparency, and, in some cases, opt-out or human review rights.
Overlaying both is a requirement for cybersecurity audits. Businesses whose data processing presents a significant risk must conduct independent, objective audits of their security programs and certify completion annually to the CPPA.
The timing of these obligations is phased:
January 1, 2026: Regulations effective; risk assessment obligations begin.
January 1, 2027: ADMT requirements apply.
December 31, 2027: Deadline for completing assessments for legacy processing.
April 1, 2028, and beyond: Audit certifications and reporting begin.
This is not incremental regulation. It is structural.
Who It Regulates
Formally, the law applies to “businesses” under the CCPA—entities that meet thresholds related to revenue or data volume and that do business in California.
Practically, it applies to any organization that:
Processes personal data at scale,
Uses algorithms or AI to make meaningful decisions, or
Operates systems where data misuse or compromise creates risk.
This includes not just technology companies, but retailers, financial institutions, healthcare providers, SaaS platforms, insurers, and increasingly any company whose operations depend on data.
The key shift is that what triggers regulation is no longer just data collection—it is risk and decision-making.
The Business Impact: Compliance Becomes Operational
The most important effect of the 2026 regulations is organizational.
Under the old model, privacy was largely a legal function and cybersecurity an IT function. AI, if addressed at all, was a product issue.
That separation no longer works.
Risk assessments require understanding how systems function, what data they use, and what harms they may create. Cybersecurity audits require demonstrable, measurable controls aligned with those risks. ADMT regulation requires insight into how decisions are made and whether they are fair and contestable.
This forces integration.
Legal, engineering, product, data science, and security must operate together. Compliance is no longer a layer on top of operations. It is embedded within them.
The law effectively creates a new internal discipline: system governance.
How You Actually Comply
Compliance under the California regime is not a checklist exercise. It is an infrastructure exercise.
A company must first understand its systems—what data is collected, how it flows, where it is used, and how decisions are made. For many organizations, this is the most difficult step because data ecosystems are fragmented and poorly documented.
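To make that first step concrete, the sketch below shows one way a machine-readable data inventory might be structured. It is purely illustrative—the regulations do not prescribe a schema, and every name in it (ProcessingActivity, its fields, the sample entry) is hypothetical.

```python
from dataclasses import dataclass

# Illustrative only: the field names are hypothetical, not taken from
# the CPPA regulations. The point is that each processing activity
# becomes a discrete, documented record rather than tribal knowledge.
@dataclass
class ProcessingActivity:
    name: str                   # e.g., "applicant screening"
    data_categories: list[str]  # what personal data is involved
    sources: list[str]          # where the data enters the system
    recipients: list[str]       # internal systems and third parties
    purposes: list[str]         # why the data is processed
    uses_admt: bool             # does it feed automated decision-making?
    retention_days: int | None  # how long the data is kept

inventory = [
    ProcessingActivity(
        name="applicant screening",
        data_categories=["employment history", "education"],
        sources=["careers portal"],
        recipients=["HR platform", "screening vendor"],
        purposes=["evaluate job applicants"],
        uses_admt=True,
        retention_days=730,
    ),
]
```

Once an inventory like this exists, the question of which activities need a formal assessment stops being guesswork and becomes a query.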
Once those systems are understood, the company must determine which activities create “significant risk.” That determination triggers the requirement to conduct a formal risk assessment—either before initiating the activity or, for legacy systems, by the end of 2027.
That assessment must weigh benefits against risks, document mitigation strategies, and justify the decision to proceed.
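Here, again purely as illustration, is what that documented record might look like in code, along with a crude stand-in for the "significant risk" trigger. The regulations define both the required contents and the trigger in far more detail; nothing below is the regulatory test.

```python
from dataclasses import dataclass

# Hypothetical record shape for a documented risk assessment; the
# regulations enumerate the required contents in far more detail.
@dataclass
class RiskAssessment:
    activity: str
    benefits: list[str]      # claimed benefits of the processing
    risks: list[str]         # potential harms to consumers
    mitigations: list[str]   # safeguards reducing those harms
    proceed: bool            # the documented go/no-go decision
    rationale: str           # why residual risk is justified

def requires_assessment(uses_admt: bool, data_categories: set[str]) -> bool:
    """Crude illustrative trigger, not the regulatory test.

    "Significant risk" under the regulations turns on an enumerated
    set of activities; this placeholder flags only ADMT use and a
    few sensitive data categories.
    """
    sensitive = {"health", "biometrics", "precise geolocation"}
    return uses_admt or bool(sensitive & data_categories)
```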
Simultaneously, the company must evaluate its cybersecurity posture through formal audits. These audits must be independent, objective, and capable of being certified to regulators.
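The certification, not the audit itself, is what the CPPA will eventually see, so the evidence trail has to survive in durable form. A hypothetical record—its fields are invented here, and the agency prescribes the actual contents and filing mechanics—might look like this:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical certification record. The CPPA prescribes the actual
# contents and filing mechanics; this only sketches the evidence
# trail a company should be able to produce on demand.
@dataclass
class AuditCertification:
    period_start: date
    period_end: date
    auditor: str          # must be independent and objective
    scope: list[str]      # systems and controls covered
    open_findings: int    # unresolved gaps at certification time
    certified_on: date

cert = AuditCertification(
    period_start=date(2027, 1, 1),
    period_end=date(2027, 12, 31),
    auditor="External Audit LLP",  # illustrative name
    scope=["access controls", "encryption", "incident response"],
    open_findings=2,
    certified_on=date(2028, 3, 15),
)
```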
If automated decision-making is used, the company must build transparency and control mechanisms into those systems—notice, opt-out where required, and meaningful human review where mandated.
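In engineering terms, that means the decision pipeline itself must support branches that many systems were never designed for. The sketch below is illustrative control flow only—the regulatory duties are procedural, not an API—but it shows the routing a compliant ADMT pipeline has to accommodate:

```python
from enum import Enum, auto

class Route(Enum):
    AUTOMATED = auto()          # algorithm decides; log for contestability
    HUMAN_REVIEW = auto()       # a person makes the final call
    NON_ADMT_FALLBACK = auto()  # consumer opted out of automation

# Illustrative only. The regulatory duties (pre-use notice, opt-out,
# human review) are procedural; the point is that a compliant
# pipeline must support these branches by design.
def route_decision(opted_out: bool, wants_human_review: bool) -> Route:
    if opted_out:
        return Route.NON_ADMT_FALLBACK
    if wants_human_review:
        return Route.HUMAN_REVIEW
    return Route.AUTOMATED
```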
And critically, all of this must be documented in a way that can withstand regulatory scrutiny.
Compliance is no longer about what policies say. It is about what systems do—and whether that can be proven.
What This Reflects From a Policy Standpoint
The California framework reflects a series of policy conclusions.
First, regulators have concluded that transparency is insufficient. Consumers cannot meaningfully evaluate complex data systems, no matter how detailed the disclosure.
Second, they recognize that harm is driven by systems, not just data collection. The use of data in automated decision-making creates risks that cannot be addressed through notice alone.
Third, they have concluded that cybersecurity and privacy are inseparable. A system that is insecure is not merely vulnerable; it is non-compliant.
Fourth, they are moving toward ex ante regulation—requiring companies to assess and mitigate risk before deploying systems, rather than penalizing harm after the fact.
The result is a shift from a rights-based model to a risk-based governance model.
The Bottom Line
California’s law is no longer just a privacy statute. It is a regulatory framework for how modern digital systems are designed, deployed, and governed.
It requires companies to understand their systems, assess the risks those systems create, secure them appropriately, and justify their decisions to regulators.
That is a fundamentally different obligation than publishing a privacy policy or responding to consumer requests.
And it carries a broader implication.
What California has done—merging privacy, cybersecurity, and AI into a unified compliance regime—is likely to be replicated elsewhere. For now, it is a state law.
In practice, it is already becoming a national standard.
And it delivers a simple message: If your business runs on data, compliance is no longer about what you disclose.
It is about how your systems behave—and whether you can defend that behavior under scrutiny.