On Monday, California enacted the Transparency in Frontier Artificial Intelligence Act (TFAIA), legislation that state leaders say is designed to install “commonsense guardrails” surrounding development of the emerging technology.
It’s the first law of its kind in the nation, and it comes amid increasing debate over how much regulatory authority states should have over AI. The Trump administration has discouraged increased oversight, with President Donald Trump’s AI Action Plan calling for “a removal of onerous Federal regulations” in AI deployment. Gov. Gavin Newsom vetoed a “broader” version of the current law last year, Politico reported.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a statement regarding the new law. “This legislation strikes that balance.”
California is home to some of the largest AI developers in the country, including Google, Apple and Nvidia, and the state had the most AI job postings in 2024.
Here are five ways TFAIA, which goes into effect Jan. 1, will impact AI development in the state.
1. Increase transparency from large AI developers
Large “frontier” AI developers are required to publish on their websites how they incorporate AI standards and industry best practices into their AI frameworks, and to make their safety and security protocols publicly available.
2. Require safety incident reporting
If a frontier AI developer discovers a safety incident involving one of its models, TFAIA establishes a mechanism for reporting it to the state Office of Emergency Services. Incidents can range from AI-enabled cyberattacks to “deceptive techniques” observed in a model. “Critical” safety incidents must be reported within 15 days of discovery, or within 24 hours if the incident poses “an imminent risk of death or serious physical injury.”
3. Establish state protections for whistleblowers
The new law establishes protections for whistleblowers at AI companies, barring retaliation against workers who disclose safety concerns to the attorney general or other authorities.
4. Impose civil penalties for noncompliance
TFAIA opens the door to fines for frontier AI developers that fail to meet the new state requirements. The civil penalty will scale with the severity of the violation, up to a maximum of $1 million per violation.
5. Establish a public consortium for AI development
A public consortium within the Government Operations Agency — dubbed CalCompute — will be established with the goal of advancing “the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by fostering research and innovation.”
“As artificial intelligence continues its long journey of development, more frontier breakthroughs will occur,” said a group of AI academics who advised the state on the new law. “AI policy should continue emphasizing thoughtful scientific review and keeping America at the forefront of technology.”