With Gov. Gavin Newsom’s signature Monday, California approved “first-in-the-nation” legislation aimed at limiting the risk of accidents, cybercrimes and other catastrophic outcomes of artificial intelligence.

But a parallel effort pending approval by New York Gov. Kathy Hochul may have better withstood the power of intense lobbying by giant AI companies, which have spun up a nationwide effort to limit regulation.

Both bills include compromises, some made through 11th-hour amendments removing the powerful liability, transparency and financial remedies their sponsors originally proposed.

Elected leaders in California weakened key provisions, changes that critics said left loopholes undermining the state’s ability to police the safety of current and emerging AI technologies.

New York lawmakers largely preserved stiff fines and demanded more transparency, while widening the net to cover not just the cutting-edge chatbots but also derived spin-offs that can be just as risky.

Key issues in the California law, which the Legislature sent to the governor two weeks ago, include:

  • Hacking of code: California lawmakers weakened a provision requiring AI companies to report hacking incidents, so that reporting is mandatory only when physical harm occurs. The New York bill mandates reporting even when there is only a potential for serious harm.
  • Trade secrets: The California law lets AI companies conceal sensitive information in safety plans, while New York’s bill allows officials to inspect even redacted sections except those protected by federal law.
  • Fines slashed: California reduced — by an order of magnitude — maximum fines for events that injure or kill 50 people, or cause at least $1 billion in damage. Penalties now stand at $1 million per event, down from $10 million in the original draft. New York imposes penalties up to $30 million for repeat offenses. 

The California law was the state’s second attempt to hold AI companies accountable.

This time last year, Newsom vetoed a bill that would have enacted stricter transparency and liability rules, empowering the state attorney general to investigate violations. The veto followed ferocious pushback by industry, with 165 companies and organizations registering opposition.

This summer the legislation’s author, state Sen. Scott Wiener, a San Francisco Democrat, tried again. Senate Bill 53 was at its strongest when it was referred to the Assembly Committee on Appropriations in July, along with 260 other bills, in the rushed final month of the legislative session.

In a twice-annual gatekeeping ritual, the committee secretly deliberates whether bills will go to the full Assembly for debate or be held indefinitely in the “suspense file.”

The AI bill emerged from committee on Aug. 29 with an amendment that weakened it. By Sept. 5 it had been watered down again, with dozens of deletions and additions, before reaching the Assembly floor. The changes resulted from discussions involving the governor’s office, tech companies, advocacy groups, safety experts and academics, said Erik Mebust, Wiener’s communications director.

In a Sept. 17 interview, Wiener said he could not recall many of the details in the wake of the end-of-session frenzy. The San Francisco Public Press asked his staff about the reasons for the amendments and who was involved in making individual line edits of the bill. A spokesperson replied by email: “Unfortunately we are unable to answer these questions.”

Wiener put a positive spin on the bill’s final form, touting his achievements in a press release: lowered fines and reduced enforcement powers.

Mebust said San Francisco-based Anthropic, one of the world’s largest AI firms, provided technical assistance, and Wiener held discussions with TechNet, a trade group representing the interests of large companies including AI innovators.

Some significant reforms survived. The final version of SB 53 mandates that the biggest AI companies publish their safety plans online. The bill also requires firms to report critical safety incidents to the California Office of Emergency Services.

On Sept. 12, the Assembly passed the bill, 57-7. The next day, the Senate passed it, 29-8.

In his signing statement, Newsom said the bill demonstrated California’s leadership role on AI policy by providing meaningful oversight and ensuring public safety.

“In enacting this law,” he wrote, “we are once again demonstrating our leadership, by protecting our residents today while pressing the federal government to act on national standards. The future happens here first.”

Hacking of code: no harm, no need to report

While the July draft of California’s legislation required AI companies to divulge incidents of intrusion involving theft or unauthorized modification of core code that could lead to safety risks, a small change made on Sept. 5 required reporting only if such events resulted in injury or death.

The secret sauce driving advanced chatbots is “model weights.” Large language models like ChatGPT, Claude, Gemini or Grok acquire the humanlike ability to converse and display expertise on many topics by reading and encoding vast troves of data, mostly scraped from the internet. Safety rules are added to that training to prevent chatbots from revealing dangerous information, such as step-by-step instructions for making a virus that a terrorist could use to unleash a deadly pandemic. Modifying the model weights can remove safety constraints.

Thomas Woodside, a machine learning expert at the Secure AI Project and co-sponsor of California’s bill, said theft of model weights could enable catastrophic outcomes.

“The model is trained to have safety guardrails,” Woodside said, adding that if core code is stolen, “you probably still have a model that has some safeguards on it. But if you can fully control it, then you can train those away.”

Alexandra Tsalidis, an AI policy researcher at the Future of Life Institute, a think tank that studies “existential risk” to civilization, said New York’s Responsible AI Safety and Education, or RAISE Act, closed that safety loophole, while California’s bill fell short.

“Exfiltration of model weights should be a huge, huge red flag for authorities and for the public, because it makes the entire model insecure,” Tsalidis said. “And that’s what causes the injuries. That’s what will cause the creation of a bioweapon. That’s what could be a signal of a loss-of-control incident as well.”

In New York, safety incidents are reportable if they occur “in such a way that it provides demonstrable evidence of an increased risk of critical harm.” The bill anticipates a variety of scary scenarios, including models acting autonomously without a user’s request, theft or “escape” of a model, the loss of human control or other unauthorized use.

“This would certainly include actual critical harm,” said Bill Richling, communications director for New York State Sen. Andrew Gounardes, the bill’s author, “but it could also include events that haven’t yet led to harm.”

Trade secrets: ‘trust but verify’ — or just ‘trust’?

When Newsom vetoed Wiener’s previous, tougher bill, SB 1047, he commissioned leading AI experts to write a report guiding California’s AI policies. The Public Press previously reported that Wiener crafted SB 53 in response to that guidance. One high-level recommendation was to balance demands for transparency for the sake of public safety with flexible rules encouraging innovation, which they summarized as “an ethos of ‘trust but verify.’”

SB 53 allows AI firms to redact trade secrets from published safety plans, as long as they provide a reason. It does not grant the government the authority to review the redactions. Last year’s bill gave the state attorney general access to unredacted plans. But the statewide Chamber of Commerce pushed back against the process, saying no disclosures at all would be better than redactions that needed explanation.

Richling said New York’s bill pierces the corporate veil of secrecy, granting access to the state’s attorney general and Division of Homeland Security and Emergency Services.

Fines slashed: bargaining over cost of lives

Proposed fines for safety violations in California’s bill pale next to the value of AI companies and their yearly earnings. The corporations SB 53 targets have gross annual revenues in the billions of dollars.

The combined market value of the five Bay Area companies covered by SB 53 is nearly $5.7 trillion. Only Anthropic, OpenAI, Google, Meta and xAI are subject to any regulation, based on the bill’s high minimum thresholds for both annual revenue and computing resources used in training their chatbots. 

Clearly they have the wherewithal to pay significant fines, which is why critics are surprised at the liability limitations. The $1 million maximum penalty for causing a mass-casualty event — defined as one that injures or kills 50 people — pencils out to $20,000 per life.

Prior to the Sept. 5 amendment, the number of fatalities was set at 100. In a committee hearing, Assembly member Rebecca Bauer-Kahan questioned Wiener about that. “I guess we are both Jewish and we believe that every life is sacred,” she said. “I don’t know where you’d get a hundred? That seems like a lot. So maybe that’s a conversation we can continue.”

Mebust said officials had to make AI companies understand the consequences of large-scale safety events, recalling, “We had to pick a number.”

Tsalidis said tech companies operate in a “very transactional” way, so even a low fine sends a message: “I think if there’s no financial penalty, then there’s absolutely no reason to comply.”

Other provisions ported over from last year’s bill disappeared in the final weeks before passage. No one in the Capitol has taken credit for the edits.

One change limited the number of people qualifying for whistleblower protection. The current bill covers only employees in critical safety roles and excludes contractors.

A major last-minute deletion stripped one of the top recommendations in the June report to Newsom: third-party auditing of safety plans and practices.

High stakes for California

A recent survey by TechEquity, a group advocating for economic justice in the tech sector, found 70% of Californians support strong AI laws. Yet SB 53’s final form reflects harsh political realities: Imposing stiff accountability could pose economic risks.

The Little Hoover Commission, a state government watchdog group, reported that the emerging industry could add more than $400 billion to the state’s economy by 2030.  

In a Sept. 11 hearing, Wiener sparred with state Sen. Diane Dixon, a Republican from Newport Beach. Dixon talked of balancing economic growth with safety, asking Wiener: “Maybe this is really a rhetorical question, but how do we know that we will not stifle innovation?”

“The two are not mutually exclusive,” Wiener responded. “In fact, promoting public safety and safe deployment of these very powerful models is in the interest of innovation. Because if we don’t do anything and something goes badly awry, there will be a strong reaction. And then you could really see the squelching of innovation, which I don’t want, and I think most people don’t want. So, we can do both. This bill is very light touch.”

Politico reported that Silicon Valley power players have been trying to “cash in on the influence they built with the Trump administration.” That lobbying seems to have paid off. In July, Trump issued an antiregulation executive order threatening to punish states that even try to pass AI laws.

Reactions from the industry have been mostly negative, with only Anthropic publicly supporting SB 53. In a letter to Newsom, OpenAI said any company that complied with federal or international frameworks should be deemed in compliance with state regulation. Google has not taken a public position. Meta, Facebook’s parent company, launched a super-PAC to resist regulation.

But oversight advocates say new enforcement approaches are needed for the rapidly advancing technology. Woodside, SB 53’s lead supporter, said transparency requirements serve as “evidence-generating policy” to help target future regulations and encourage best practices without stifling innovation.

“We are in a situation with AI where we don’t really know what the harms are going to be,” Woodside said. “Some harms are already starting to emerge, and then others are less certain.”

Jason Winshell is a photojournalist and investigative reporter. The current focus of his reporting is AI and technology policy. His prior four-decade career as a software engineer informs his reporting. In 2010, his photography was nominated for the San Francisco Museum of Modern Art’s SECA Award, which recognizes the work of emerging artists in the Bay Area. His photo essay book, "Street," documents everyday life in San Francisco through 45 color photographs shot on the city’s streets.