For two years, state Sen. Scott Wiener worked to enact regulation to limit the risk of accidents, cybercrimes and other catastrophes posed by technologies emerging in San Francisco and Silicon Valley. On Sept. 29, his efforts culminated in what he touted as a first-in-the-nation law, the Transparency in Frontier Artificial Intelligence Act (Senate Bill 53).

A key provision was the protection of whistleblowers. Advocates for accountability who backed the reform had high hopes after a task force that Gov. Gavin Newsom convened in June highlighted the importance of corporate insiders in “surfacing misconduct, identifying systemic risks, and fostering accountability in AI development and deployment.” The panel recommended shielding all employees, contractors and third parties to ensure “stronger accountability benefits.” 

While SB 53 does include whistleblower protections, qualifying for them poses a high hurdle. In bargaining with industry stakeholders in the final weeks before passage, Wiener made or agreed to amendments that water down access to those protections by restricting who qualifies and what kinds of problems can be reported without fear of retribution.

The final wording narrows the definition of whistleblowers to employees in critical safety roles, excluding thousands of low- and mid-level staff, freelancers, temps, outside partners and board members. And unlike established parallel laws that apply to other industries in the state, it grants employees protections only if the safety issues they surface have already led to injury or death, or if they predict a rogue AI that risks killing or injuring more than 50 people or causing more than $1 billion in damage.

Under these stringent requirements, many high-profile corporate insiders who have spoken out about unsafe practices inside California-based AI giants could have been legally disciplined, sued or fired — either because their job title was not covered or because they caught a problem before it had unleashed physical, social or economic harm.

Several early supporters of SB 53 expressed regret at seeing whistleblower protections limited in the enacted version.

The Signals Network, a national nonprofit group representing whistleblowers in high-profile tech industry cases, supported an early draft. In an email, Margaux Ewen, director of the organization’s Whistleblower Protection Program, wrote that the network “finds it concerning that the scope of who can be a protected whistleblower was narrowed, thereby threatening transparency and accountability in a booming industry with already little regulatory framework.” 

The group, Ewen continued, “believes the broadest possible scope of who can be considered a whistleblower is always the most protective, both of the would-be whistleblowers but also of the people impacted by this technology.”

Tracy Rosenberg, advocacy director of Oakland Privacy, a grassroots transparency and oversight organization, supported an early draft of SB 53 in part for its broad whistleblower provisions, but found the final text disappointing.

“We wanted those to be fairly broad and fairly standard,” she said. “Instead, it was winnowed down to only certain people that work in certain parts of an AI company — and only about certain kinds of threats that related specifically to catastrophic property damage or catastrophic property loss.”

But other safety organizations, including some directly funded by AI firms, say states need to exercise caution because mandating too much transparency could present competitive barriers to companies leading development of large language models and related tools.

Sunny Gandhi, vice president of political affairs at Encode AI, a youth-led association focusing on responsible development of so-called frontier models, co-sponsored Wiener’s legislation. He said AI companies pressed Wiener’s team to consider the public relations and economic harms from shielding insiders who might reveal proprietary information.

“If you go too far, and allow too many people to have access to these whistleblower protections, then the number of them that might take advantage of this in an unfair way to the companies goes really high,” Gandhi said. “Our whole point was trying to find the balance of trying to protect the public — but we also take seriously the concerns of a lot of the frontier labs, that they want to make sure that their trade secrets are protected. And they want to make sure that they’re not going to be forced to be dragged in public or in private because of whistleblowers.” 

Asked just after SB 53’s passage to explain the reasons for late-session amendments, Wiener said he could not recall details from the end-of-session lawmaking frenzy. In a follow-up email request for the names of organizations and individuals involved in the process, a spokesperson replied: “Unfortunately we are unable to answer these questions.”

The San Francisco Public Press requested state legislative records that might reveal who influenced the legislation. The Senate Rules Committee provided only an archive of letters sent to other committees. The file contained correspondence from dozens of organizations and companies. Opponents strongly objected to early draft whistleblower protections, which were edited heavily in the final law.

The committee did not, however, disclose any records of direct communication with Wiener or his staff, citing a legal exemption that permits but does not obligate withholding them.

State Sen. Scott Wiener of San Francisco indicated in an interview at a Nov. 10 event that he could not recall key details of last-minute changes to the September whistleblower protection bill he sponsored. Credit: Jason Winshell / San Francisco Public Press

The protection of whistleblowers was only one of several provisions in the final law. All were weakened in varying degrees. The version Newsom signed slashed fines for “catastrophic” incidents to $1 million from $10 million, allowed companies to redact trade secrets from safety reports, removed requirements to report code theft unless physical harm occurred, and stripped mention of third-party auditing that would have ensured AI companies followed their own safety plans.

Protections buried in fine print

The law’s dense legal language creates a misleading impression of broad whistleblower protections. The fine print omits many job roles and circumstances from legal shielding.

In early drafts, any employee, contractor, board member or corporate officer could qualify as a whistleblower. In the AI industry, contractors often outnumber employees.

The law’s final amendment narrowed protections to cover only company employees “responsible for assessing, managing, or addressing risk of critical safety incidents.” This narrowing delivered what Gandhi said companies wanted: reduced risk of bad publicity or trade secret exposure.

Between Sept. 2 and Sept. 5, dozens of lines in SB 53 were replaced with language that restricted access to whistleblower protections.

Among the many last-minute changes to Senate Bill 53 in September was a significant narrowing of the definition of whistleblower, excluding most people who work for artificial intelligence companies and limiting what kinds of safety problems are reportable under legal protection from corporate reprisal. Read the bill and our annotations of the changes on DocumentCloud.


The bill’s last-minute changes also restricted what safety issues may be reported. The amendment provided whistleblower protections for only four types of “critical safety incidents,” three of which require that injury or death has already occurred. The fourth applies when a whistleblower accurately predicts a catastrophic mass casualty event or extreme financial damage.

Wiener seemed unfamiliar with significant revisions of his own bill. At a Nov. 10 press event in San Francisco, the Public Press asked him if he knew whether the law protected employees who report safety concerns before the harm occurs.

“I think it would be before or after,” Wiener said. “Obviously before is better, because we want to avoid harm to the public. But I don’t believe anything in the law would take away that protection, if it were after.”

That is true only in the catastrophic scenario, critics noted.

“Someone has to have a crystal ball and to say, ‘I am going to whistleblow because I’m confident I can demonstrably prove that this particular safety risk that I am identifying would cause at least this much damage,’” Rosenberg said. “Because that’s what you have to do to protect yourself under this law.”

A Sept. 5 amendment also exempts catastrophic harm caused by “lawful activity of the federal government.” This means that whistleblowers are unprotected from retaliation for voicing safety concerns if federal agencies accidentally kill people with an out-of-control AI system. The carve-out comes on the heels of a July Department of Defense announcement of new partnerships with Anthropic, Google, OpenAI and xAI aimed at accelerating the adoption of advanced artificial intelligence tools by the military.

The Legislature does not publish online records of who changes bills. The Public Press asked several of Wiener’s top staff members — Chief of Staff Krista Pfefferkorn, Legislative Director Severiano Christian and Communications Director Erik Mebust — for details on who made the changes and why. None responded to repeated emails requesting this information.

Parallels to other industries

In a recent interview, Wiener said SB 53 broke ground by protecting whistleblowers who surface safety issues that are not specifically violations of law. 

“For the first time that I’m aware of,” Wiener said, the legislation “provides whistleblower protections, not just for a violation of the law but a safety practice that may not violate the law. And so, it’s a very bold new step in whistleblower protections.”

It might or might not be bold, but it is hardly new. Federal and state laws regulating other safety-critical industries have for years protected whistleblowers who report problems, whether or not those problems break the law, in order to prevent harm. Those laws also protect contractors and other third parties.

The Public Press compared SB 53 with federal whistleblower laws affecting the aviation, automotive and railroad industries. In each case, Wiener’s AI law was the clear outlier, protecting fewer workers, in narrower circumstances and only after the occurrence of harm. On every key measure, it offers a weaker shield for sounding the alarm.

California’s health care whistleblower law also is more protective, covering workers and patients who report unsafe conditions that do not necessarily violate any law.

Last year, California was poised to regulate the AI industry more robustly. In September 2024, Newsom vetoed a previous iteration of AI regulation Wiener sponsored, Senate Bill 1047, which would have protected virtually all workers who come in contact with AI development; broadly covered risk of serious harm; and would not require harm to have already occurred. The governor’s veto came after intense pressure from 165 industry-side opponents who flooded Sacramento warning of regulatory overreach.

In the same move, Newsom commissioned some of the world’s leading AI experts to take a second look and write a policy report guiding California’s approach to regulating frontier models. The report, released in June, recommended most of the provisions of the law the governor had vetoed, plus other steps to protect public welfare.

Wiener maintained that he based SB 53 on that report’s recommendations. Yet on every measure, SB 53 offers less protection than did SB 1047 or the panel’s recommendations.

| Law | Who is protected | What is reportable | Before harm occurs? |
| --- | --- | --- | --- |
| SB 53: AI safety (passed Sept. 2025) | Only an employee in a narrow safety role | Death or injury, predicted rogue AI catastrophe, law violation | Not protected |
| Aviation, motor vehicle & rail industries | Any employee or contractor | Any safety hazard or law violation | Protected |
| CA Health & Safety Code | Any employee, medical worker, contractor or patient | Any suspected unsafe patient care or condition | Protected when unsafe conditions suspected |
| SB 1047: AI safety (vetoed Sept. 2024) | Any employee, contractor, adviser or board member | Any risk of serious harm | Protected when risk identified |
| June 2025 AI expert panel recommendations | Any employee or contractor | Any safety risk | Protected |

Tech lobbying predicted

A year before SB 53 passed, AI policy expert David Evan Harris warned Congress that tech industry lobbyists would demand rewrites that make the proposed laws “meaningless.”

U.S. Sens. Richard Blumenthal, a Connecticut Democrat, and Josh Hawley, a Missouri Republican, co-led a Sept. 17, 2024, Judiciary Committee hearing on establishing federal safety guardrails for AI, expanding on their 2023 framework on artificial intelligence legislation.

The four expert witnesses had held high-level positions at tech firms: Harris at Meta, Margaret Mitchell at Google, and William Saunders and Helen Toner at OpenAI. Mitchell, Saunders and Toner had resigned or been fired over disagreements about company safety practices or ethics. Harris, a former researcher on Meta’s Civic Integrity and Responsible AI teams and now a senior policy adviser at the California Initiative for Technology and Democracy, went on to help lawmakers craft AI regulation in California and Arizona and overseas.

In the hearing, Blumenthal sought Harris’ advice on federal AI regulation, asking what kind of cooperation could be expected from tech firms. 

Harris responded: “I have been surprised in my work in the California Legislature by the way in which tech industry lobbyists — sometimes hiding behind industry groups, sometimes from individual named companies — are able to arrive at legislators’ doors with requests to remove ‘shall’s and replace them with ‘may’s, to take legislative language that was very well intentioned and, at the 11th hour, turn it into something that is meaningless.”

He predicted that legislators who are too bold “will be told, ‘This isn’t going to work and you’re going to have to weaken it.’ And sometimes that comes in many rounds of weakening, and it can be very painful to watch.”

In July, Politico reported that AI company lobbying to derail or reshape regulation had grown nationwide over the previous year, with influential Silicon Valley tech and finance leaders focused primarily on California.

Wiener’s staff acknowledged that before his colleagues voted on SB 53, the senator held discussions with TechNet, a trade group representing AI companies. Spokesman Mebust said Anthropic, one of the large firms subject to the regulation, provided “technical assistance” and later publicly supported the law.

Anthropic engaged Sacramento lobbying firm Niemela Pappas & Associates. The firm touts its specialty in navigating “the legislative and regulatory processes to deliver results for our clients.”

Politico reported in July that shortly after the release of the governor’s report and after the initial drafting of SB 53, Wiener sent letters to several AI companies asking for “feedback and suggestions to improve this bill.”

Perils facing whistleblowers 

In two hours of testimony at Blumenthal’s hearing, the four witnesses described the financial and career threats corporations use to silence AI whistleblowers.

“Essentially, if you’re considering whistleblowing, it’s you and you alone against a company making a ton of money, with lawyers who are set up to harm you if you make the smallest move incorrectly, whatever it is,” said Mitchell, a machine learning expert and former co-lead of Google’s Ethical AI Team. Google fired her after she protested the dismissal of colleague Timnit Gebru, the author of a paper warning of AI’s environmental effects.

Timnit Gebru, an artificial intelligence researcher who led Google’s ethical AI work, became a prominent critic of bias and environmental costs in large language models before her departure from the company in 2020, which she has said followed internal disputes over research and dissent. Credit: Kimberly White / Getty Images for TechCrunch

In a 2023 interview in The Information, Mitchell reflected, “I was just so terrified that I was never going to get a job in tech again, that my name had been scarlet lettered.”

Saunders, a former member of OpenAI’s now-disbanded Superalignment Team, resigned in February 2024 after becoming increasingly concerned about the company’s aggressive pursuit of “artificial general intelligence,” a term referring to advanced systems possessing human-level or better cognitive capacities.

In June 2024, Saunders joined 12 former employees of OpenAI and Google’s DeepMind lab in publishing “A Right to Warn About Advanced Artificial Intelligence,” exposing OpenAI’s culture of recklessness as it raced for market dominance. The signatories urged the end of practices that discourage whistleblowing. At stake, they said, was nothing less than the risk of “human extinction” from uncontrolled powerful future systems.

Saunders told the committee about the industry’s use of restrictive and secretive nondisparagement agreements to silence potential whistleblowers by threatening financial penalties.

“You would lose all the equity you had in the company if you didn’t sign this agreement,” Saunders said, “where you had to effectively not criticize the company and not tell anybody that you’d signed this agreement.”

SB 53 protects salary and benefits but says nothing about the equity clawbacks Saunders described. The law does specify that if a company’s stock value drops after a disclosure, that decline does not count as “loss of property” the whistleblower could sue to recover.

While AI whistleblowers who testified before Congress or signed the “Right to Warn” letter have continued their careers, high-profile status offered no guarantee of protection. Bad publicity brought by whistleblower revelations has not stopped Big Tech from wielding considerable resources against those who have spoken out. In December, The Washington Post reported on Meta’s use of aggressive legal tactics to derail the careers of, and financially threaten, employees who exposed the company’s profiting from political propaganda and its courting of the Chinese government for market access. Despite extensive media coverage of their allegations, those whistleblowers faced years of unemployment and financial hardship.

High stakes for California

Wiener repeatedly emphasized that his challenge was to weigh public safety against innovation in the AI industry, which one think tank predicts will add more than $400 billion to the state’s economy by 2030. The state’s gross domestic product was measured at $4.1 trillion, the world’s fourth-largest, in 2024. In a Sept. 11 hearing, Wiener told members of the Standing Committee on Privacy and Consumer Protection that SB 53 was a balanced and “light touch” bill.

At a Nov. 10 press event — shortly after Wiener announced his run for Congress to fill Nancy Pelosi’s seat — he said he had no plans to update SB 53.

In the absence of federal AI regulation, California plays an outsize role because so many companies are based in the state. Because Wiener ultimately weakened his own proposed whistleblower and related accountability provisions, the law sets a precedent for other state legislatures.

This month, New York Gov. Kathy Hochul signed the Responsible Artificial Intelligence Safety and Education, or RAISE, Act.

Andreessen Horowitz, a Silicon Valley venture capital firm, and OpenAI President Greg Brockman funneled millions of dollars into a super PAC, Leading the Future, to water down New York’s AI safety bill. Rolling Stone reported recently that the company worked to weaken California’s SB 53 as well.

In the run-up to passage of the RAISE Act, Leading the Future and OpenAI lobbied Hochul to align the New York legislation with SB 53. According to reporting by City & State New York, a government insider publication, the lobbying paid off; Hochul delivered a last-minute draft amendment to New York lawmakers that “crosses out the entirety of the bill passed by the Legislature and replaces it with new language taken nearly verbatim from California SB 53.” 

However, in down-to-the-wire negotiations with Hochul, the bill’s sponsors, state Sen. Andrew Gounardes and Assembly member Alex Bores, resisted tech lobbying pressure to rewrite their legislation to emulate California’s, ultimately preserving its stronger accountability and liability provisions. New York will rely on its existing state law to shield AI company employees who report safety concerns that they reasonably believe pose “a substantial and specific danger to the public health or safety.”

Now the federal government is further tipping the scales in favor of AI companies. On Dec. 11, President Donald Trump signed an executive order aimed at preventing “a patchwork of 50 different regulatory regimes.” The order directs the U.S. attorney general to sue to overturn state AI regulations already on the books. The order threatens to withhold funding for high-speed internet access in underserved communities in states that do not comply. 

In contemporaneous press releases touting SB 53’s passage in September, Newsom and Wiener highlighted its “first-in-the-nation” AI safety guardrails. Yet critics said that by appearing to protect whistleblowers while in reality protecting tech firms from many insiders who might air dirty laundry, the state could provide a template favoring the industry.

Rosenberg of Oakland Privacy expressed regret about the final form of Wiener’s law. “If this is the only piece of AI safety that California will ever sign,” she said, “then I would say it’s nowhere near good enough.”


This reporting was supported by a grant from the Tarbell Center for AI Journalism.

Jason Winshell is a photojournalist and investigative reporter whose current focus is AI and technology policy. His four-decade career as a software engineer informs his reporting. In 2010, his photography was nominated for the San Francisco Museum of Modern Art’s SECA Award, which recognizes the work of emerging artists in the Bay Area. His photo essay book, “Street,” documents everyday life in San Francisco through 45 color photographs shot on the city’s streets.