
Following Governor Gavin Newsom’s veto of SB 1047, California’s Joint Working Group has released comprehensive policy recommendations for future AI regulation. The 52-page report, authored by experts from Stanford, UC Berkeley, and the Carnegie Endowment for International Peace, surveys existing regulatory approaches and synthesizes them into recommendations for California policymakers.

The report emphasizes transparency and “trust but verify” principles—a solid foundation.

  • It is fantastic to see the working group identify the lack of whistleblowing protections as a critical gap – expanding on the current California Labor Code (Link) protections for whistleblowers.
  • However, the whistleblowing protections discussed for California fall short of international standards.
  • These international standards already govern most AI companies today – just in different jurisdictions.

What the Working Group Got Right

The Working Group calls for protecting individuals who make reports that go beyond strict legal violations, recognizing that AI risks may be difficult to predict and codify in advance – this is crucial, and it is good to see it highlighted.

They further call for establishing a reasonable threshold for protection, requiring only “good faith” rather than definitive proof of wrongdoing.

Where the Recommendations Fall Short

What is confusing is that the whistleblower protection recommendations claim to reflect international best practice in parts, yet overall remain closely aligned with the US/California regulatory status quo.

Areas where this gap is most obvious:

1. Personal scope (the WG calls for protecting only a narrow set of individuals, while best practice goes much further)

2. Public disclosure rights for imminent threats (none called for by WG)

3. Retaliation protections (no strengthening of the burden-of-proof reversal called for)

4. Timelines for handling and responding to reports (none called for)

5. Training & transparency requirements for covered persons and company “internal” whistleblowing channels (not mentioned)

6. Toothless penalties: violations are not punished sufficiently

Personal Scope: Who Can Report Misbehavior? The California WG Stays (Far) Behind Global Best Practice

The Working Group recommends coverage for at least employees but frames coverage of further parties as contested: “However, a central question in the AI context is whether protections apply to additional parties, such as contractors.”

They then continue to argue that “broader coverage may provide stronger accountability benefits but also imposes greater cost: To extend protections to contractors and third parties, developers may need to implement additional reporting channels and legal frameworks.”

We do not believe this reasoning holds up:

1. The EU Whistleblowing Directive goes much further and is already in effect: it extends personal scope to essentially all individuals interacting with a covered company in a “professional context” – that is, suppliers, customers, shareholders, board members, unpaid advisors, evaluation providers, and facilitators (e.g. non-profits) helping insiders.

2. The EU Directive already requires all companies with more than 50 employees to maintain internal whistleblowing channels for reporting misconduct, accessible to these broader stakeholder groups.

3. The vast majority of AI companies covered by the proposed policy already comply with the EU Whistleblowing Directive today and several companies, including Meta and OpenAI, already have public channels for reporting concerns that could easily be extended to e.g. the US.

In the EU, introducing these channels and scope extensions has not produced an overwhelming volume of reports – in fact, company satisfaction with expanded internal whistleblowing channels has been very high (EQS, 2021).

No Public Disclosure Allowed: A Critical Gap to Prevent Imminent Harm

The Working Group does not discuss allowing individuals to go public with concerns under any circumstances.

This leaves a gap for individuals who suspect imminent harm when internal or regulatory channels fail or are expected to be ineffective.

This falls below international best practice. The EU Whistleblowing Directive protects individuals who disclose issues to the public when other channels fail. California had the opportunity to match this standard but chose not to.

Retaliation Protections: Inadequate Standards

The report does not mention improvements to the burden of proof reversal process for demonstrating retaliation.

This is not ideal: existing California standards on the burden-of-proof reversal are decent, but not global best practice. California requires individuals who were (allegedly) retaliated against to prove that retaliation was at least a contributing factor in, e.g., their dismissal from the company. This is not trivial for a whistleblower to prove without access to internal communications and decision-making processes.

Individuals do not have to prove this in other jurisdictions. At this point, you might already be able to guess in which jurisdiction the burden is fully on the company to prove that retaliation did not occur.

Keeping Whistleblowers Informed on Report Progress: Missing Basic Requirements

Whistleblowers, whether internal or external, largely act out of moral obligation and really care about the progress or outcomes of their reports.

The Working Group does not recommend setting timelines for acknowledging reports or providing feedback to whistleblowers in either internal or regulatory channels.

This leaves insiders motivated by moral duty “up in the air,” potentially driving undesirable public disclosures where people feel powerless.

The EU Whistleblowing Directive establishes such timelines: acknowledgment of receipt within 7 days and feedback within 3 months. Precedent exists.

Enforcement: Toothless Penalties

Current California whistleblower law provides civil penalties “not exceeding ten thousand dollars ($10,000) per employee for each violation.” This is insufficient given that frontier AI company employees frequently earn compensation in the medium-to-high six figures.

The penalty structure also provides no meaningful deterrent relative to the stakes involved.

The Working Group could have called this out.

Training and Education of Covered Persons: The Missing Foundation

The best policy system is worthless if employees don’t know about it. California labor code sections (link) on whistleblowing require companies to provide information on external whistleblowing rights to employees.

Yet ongoing proprietary research (our survey) and published anecdotal evidence (link) show that insiders are consistently unaware of their rights and internal reporting opportunities.

“I’m not well-informed about our company’s whistleblowing procedures (and it feels uncomfortable to inquire about them directly).”

– A frontier AI company insider from the US (source: Proprietary survey, rephrased)

This is clearly inadequate. Ample precedent exists for placing duties on employers to inform their employees of available options.

An ideal solution would require companies to demonstrate effectiveness of their training. For example, requiring 90% of a random sample of employees to answer 80% of non-trivial questions about internal and external whistleblowing options and rights correctly.
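The illustrative 90%/80% criterion above is easy to make concrete. The sketch below is a hypothetical compliance check, assuming survey results arrive as per-employee scores (fraction of quiz questions answered correctly); the function name, data shape, and thresholds are our own illustration, not anything proposed in the report.

```python
import random

def training_effective(scores, pass_fraction=0.80, required_rate=0.90,
                       sample_size=50, seed=0):
    """Illustrative check of the 90%/80% criterion: at least
    `required_rate` of a random employee sample must answer at least
    `pass_fraction` of the whistleblowing-rights quiz correctly.

    `scores` maps employee IDs to fraction of questions answered correctly.
    """
    rng = random.Random(seed)  # fixed seed so the audit sample is reproducible
    ids = rng.sample(sorted(scores), min(sample_size, len(scores)))
    passed = sum(1 for i in ids if scores[i] >= pass_fraction)
    return passed / len(ids) >= required_rate

# Hypothetical survey data: most employees score 0.85, a few score 0.5.
scores = {f"emp{i}": (0.85 if i % 20 else 0.5) for i in range(200)}
print(training_effective(scores))
```

Sampling (rather than surveying everyone) keeps the audit burden low, while the fixed random seed lets a regulator reproduce exactly which employees were sampled.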

Implementation Reality: Systems Don’t Work Without Trust

Despite existing California whistleblower protections, preliminary results from our research study show that only a small percentage of current AI company insiders trust the government to act with speed and knowledge on whistleblowing reports, especially on complex topics.

“Without knowing the appropriate contact person or agency, I wouldn’t attempt to reach out [to a regulator] currently.”

– A frontier AI company insider from the US (source: Proprietary survey, rephrased)

What’s needed?

Plans for effectively staffing recipient bodies with technical expertise, or at least giving them the resources and the right to consult independent experts swiftly.

Besides psychological and financial aid, some European countries already provide independent advisory services to potential whistleblowers, helping them understand whether their concerns fall within policy scope.

Here, California could actually lead the way, as a lack of AI expertise among regulators is an even greater struggle in the EU than in the US.

Our survey remains open to current and former frontier AI company employees. If that is you, we encourage you to participate and share this opportunity with colleagues. Your insights into both internal company reporting systems and external regulatory pathways are essential for understanding the real barriers to effective risk reporting and driving evidence-based policy reform. Join the effort! Take the 10-minute anonymous survey here.

Why This Matters

California’s position as the epicenter of AI development gives it unique influence over global standards. The companies developing the most advanced AI systems are headquartered there. The talent pipeline runs through its universities. The investment capital flows through its venture firms.

Moving Forward

The Working Group’s report provides policymakers with a comprehensive survey of existing approaches and a solid analytical framework.

The question is whether California will choose to adopt the more ambitious standards that international experience has proven workable, or settle for approaches that haven’t kept pace with the global state of the art.

The infrastructure exists. The precedents are established. The need is clear. California has the opportunity to demonstrate that leading innovation and robust safety protections aren’t just compatible—they’re mutually reinforcing.

Whether the state seizes this opportunity will signal how seriously it takes its role in shaping the future of AI governance worldwide.