In a press conference to reintroduce her bill, Bauer-Kahan said the Trump administration’s stance on AI regulation changes “the dynamic for the states.”
“It is on us more,” she said, pointing to his repeal of an executive order influenced by the AI Bill of Rights and the stall of the AI Civil Rights Act in Congress.
The tale of two administrations in Paris
Dueling perspectives on how the U.S. and the rest of the world should regulate AI were on display earlier this month in Paris at a summit attended by CEOs and heads of state.
In comments at a private “working dinner” hosted by President Emmanuel Macron at the Élysée Palace, alongside figures such as OpenAI CEO Sam Altman and German Chancellor Olaf Scholz, Alondra Nelson, author of the AI Bill of Rights and former director of the Office of Science and Technology Policy, urged business and government leaders to discard misconceptions about AI, such as the idea that its purpose is scale and efficiency. AI can accelerate growth, she argued, but its purpose is to serve humanity.
“It is not inevitable that AI will lead to great public benefits,” she said in remarks at the event. “We can create systems that expand opportunity rather than concentrate power. We can build technology that strengthens democracy rather than undermines it.”
By contrast, Vice President J.D. Vance said at the same summit that the United States will fight what he called excessive AI regulation. The U.S. also refused to sign an international declaration to “ensure AI is open, inclusive, transparent, ethical, safe, secure, and trustworthy.”
The Trump administration’s position that regulation is a threat to AI innovation mirrors the talking points of major companies such as Google, Meta, and OpenAI that lobbied against regulation last year.
Debate about whether to regulate AI comes at a time when Elon Musk, President Trump, and a small group of technologists seek to build and use AI within numerous federal agencies to improve efficiency and save money.
Those efforts risk cutting benefits to people who depend on them. A report released in late 2024 by California-based nonprofit TechTonic Justice found that AI influences government services for tens of millions of low-income Americans, often cutting benefits they’re entitled to and making opportunities harder to access.
The majority of global venture capital investment, along with much of the industry’s talent and its major companies, is concentrated in the Bay Area, so California has more at stake in regulatory debates than anywhere else in the world, said Matt Regan, a vice president at the Bay Area Council, an advocacy group for more than 300 companies including tech giants Amazon, Apple, Google, Meta, and Microsoft. The council hasn’t taken a position on bills proposed this session, but last year it opposed Wiener’s AI testing proposal and Bauer-Kahan’s anti-discrimination bill.
Regan said California regulators have proposed “over engineered protections and audits” that make the technology functionally useless and hamper businesses. The business group Chamber of Progress estimates that compliance with anti-discrimination bills in California, Colorado, and Virginia could cost businesses hundreds of millions of dollars.
Regan said the political landscape has moved toward the center since California lawmakers proposed AI bills a year ago, which he believes is why Assembly Speaker Robert Rivas urged his colleagues to focus on pocketbook issues. Given those shifts, Regan said that for bills to avoid a veto like the one that killed Wiener’s measure, lawmakers must draft legislation that reaches a “Goldilocks zone,” balancing consumer protections with buy-in from business leaders. The forthcoming report from the working group convened by Gov. Newsom may offer guidance on striking that balance between making AI useful and punishing bad actors who abuse the technology.
AI regulation with teeth
A 2024 Carnegie California report found that a majority of Californians support an international agreement on AI standards as a way to protect human rights. But virtually every international agreement signed by tech companies is voluntary or has no legally binding bite, said David Evan Harris in a presentation at an AI governance symposium held by UC Berkeley earlier this month.
That’s why he encourages civil society groups that want to make change to speak with California lawmakers. Harris is an advisory board member at the California Initiative for Technology and Democracy, a group that cosponsored laws to protect people from deepfakes that are now being challenged in court by Elon Musk’s company X, formerly Twitter. He previously worked on responsible AI and civic integrity teams at Meta.