Thursday broke my heart. California’s SB-1047, not yet signed into law but on its way to becoming one of the first really substantive AI bills in the US, primarily addressing liability around catastrophic risks, was significantly weakened in last-minute negotiations.
The technology ethicist Carissa Veliz posted a succinct summary of much of the damage:
The bill no longer allows the [Attorney General] to sue companies for negligent safety practices before a catastrophic event occurs; it no longer creates a new state agency to monitor compliance; it no longer requires AI labs to certify their safety testing under penalty of perjury; and it no longer requires “reasonable assurance” from developers that their models won’t be harmful (they must only take “reasonable care” instead)
None of that is to the good.
None of it is surprising either. As I argue in my forthcoming book, Taming Silicon Valley, the tech world often speaks out of both sides of its mouth (the book devotes two whole chapters to its Jedi mind tricks, in fact).
When Sam Altman stood next to me in the Senate, he told the senators that he strongly favored AI regulation. But behind the scenes his team fought to water down the EU AI Act. Most if not all of the major big tech companies joined a lobbying organization that fought SB-1047 tooth and nail, despite broad public support for the bill. Many startups did too. The prominent venture capital firm Andreessen Horowitz and others (including at least one prominent academic in whom the firm has heavily invested) fought it continuously. To my mind, many of these arguments rested on misrepresentation, as I have discussed here in earlier essays. As Garrison Lovely noted at The Nation, it was a “masks off” moment for the industry.
We, the people, lose. In its new form, SB-1047 can basically be used only after something really bad happens, as a tool to hold companies liable. It can no longer protect us against obvious negligence that is likely to lead to great harm. And the “reasonable care” standard strikes me (as the son of a lawyer, but not myself a lawyer) as somewhat weak. It’s not nothing, but companies worth billions or trillions of dollars may make mincemeat of that standard. Any legal action may take many years to conclude. Companies may simply roll the dice and, as Eric Schmidt recently said, let the lawyers “clean up the mess” after the fact.
There’s another problem with this bill too – it’s too narrow, focused mainly on catastrophic harm causing over half a billion dollars in damage, and it does too little to address other issues, which I fear people may forget once it is – or is not – passed. Congresswoman Zoe Lofgren (often seen as supportive of Big Tech), who opposes the bill, argued in an otherwise somewhat problematic letter to Scott Wiener, the bill’s sponsor, that
SB 1047 seems heavily skewed toward addressing hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts, and workforce displacement.
In a reply to Lofgren, Wiener himself happily acknowledged that this bill is just one piece of the puzzle, noting that SB-1047 (despite all the attention, positive and negative, heaped upon it) is just one bill among many still actively being considered in California (many of which, e.g. those on deepfakes, he has supported).
Whether or not SB-1047 passes, so much of what those bills are about, ranging from political advertising and disclosure to deepfake porn, still needs to be addressed. There will be a temptation, if the bill passes, for legislators to say “Great, AI has been taken care of,” but SB-1047, by design, addresses only a small part of what needs to be addressed.
Nobody should forget that, whether or not SB-1047 passes.
§
Still, I support the bill, even in weakened form. If its specter causes even one AI company to think through its actions, or to take the alignment of AI models to human values more seriously, it will be to the good. Companies like OpenAI that have talked big about investing in safety research and then stinted on their promises may finally be incentivized to walk the walk. For now, instead, we have CEOs of companies like Anthropic blithely saying that AI may well kill us all, maybe within a few years, while at the same time proceeding as fast as possible. If the bill makes companies think more carefully about these contradictions, I will be glad that it passed.
The bill also sends a message to big tech that the quality of safety protocols, and internal investigations of safety, are relevant to the reasonable care standard — which can help to avoid Ford Pinto-like situations in which companies that should have known better fail to act.
Which is to say that even if nobody is ever penalized under SB-1047 as such, it’s good that the AI industry realize sooner rather than later that it is potentially liable, so that it considers the consequences of its actions more carefully. As Yale Law Professor Ketan Ramakrishnan told me on a phone call this morning, with respect to liability SB-1047 mostly clarifies what would likely already be true, but isn’t yet widely known. There is considerable value in such clarification. As he put it, “The duty to take reasonable care and tort liability apply to AI developers, just like everyone else. [But] tort law can’t deter people from behaving irresponsibly if they don’t know the law exists and applies to them.” SB-1047’s strongest utility may come as a deterrent.
It also provides important whistleblower protections, which, as many ex-OpenAI employees have made clear, are critical.
Most importantly, we cannot afford to be defeatist here. If the bill doesn’t pass, the lesson drawn will be that Big Tech always wins, and nobody will bother to try again. Future state and federal efforts will suffer. We can’t afford that.
So it must pass. I hope the full CA Assembly will support it, and that Governor Newsom will sign it into law, rather than veto it.
But it cannot be the last AI bill to pass, either in California or in the United States, if American citizens are to be protected from the many risks of AI, because it is but one link, already weakened, in the armor we need for self-defense.
As I emphasized last year in front of the Senate, there is not one risk but many: bias, disinformation, defamation, toxicity, invasion of privacy, and so much more, to say nothing of risks to employment, intellectual property, and the environment, precisely because recent techniques can be applied in so many ways, still poorly understood.
We absolutely need a comprehensive approach, and SB-1047 is just a start. And we need federal legislation; a messy state-by-state patchwork is one thing that could actually stifle AI development. Nobody should want that.
Once the bill passes, as I hope it will, we will see, contra Andreessen Horowitz (aka a16z) and others in Silicon Valley, that life goes on — that innovation will continue just fine, that no major tech company will actually leave California, and that modest regulation encouraging companies to do the right thing is not fatal to the industry. Much of what a16z and others alleged in their fight against SB-1047 will be debunked.
Hopefully, SB-1047 will be signed into law, and the perfectly rational, and in fact entirely ordinary, notion of regulating AI will be normalized, and the absurd idea that such regulation would kill off AI will have vanished. As Yoshua Bengio said to me in an email, passing the law will “show that you can continue having innovation while forcing some safety precautions.”
At that point, maybe we can finally get down to the great deal of work still left to be done.