California Introduces Bill to Ensure Safe Application of AI

Author:
Jason Lomberg, North American Editor, PSD

Date
06/12/2024



A new California bill designed to check the growth of artificial intelligence has the industry in a state of panic. We’ve all been privy to AI creative endeavors – pictures, audio, poetry, and even videos – to the point where the tech has almost become a punchline. But just how dangerous is it, and do we really need overarching legislation to corral it?

Throughout the 20th century – and into the 21st – we’ve been bombarded by Science Fiction fears of the robopocalypse and humanity’s ensuing extinction.

But AI has advanced considerably in the last couple of decades. And even in its present state, AI has caused problems.

We just covered a story where a major app apparently used AI to write a variety of news items, some of them erroneous (or with fictitious authors). Alarming, but not exactly Skynet (though misinformation can have dramatic repercussions under the right circumstances).

Whatever the true danger, California recently advanced a bill, SB-1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.”

For “frontier AI systems”, i.e., models trained with more than 10²⁶ floating-point operations (FLOPs) of compute, developers would have to submit their system for a “limited duty exemption,” proving that, amongst other things, it “does not have a hazardous capability, as defined and will not come close to possessing a hazardous capability” (like utilizing WMDs or performing cyberattacks or other criminal activity totaling at least $500 million in damages).

Before training nonderivative covered models, the developers would have to make sure they adhere to strict requirements, including having a “kill switch” and submitting annual certification under penalty of perjury of compliance with the provisions (at least until it qualifies for a “limited duty exemption”).

Naturally, the industry was none too pleased with a bill that, for better or worse, will create significant obligations for companies big and small.

“If someone wanted to come up with regulations to stifle innovation, one could hardly do better,” said Andrew Ng, a computer scientist who led AI projects at Alphabet’s Google and China’s Baidu, and who sits on Amazon’s board. “It creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate.”

For what it’s worth, the politician who introduced the bill, Democratic state Senator Scott Wiener, said that, “Fundamentally I want AI to succeed and innovation to continue, but let’s try and get out ahead of any safety risks.”

Arun Rao, lead product manager for generative AI at Meta, claimed that the bill was “unworkable” and would “end open source in [California].”

AI is still in its infancy, and legislation like this could help prevent malignant applications, stifle innovation, or some combination thereof.
