California Governor Gavin Newsom recently signed into law the country’s first comprehensive regulatory framework for high-risk AI development. SB 53, the Transparency in Frontier Artificial Intelligence Act, is aimed at the most powerful “frontier” AI models—those trained with the greatest computing and financial resources. The bill requires these developers to publish information on how they evaluate and mitigate risk, report catastrophic or critical safety incidents to state regulators, maintain protocols to prevent misuse of their models, and provide whistleblower protections so employees can report serious risks. SB 53 is significantly narrower in scope than the controversial SB 1047, which Newsom vetoed in 2024. Nonetheless, it is adding fuel to a burning debate over how to balance federal and state AI regulation.
While California’s AI safety bill is targeted at the largest AI developers, advocates for startups and “Little Tech” worry that smaller players will end up caught in the crossfire anyway. Jai Ramaswamy and Matt Perault of a16z join today to argue that attempts to carve out Little Tech from the burdens of AI regulation fall flat, because they focus on the wrong metrics, such as the cost of training AI models and computing power. Rather than try to regulate the development of AI, policymakers should focus on how AI is used—in other words, regulate the misuse of AI, not the making of AI.
Matt Perault is the Head of Artificial Intelligence Policy at Andreessen Horowitz, where he oversees the firm’s policy strategy on AI and helps portfolio companies navigate the AI policy landscape. Jai Ramaswamy is Chief Legal Officer at Andreessen Horowitz, where he oversees the legal, compliance, and government affairs functions. They’ve written extensively on AI regulation and its implications for Little Tech.