AI Security Showdown: Yann LeCun Criticizes California’s SB 1047, Geoffrey Hinton Supports New Law

Yann LeCun, Meta’s chief artificial intelligence scientist, publicly criticized supporters of California’s controversial AI safety bill, SB 1047, on Wednesday. His criticism came just a day after Geoffrey Hinton, often called the “godfather of AI,” endorsed the legislation. The stark disagreement between the two AI pioneers underscores the deep divisions in the AI community over the future of regulation.

The California legislature has passed SB 1047, which now awaits Governor Gavin Newsom’s signature. The bill has become a lightning rod in the debate over AI regulation. It would establish liability for developers of large-scale AI models that cause catastrophic harm if they failed to take appropriate safety measures. The bill applies only to models that cost at least $100 million to train and that operate in California, the world’s fifth-largest economy.


Battle of the AI Titans: LeCun vs. Hinton on SB 1047

LeCun, known for his pioneering work in deep learning, argued that many of the bill’s supporters hold a “distorted view” of AI’s near-term capabilities. “The distortion comes from their inexperience, naivety about how difficult the next steps in AI can be, wild overestimation of their employer’s advantage, and their ability to advance rapidly,” he wrote on X, the platform formerly known as Twitter.

His comments were a direct response to Hinton’s support for an open letter signed by more than 100 current and former employees of leading AI companies, including OpenAI, Google DeepMind, and Anthropic. The letter, submitted to Governor Newsom on September 9, urged him to sign SB 1047 into law, citing potential “serious risks” posed by powerful AI models, such as expanded access to biological weapons and cyberattacks on critical infrastructure.

The public disagreement between the two AI pioneers underscores the complexity of regulating a rapidly evolving technology. Hinton, who left Google last year so he could speak more freely about the dangers of AI, is part of a growing group of researchers who believe AI systems could soon pose an existential threat to humanity. LeCun, on the other hand, has consistently argued that such fears are premature and potentially harmful to open research.

Inside SB 1047: The Controversial Bill That Would Change AI Regulation

The debate over SB 1047 has scrambled traditional political alliances. Supporters include Elon Musk, despite his previous criticism of the bill’s author, state senator Scott Wiener. Opponents include former House Speaker Nancy Pelosi and San Francisco Mayor London Breed, along with several large technology companies and venture capitalists.

Anthropic, an artificial intelligence company that originally opposed the bill, changed its position after several amendments were made, stating that the bill’s “benefits probably outweigh the costs.” The shift underscores the evolving nature of the legislation and the ongoing negotiations between lawmakers and the tech industry.

Critics of SB 1047 say it could stifle innovation and disadvantage smaller companies and open-source projects. Andrew Ng, founder of DeepLearning.AI, wrote in TIME magazine that the bill “makes a fundamental mistake by regulating general-purpose technology rather than regulating the uses of that technology.”

Advocates counter that the potential risks of unregulated AI development far outweigh those concerns. They argue that the bill’s focus on models costing more than $100 million to train ensures it will primarily affect large, well-resourced companies that can implement robust safety measures.

Silicon Valley Divided: How SB 1047 Is Splitting the Tech World

The involvement of current employees from companies that oppose the bill adds another layer of complexity to the debate, suggesting internal disagreements within these organizations about the proper balance between innovation and safety.

As Governor Newsom weighs whether to sign SB 1047 into law, he faces a decision that could shape the future of AI development not only in California but potentially across the United States. With the European Union already moving forward with its own AI Act, California’s decision could influence whether the United States takes a more proactive or passive approach to regulating AI at the federal level.

The conflict between LeCun and Hinton is a microcosm of the broader debate over AI safety and regulation, underscoring the challenge policymakers face in crafting regulations that address legitimate safety concerns without unduly hampering technological progress.

As the field of artificial intelligence continues to evolve at a breakneck pace, the outcome of this legislative battle in California could set a key precedent for how societies grapple with the promises and perils of increasingly powerful AI systems. The tech world, policymakers, and the public will be watching closely as Governor Newsom considers his decision in the coming weeks.
