SB 53, the landmark AI transparency bill, is now law in California


Senate Bill 53, the AI transparency bill that divided AI companies and made headlines for months, is now officially law in California.

On Monday, California Governor Gavin Newsom signed the “Transparency in Frontier Artificial Intelligence Act,” authored by Senator Scott Wiener (D-CA). This is the second attempt at such a bill: Newsom vetoed the first version, SB 1047, last year over concerns that it was too strict and could stifle AI innovation in the state. That bill would have required all AI developers, particularly makers of models with training costs of $100 million or more, to test for specific risks. After the veto, Newsom tasked AI researchers with proposing an alternative, which was published as a 52-page report and formed the basis of SB 53.

Some of the researchers’ recommendations made it into SB 53, such as requiring large AI companies to disclose their safety and security processes, providing whistleblower protections for employees of AI companies, and sharing information directly with the public for transparency. But some aspects of the report did not make it in, such as third-party assessments.

Under the bill, large AI developers must “publicly publish a framework on [their] website describing how the company has incorporated national standards, international standards, and industry best practices into its frontier AI framework,” according to a press release. Any large AI developer that updates its safety and security protocol must also publish the update, along with its reasoning, within 30 days. But it’s worth noting that this part proposes voluntary frameworks and best practices, which can be seen as guidelines rather than rules, with little to no penalties attached.

The bill creates a new way for AI companies and members of the public to “report potential critical safety incidents to California’s Office of Emergency Services,” according to the press release, and “protects whistleblowers who disclose significant health and safety risks posed by frontier models, and creates a civil penalty for noncompliance, enforceable by the Attorney General.” The press release also said the California Department of Technology would recommend updates to the law each year “based on multistakeholder input, technological developments, and international standards.”

AI companies were divided on SB 53, though most were initially publicly or privately against the bill, arguing it would drive companies out of California. They understood the stakes: with nearly 40 million residents and a handful of AI hubs, the state has outsized influence over the AI industry and how it will be regulated.

SB 53 was publicly endorsed by Anthropic after weeks of negotiations over the bill’s language, but Meta launched a state-level super PAC in August to help shape AI legislation in California. And OpenAI lobbied against such legislation in August, with its chief global affairs officer, Chris Lehane, writing to Newsom that “California’s leadership in technology regulation is most effective when it complements effective global and federal safety ecosystems.”

Lehane suggested that AI companies should be able to bypass California’s state requirements by signing onto federal or global agreements instead, writing, “In order to make California a leader in global, national, and state-level AI policy, we encourage the state to consider frontier requirements met when they sign onto a parallel regulatory framework like the [EU Code of Practice] or enter into a safety-focused agreement with a relevant US federal government agency.”
