A safety report card ranks AI company efforts to protect humanity

Are artificial intelligence companies protecting humanity from the potential harms of AI? Don’t bet on it, says a new report card.

As AI plays an increasingly important role in how people interact with technology, the potential harms become more evident: people turning to AI chatbots for advice and then dying by suicide, or attackers using AI to carry out cyberattacks. There are also future risks, such as AI being used to make weapons or overthrow governments.

Yet there aren’t enough incentives for AI companies to prioritize humanity’s safety, and that’s reflected in an AI Safety Index released Wednesday by Silicon Valley’s nonprofit Future of Life Institute, which aims to steer AI in a safer direction and limit existential risks to humanity.

“They are the only industry in the United States making powerful technology that is completely unregulated, which puts them in a race to the bottom against each other where they simply have no incentive to prioritize safety,” Max Tegmark, the institute’s president and a professor at MIT, said in an interview.

The highest overall grade awarded was just a C+, shared by two San Francisco AI companies: OpenAI, which makes ChatGPT, and Anthropic, known for its AI chatbot Claude. Google’s AI division, Google DeepMind, received a C.

Facebook’s parent company, Menlo Park-based Meta, and Elon Musk’s Palo Alto-based company, xAI, received a D. Chinese companies Z.ai and DeepSeek also earned a D. The lowest grade went to Alibaba Cloud, which earned a D-.

The companies’ overall scores were based on 35 indicators across six categories, including existential safety, risk assessment and information sharing. The index drew on publicly available documents and company responses to a survey. The grading was done by a panel of eight artificial intelligence experts, made up of academics and leaders of AI-related organizations.

All companies in the index scored below average in the existential safety category, which takes into account internal monitoring and control interventions as well as existential safety strategy.

“As companies accelerate their AGI and superintelligence ambitions, none have demonstrated a credible plan to prevent catastrophic misuse or loss of control,” according to the institute’s AI Safety Index report, using the acronym for artificial general intelligence.

Google DeepMind and OpenAI have said they are investing in safety efforts.

“Safety is at the heart of how we build and deploy AI,” OpenAI said in a statement. “We invest heavily in frontier safety research, build robust safeguards into our systems, and rigorously test our models, both internally and with independent experts. We share our safety frameworks, assessments, and research to help advance industry standards, and we continually strengthen our protections to prepare for future capabilities.”

Google DeepMind said in a statement that it takes “a rigorous and scientific approach to AI safety.”

“Our Frontier Safety Framework outlines specific protocols for identifying and mitigating serious risks from powerful frontier AI models before they manifest,” Google DeepMind said. “As our models become more advanced, we continue to innovate in safety and governance at the pace of our capabilities.”

The Future of Life Institute report states that xAI and Meta “lack commitments to oversight and control despite having risk management frameworks, and have not presented evidence demonstrating that they invest more than minimally in safety research.” Other companies, such as DeepSeek, Z.ai and Alibaba Cloud, lack publicly available documents on their existential safety strategy, the institute said.

Meta, Z.ai, DeepSeek, Alibaba and Anthropic did not return a request for comment.

“Legacy Media Lies,” xAI said in a response. A lawyer representing Musk did not immediately respond to a request for additional comment.

Musk is also an advisor to the Future of Life Institute and has funded the nonprofit in the past, but he was not involved with the AI Safety Index, Tegmark said.

Tegmark expressed concern that without sufficient regulation of the AI industry, the technology could help terrorists make biological weapons, manipulate people more effectively than is possible today or, in some cases, even undermine the stability of governments.

“Yes, we have big problems and things are going in a bad direction, but I want to emphasize how easy it is to fix this problem,” Tegmark said. “We simply need to have binding safety standards for AI companies.”

Lawmakers have tried to establish greater oversight of AI companies, but some bills have been blocked amid opposition from tech lobbying groups, which argue that increased regulation could slow innovation and prompt companies to move elsewhere.

But some laws aim to strengthen safety standards at AI companies, including California’s SB 53, signed by Governor Gavin Newsom in September. The law requires companies to share their safety and security protocols and to report incidents such as cyberattacks to the state. Tegmark called the new law a step in the right direction but said more needs to be done.

Rob Enderle, principal analyst at consulting services firm Enderle Group, said he thinks the AI Safety Index is an interesting way to approach the underlying problem of the lack of regulation of AI in the United States.

“It’s not clear to me that the United States and the current administration are capable of having well-thought-out regulations at this time, which means that these regulations could end up doing more harm than good,” Enderle said. “It’s also not clear that anyone has figured out how to tighten the regulations to ensure compliance.”