Anthropic safety researcher quits, warning ‘world is in peril’

An Anthropic safety researcher has resigned, saying "the world is in peril" in part because of advances in AI.
Mrinank Sharma said the safety team "constantly [faces] pressures to set aside what matters most," citing concerns about bioterrorism and other risks.
Anthropic was founded with the explicit goal of creating safe AI. Its CEO, Dario Amodei, said at Davos that progress in AI was moving too fast and called for regulation to force industry leaders to slow down.
Other AI safety researchers have left leading companies, citing concerns about catastrophic risks. Two key members of OpenAI's Superalignment team, which was charged with keeping advanced AI aligned with human interests, resigned in 2024, saying the company was prioritizing financial gain over minimizing the dangers of building "AI systems much smarter than us."
