As AI leaps forward, concern rises that innovation is leaving safety behind

When the U.S. military captured former Venezuelan President Nicolás Maduro in January, it used an AI tool developed by a private U.S. company. It’s unclear what exactly the tool did, but company policy states that its products cannot be used for violent purposes or to develop weapons.
Now the Pentagon is considering cutting ties with that company, Anthropic, over its insistence on limits on how the military uses its technology, according to Axios.
Tensions between AI safeguards and national security are not new. But numerous events in the past month have put the issue of AI safety – in contexts ranging from weapons development to ethical advertising – into the spotlight.
Why we wrote this
Artificial intelligence is developing so quickly that some in the industry worry that safety issues aren’t getting enough attention. That is sparking a conversation about how to balance innovation, competition, and safeguards.
“Many people involved in the AI field have been thinking about safety in various forms for a long time,” says Miranda Bogen, founding director of the AI Governance Lab at the Center for Democracy and Technology. “But now these conversations are happening on a much more visible stage.”
This month, researchers resigned from two major U.S. AI companies, citing inadequate safeguards around issues like consumer data collection. In a Feb. 9 essay titled “Something Big Is Happening,” investor Matt Shumer warned that AI not only will soon threaten huge numbers of American jobs, but also may begin to behave in ways its creators “cannot predict or control.” The essay went viral on social media.
While urging action in the face of very real risks, many AI safety experts warn against exaggerated fears about hypothetical scenarios.
“These moments of public attention are valuable because they create openings for the kind of public debate about AI that is essential,” Dr. Alondra Nelson, a former member of the United Nations High-Level Advisory Body on Artificial Intelligence, wrote to the Monitor in an email while attending a global AI summit in India. “But they are no substitute for democratic deliberation, regulation and real public accountability.”
Pressure to compete
In December, President Donald Trump issued an executive order targeting “onerous” state laws regulating AI. His order pointed, for example, to a Colorado law that prohibits “algorithmic discrimination” in areas like hiring and education. The order drew support from Republicans, who said forcing AI companies to comply with excessive regulations could put the United States at a disadvantage against China.
That sense of competition appears to be at the heart of the standoff between Anthropic and the Pentagon. Anthropic wants to ensure that its technology is not used to conduct domestic surveillance or to develop weapons that fire without human intervention.
But the Defense Department, which said earlier this year that the U.S. military “must build on its advantage over our adversaries to integrate [AI],” wants to deploy AI technology without regard to individual companies’ policies, according to reporting from Axios and Reuters.
“We are constantly under pressure to put aside what matters most,” AI safety researcher Mrinank Sharma wrote last week in a letter announcing his resignation from Anthropic. He did not cite a specific event behind his decision, but warned that “our wisdom must grow in a measure equal to our ability to influence the world, lest we suffer the consequences.”
Ms. Bogen says policies designed to require AI companies to subject their models to certain tests or to invest in safety are often watered down to disclosure requirements or nonbinding recommendations.
“The incentives are very strong for rapid progress, even when there is a desire to put safeguards in place,” she says.
Is the world “in peril”?
Those who warn of the dangers of AI have sometimes used existential language.
Zoë Hitzig, a former OpenAI researcher, voiced “deep reservations” about the company’s strategy in a New York Times op-ed last week, warning that its decision to begin testing ads in ChatGPT “creates the potential to manipulate users in ways that we don’t have the tools to understand, much less prevent.”
Mr. Sharma’s resignation letter from Anthropic warned that “the world is in peril.”
Some experts believe such language is counterproductive.
“I find the framing of this ‘point of no return’ to be very disempowering,” says Ms. Bogen.
She worries that as people choose to hand over more of their decision-making to AI and learn to use the technology in their jobs, they will create dependencies that will be increasingly difficult to untangle.
But she says people are ultimately responsible for their choices and actions.
“I don’t think we’ll ever get to the point where it’s really impossible to make decisions about how to deal with this new technology,” she says.
Katherine Elkins, an AI safety researcher at the National Institute of Standards and Technology, says she hopes she’s wrong about some of the risks she sees, such as an AI chatbot using a person’s data to manipulate them. But until she knows for sure, she wants to keep safety an urgent priority.
“Personally, I thought it was better to err on the side of caution and spend my time thinking about the risks of AI” rather than to assume the technology won’t improve, she says.

