The World Is Not Prepared for an AI Emergency

Imagine waking up to find the internet flickering, card payments failing, ambulances routed to the wrong addresses, and emergency broadcasts you are no longer sure you can trust. Whether caused by model malfunction, criminal misuse, or a cascading cyberattack, an AI-driven crisis could quickly cross borders.
In many cases, the first signs of an AI emergency would look like a generic outage or security failure. Only later, if at all, would it become clear that AI systems played a central role.
Some governments and businesses have started building safeguards against such an emergency. The European Union's AI Act, the US National Institute of Standards and Technology's risk management framework, the G7 Hiroshima AI Process, and international technical standards all aim to prevent harm. Cybersecurity agencies and infrastructure operators also maintain runbooks for common hacking attempts, outages, and system failures. What is missing is not a technical manual for patching servers or restoring networks; it is a plan to prevent social panic and a breakdown of basic trust, diplomacy, and communication if AI sits at the center of a rapidly evolving crisis.
Preventing an AI emergency is only half the battle. The missing half of AI governance is preparation and response. Who decides that an AI incident has become an international emergency? Who talks to the public when fake messages flood their feeds? Who keeps the channels open between governments if normal lines are compromised?
Governments can and should establish AI emergency response plans before it is too late. In forthcoming research drawing on disaster law and lessons from other global emergencies, I examine how existing international rules already contain the elements of an AI playbook. Governments already have the legal tools; they now need to agree on how and when to use them. We do not need new and complex institutions to oversee AI; we need governments to plan ahead.
How to Prepare for an AI Emergency
This general model of governance already exists elsewhere. The International Health Regulations allow the World Health Organization to declare a global health emergency and coordinate the response. Nuclear accident treaties require rapid notification when radiation could spread across borders. Telecommunications agreements remove legal barriers so that emergency satellite equipment can be deployed quickly. Cybercrime conventions establish 24/7 contact points so that police forces can cooperate at short notice. The lesson is consistent: pre-agreed triggers, designated coordinators, and rapid communication channels save time in an emergency.
An AI emergency requires the same foundation. Start with a shared definition. An AI emergency should be an extraordinary event caused by the development, use or malfunction of AI that risks causing serious cross-border harm and exceeds a country’s capacity to cope. It should also cover situations where AI involvement is only suspected or is one of several plausible causes, so that governments can act before forensic certainty arrives, if it comes at all. Most incidents will never reach this level. Agreeing on the definition in advance helps avoid paralysis during the first critical hours.
Next, governments need a practical guide. The first element of this playbook should be a common set of triggers and a basic severity scale, so that responders know when to escalate from a routine incident to an international alert, including criteria for cases where AI involvement is only credibly suspected rather than conclusively proven. The second should be the appointment of a global coordinator able to convene quickly, supported by technical experts, law enforcement partners, and disaster specialists. The third should be interoperable incident reporting systems so that countries and companies can exchange critical information in minutes, not days. The fourth should be crisis communications protocols using authenticated analog channels such as radio. Finally, the playbook should set out a clear list of continuity and containment measures, such as throttling high-risk AI services or switching critical infrastructure to manual control.
Structuring AI Emergency Preparedness
So who should oversee these AI emergency preparedness initiatives? My answer: the United Nations.
Anchoring this system within the UN matters for several reasons. First, an AI emergency will not respect alliances; a UN-anchored mechanism offers greater inclusiveness and reduces duplication between rival coalitions. It can provide technical assistance to countries without advanced AI capabilities, so that the burden does not fall on a handful of major powers. It also adds legitimacy and constraint: extraordinary powers must be lawful, proportionate, and reviewable, particularly when they affect digital networks used by billions of people.
This international dimension must be matched by national measures that governments can take now. Each country should designate a 24/7 AI emergency contact point. Emergency powers should be reviewed to confirm they cover AI infrastructure. Sector plans should be aligned with baseline incident management and business continuity standards. Joint exercises should simulate waves of misinformation, model failures, and cross-sector breakdowns. Migration to post-quantum cryptography should be prioritized before a hostile attack forces the upgrade. Governments should also pre-register trusted senders and alert formats so that messages can still reach citizens when systems are unstable.
These precautions are needed now. Reported AI-related cyberattacks are rising, and many countries have already experienced smaller-scale outages, data manipulation attempts, and waves of disinformation that hint at what a larger event might look like. A fast-moving AI failure could combine with today's hyperconnected infrastructure to produce a crisis no country can handle alone.
This is not a call for a new global super-agency. It is a call to weave what already exists into a coherent response: an AI emergency playbook that takes these proven tools and adapts them to AI.
The measure of AI governance will be how we respond on our worst day. The world currently has no plan for an AI emergency, but we can create one. We must build it now, test it, and anchor it in law with safeguards, because once the next crisis begins, it will already be too late.