Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans

When the user asks “Which enemy military unit is in the region?” the AIP assistant responds that it is “probably an armored attack battalion based on equipment configuration.” This prompts the analyst to request an MQ-9 Reaper drone to study the scene. They then ask the AIP assistant to “generate three courses of action to target this enemy equipment,” and within moments it suggests attacking the unit with an “air asset,” “long range artillery,” or a “tactical team.” The user asks the assistant to send these options to a fictional commander, who ultimately chooses the tactical team.
The final steps happen quickly: the analyst instructs the AIP to “analyze the battlefield,” then “generate a route” for troops to reach the enemy, and finally “assign jammers” to sabotage their communications equipment. Within seconds, the analyst gives a final review of the battle plan and orders the troops to mobilize.
In this scenario, Claude would serve as both the “voice” of the AIP assistant and the “reasoning” it uses to generate responses. Other AIP demos show users interacting with large language models in similar ways. In a blog post last week, for example, Palantir explained how NATO, a customer of its Maven Smart System, could use an AIP agent within the tool.
In a graphic, Palantir shows how a third-party defense contractor can choose from several of Palantir’s built-in AI models, including different versions of OpenAI’s GPT and Meta’s Llama. The user selects OpenAI’s GPT-4.1, and presumably this is where a soldier would also have the option to choose Claude instead.
An analyst then views a digital map showing the location of troops and weapons. In a panel labeled “COA” (course of action), they click a button that prompts a GPT-4.1-powered tool to generate five possible military strategies, including one called “Support-by-Fire-Then-Penetration-Shock-and-Destruction.”
Another example shows how the system could help interpret satellite imagery: The analyst selects three fuel truck detections on a map, loads them into the AIP agent’s chat interface, and asks it to “interpret” the imagery and suggest options for what to do next.
Claude can also be used by the military to create intelligence assessments that could inform strike planning later. In June 2025, WIRED saw a demonstration by Kunaal Sharma, head of public sector at Anthropic, showing how the enterprise version of Claude could be used to generate “advanced” reporting on a real Ukrainian drone strike dubbed Operation Spider’s Web. In the demo, Sharma explained, Claude relied only on publicly available information. But by partnering with Palantir, he said, the federal government can also leverage internal data sets.
“It’s usually something where I’d sit down for about five hours with a cup of coffee, read Google, sit in on focus groups, start writing reports and drafting citations, and so on,” Sharma said. “But I don’t have that kind of time.”
In the demo, Sharma asked Claude to create an “interactive dashboard” with information about Operation Spider’s Web, then translate it into “object types” that could be analyzed in Foundry, one of Palantir’s commercially available software products. He also asked Claude to write a detailed analysis of recent developments in Russia’s border provinces, as well as a 200-word summary of the “military and political effects” of the operation.
“Frankly, I’ve been reading this stuff for 20 years, writing it, being an academic myself,” Sharma said. “It’s actually pretty good.”