The official described this as a hypothetical example of how such systems might work, but would not confirm or deny whether it represents how AI systems are currently being used.
Other outlets have reported that Anthropic’s Claude has been integrated into existing military AI systems and used in operations in Iran and Venezuela, but the official’s comments add insight into the specific role chatbots may play, particularly in accelerating the search for targets. They also shed light on how the military is deploying two different AI technologies, each with distinct limitations.
Since at least 2017, the US military has been working on a “big data” initiative called Maven. It uses older types of AI, particularly computer vision, to analyze the oceans of data and imagery collected by the Pentagon. Maven might take thousands of hours of aerial drone footage, for example, and algorithmically identify targets. A 2024 report from Georgetown showed soldiers using the system to select and vet targets, which sped up the approval process. Soldiers interacted with Maven through an interface with a battlefield map and dashboard, which might highlight potential targets in one color and friendly forces in another.
Now, the official’s comments suggest that generative AI is being added as a conversational chatbot layer, one the military would use to find and analyze data more quickly as it makes decisions such as which targets to prioritize.
Generative AI systems, like those that underpin ChatGPT, Claude, and Grok, are a fundamentally different technology from the AI that has primarily powered Maven. Built on large language models, their use in war is much more recent and less battle-tested. And while Maven’s old interface forced users to directly inspect and interpret data on the map, the outputs of generative AI models are easier to access but harder to verify.
The official added that the use of generative AI for such decisions is reducing the time required in the targeting process, but declined to detail how much additional speed is possible if humans are required to spend time double-checking a model’s outputs.
The use of military AI systems is under increased public scrutiny following the recent strike on a girls’ school in Iran in which more than one hundred children died. Multiple news outlets have reported that the strike came from a US missile, though the Pentagon has said it is still under investigation. And while the Washington Post has reported that Claude and Maven have been involved in targeting decisions in Iran, there is no evidence yet of what role generative AI systems played, if any. The New York Times reported on Wednesday that a preliminary investigation found outdated targeting data to be partly responsible for the strike.