The secret campaign within the Pentagon to bring AI into combat

MARY LOUISE KELLY, HOST:

The problem with war has always been the humans. We humans are inefficient. We get tired. We get killed. That is the view of a Marine Corps colonel named Drew Cukor, who arrived at the conclusion that humans do better when machines help us and that AI will completely change - maybe already is changing - the way that America fights wars. Well, his story is at the heart of a new book about the Pentagon's campaign to incorporate AI into combat - a campaign known as Project Maven. "Project Maven" is also the title of the book. The author is Katrina Manson, and she is in our New York Bureau. Welcome.

KATRINA MANSON: Thanks so much.

KELLY: So Project Maven actually gets stood up in 2017. Why? Like, what was happening then that this got greenlit?

MANSON: By then, the U.S. is deep into its forever wars, which are meant to be winding down in Afghanistan and Iraq, but it's also fighting ISIS. And at this time, several people at the very seniormost ranks of the intelligence and defense communities are also looking toward a potential future conflict with China and feeling the need to lean into modern, cutting-edge tech. They're seeing that the commercial world in the U.S. is increasingly relying on AI to bring together what was then known as big data, and they're finding that the Pentagon, in their view, is really behind. They wanted to develop much more sophisticated weapons. Almost in the same way that the U.S. had tried to get a jump start on the nuclear bomb, they wanted to get a jump start on AI, and the aim of this was autonomy - to take humans off the battlefield and deliver overwhelming U.S. power.

KELLY: So you just used the phrase that they wanted to lean into cutting-edge tech. I'm trying to cast my mind back to 2017 and where AI was, and it certainly would not count as cutting-edge tech today. There must have been early disasters and early triumphs as they were trying to figure this out, because it occurs to me that if you're trying to figure out how to get humans off the battlefield using AI, the only way to do that is to test AI on the battlefield.

MANSON: They tried to do it in safe ways, so they weren't immediately running algorithms into operations, but they were running them over operations at forward-deployed centers. And they really were cutting edge, but these were algorithms that had initially been trained on things as human as wedding cakes. So initially, the models could recognize wedding cake tiers, bridal veils, a groom's suit. And this technology was repurposed to start recognizing things on the battlefield. And these algorithms were not working in the early days. They would mistake trees for people, rocks for buildings, a cloud for a school bus. And even Drew Cukor himself, who was this big evangelist for AI, said that to other people, AI was just a bag of potato chips, meaning that it simply wasn't good enough. But he argued that it would get better, and he wanted to build the systems, the operating systems, the digital interface and, really, the trust and almost muscle memory of operators to try to lean into new tech.

KELLY: Were there consequences to some of those early, it sounds like, huge errors, like mistaking a - what did you say? - a cloud for a bus?

MANSON: I think the consequences there were fury and a lack of uptake. So operators just stopped using it, and then they had to rethink. And they sent out people who were very skilled as drone analysts to try to encourage them, to say, look, AI could help. One of the first breakthroughs they had was that the AI detected someone hiding more quickly than a human did. On another occasion, the AI detected a farmer walking across a field whom the U.S. was about to target. They had been able to call off the strike in time, but it had taken the humans something like 40 seconds to notice there was a farmer there. The AI had spotted that farmer very quickly, and sometimes it was able to spot Marines in the fray of battle quickly enough to count those Marines out, say they were safe and then call in a missile against the enemy targets. So they did start seeing results with some algorithms.

KELLY: I want to bring us up to how the Defense Department is using AI today. You've talked about how it was used to share targeting information with Ukraine near the start of that war in 2022, and how it was used in 2024 in strikes against targets in Syria and Iraq and against the Houthis in Yemen. What do we know about the current war in Iran?

MANSON: I think it's very interesting that CENTCOM has been prepared to take time out during these operations to make public that they are using AI tools. A spokesperson for Central Command has also told me they're using a variety of AI tools to generate points of interest. Now, points of interest is sort of military-speak for everything before a decision to target. So the line they're drawing there is that AI is not deciding what to shoot at, but they are using AI to develop targets, including location, elevation and description. And most recently, a senior defense official even explained that the system - the Maven Smart System - can develop courses of action and work through something called Target Workbench, all of which is about developing not only a target but also the weapon you would pair with it and the order in which you might shoot.

KELLY: This brings me to ask about a line in your book that caught my eye. You write, (reading) AI remains a narrow, faulty tool with considerable limits to its usefulness and reliability that the U.S. military is still discovering.

Limits like what?

MANSON: There's widespread knowledge within the Pentagon that AI can make mistakes. We all know that AI can hallucinate. It can be prone to bias. It also has this thing called algorithmic drift - over time, algorithms tend to become less accurate. And in addition, research has shown - and some of the advisers to the Pentagon have highlighted this research to me - that chatbots can be escalatory. They can tend to agree with you.

KELLY: You're reminding me of the 1980s Matthew Broderick movie "WarGames."

MANSON: Right, right, exactly. And one official I interviewed did say, you know, we're not building the WOPR. But actually, if you are asking questions like, shall I take this move? Is this a sensible move? Are we in line with the laws of war? - you have to be very careful about the way in which you ask that question. And I do report in the book that they have thought about this - or some quarters of the Pentagon have - and they're trying to add guardrails into the prompt, which tries to say, are you going to escalate? Check that you don't. And so the claim was made to me that you can actually rein in that capacity for error rather well. I think that needs to be continually tested, and the extent to which this administration is prepared to accelerate AI while also considering the policy implications and just the technical realities of AI is still playing out.

KELLY: Katrina Manson is a Bloomberg reporter who covers tech and national security. Her book is "Project Maven: A Marine Colonel, His Team, And The Dawn Of AI Warfare." Katrina Manson, thank you.

MANSON: Thanks.

(SOUNDBITE OF MUSIC)
