A very public contract dispute exposes the ethical debate over how the U.S. military should use AI going forward. Anthropic vs. the Pentagon and what it means for U.S. citizens.
Guests
Steven Levy, editor-at-large at WIRED. His article “AI Safety Meets the War Machine” was published in his newsletter, Backchannel, in February.
Heather Roff, Senior Research Scientist at the Center for Naval Analyses. She wrote the Department of Defense’s AI Ethical Principles in 2019.
The version of our broadcast available at the top of this page and via podcast apps is a condensed version of the full show.
Transcript
Part I
MEGHNA CHAKRABARTI: When the U.S. military launched its air attack on Iran this weekend, it used artificial intelligence tools to help with the strike. Now, this is not a surprise. But what is a surprise is that the DOD reportedly used AI services from Anthropic, the company that created the AI tool Claude, and it did so just one day after the Pentagon declared Anthropic a supply chain risk, that is, a threat to national security. But the military went ahead and used Anthropic’s AI in the most consequential military operation in the Middle East since the Iraq war. Is Anthropic a threat to U.S. national security or isn’t it?
Let’s take a moment to recap the quite complicated backstory to this question. Anthropic is one of the premier contractors of AI for the U.S. military, and many analysts say that Anthropic’s tools are superior to the others the Pentagon has at its disposal. But last month, a major dispute broke out into public view when Defense Secretary Pete Hegseth started blasting Anthropic as quote “woke AI.”
And what fueled the defense secretary to deploy such a Trumpian smear? Just one thing, actually: Anthropic has a very hard line on how its technology can be used by any military. It will not allow its AI tools to be used for mass surveillance of Americans, and it won’t allow them to be used in weapon systems that can launch attacks without human oversight.
Over a series of very public statements, reporting and meetings, Anthropic’s CEO Dario Amodei refused to water down those rules, despite the pressure he received from Secretary of Defense Pete Hegseth. So finally, on Friday, Hegseth ordered the Pentagon to designate Anthropic a supply chain risk. It was an enormous decision, which we will discuss in depth a little bit later in this show.
And just hours after that decision, Anthropic CEO Dario Amodei … appeared on CBS to respond. And the very first thing he wanted to assure the American people of is that he actually supports the use of AI by the military.
DARIO AMODEI: I believe that we have to defend our country. I believe we have to defend our country from autocratic adversaries like China and like Russia.
CHAKRABARTI: But he says, defending the country also means upholding America’s long held tradition of democracy.
AMODEI: I have always believed that, as we defend ourselves against our autocratic adversaries, we have to do so in ways that defend our democratic values and preserve our democratic values.
CHAKRABARTI: His unwillingness to let the military use an unfettered version of Claude AI also rests on a simple technological truth.
AMODEI: The AI systems of today are nowhere near reliable enough to make fully autonomous weapons. Anyone who’s worked with AI models understands that there’s a basic unpredictability to them that in a purely technical way, we have not solved. And there’s an oversight question too. If you have a large army of drones or robots that can operate without any human oversight, where there aren’t human soldiers to make the decisions about who to target, who to shoot at, that presents concerns.
And we need to have a conversation about how that’s overseen.
CHAKRABARTI: Now, of course, Secretary Hegseth and the military disagree and insist that AI use by the military should be completely unfettered. The Pentagon v. Anthropic fight is about some of the most deeply important, defining issues of our time.
How should the military use AI? What are the limits, if any, that should be placed on that use? And what are the consequences, foreseen and unforeseen, when the federal government seeks to kneecap a company simply because that private business will not sell the exact product the government wants? So let’s start with Steven Levy.
He is an editor-at-large at WIRED, and he’s recently written about this, in a piece called “AI Safety Meets the War Machine.” Steven Levy, welcome to On Point.
STEVEN LEVY: Thank you. Happy to be here.
CHAKRABARTI: So where are we right now? Just bring us up to date with the latest, if there’s been any change since Friday on this supply chain risk designation and the relationship between the Pentagon and Anthropic.
LEVY: There have been changes. For one thing, as you mentioned, we are at war now. And Anthropic’s AI Claude is going to war for the United States. And this is, as you point out, technology that is, in the Pentagon’s view, sufficiently dangerous that they want to not only stop using it themselves, but they don’t want to do business with any other company that uses it.
And depending on how that’s interpreted, that could be a critical factor in whether Anthropic lives or not as a company. Some people say that if you read the law, it would only stop companies from selling their technology to the government with Claude embedded in it for military uses.
But Donald Trump seems to be saying, anyone who even has a business relationship with Anthropic, we won’t do business with.
CHAKRABARTI: Okay. Okay. So let me just clarify a bit there, because you got straight to something that’s really at the heart of the issue: the supply chain risk designation, right?
And so the Pentagon is saying, not only will we stop using Anthropic’s AI (I think they’ve given themselves a six-month unwind period to stop using it), but it’s also saying, as you just mentioned, that because Anthropic is a supposed supply chain risk to the United States, the giant United States military will not do business with any other company that does business with Anthropic.
CHAKRABARTI: Okay. So not only is this an existential threat to Anthropic, but how many companies does it leave in terms of the ecosystem of all the tech that the U.S. military contracts out for?
It seems to be a huge decision.
LEVY: Yeah. The president’s social media message seems to imply that the whole government will enforce this ban. Basically, every major corporation does business with the government. So if you interpret that broadly, as the President seems to, and Pete Hegseth doesn’t seem to disagree with that.
That would be an unbelievable limit on who Anthropic can sell to. Especially since Anthropic’s business plan is to sell to corporations. So in a way, they’ve gone to an extreme, not only to say we don’t want to include you in our military suite of armaments, because you won’t let us use it as we want, but we’re going to punish you and basically try to end you as a company.
CHAKRABARTI: So the government, though, from what I recall back last summer, the Pentagon actually gave roughly the same amount, a $200 million contract, to not just Anthropic, but Google, OpenAI and xAI. But the thing is that Anthropic was the first to be cleared for classified use, because military officials considered it the most advanced and secure model for really sensitive military applications.
So that’s the Pentagon saying that Anthropic was delivering the best product.
LEVY: That’s right. And I think they still believe that, if you talk to folks. There have been reports that xAI, Elon Musk’s company, is very eager to get that classified status.
And I think they’ve just been granted it. But the people in the military feel that it’s not as reliable as Anthropic’s Claude. So they prefer Claude. Now, the other development that happened very recently is that OpenAI, which is probably Anthropic’s key competitor, has jumped in and completed a contract with the Pentagon.
They haven’t done any implementation yet, but they’ve gotten the classified status and are willing to work with the Pentagon to take the place of Claude in the Pentagon’s plans. And there’s an amazing backstory to that, in that Anthropic was founded by people who worked for OpenAI, who left because they felt that OpenAI wasn’t developing its technology with the safety provisions that they felt were necessary.
CHAKRABARTI: So I don’t know if you can give us any insight into this, Steven, but Anthropic has always been very public about these really hard lines that it follows regarding how its AI is going to be used. And as you heard him say earlier, it’s not that Anthropic is opposed to military applications for AI, of course not.
It’s just that they won’t let it do certain things. Now, the Pentagon had to know this beforehand, so why even go forward with any kind of contract, knowing that what it ultimately wants to do (and we’ll get to the morality of why it might want to do mass surveillance of Americans later) is something Anthropic simply wouldn’t let it do?
Do you have any insight on that?
LEVY: Yeah, I do actually. So when they formed the contract last year, Anthropic felt that they had those safeguards, they wouldn’t be using it for mass surveillance of Americans or for autonomous weapons, as you described. And you painted really the most scary scenario where you have armies of drones with the wherewithal to use lethal force without humans saying that’s okay, or that’s not okay.
You could imagine these drones being deployed on the border or on the seas, where right now humans are making decisions to blast people, in situations really of questionable legality. So what happened was that last year they got those assurances, but then something happened that’s under dispute.
A couple months ago, after the U.S. government displaced Maduro in Venezuela … Anthropic met with Palantir, which is a company that uses their technology and is used by the government, and Palantir told Anthropic that, by the way, your stuff was used in that raid in Venezuela.
And now this is under dispute. Anthropic denies this, but apparently the people at Palantir heard Anthropic say, we don’t like that, we have misgivings about that. In any case, they apparently went to the Pentagon and said, Anthropic might not be on board for all you want to do. And it very well could be that incident led the Pentagon to say, you know what? We have to have a total green light to use whatever technology we’re buying, so we’re going to push this and make those red lines go away. No longer will we accept the red lines. And that might have been what forced this confrontation.
Part II
CHAKRABARTI: I’d like to turn now to Heather Roff. She’s a senior research scientist at the Center for Naval Analyses, and back in 2020, she was the primary author of the Department of Defense’s AI Ethical Principles, and she joins us today from Loudoun County in Virginia. Heather Roff, welcome back to the show.
HEATHER ROFF: Hi, Meghna. Great to be here. Thank you for having me.
CHAKRABARTI: Okay, so first of all, can you give us kind of a broad picture of the Pentagon’s use of AI today? I know that’s like a 17-hour answer, but just give us a sense as to why it is so important in everything from big data analysis (we mentioned Palantir before, and we’ll talk about that more later too) to, obviously, as we saw over the weekend, targeting airstrikes in an active war in Iran.
ROFF: Yeah. Look, defining AI as you mentioned is like a 17-hour ordeal. But we can basically say, look, this is a suite of tools, a suite of techniques, right?
Where you can have very well-defined algorithms that are deterministic in nature, and we’ve been using those in the Department of Defense for decades, right? Everything from missile defense to some rudimentary planning and things like that. And then you get into the more nuanced flavor of the day:
AI right now, with generative AI and machine learning and things like that. And the current uses around generative AI, LLMs, large language models like Claude and ChatGPT and these other systems, this is newer to the department, just as it’s newer to everyone using them in the private sector.
For the uses that the LLMs are appropriate for, this could be everything from putting in different types of doctrine and concept notes to figure out what’s the most important thing I need to pull out of this, to data processing and large-scale data analysis.
Patterns, predictions, things of that nature. But there are limitations to how far these models can go, especially when we’re talking about high stakes operations.
CHAKRABARTI: Okay. I don’t know if you’re willing to theorize, but I’m very fascinated if we can speculate, if how AI might have been used to say even over the weekend in those strikes against Iran.
ROFF: Speculating, look, so Palantir has what’s called the Maven Smart System, right? And the DOD and others like NATO have just brought on the Maven Smart System. These are very data-intensive software platforms hosted on a cloud, right? Hosted on AWS, which is Amazon.
And Claude has been a component part of that smart system for a couple of years now, since 2024, 2025.
CHAKRABARTI: What does it do?
ROFF: There’s multiple things. There’s data visualizations, if you want to see where ships are, if you want to be able to look at satellite imagery for targeting, things of this nature.
So without getting too into the weeds or onto the high side, it’s about data visualization and it’s about planning and operations. And so I can see, if you’re using Maven Smart System as part of your intelligence, surveillance and reconnaissance and your planning processes, then you’re using quote-unquote AI, right? When you’re doing this, you’ve got however many satellites overhead and you’re trying to get information from those satellites, you’re trying to pull data on where all the ships are in a particular area.
You’re pulling that data, you’re pulling information from the intelligence community, right? There’s a ton of information that goes into these types of operations, and that’s what these systems are trying to do, right? They’re trying to get every single piece of data and then visualize it for whatever person needs it within the chain of command, so that the command can make decisions.
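To make the data fusion Roff is describing a bit more concrete in software terms, here is a minimal, purely illustrative Python sketch. It has no relation to Maven’s actual code or APIs, and every class, field, and feed name in it is hypothetical; it only shows the general idea of several feeds being normalized into one common picture that a planner could filter and display.

```python
# Illustrative only: a toy "common operating picture" that merges several
# hypothetical data feeds (satellite imagery reports, ship-tracking pings,
# intelligence notes) into one structure, keyed by the entity observed.
# None of this reflects any real military system or vendor API.
from dataclasses import dataclass
from datetime import datetime, timezone
from collections import defaultdict

@dataclass
class Report:
    source: str        # e.g. "satellite", "ais", "intel" (hypothetical labels)
    entity_id: str     # identifier for the ship, site, or unit being reported
    lat: float
    lon: float
    observed_at: datetime
    note: str = ""

def build_common_picture(feeds):
    """Merge reports from many feeds, grouped by entity, newest first."""
    picture = defaultdict(list)
    for feed in feeds:
        for report in feed:
            picture[report.entity_id].append(report)
    for reports in picture.values():
        reports.sort(key=lambda r: r.observed_at, reverse=True)
    return picture

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    satellite_feed = [Report("satellite", "vessel-17", 26.1, 56.3, now, "imaged near strait")]
    ais_feed = [Report("ais", "vessel-17", 26.2, 56.4, now, "transponder ping")]
    picture = build_common_picture([satellite_feed, ais_feed])
    for entity, reports in picture.items():
        latest = reports[0]
        print(f"{entity}: last seen {latest.lat:.1f},{latest.lon:.1f} via {latest.source}")
```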
CHAKRABARTI: Okay. And as we heard earlier from Steven, reportedly, Palantir had told Anthropic (and Palantir uses Anthropic in some of its systems, maybe even the one that you’re talking about) that the tech might have been used in Venezuela.
Okay. So the pieces are starting to come together now. Heather, can I, just for a second, put aside the Pentagon and what the Secretary of Defense has said and done recently against Anthropic? We’re going to return to it. But thus far in the conversation, we’ve been operating as if everybody knows what Anthropic is, and as if people are familiar with
Dario Amodei’s view on what AI should and shouldn’t be used for. And I don’t actually think that everyone does know that. So can you just tell us a little bit about, has Anthropic under Amodei’s leadership always had this, let’s say, stronger desire to put some guardrails around what AI can do, versus other companies?
ROFF: Yeah, no, I think he has. I met Dario for the first time back in 2014, 2015. And he and a group of then-OpenAI folks, as well as some other academics and myself, wrote a big paper called The Malicious Uses of AI, right? It was like a 30-author paper. There were quite a few of us
that got together and thought about all the different ways AI could be misused in the future, right? It was a warning shot: hey, we should think about this. And that was over 10 years ago. So Dario has had that on his mind, how do I do safe development, right? He thinks and believes that AI can do good things, but that we have to be measured in how we go about it.
CHAKRABARTI: And in fact, Heather, sorry, just to add to that, recently, I’d say in the past year, he’s been incredibly public, right?
In sounding these alarms, writing op-eds in the New York Times, saying we need to be thinking more thoroughly about what AI can do before it gets too far ahead of us, and that we need regulatory regimes around it. For example, here’s Dario in, let’s see, when was this?
It was at Davos earlier this year. And he talks specifically about how certain types of governments could use AI technologies to bridge surveillance and attack capabilities.
DARIO AMODEI: I am concerned that AI may be uniquely well suited to autocracy and to deepening the repression that we see in autocracies.
We already see it in the kind of surveillance state that is possible with today’s technology, but if you think of the extent to which AI can make individualized propaganda, can break into any computer system in the world, can surveil everyone in the population, detect dissent everywhere and suppress it, make a huge army of drones that could go after each individual person.
It’s really scary. It’s really scary and we have to stop it.
CHAKRABARTI: So that’s Dario Amodei just this past year at Davos. Okay. So Anthropic has this long history, almost even as its reason for being, of being a more ethical AI company, while also putting out a very sophisticated product.
As we talked about earlier, Heather, it was the first one, the one that the DOD said, yeah, it’s really good and we’re going to clear it for classified use. So that makes me want to take us even further back in time, Heather. Because we had you on in 2021, I believe. And that was before anyone, just we regular Joes out there, even knew that we could use AI in our daily lives.
It feels like a completely different world. But we had you on back then to talk about the military and ethical concerns around AI, because you had written the Department of Defense AI ethics principles document. So what was the DOD thinking back then regarding, I don’t know, ethical limitations or considerations that they should have when it comes to using AI?
ROFF: Sure. Yeah, also, Meghna, that just makes me feel even older.
CHAKRABARTI: Tell me about it.
ROFF: But yeah, no, I think, so that project started under President Trump’s first term, right? The project got started in 2018, when we started hosting a series of expert round tables and discussions within the department and then externally.
And then it culminated, right, in these five principles, with this very long supporting document that explains all the principles more. And that came out of the Defense Innovation Board, and I was the special governmental expert for the board, taking on that role. And so when the board presented those principles to then-Secretary of Defense Esper, he said, this looks good.
And there was support within the department. There was a consensus around, yes, we want to use these tools, but we want to make sure that we have the right grounding for what is required. Why are these tools different than any other tools that the department uses? And why do we need a separate set of rules for them?
So go ahead.
CHAKRABARTI: Only because I just want to jump in here because you said there were basically five ethical principles, correct? And I have them here in front of me. I just want to take a second to go through them. Because it does tell us a lot about how the military was thinking about this at the time, right?
So I see here that the five ethical principles begin with responsible, that the DOD personnel will exercise appropriate judgment and remain responsible for AI development and use. Tell us a little bit about that.
ROFF: Yeah. Humans are the responsible agents. The tool is not responsible.
If a rock fell down off of a mountain, you wouldn’t blame the rock. So people are responsible. And this is just another kind of, hey, we are the responsible entities: DOD personnel need to exercise their human judgment, right? With due care and precaution, and all of these legal as well as ethical principles, to remain designed for responsibility, right? Think of it that way. And there’s a chain of command. And so responsibility grounds all of the other principles, right? This is a tool for human use. We don’t want to design tools in such a way that they obfuscate that responsibility. So responsible is that grounding principle.
Then it goes into the notion of equitable, right? So equitable is the second principle. And this is really about unintended bias. And the new AI strategy that was recently released indirectly talks about the equitable principle as one of these woke principles.
But that’s not really what was meant when this was put together. It’s about mathematical bias, right? It’s about biasing the system, so you can say, I want to buy a system to look for certain characteristics or patterns in the data. That’s my intended bias, right?
I am looking at the data. I want it to go in a particular direction. I want the system to push towards these types of cases. What we were saying was that you will take deliberate steps to minimize unintended bias. And that’s really what that is about. So we don’t want to say, oh, I never looked at the data, or I didn’t realize that the outputs of this system consistently bias in a direction that I don’t want them to.
And that can be everything from, gosh, all of a sudden, people’s pay raises were going in a direction I didn’t expect. It can be everything from business to logistics to operations.
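As a purely illustrative aside, the kind of “deliberate step to minimize unintended bias” Roff describes often starts with a simple audit of a system’s outputs across groups the designer never intended to treat differently. The sketch below uses hypothetical field names and made-up data (it has no connection to any DOD system); it just compares positive-outcome rates across subgroups and flags large gaps for human review.

```python
# Illustrative only: a toy audit that checks whether a model's decisions
# differ sharply across an attribute the designer did not intend to matter
# (e.g. pay-raise recommendations by office). All data here is hypothetical.
from collections import defaultdict

def outcome_rates(records, group_key, outcome_key):
    """Return the share of positive outcomes per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for rec in records:
        group = rec[group_key]
        totals[group] += 1
        positives[group] += 1 if rec[outcome_key] else 0
    return {group: positives[group] / totals[group] for group in totals}

def flag_unintended_bias(records, group_key, outcome_key, max_gap=0.10):
    """Flag for review if the gap between best- and worst-treated groups exceeds max_gap."""
    rates = outcome_rates(records, group_key, outcome_key)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates, gap

if __name__ == "__main__":
    decisions = [
        {"office": "A", "raise_recommended": True},
        {"office": "A", "raise_recommended": True},
        {"office": "B", "raise_recommended": False},
        {"office": "B", "raise_recommended": True},
    ]
    flagged, rates, gap = flag_unintended_bias(decisions, "office", "raise_recommended")
    print(f"rates={rates}, gap={gap:.2f}, needs review={flagged}")
```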
CHAKRABARTI: And then, go ahead. I’m sorry to jump in here. Because these all build on each other, right?
You have traceable, so being able to trace the system’s decisions, audit them, et cetera. Reliable, that one’s self-evident, right? We need the system to be reliable. And governable, right? To avoid unintended consequences, the system could be deactivated if necessary.
And what was really interesting to me is the overall goal here, as far as I understand it, Heather, and this is of course back in 2020, was to ensure that AI systems would be secure, reliable, and compliant with ethical frameworks such as the laws of war. Okay. Makes sense.
Now, are these no longer the ethical principles that are active at the Pentagon? You were just saying that there’s a whole new set here.
ROFF: It’s hard to, I don’t know how you want to frame it. So once the principles came out, it became a bit of a mouthful to talk about the AI ethics principles.
And so people just started shorthanding that to responsible AI, or RAI. And the shorthand for RAI was that if it’s responsible AI, it’s meeting these principles, these requirements, and we’re doing the right testing. We’re creating sandboxes, we’re doing assurance.
We’ve designed it in such a way that the human understands what’s going on. There’s some traceability to it, right? You can figure out where the errors are, we can pull it back. That’s what we’re talking about when we’re talking about responsible AI, responsible innovation in the AI space.
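One concrete reading of the traceability piece of responsible AI that Roff mentions is a decision audit trail: every output is logged with its inputs, the model version, and the operator, so errors can later be found and the system pulled back. The sketch below is a generic, hypothetical illustration of that idea only; the file format, field names, and class are assumptions, not any DOD or vendor tooling.

```python
# Illustrative only: a minimal audit trail for model-assisted decisions, so a
# reviewer can later trace what the system saw, what it produced, and which
# model version and operator were involved. Hypothetical structure throughout.
import json
from datetime import datetime, timezone

class DecisionAuditLog:
    def __init__(self, path: str):
        self.path = path

    def record(self, model_version: str, inputs: dict, output: str, operator: str) -> None:
        """Append one decision with enough context to reconstruct it later."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "operator": operator,
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

    def find(self, **criteria):
        """Return logged decisions whose top-level fields match all given criteria."""
        with open(self.path, encoding="utf-8") as f:
            entries = [json.loads(line) for line in f]
        return [e for e in entries if all(e.get(k) == v for k, v in criteria.items())]

if __name__ == "__main__":
    log = DecisionAuditLog("decisions.jsonl")
    log.record("model-1.2", {"query": "summarize logistics report"},
               "3 supply routes flagged", operator="analyst-07")
    print(log.find(model_version="model-1.2"))
```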
CHAKRABARTI: Help set me straight though, because I thought I heard you say that the ethical considerations that were primary in the 2020 document are now viewed differently by the current Pentagon?
ROFF: The new AI strategy just came out a couple weeks ago.
I would say that it’s unclear to me whether just the word equitable has become a hot-button word, and so it was eschewed in that strategy document. But there’s still discussion within the Pentagon about responsible AI, or RAI. We don’t want to be equivocal about that. That said, there is still pressure to move faster and to innovate more, to deploy prototypes and demos more quickly. And it’s hard to do that while also asking, are we doing all of the things we could to make sure that this is the best piece of equipment, or are we just going fast for the sake of going fast?
Part III
CHAKRABARTI: I just want to listen to a couple of examples, different points of view on how AI can and should be used by the U.S. military. So to remind folks, at the top: Anthropic’s no-go lines for the use of Claude and their other AI technologies by the Pentagon were two things. One, weapon systems that could self-deploy without human oversight.
And two, the use of AI, or their products, for the mass surveillance of all Americans. Specifically, Anthropic CEO Dario Amodei talked about this on one of the New York Times podcasts back in mid-February. And here’s what he discussed regarding how AI could be used by authoritarian states, or basically any government, to surveil citizens.
AMODEI: Think about the Fourth Amendment. It is not illegal to put cameras around everywhere in public space and record every conversation. It’s a public space. You don’t have a right to privacy in a public space. But today the government couldn’t record that all and make sense of it, right?
With AI, the ability to transcribe speech, to look through it, correlate it all. You could say, oh, this person is a member of the opposition, right? This person is expressing this view. And make a map of all hundred million. And so are you going to make a mockery of the Fourth Amendment by the technology finding, kind of, technical ways around it?
CHAKRABARTI: So here’s another view. This is from Palantir CEO Alex Karp, and he thinks that American citizens have already accepted a certain level of surveillance and he’s downplayed concerns over what’s called this pattern of life surveillance because it’s enacted by corporations and not by the government.
KARP: The primary evidence for a surveillance state in the west is not government on consumer.
It’s companies knowing every single action you take at all times. And we walk around with surveillance devices called electronic devices, and every single thing we do is monitored. Not, I think, primarily so people can eviscerate or have an understanding of, am I shagging with too many people on the side and lying to my partner or lying by omission, but because they want to sell us, like, cornflakes.
CHAKRABARTI: So that’s Palantir CEO Alex Karp. Okay, Heather, so let’s take this back to the Defense Department. Now, the law, as far as I read it, says that the Department of Defense cannot engage in mass surveillance of Americans. So a lot of people say, nothing to see here. There’s no worry.
They couldn’t even use the AI to do that if they wanted. But other people, obviously including Anthropic’s CEO, say: then why do you want Claude to be able to do that for you? Why is this a line where you say, we want Claude AI, or Anthropic’s AI, to be unleashed in order to be able to do that for the Defense Department?
How do you read this, Heather?
ROFF: I think there are a couple of things here, and I don’t know if they’re strawman arguments or not. As you say, Title 10 of the U.S. Code is about governing military behaviors, right? What the military is allowed to do. Title 50 is about the intelligence community and what the intelligence community is allowed to do, when it comes to the surveillance of American citizens versus non-citizens, and then people who are outside of America, right?
So if we’re not even talking about surveillance within the borders of America, but outside of it, right? There’s no law that says we can’t surveil outside of it. Then it becomes a question of, is the DOD using these to surveil citizens inside the country? And you go, that’s not really their remit.
They’re about fighting wars elsewhere. Now, we could talk about the use of National Guard troops within U.S. cities, but that’s a whole other, above-my-pay-grade question that I’m not comfortable getting into the weeds on. But you can say, here’s the military’s kit, and it’s designed to do what? AI to do what? Is it AI to suck up a bunch of data over here and over here, and then correlate patterns of life, or whatever you’re observing? Then you have to figure out, where am I using it? When is it appropriate? Who’s using it? Now, it should also be said that Palantir sells its kit to the Department of Homeland Security.
So the Department of Homeland Security does surveil American citizens.
CHAKRABARTI: I was gonna say that was Karp’s misdirection there, right? It’s only Kellogg’s that’s watching how, what kind of cereal you eat. No, clearly not.
ROFF: No. And I think the other thing to know about Palantir’s products, right, is that what they do is create this giant ontology, right? They call it a conceptual semantic object model, right? Which basically takes all of the different data points, right? Who’s doing what, at what time and what location, what are they doing, what actions? And they create what they call an object, right?
So an object becomes this conceptual thing for people, places, and things, and then that object becomes a single point of convergence for all of the data around whatever is related to that thing, and that’s how they’re able to build these pictures of you, right? Oh, you spent so much time on Wayfair looking at deck chairs, and then you went to, I’m over in my home and so my phone is tracking, oh, it looks like you’re going to work today, you’re in the car, right? There are all sorts of these devices that are gathering data, and it can create a very robust picture of your behavior. And that’s this semantic object model, right?
That Palantir uses. And so you can use that model for anything, right? And so if we’re using it in a military campaign or an operation, and we’re using it abroad, right? We’re trying to get as much information as we can from what’s publicly available, the open sources, open-source intelligence.
You’re trying to get all sorts of other things. You’ve got all sorts of satellite data, you’ve got anything you can to have a better information picture to undertake your operation. Now, if it’s in the United States, then Dario has to have a conversation with Alex Karp about them selling the product to DHS.
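To give a rough software analogue of the “object” Roff is describing, here is a minimal, hypothetical Python sketch: observations from different sources converge on a single entity record that can then be summarized. This illustrates only the general data-structure idea of a single point of convergence; it is not Palantir’s actual ontology, data model, or API, and every name in it is invented.

```python
# Illustrative only: a toy "object" acting as a single point of convergence
# for observations about one entity (a person, place, or thing). Field and
# source names are hypothetical and reflect no vendor's real data model.
from dataclasses import dataclass, field
from datetime import datetime
from collections import Counter

@dataclass
class Observation:
    source: str          # e.g. "phone-location", "web-activity" (hypothetical)
    timestamp: datetime
    kind: str            # category of activity, e.g. "commuting", "shopping"
    detail: str = ""

@dataclass
class EntityObject:
    entity_id: str
    observations: list = field(default_factory=list)

    def add(self, obs: Observation) -> None:
        self.observations.append(obs)

    def pattern_of_life(self) -> Counter:
        """Summarize how often each kind of activity has been observed."""
        return Counter(obs.kind for obs in self.observations)

if __name__ == "__main__":
    person = EntityObject("entity-042")
    person.add(Observation("phone-location", datetime(2026, 2, 2, 8, 30), "commuting"))
    person.add(Observation("web-activity", datetime(2026, 2, 2, 21, 0), "shopping", "deck chairs"))
    person.add(Observation("phone-location", datetime(2026, 2, 3, 8, 35), "commuting"))
    print(person.pattern_of_life())  # Counter({'commuting': 2, 'shopping': 1})
```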
CHAKRABARTI: So can I just, at the risk of oversimplifying things here, Heather. I think a lot of people are like, hang on, isn’t this actually simple? If the law says that the United States military cannot surveil Americans inside the United States, and that’s what Anthropic also says,
that it does not want its tools to be used for that, then, full stop, right? Why would the Pentagon say, no, this is a restriction that we cannot accept?
ROFF: Again, I hate to be the bearer of that’s above my pay grade, right? This is a conversation that is going to have to happen at the Supreme Court and in Congress and the White House as well as with the Secretary of Defense.
The law says one thing and then how things are utilized ex lege outside of the law is another thing.
CHAKRABARTI: So then let’s turn to the other one, the other issue. Because I think maybe that’s, in a sense, more concrete, if not actually more concrete, but at the same time complex.
More complex. And that is that Anthropic has said no, it does not want its tools to be used to create weapon systems that could basically work entirely without human approval or human oversight. Now, on this one, I have heard from a number of people who are thoughtful military analysts saying, we need to build structures that would prevent accidental firings or the starting of a nuclear war or whatnot.
But at the same time, we cannot say that the military should not have such a capability. Because everyone else is going to do it. China’s going to do it. So putting the shackles on the Pentagon here could actually make us weaker overall.
ROFF: Yeah, I’ve always thought that’s a red herring of an argument.
Oh, we want to be responsible, but not when it’s hard. If it’s hard, then the rule is that there are no rules. And so I would say, look, when it comes to autonomous weapons, that in and of itself is a definitional thing, right?
I don’t know any military that’s going to say, what I really want to create is a system that launches itself, and I have no idea where it is and what it’s doing. That’s, I think, outside the bounds of what any reasonable military in the world would ever want, right? Now, do they want to create systems that have autonomous behaviors, that can operate in denied environments where they don’t have access to GPS, and can navigate on their own to a particular location and then select and engage a target?
Yeah. Some militaries do want to do that, right? Ours included, and our military does not preclude that from happening, right? The DOD policy, which is DOD Directive 3000.09, states that for autonomous and semi-autonomous weapons, they have to have certain levels of approval when they’re being tested and validated and sign offs for acquisition and things of that nature.
But it does not preclude us from going and generating them and procuring them and using and deploying them.
CHAKRABARTI: Can you give us a quick example? I don’t know if there are autonomous systems within things like submarines.
ROFF: Yeah, putting a definition between automated and autonomous, sometimes it gets a little bit gray.
CHAKRABARTI: Ah, okay.
ROFF: So you can say something like a landmine, you could say it’s autonomous. It selects and engages a target on its own, right? And the selection is just how much weight goes on that pressure plate. Some people would say, no, that’s not autonomous, it’s automated, right?
It’s automatic. We use automatic target recognition on a large portion of our missiles, our precision-guided munitions, right? They do terrain mapping. They can maneuver themselves in time and space by themselves, right? A human is not guiding that missile once it finds its target. You can look at anti-radar missiles, right? They can find a radar signature and lock onto that signature. We use missiles for naval, maritime uses that can also do this. So there’s a large number of systems that we use that are pretty sophisticated, right?
And you could say that’s autonomous, or that’s not autonomous, based on your definition. For the DOD, the definition is a weapon system that can select and engage a target without intervention by a human operator, right? So we have fire-and-forget weapons. Israel has them. There’s the large class of Shahed drones that Iran’s been using.
There are FPV drones in Ukraine, right? There are all sorts of these different types of systems. So you say, look, I’m going to launch it, and its ATR, its automatic target recognition software, is going to figure out when it’s gotten to that place, and it’s going to say, yep, that’s the thing that I was trained to do, and these are the sensor inputs that I have.
And it says, to this confidence level, that’s the thing I’m looking at, and therefore I engage. That’s an autonomous behavior.
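As a purely conceptual illustration of the line Roff draws, and nothing like real weapons software, the sketch below shows where “intervention by a human operator” sits in code terms: a recognizer reports a confidence score, and the decision gate either requires explicit human approval (semi-autonomous) or is gated by the confidence threshold alone (autonomous). The names, threshold, and structure are all assumptions made up for this example.

```python
# Illustrative only: a toy decision gate contrasting a human-in-the-loop
# (semi-autonomous) configuration with a threshold-only (autonomous) one.
# Purely conceptual; no real system is structured this way.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # what the recognizer believes it is seeing
    confidence: float   # 0.0 to 1.0

def may_engage(detection: Detection,
               human_approved: bool | None,
               require_human: bool = True,
               threshold: float = 0.95) -> bool:
    """Return True only if engagement is permitted under the chosen policy."""
    if detection.confidence < threshold:
        return False                   # never act below the confidence bar
    if require_human:
        return human_approved is True  # semi-autonomous: a person must say yes
    return True                        # autonomous: the threshold alone decides

if __name__ == "__main__":
    det = Detection(label="radar-emitter", confidence=0.97)
    print(may_engage(det, human_approved=None))                        # False: no human sign-off
    print(may_engage(det, human_approved=True))                        # True: human approved
    print(may_engage(det, human_approved=None, require_human=False))   # True: autonomous mode
```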
CHAKRABARTI: And this already exists, you’re saying?
ROFF: Correct.
CHAKRABARTI: And the human oversight comes from what? Target selection to begin with?
ROFF: It comes from, it comes down the whole chain.
When you’re thinking, these weapons don’t just design themselves de novo, right? You have to figure out like, what’s the delivery platform that the weapon system is on, right? Is it a missile? Is it a drone? Is it a boat? Is it a torpedo?
What’s the delivery platform and then what’s the targeting software, and then what’s the data that software’s using? And we can talk about it as AI, or we can talk about it as software. But what is the data that it’s trained on so that it knows what it’s looking for.
CHAKRABARTI: So it sounds like you’re saying, though, and I don’t know if I’m hearing this incorrectly, that Anthropic’s concerns then would be moot if there are, as you’re describing, human —
ROFF: I think where they’re going is they’re saying fully autonomous weapon systems. And so, okay then, are you okay with semi-autonomous weapon systems, and where is the line between semi-autonomous and fully autonomous?
And that’s, to me, where it’s really unclear. Okay, Dario, you don’t want fully autonomous weapon systems. But if your definition of a fully autonomous weapon system is a system that just launches itself and decides to go to war one day, I don’t know any military system that does that. Right?
Like when you engage in an operation, right? There’s operational planning. There’s targeteers, there’s judge advocate generals, right? Do things go wrong? Yes, they do. But at the same time, you are trying to figure out what is the thing that I want to get? What is the military objective that I’m seeking?
And then what is the force that I’m going to apply to get to that military objective? To deny, degrade, destroy, whatever. What is the likely, probable collateral damage? Are there civilian casualties? Are there civilian objects that are going to be destroyed? All of these considerations are happening, right?
And so when you’re saying, if it’s fully autonomous, is it doing all of those things with nobody involved? That seems to me a little bit not how militaries work. And we can say that there are times when we might have autonomous systems that are out for extended periods of time.
Now we have to think about policies and training and tactics and procedures, about how long is too long. If this is a submersible system that’s out for, is it a day, a week, a month? What are the types of training and testing and validation we do to make sure that system is doing the behaviors we want it to, and not learning while it’s deployed and getting skewed, like, oh, it learned some new tricks.
The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.