Anthropic teams up with Palantir and AWS to sell AI to defense customers

Anthropic on Thursday said it is teaming up with data analytics firm Palantir and Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to Anthropic's Claude family of AI models.

The news comes as a growing number of AI vendors look to ink deals with U.S. defense customers for strategic and fiscal reasons. Meta recently revealed that it is making its Llama models available to defense partners, while OpenAI is seeking to establish a closer relationship with the U.S. Defense Department.

Anthropic's head of sales, Kate Earle Jensen, said the company's collaboration with Palantir and AWS will "operationalize the use of Claude" within Palantir's platform, with AWS providing the hosting. Claude became available on Palantir's platform earlier this month and can now be used within Palantir's defense-accredited environment, which meets the Defense Department's Impact Level 6 (IL6) standard.

The Defense Department's IL6 is reserved for systems containing data deemed critical to national security and requiring "maximum protection" against unauthorized access and tampering. Information in IL6 systems can be classified up to the "secret" level, one step below top secret.

"We're proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations," Jensen said. "Access to Claude within Palantir on AWS will equip U.S. defense and intelligence organizations with powerful AI tools that can rapidly process and analyze vast amounts of complex data. This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource intensive tasks and boost operational efficiency across departments."

This summer, Anthropic brought select Claude models to AWS' GovCloud, the AWS service designed for U.S. government cloud workloads, signaling its ambition to expand its public-sector client base.

Anthropic has positioned itself as a more safety-conscious vendor than OpenAI. But the company's terms of service allow its products to be used for tasks like "legally authorized foreign intelligence analysis," "identifying covert influence or sabotage campaigns," and "providing warning in advance of potential military activities."

"[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency's willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to "substantially increase the risk of catastrophic misuse," show "low-level autonomous capabilities," or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.