Silicon Valley AI companies have a new best friend: the US Department of Defense.
Major companies developing generative AI technologies have established, deepened, or begun cultivating relationships with the military in recent months, in some cases even revising or making exceptions to internal policies to remove obstacles and restrictions on defense work.
Multiple DoD agencies, from the Air Force to various intelligence groups, are actively testing use cases for AI models and tools from Meta, Google, OpenAI, Anthropic, and Mistral, as well as technologies from startups like Gladstone AI and Scale AI, several people with knowledge of the tests told Fortune.
This is a remarkable shift for internet companies that, until recently, treated defense work as taboo, if not outright verboten. But with the cost of developing and operating generative AI services already running into the hundreds of billions of dollars and showing no signs of slowing down, AI companies are feeling the pressure to show some return on their massive investments. The DoD, with its virtually unlimited budget and long-standing interest in cutting-edge technology, suddenly doesn’t look so bad.
Although landing a defense contract can be tricky, with multiple levels of certification to obtain and strict compliance standards to follow, “the rewards are significant” and the money can keep coming for years, said Erica Brescia, a managing partner at Redpoint Ventures who focuses on AI investing.
“DoD contracts provide substantial annual contract values, or ACVs, and create long-term opportunities for growth and market defensibility,” Brescia said.
Brescia added that taking on defense work has recently become more socially acceptable in tech circles. Not only are business leaders interested in the hundreds of millions of dollars in contracts that defense-focused startups like Palantir and Anduril are raking in, but the “changing political landscape” has made “pursuing defense as a primary market segment an increasingly attractive option” for businesses ready to handle longer sales cycles and manage complex deployments.
The embrace of military work may indeed suit the political moment, with a business-friendly Trump administration set to take office in January and a cohort of hawkish Silicon Valley insiders, led by “first buddy” Elon Musk, within the president-elect’s circle. Musk’s mandate in his official role as co-head of the new Department of Government Efficiency is to significantly cut federal spending. But few expect serious cuts to the Pentagon’s budget, particularly for AI, at a time when the United States and China are vying for AI supremacy.
For now, much of the military’s work with generative AI appears to consist of small-scale projects and testing, but the potential for generative AI to become a fundamental part of the military’s computing future means the relationship between Silicon Valley and the Pentagon could be huge.
Defense uses of AI do not necessarily involve drone warfare or explosions. Much of the AI-specific work within the DoD consists of the kind of mundane tasks any office would happily hand over to capable technology. Labeling, collecting, and sorting data are common uses of AI within the department, as is the use of chatbots like ChatGPT and Claude, which most people can access online but which require additional security when used by the DoD. Large language models could also prove useful for analyzing and searching classified information, supporting the government’s cybersecurity work, and providing better computer vision and autonomy for things like robotic tools, drones, and tanks.
Some tech companies are specifically trying to avoid involvement in DoD projects that could be used in the “kill chain,” a military term for the structure of an attack on an enemy, a former DoD official told Fortune of companies winning government contracts. However, these concerns sometimes dissipate as millions or even billions of dollars become available. “Once you get in, you want to expand,” the person added.
A changing set of rules
Some tech companies, like Palantir and Anduril, have for years made defense uses and contracts the backbone of their entire business.
Within Silicon Valley’s established internet companies and some of its younger AI startups, however, military work was long avoided as companies sought to recruit and retain left-leaning engineers. When Google acquired DeepMind in 2014, it reportedly committed to never using the startup’s technology for military purposes. And in 2018, Alphabet CEO Sundar Pichai faced an internal revolt over Google’s participation in Project Maven, a Pentagon drone warfare effort. While Google insisted its technology was used only for “non-offensive purposes,” such as analyzing drone video footage, the employee outcry was loud enough that Pichai cut short a vacation to reassure staff, and he ultimately promised that Google would not develop its AI for weapons.

Chip Somodevilla/Getty Images
Google’s “AI Principles” now state that the company “will not pursue…weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” nor “surveillance violating internationally accepted norms.” But this policy leaves plenty of wiggle room, and the company has explicitly said it will not completely forgo working with the military.
The story is similar among other major AI players. Meta initially prohibited its Llama models from being used for military work, as did OpenAI, while Anthropic originally built its Claude model to be “harmless.” Today, all three have announced that such work with their models is acceptable, and they are actively pursuing such uses. Sam Altman co-founded OpenAI on the principle of developing AI to “benefit all of humanity” and once said there were things he “would never do with the Department of Defense”; the company has since deleted any commitment to such restrictions from its usage policy.
A venture capitalist focused on investing in AI companies pointed to venture capital firm Andreessen Horowitz’s “American Dynamism” essay from two years ago as the moment when the avoidance of defense contracts began to shift. The essay explicitly argued that technology companies working in defense were advancing the U.S. national interest.
“Leaders started thinking, ‘Oh, okay, defending America, working with the military, that’s actually good,'” the VC said.
Post-pandemic mass layoffs at tech companies have also had a chilling effect on employee protests, giving tech employers more freedom to pursue military activities.
The DoD has spent nearly a billion dollars on official contracts with AI companies over the past two years, according to a Fortune analysis. Although the details of these contracts are vague, they have been awarded to companies like Morsecorp, which specializes in autonomous vehicle technology, and a subsidiary of ASGN, a management and consulting company, to develop new AI prototypes.
Not all of these contracts are made public. But any government contract awarded to a major AI company would likely represent tens of millions, if not hundreds of millions or even billions, of dollars in revenue for those companies — and for their biggest backers.
The largest investor in OpenAI is Microsoft, which recently said its Azure cloud service had been approved for DoD agencies to use OpenAI’s AI models with information at lower security clearance levels — approval that took years of investment in specialized infrastructure to secure. Likewise, Anthropic’s largest backer is Amazon. Amazon Web Services is perhaps the DoD’s largest cloud provider, with tens of billions of dollars in government contracts. For both companies, the ability to add new AI services and tools to their DoD offerings could prove valuable. The same goes for a company like Google, which has also secured valuable government contracts, and its Gemini AI model.
“They’re basically building the plane while they’re flying it, so it’s a massive land grab,” an AI executive told Fortune, referring to the growing number of tech companies suddenly eager to put their AI tools and models in the hands of the DoD.
“Critical” technology for the DoD
The Department of Defense has designated AI as one of its 14 “critical technology areas” because it holds “extreme promise” and is “imperative for dominating future conflicts.”
About a year ago, the DoD officially created the Office of Strategic Capital, a new federal credit program in partnership with the Small Business Administration, to ensure that critical technologies like AI receive funding in the form of direct loans. For fiscal year 2024, the OSC made $984 million available, to be distributed to 10 companies focused on areas such as autonomous robotics and microelectronics manufacturing, which typically includes AI chip production. The DoD is investing another roughly $700 million in chip manufacturing and the development of domestic semiconductor production, essential for creating AI chips.
Despite the billions in investment, with no signs of slowing within Defense, the AI executive admitted that most current AI products are simply “not very useful yet,” either for defense or for the general public. But applying them at scale in a government or defense environment could make them more useful, faster. “The military also created the internet.” ARPANET, the key technological foundation of the modern internet, was built within the DoD, as were now-mainstream technologies such as radar and GPS.
Even if a department like Defense wants useful products, it also knows its budget increases year after year, reaching just under $1 trillion in 2024. Roughly half of that budget is awarded to companies that contract with the department.
“Honestly, yes, they really like to spend money,” the executive said.
Additional reporting by Jeremy Kahn and Sharon Goldman.
Are you an employee of a technology company or someone with an insight or tip to share? Contact Kali Hays securely via Signal at +1-949-280-0267 or at kali.hays@fortune.com.