In 2017, Elon Musk told the world he was certain that artificial intelligence would lead us to World War III. His viral comments evoked visions of Terminators’ feet crushing piles of skulls and robotic airplanes hunting down humans. Now, however, it appears Musk has changed his mind.
In a series of recent posts on X, Musk has become a vocal advocate for replacing manned jets like the F-35 Lightning II with drone swarms controlled by AI hive minds. “It’s a shit design,” he wrote about the F-35, after posting a video of drone swarms blanketing the sky. Musk’s comments came after declassified testing reports from the Pentagon showed that the F-35 has faced persistent manufacturing and performance issues.
The F-35 design was broken at the requirements level, because it was required to be too many things to too many people.
This made it an expensive & complex jack of all trades, master of none. Success was never in the set of possible outcomes.
And manned fighter jets are… https://t.co/t6EYLWNegI
— Elon Musk (@elonmusk) November 25, 2024
In follow-up comments, Musk called the F-35 an absolute mess from the beginning and claimed that “manned fighter jets are obsolete in the age of drones anyway. Will just get pilots killed,” before adding, in typical Musk form, that “fighter jets do have the advantage of helping Air Force officers get laid. Drones are much less effective in this regard.”
Musk’s idea is not totally wrong. It makes sense from an economic perspective (also for him personally, as his companies are heavily invested in AI). But the notion that the U.S. should replace manned jets with drones is reckless no matter how you implement it.
As Vincent Boulanin—director of the Governance of Artificial Intelligence Programme at Stockholm International Peace Research Institute (SIPRI)—told me during a video interview: “It’s wise to take Elon Musk’s predictions with a grain of salt.”
Currently, no AI drone platform has capabilities comparable to those of manned military airplanes (and none will for a long time, as the struggles of Musk’s own full self-driving effort show). Then there’s the fact that drone hives, designed with targeting and killing objectives to win a battle, could make the U.S. guilty of war crimes if those objectives end up killing civilians.
The consequences of giving an uncontrollable autonomous intelligence the power to manage lethal weaponry are clear. Without human oversight—and we will cross that bridge eventually in the name of efficiency and military superiority—unmanned killing drones will not result in a happy ending for anyone on this planet.
Musk now heads DOGE (the Department of Government Efficiency), which is not, let us remember, an official government department (raising questions about its powers and how it will operate), and he has the ear of President-elect Trump. His musings on X, then, may not be just opinionated missives: they have a real chance of influencing policy down the line. And yet the political and technological reality is much more complicated than Musk wants his clueless X fanboys to believe. There are serious limitations to Musk’s autonomous weapons dreams. The design of such systems is constrained by hard operational, legal, and ethical walls. For the U.S., breaking through them would mean basically going rogue in the international arena.
What Musk Gets Right
Musk believes that drone swarms are superior to manned jets in future aerial combat because they are much cheaper and more effective. He argues that they are expendable and can execute maneuvers that no human pilot could possibly handle, withstanding higher G-forces and performing evasive actions that would incapacitate a human. They can operate tirelessly—patrolling, surveilling, and attacking—without needing breaks or risking pilot fatigue. The cost per unit is a fraction of that of an F-35, making losses far more acceptable. He’s right about all of that.
In Ukraine, we’ve seen this kind of drone warfare in action, and it has proved highly effective against Russian armor and air defenses. Musk’s argument for drones is rooted in efficiency: a fleet of drones can be produced, deployed, and destroyed at far lower cost than training and maintaining elite pilots and multimillion-dollar jets.
Those jets, manufactured by Lockheed Martin, have come under scrutiny lately. Lockheed Martin argues that its airplanes are best in class, but objectively, the development and manufacturing costs of the F-35 and the F-22 (and any other modern manned fighter jet or bomber, for that matter) are massive compared to those of drone swarms. They are extremely expensive machines that require extensive testing and development, partly because they are piloted by humans, and partly because of their advanced avionics, sensors, and the many weapons they carry. Drones, even the more expensive ones like the Predator, are vastly cheaper.
The F-35 has an operating cost of about $42,000 per flight hour, second only to the F-22’s $85,000, the highest of any combat jet in existence. According to the U.S. Government Accountability Office (GAO), the estimated sustainment cost of the F-35 fleet keeps climbing, from $1.1 trillion in 2018 to $1.58 trillion in 2023. In total, the F-35 program is projected to cost about $2 trillion over its lifetime.
The GAO says the Department of Defense plans to fly the F-35 less frequently than originally estimated “partly because of reliability issues with the aircraft.” The F-35 has been marred by problems that have greatly affected its operational record: it has been grounded several times since its introduction over technical issues with engines, fuel lines, and other systems, seriously hurting its combat readiness. Lawmakers are fed up with these problems: Congress just pushed for a preliminary version of the 2025 National Defense Authorization Act (NDAA) that proposes limiting new F-35 purchases to 48 units until the U.S. Department of Defense resolves the “ongoing challenges” to aircraft availability and mission readiness rates.
Despite that, the F-35 has become the backbone of U.S. air superiority. There are about 630 F-35s, with plans to buy about 1,800 more through 2044.
What Musk Gets Wrong
While Musk is right that the F-35 is a mess and that drones could be much more effective, he chooses to ignore a huge drawback to his idea: AI drone swarms would have to operate in real-world war scenarios in which they have no connection to human operators. In those situations, they would need to make life-or-death decisions on their own, acquiring targets and destroying them without human intervention.
Ukraine has successfully used inexpensive flying critters to destroy extremely expensive tanks, air defense systems, helicopters, airplanes, infantry units, and anything that moves. These human-controlled drones are built as “suicide” robots, self-destructing to hit a target. When they are designed to deliver bombs instead, they are attritable, meaning they are cheap enough that losing one doesn’t much matter.
But the advantage of autonomy in a contested anti-access/area denial (A2/AD) environment—where defending forces try to keep enemies out—is also where the danger lies. In these areas, opposing forces can jam the links between drones and their human operators, as well as satellite signals, rendering the drones useless or, even worse, turning them against their own masters.
Ukraine recently demonstrated this by spoofing the GPS signals of 88 attacking Russian drones, turning them around in a “boomerang attack” against Russia and Belarus. But if a drone equipped with autonomous AI loses its connection to its human pilot, the machine takes full control of its navigation and weapons. The AI brain can pilot the drone, acquire targets, and kill entirely on its own, without any real-time human oversight.
Sensors and autonomous AI will make mistakes. A drone swarm tasked with taking out an enemy air defense unit, for example, must evaluate its target independently, which could lead to erroneous identification of civilian vehicles or noncombatant structures. An AI drone could mistake a school bus for a missile truck, or decide that destroying a building full of innocent people is an acceptable price for taking out an anti-aircraft rocket launcher.
Last year, I spoke with Neil Davison, a former scientific and policy advisor at the International Committee of the Red Cross, who expressed concern over these humanitarian issues, pointing out that “it is the weapon itself that triggers an attack against an object or a person. And that is the key humanitarian problem.”
With autonomy, human accountability in decisions of life and death is lost. These weapons, designed to be unpredictable in order to counter sophisticated defenses, cannot be trusted to make decisions that align with the ethical standards of warfare. Davison emphasized that such unpredictable autonomous systems must be banned, particularly those that operate without human control and can change their functionality during deployment.
Paul Scharre, executive vice president and director of studies at the Center for a New American Security, says that algorithmic calculations aimed at achieving tactical objectives without regard for proportionality or broader strategic consequences can also quickly escalate a conflict. “The rhythm of the action of combat eclipses the capacity of humans to respond,” he told me during a conversation for the production of a documentary about the dangers of AI-controlled weapon systems. This can force a reliance on autonomous systems that could make preemptive-strike decisions without human input. It’s something that both China and the United States are actively exploring.
In fact, Ukraine is already putting 4,000 AI-powered drones on the battlefield. Called the HX-2, these next-generation drones can operate and make combat decisions on their own, which makes them theoretically immune to A2/AD countermeasures. Helsing, the drone’s manufacturer, claims on its website that the HX-2 will require a human to make “critical decisions,” but it’s not hard to imagine Ukraine or anyone else overriding those controls. “We believe in the principle that a human needs to be in or on the loop for all critical decisions; and we know that enforcing this principle requires conviction and technological leadership, especially in the face of adversaries taking shortcuts,” the company says.
But as Boulanin tells me, “It’s difficult for Ukraine to think of high level principles about what constitutes the responsible use of autonomy right now, fighting the war.” He believes an agreement can come only once the war ends. “There’s value in agreeing in peacetime [on] some general principle that will, even if [it] can be broken and not respected,” he points out. “It’s harder for conflicting parties to agree on the rules of engagement.”
Scharre told me that China and the U.S. are at the forefront of both developing AI capabilities and creating weapons such as autonomous drone swarms. Their current designs, however, call for these swarms to be controlled by human pilots flying airplanes like the American F-35 and the Chinese J-20. In this arrangement, the drones act like any other weapon system for these airplanes. The F-35 serves as a mothership, operating alongside drones or a swarm of different systems—some manned, some unmanned. “This kind of mixed platform approach is key, especially in situations where human judgment is irreplaceable, such as interpreting complex situations or communicating with allied forces,” Boulanin adds. The human is always in control, even in highly contested A2/AD areas. The AI does the flying and the attacking, yes, but the pilot is the one who, in theory, approves the kill.
A (Temporary) Glimmer of Hope
Boulanin says that “the idea of a straightforward, linear progression toward full autonomy doesn’t make sense, at least in the short term.” Autonomy can be useful in some scenarios, such as high-intensity conflicts where speed and resilience against jamming are critical. But that applies only to very specific situations, like drones fighting military ships on the open sea. In most real-world situations, human oversight is necessary to exercise judgment, especially when civilians are involved, and to avoid violations of international law.
Operationally, going fully autonomous may not make sense for the Pentagon. “Autonomy isn’t universally useful,” Boulanin points out. It’s about finding the right mix between human and machine, which is not easy to achieve, he says. Developing good collaboration between humans and machines is the challenging part, much harder than achieving full autonomy. Machines can communicate with each other using established protocols, but ensuring that human-machine interaction is effective, reliable, and safe is much more complicated—something the commercial aircraft industry has struggled with for years.
International law sets limits on this technology, which is another barrier to Musk’s fantasies. Even if the U.S. wanted to make full AI autonomy a reality, doing so would mean breaking with the international community. The United Nations is now calling for a new worldwide agreement on AI-controlled weaponry by 2026, one that will need to happen in a peacetime context, once the war in Ukraine ends. Boulanin says experts are currently working on a “two-tiered” regulatory approach that combines outright prohibitions with requirements for designing and deploying autonomous systems. However, whether the outcome will be politically or legally binding is uncertain, given geopolitical tensions and mutual distrust, he adds.
Catherine Connolly, director of automated decision research at the NGO Stop Killer Robots, told me back in 2023 that “all autonomous weapons systems must be used with meaningful human control,” and that those that cannot be used in such a manner should be banned. That position still stands and, so far, it has been supported by the U.S.
Judging by Musk’s comments—and the chilling effect they are having on companies like Lockheed Martin, whose stock dropped 3% right after his tweets—the U.S. may be moving fast in the opposite direction. But the sad fact is that, even without Musk’s influence over the incoming administration, this ominous path might be inevitable. Take China’s refusal to sign the agreement at the 2024 Responsible AI in the Military Domain (REAIM) Summit in Seoul: endorsed by 61 countries, the agreement urged a commitment to keep human control over actions related to the use of nuclear weapons.
China’s abstention suggests it might not be interested in placing limits on AI’s role in warfare, particularly in autonomous, potentially lethal systems. Boulanin points out that, after bilateral talks, both the U.S. and China acknowledged that putting AI in nuclear command-and-control systems may be a very bad idea. But, he says, those good intentions weren’t legally binding in any way. “It’s important to keep in mind the dynamics between the U.S. and China when considering what China is willing to commit to in terms of high-level regulation and AI control,” he told me.
If China goes deeper into integrating autonomous AI control into its weapons systems—as it has been aggressively doing, declaring AI a core component of its military since 2019—there’s no doubt that the U.S. will follow suit. And if the U.S. takes Musk’s path, China will likely push its own AI plans even further. Neither military can afford to fall behind. It is already a new arms race. “The major powers are cautious and unlikely to support binding commitments, fearing that others may not adhere to the treaty,” Boulanin says. Whoever has the AI upper hand will have an insurmountable strategic and tactical advantage on the battlefield. And leaving humans out of the equation will be required for maximum speed—and for winning a war.
The trajectory is worrisome. It doesn’t matter that scientists and thinkers strongly advocate against any adoption of AI for weapons. In a future dominated by AI-driven military systems, where decision-making is transferred from humans to machines, the potential for catastrophic outcomes grows.
Once we cross the threshold at which machines independently decide who lives and who dies, it will be hard, if not impossible, to turn back. Musk may see drone swarms as the inevitable, ultraefficient future of warfare, but their cheaper manufacturing and operation will exact an infinitely higher cost from humanity, one that may leave only his fabled Mars colonists standing alone in the cosmos.