If you’re worried that AI could hasten our impending doom, you won’t like the news that ChatGPT technology will power swarms of Anduril’s killer drones. On Wednesday, Anduril announced a partnership with America’s most renowned AI company.
Many people will also be quick to observe OpenAI’s rather abrupt pivot to the dark side. Earlier this year, a series of departures from the company led us to question its commitment to developing safe AI. OpenAI then decided to abandon its nonprofit roots and pursue profits like most other AI companies.
Partnering with a defense contractor would make any AI skeptic even more concerned about OpenAI’s once-noble intentions regarding the future of artificial intelligence.
I must say that I share some of these views. I don’t think AI will bring about the end of the world anytime soon, but I am aware of the risks that different forms of AI pose to society. Giving AI access to weapons of any kind is one such risk. Things can go wrong, especially with emerging technologies.
However, there is also a measure of relief here. Make no mistake, some AI companies will get into the military game, whether it’s OpenAI, Google, Anthropic, or another entity. It was probably bound to happen sooner or later. The good thing about hearing that Western AI companies are doing it is exactly that: we hear about it.
I’m sure less democratic countries working on their own AI-powered robot armies will conduct similar AI experiments if they haven’t already. And we won’t necessarily find out it’s happening until after the fact, if at all. Not to mention, the war between Russia and Ukraine proved just how large and deadly drone warfare can be, and that was before ChatGPT went viral.
AI involvement in war is inevitable, no matter how much we’d like to pretend otherwise. There’s probably already a race to use AI to improve various aspects of the military before adversaries can field similar systems.
Before we start worrying about the AI wars of the future, we’ll just have to wait and see what Anduril and OpenAI share next, assuming they’re willing to share anything about their joint drone AI work.
So far, the announcement uses exactly the type of language one would expect from such a partnership. The drone maker says the strategic partnership with OpenAI will help it “responsibly develop and deploy advanced artificial intelligence (AI) solutions for national security missions.”
How will it work? Well, Anduril gives us a basic idea of what it will do with OpenAI’s ChatGPT-like technology:
The strategic partnership between Anduril and OpenAI will focus on improving the nation’s Counter-Unmanned Aircraft Systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real time. Under the new initiative, Anduril and OpenAI will explore how to leverage cutting-edge AI models to quickly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness. These models, which will be trained on Anduril’s cutting-edge CUAS threat and operations data library, will help protect U.S. and allied military personnel and ensure mission success.
Anduril also cites the “accelerating race between the United States and China to become the world leader in AI development” as a reason to seek help from organizations like OpenAI.
Sam Altman, CEO of ChatGPT maker OpenAI, offered a reassuring statement about his company’s technology being used in the military sector:
OpenAI develops AI to benefit the masses and supports U.S.-led efforts to ensure the technology respects democratic values. Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel and help the national security community understand and use this technology responsibly to keep our citizens safe and free.
Then again, Altman presided over the departure of many high-ranking OpenAI researchers who were responsible for keeping the company’s AI safe. You should keep that in mind whenever he talks about safe AI, especially safe AI for the military.
As for the killer robots that OpenAI models will power, Gizmodo notes that most of them are defensive drones developed to protect American military personnel and vehicles.
However, Anduril also makes a kamikaze drone called Bolt-M (top photo) that boasts “lethal precision firepower” and is capable of “devastating effects against static or moving ground targets.” That drone is powered by the company’s own AI. In the future, OpenAI’s technology could also play a role in this type of offensive drone.
The video below shows Bolt-M in action in various roles, including hitting a target: