The LILA Gazette dives into the controversy around AI-powered weapons to get to the cold, hard facts.

By Max Evenson – 7th grade.
The idea of AI weapons is a controversial one, to say the least. Called “slaughterbots” by some, artificial intelligence-powered weapons are a new and emerging field in AI. Some of the professed benefits are potentially increasing the efficiency of operations and making warfare more humane. According to the International Committee of the Red Cross, “Autonomous weapon systems, as the ICRC understands them, are any weapons that select and apply force to targets without human intervention,” a category that includes AI-powered weapons.
There are worries, though, that AI weapon handlers would end up pressing just a kill button, and that these AIs would have bias in their logic caused by their training data. “Another challenge is that tech companies are opaque about the degree of autonomy and human oversight in their weapons systems,” said neuroscientist Kanaka Rajan in an interview with HM News. She continued, “For some, human oversight could mean pressing a ‘go kill’ button after an AI weapons unit makes a long chain of black box decisions, without the human understanding or being able to spot errors in the system’s logic. For others, it could mean a human has more hands-on control and is checking the machine’s decision-making process.”
Many people also believe that autonomous weapons that target humans are morally wrong and should be banned. According to the Future of Life Institute’s article “Slaughterbots are here,” “Alongside a majority of the world’s states, the Future of Life Institute argues that some autonomous weapons must be banned – specifically, those which target humans, which are highly unpredictable, or which function beyond meaningful human control.” One example could be LAWs, or lethal autonomous weapons, used by the Ukrainian Armed Forces to strike in the heart of Russia. Another could be Israel’s “use of artificial intelligence to aid military decision-making that may contribute to the commission of international crimes,” as stated in a resolution passed by the UN Human Rights Council in April 2024.

The use of new technologies in war is nothing new, though. Fritz Haber’s process for synthesising ammonia from its elements helped create a sustainable source of fertilizer, but Haber also pioneered chlorine gas, used by the German Empire in the First World War.
And AI is incredibly easy to use in war. “The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car. It’s a graduate-student project,” said Stuart Russell, a computer science professor at the University of California, Berkeley, in an interview with Nature magazine.
But for some, AI in weapons is beneficial. According to the Army University Press, an organisation affiliated with the United States Armed Forces, autonomous weapons “can reduce casualties by removing human warfighters from dangerous missions,” “fully autonomous planes could be programmed to take genuinely random and unpredictable action that could confuse an opponent,” and the Army “eventually could reduce the size of a brigade from four thousand to three thousand soldiers without a concomitant reduction in effectiveness.” And according to the Center for Arms Control and Non-Proliferation, autonomous weapons are “generally cheaper and easier to produce than other highly technical and intensive means of delivery, and do not require extensive training or dedicated personnel to operate.”
Some of these benefits can inherently lead to new issues: their cheapness and efficacy could encourage the increased use of biological and chemical weapons, and they allow militaries, such as Ukraine’s, to strike much larger opponents, such as Russia, potentially leading to wider-scale damage.
And because the technology is new, its testing can be of dubious legality under international law, as with Israel’s testing of autonomous weapons on Palestinian civilians. In a segment aired on NPR on December 3rd, 2025, it was reported that Israeli forces used AI technology in the Gaza Strip. As NPR reported, “Israel also began using unmanned vehicles on the ground in Gaza, Lebanon and Syria, made by a company called Ottopia. They roll into enemy territory, scan and identify targets using AI, and keep the soldier out of harm’s way, says founder Amit Rosenzweig. ‘He or she can drink coffee while using a joystick to control a tank or an APC or whatever it is that he or she needs to control to get the job done.’”
Volodymyr Zelensky, the president of Ukraine, a country making significant use of autonomous weapons, stated, as reported by the Courthouse News Service, “Drones are far worse than climate change. Stopping this war and the global arms race is cheaper than building underground kindergartens and protecting every port and ship from terrorists with sea drones.” What does one do about the decreased accountability of autonomous weapons? Is it right to take the conscious element out of killing? What costs are we willing to accept for the possibility of reduced conflict and casualties, and is killing civilians one of them? These questions will shape many significant changes in the world of warfare, and in our world.