
AI in Warfare

Envision a drone that hunts you down and kills you, not because someone in uniform gave an order, but because an algorithm decided your data matched the profile of a target. No delay. No quarter. No appeal. Such drones exist today, and they are only the beginning of AI in warfare.
For almost all of human history, warfare has been an ugly but essentially human endeavor. The choices, some sound and some devastating, were made by people. We are now moving past that era.
Artificial intelligence (AI) is no longer simply a tool on the battlefield; it is beginning to determine what constitutes a battlefield.
From the frozen trenches of eastern Ukraine to the airspace above Iran, AI has already altered how we fight wars, who does the fighting, and, perhaps most ominously, whether a human needs to be involved in the decision to take a life.
This represents one of the greatest technological transformations in the history of armed conflict, and it is occurring at an alarming rate with little or no warning.
To see how AI warfare really works, not how it will work someday, and not how the Pentagon says it should work, but on actual battlefields with actual blood, look closely at Ukraine.
Ukraine has been called the "Silicon Valley of Offensive AI." The label may sound alarming, but it fits. This conflict marks the first time in history that autonomous military technology has been field-tested at such speed: the cycle from designing a new drone to deploying it has shrunk from years to weeks.
The statistics are astonishing. An estimated 80% of combat casualties on both sides are now inflicted by drones. The front lines are becoming a "kill zone" at least 10 miles deep, in which any human presence can be detected and destroyed by UAVs operating autonomously or semi-autonomously overhead. The traditional notion of a safe distance from the front line no longer exists.
A recent study by researchers at the United States Army War College found that adding AI-based targeting to an off-the-shelf military drone can cost as little as twenty-five dollars, less than many people spend on a single dinner out.
Nor is this hypothetical: Ukrainian volunteer groups are flying such drones today. AI-based weapons are no longer limited to nation-states with trillion-dollar military budgets. They have been commoditized, putting them within reach of armed groups, cartels, and any non-state actor with basic computer skills and the money for the hardware.
Did You Know?
In June 2025, operatives in Ukraine's "Operation Spider Web" smuggled hundreds of small drones deep into Russia and remotely activated them, using AI-based guidance to strike aircraft at Russian airbases at their weakest points. Ukraine claims the operation severely damaged or destroyed 34 percent of Russia's fleet of nuclear-capable long-range bombers.
Ukraine was the testing ground. The 2025 war between Israel and Iran was the final exam.
In mid-2025, Israeli and U.S. military forces launched large-scale attacks against Iran's nuclear weapons infrastructure in what most analysts describe as the first true "AI-driven" military campaign: AI was not merely a support tool for human decision-making but an essential component of the entire operation.
Iran's defenses were designed to exploit human limitations. Its critical infrastructure (underground uranium enrichment plants, mobile missile launchers, air defense systems) was dispersed across roughly 1.65 million square kilometers of rugged terrain, on the theory that no human analyst could track so large and dynamic a target set in near-real time.
It did not work. Just as many computers process faster than one, feeds from satellites, drones, reconnaissance platforms, and cyber intelligence were fused into a single AI system that generated targeting data in near-real time without human intervention. Within 12 days, thousands of high-value targets were destroyed at a pace that would have been operationally impossible with traditional human-based intelligence methods.
Ethical Red Line
Additionally, in late spring 2025, Ukrainian forces shot down an Iranian-made drone and, on inspection, found it to be fully autonomous. Its target-selection and engagement algorithm required no human interaction at all: it identified and engaged targets based solely on pre-set mission parameters and behavioral data. Fully autonomous lethal systems operating independently in combat now exist.
While many reduce the debate to "AI drones are bad news," the scope of the change is far larger and more complex, touching nearly every dimension of modern warfare.
Every significant armed force understands what is at stake: the first nation to master AI as a weapon system could gain a permanent, structural superiority over its rivals that lasts for years. A high-stakes competition is underway.
United States
Project Maven, DARPA ACE, Anduril, Palantir. $13.4 billion budgeted for autonomy. Testing humanoid robots in combat environments. Used AI targeting technology in strikes on Iranian assets.
China
Planning to deploy 1 million low-cost tactical drones by 2026. Developing large carrier drones capable of launching swarms of 100+ UAVs. Has committed significant resources to an AI-based military doctrine.
Russia
Deploying hundreds of small, camera-equipped first-person-view (FPV) drones into Ukraine. Developing a range of AI-based combat systems, including tests of the Marker unmanned ground vehicle. Russian officials acknowledge that autonomous drones already operate as "killers" on the battlefield.
Israel
Used AI-based targeting systems at scale against both Gaza and Iran, with a processing system designed to handle multiple domains and thousands of targets per campaign. Has the most mature operational AI warfare doctrine of any nation.
Ukraine
Leading battlefield AI innovation in real time. Has operated autonomous drones up to 15 kilometers from a command post, tested 70+ unmanned ground vehicles, and is now developing swarm-based AI.
Europe
Plans to allocate €800 billion in EU funds to rearm European militaries under the "ReArm Europe" plan, with €1 billion for R&D on AI and drone systems in 2026. Has deployed HX-2 drones made by Helsing AI to Ukraine. Now racing to catch up with the U.S. and China.
No one has answered what may be the most important question of the 21st century: who is legally accountable when an autonomous weapon system (AWS) kills someone?
The commander who ordered its deployment? The engineers who wrote the algorithm? The company that designed and supplied the system? The soldier who activated it?
Under all existing international law, including the Geneva Conventions and the laws of armed conflict, a human being is always accountable for each lethal action. AI-driven autonomous weapons sever that chain of accountability.
This is not hypothetical hand-wringing. The UN Secretary-General has called for a binding international treaty, targeted for completion by 2026, prohibiting fully autonomous weapons that act without "meaningful human control." More than 120 nations support the initiative, as does the International Committee of the Red Cross.
Yet the three countries with the most influence over whether the treaty takes effect, the United States, Russia, and Israel, are resisting it. All three have operational AI weapon systems deployed and do not want them bound by international law. The gap between the urgency of the ethical problem and the pace of international governance keeps growing.
Researchers at the U.S. Army War College warn of an "Oppenheimer moment": the point at which we cross a boundary from which there is no return. Watching the detonation of the first nuclear device, Robert Oppenheimer recalled a line from the Bhagavad Gita: "Now I am become Death, the destroyer of worlds." By the time he said it, the threshold had already been crossed.
Has that moment already arrived for autonomous lethal weapons? Some say yes: fully autonomous lethal systems are already in use, the threshold has been passed, and the debate should be about regulating them, not preventing them.
Others argue the fight is worth having, with guardrails: AI-enabled weapons, used responsibly, could cause fewer civilian casualties because machine targeting can be far more precise than human judgment. Indeed, some recent AI-assisted military operations demonstrated significantly better precision than traditional bombing. But precision and ethics are separate questions. A system may kill the "correct" person more reliably, yet a machine is still making a lethal decision about another human's life.
AI in combat is no longer a future prospect. The technology exists today, and it is being used to kill. Autonomous drones have carried out lethal operations; AI has handled targeting and decision-making for large-scale strikes; humanoid robots are being tested as potential soldiers.
The question is no longer whether AI will change how we go to war; it already has. The questions now are whether an ethical and legal framework can be established in time to govern these technologies, or whether the first catastrophic threshold will be crossed before any controls exist, and whether the nations with the largest militaries will ever agree to limit technologies that give them such a significant advantage over their competitors.
History suggests that each generation of major weapons technology (gunpowder, aerial bombing, nuclear arms) eventually produced a regulatory framework, however imperfect: the Biological and Chemical Weapons Conventions, and the Nuclear Non-Proliferation Treaty, which is clearly under strain. But both examples also show that regulation typically arrives only after a horrific demonstration of what unregulated use of a technology can do.
What makes AI unique is its speed. These systems iterate and improve in weeks and proliferate in months, while regulatory frameworks take years, sometimes decades, to develop. The widening gap between what machines can do and what human institutions can regulate is one of the most dangerous spaces in the world today.
Disclaimer: This article is for informational and educational purposes only. It reflects analysis based on publicly available geopolitical developments and does not constitute prediction or professional advice.