

AI in Warfare 2026: How Artificial Intelligence Is Changing Modern War


AI in Warfare: Envision a drone that tracks down and destroys you, not as a result of an order given by someone in uniform, but rather when an algorithm determines your data matches the profile of a target. No delay. No quarter. No appeal. There are such drones. They exist today. And they represent only the beginning of AI in warfare.

For almost all of human existence, warfare has been an ugly yet essentially human endeavor. People have made choices, some good and some devastating; however, they were made by people. We are now moving beyond that period.

Artificial intelligence (AI) no longer simply fights on battlefields; it is beginning to determine what constitutes a battlefield.

From the frozen trenches of eastern Ukraine to the airspace above Iran, AI has already altered the way that we fight wars as well as who does the fighting, and perhaps most ominously, whether humans need to decide to take someone else’s life.
This represents one of the greatest technological transformations in the history of armed conflict, and it is occurring at an alarming rate with little or no warning.

The Ukraine War: Earth’s Largest AI Battlefield Laboratory

To see how AI warfare really works (not how it will work, and not how the Pentagon says it should work, but how it works on actual battlefields with actual blood), look closely at Ukraine.

It has been called the “Silicon Valley of Offensive AI”. The label may sound alarming, but it fits. Never before in history has autonomous military technology been tested on the ground at this pace: the cycle of designing, testing, and deploying new drone technology has shrunk from years to weeks.

The statistics are astonishing. It is estimated that approximately 80% of all combat casualties on both sides of the conflict are inflicted by drones. The front lines are turning into a “Kill Zone” (at least 10 miles wide) where any human presence can be detected and destroyed via a UAV that operates autonomously or semi-autonomously overhead. Traditional safe distances from the front line do not exist anymore.

When $25 Can Make a Drone a Killer

A recent study by researchers from the United States Army War College found that adding an artificial intelligence (AI) based targeting ability to an off-the-shelf military drone would cost as little as twenty-five dollars. That is a sum smaller than what many people spend when dining out.

And this is not a fictional example. Drones of exactly this type are being used today by Ukrainian volunteer groups. In other words, AI-based weapon systems are no longer limited to nation-states with massive military budgets. They have become commoditized, available to armed groups, cartels, and any non-state actor with basic computer skills and the money to buy the necessary equipment.

Did You Know?
In June 2025, volunteers operating under Ukraine’s “Operation Spider Web” smuggled hundreds of small drones deep into Russia and then remotely activated them, using AI-based guidance, to strike Russian airbases at their weakest points. As a direct result, 34 percent of Russia’s fleet of nuclear-capable Tu-160 long-range bombers were severely damaged or destroyed.

From Ukraine to Iran: Full-Scale AI in Warfare

Ukraine was the testing ground. The 2025 war between Israel and Iran was the final exam.

In mid-2025, Israeli and U.S. military forces initiated large-scale strikes against Iran’s nuclear infrastructure in what most analysts describe as the first true “AI-driven” military campaign: AI was not merely a support tool for human decision-making but an essential component of the entire operation.

Iran’s defensive strategy was designed to exploit human limitations. Its critical infrastructure (underground uranium enrichment plants, mobile missile launchers, air defense systems) was dispersed across roughly 1.65 million square kilometers of rugged terrain, making the task of identifying every potential target too dynamic for any human analyst to track in near-real time.

It did not work. Just as several computers process faster than one, feeds from satellites, drones, reconnaissance platforms, and cyber intelligence were fused into a single AI system that generated targeting solutions in near-real time without human intervention. Within 12 days, thousands of high-value targets were destroyed at a tempo that would have been operationally impossible using traditional, human-driven intelligence methods.
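To make the data-fusion idea concrete, here is a minimal, purely illustrative sketch. The site names, sensor list, and probabilities are invented for the example and describe no real system: each sensor reports an independent confidence that a candidate site is a valid target, the confidences are combined with a simple noisy-OR rule, and candidates above a review threshold are ranked.

```python
# Purely illustrative sketch: a toy multi-sensor fusion scorer, not any
# real military system. All names and numbers below are invented.

def fuse_detections(sensor_probs):
    """Noisy-OR fusion: probability that at least one independent
    sensor detection is correct."""
    p_all_miss = 1.0
    for p in sensor_probs:
        p_all_miss *= (1.0 - p)
    return 1.0 - p_all_miss

def rank_candidates(candidates, threshold=0.9):
    """Score each candidate site and keep those above a review
    threshold, highest fused confidence first."""
    scored = [(name, fuse_detections(probs)) for name, probs in candidates.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [(name, round(score, 3)) for name, score in scored if score >= threshold]

# Hypothetical candidate sites with per-sensor confidences
# (e.g. satellite imagery, thermal, signals intercepts).
candidates = {
    "site-A": [0.6, 0.7, 0.5],
    "site-B": [0.2, 0.3, 0.1],
    "site-C": [0.9, 0.8, 0.4],
}
print(rank_candidates(candidates))  # site-C and site-A clear the threshold
```

Noisy-OR is just one simple way to aggregate independent detections; real fusion pipelines additionally weight sources by reliability and correlate them over time.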

Ethical Red Line
In late spring 2025, Ukrainian forces shot down an Iranian-made drone and, upon inspection, found it to be entirely autonomous. Its target selection and engagement algorithm required no human interaction at all: based solely on pre-programmed mission parameters and behavioral data, the UAV identified and engaged targets on its own. Fully autonomous lethal systems operating independently in combat now exist.

The Five Ways AI Is Transforming the Battlefield

“AI drones are bad news” is the headline version, but the scope of change is far larger. There are five primary areas in which AI is changing the nature of modern warfare:

  1. Targeting and ISR: Seeing Everything, All at Once
    Historically, the value of ISR was directly related to the amount of data collected, processed, and analyzed by human intelligence professionals working long hours. The sheer volume of information created by satellite imaging, airborne sensors, and other technologies could not possibly be evaluated or acted upon quickly enough with traditional methods of analysis. Artificial intelligence changed everything.
    Today, AI-driven systems combine multiple forms of data: video images from cameras, audio intercepts from communications devices, temperature readings from thermal imagers, computer network activity, etc., to identify relationships and flag potential threats immediately.
    Project Maven began as an AI system for analyzing drone video feeds. Since then, the spectrum of AI-based targeting tools has expanded through continuous updates, accelerating sharply since the start of the Iranian conflict.
  2. Autonomous Drone Swarms: The New Blitzkrieg
    One drone is just a piece of equipment. One thousand inexpensive drones operating together under AI coordination are something else entirely. The U.S. military has identified drone swarm capability as the greatest technological leap forward of the coming years.
    Drone swarming refers to an operation where multiple unmanned aerial vehicles (UAVs), also referred to as drones, coordinate their actions through AI control. These swarms are able to overwhelm any defensive missile system simply due to the quantity of incoming drones. It does not matter if a swarm loses some UAVs because they will be constantly replenished with others moving into position.
    In cost terms, an inexpensive $200 UAV can destroy a $10 million air defense system. Iran demonstrated the method in 2026 when it launched a drone swarm at Kuwait International Airport during its conflict with Saudi Arabia, showing that swarming is both viable and highly disruptive.
  3. AI-Powered Cyber Warfare: The Invisible Front
    While the majority of people think about AI as being used for destructive purposes, such as exploding drones, some of the most impactful uses of AI involve no destruction at all. Many types of attacks occur outside the public eye.
    AI systems continuously probe adversary networks for vulnerabilities, exploit the ones they find, and adapt to whatever defenses are put up. Modern AI-powered cyber warfare tools are self-modifying: they learn from each attempt to break into a target’s systems and adjust their attack before human defenders can even react.
    These attacks create a level of conflict that falls beneath the threshold of declared war and is difficult to detect: economic destabilization, degradation of critical infrastructure, and damage to military systems, all without a shot being fired.
  4. Humanoid Soldiers: From Science Fiction to U.S. Army Testing
    The idea of humanoid robot soldiers may seem like science fiction until you watch the videos of them in action. Companies like Anduril Industries (founded by Palmer Luckey) and Foundation AI are building and testing these systems, and both the Pentagon and the Ukrainian military are actively evaluating whether AI-based humanoid robots should be employed in warfare.
    Palmer Luckey’s company has developed a number of AI-enabled defense systems, including Roadrunner, a drone interceptor; a 360-degree wearable head-mounted display for soldiers; and electromagnetic-warfare systems that can disrupt enemy drone swarms. Anduril’s Ghost Shark, a fully autonomous undersea vehicle, is already operational with the Royal Australian Navy. The transition from remotely controlled to fully autonomous systems is happening rapidly.
  5. Decision Speed: When Milliseconds Determine Outcomes
    The least visible yet perhaps most significant change AI brings is in decision-making speed. The classic model of military decision-making is the OODA loop (Observe, Orient, Decide, Act), and in terms of cycle speed it rewards whichever side completes the loop fastest.
    AI compresses every phase of that loop. DARPA’s ACE program has demonstrated AI-controlled F-16 fighters defeating experienced human pilots in simulated dogfights, not because the AI had better tactics, but because it turned sensor input into executed decisions at machine speed. A human pilot needs hundreds of milliseconds to consciously decide to fire on a target; an AI system decides in microseconds.
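The decision-speed argument reduces to simple arithmetic. The sketch below uses assumed round-number latencies (roughly 250 ms for a conscious human decision versus a hypothetical 0.5 ms machine loop; neither figure is a measurement of any real pilot or system) to show why compressing the OODA cycle compounds:

```python
# Toy arithmetic only: why OODA-loop speed compounds. The latencies are
# assumed round numbers, not measurements of any real pilot or system.

def decision_cycles(latency_ms, horizon_ms=1000.0):
    """Complete observe-orient-decide-act cycles that fit in a time horizon."""
    return horizon_ms / latency_ms

human_cycles = decision_cycles(250.0)  # assume ~250 ms per conscious decision
ai_cycles = decision_cycles(0.5)       # assume ~0.5 ms per machine decision

# The faster side acts, observes the result, and re-decides hundreds of
# times before the slower side finishes a single loop.
print(ai_cycles / human_cycles)  # 500.0
```

Under these assumptions the machine completes five hundred full loops for every human one, which is the entire strategic point of decision-speed superiority.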

The Global Arms Race: Who’s Winning?

Every significant armed force globally is aware of what is now at risk. The nation that will be first to master the use of AI as a weapon system will have an opportunity to establish a permanent, structural position of superiority over its opponents that may continue for years to come. A high-stakes competition is underway.

United States
Project Maven, DARPA ACE, Anduril, Palantir. $13.4 billion budgeted for autonomy programs. Testing humanoid robots in combat environments. AI targeting technology was used during strikes on Iranian assets.

China
Planning to deploy one million low-cost tactical drones by 2026. Developing large drones capable of launching swarms of 100+ UAVs. China has committed significant resources to creating an AI-based military doctrine.

Russia
Deploying hundreds of small, camera-equipped first-person-view (FPV) drones into Ukraine. Russia is also developing a variety of AI-based combat systems and has begun testing its Marker unmanned ground vehicle. Russian officials have acknowledged that they believe autonomous drones already operate as “killers” on the battlefield.

Israel
Used AI-based targeting systems at scale in both Gaza and Iran. Israel has developed AI-based processing systems designed to work across multiple domains and handle thousands of targets per campaign, and it has the most mature operational AI-warfare doctrine of any nation.

Ukraine
The real-time leader in battlefield AI innovation. Ukraine has tested and successfully operated autonomous drones up to 15 kilometers from a command center, evaluated 70+ unmanned ground vehicles, and is now developing swarm-based AI.

Europe
Plans to allocate €800 billion in EU funds to rearm European militaries under the “ReArm Europe” plan, including €1 billion for R&D on AI and drone-based systems in 2026. HX-2 drones made by Helsing have already been deployed to Ukraine. Europe is now racing to catch up with the U.S. and China.

The Ethical Abyss: Who’s Responsible When a Machine Kills?

No one has answered the following question, and it may well be the most vital issue of the 21st century: who is legally accountable for a death caused by an autonomous weapon system (AWS)?

The commander who ordered the deployment? The engineers who developed the algorithm? The company that designed, built, or supplied the system? The soldier who activated it?

Under all existing international law, including the Geneva Conventions and the laws of armed conflict, a human being is always accountable for each lethal action taken. Autonomous weapons systems driven by AI sever that chain of accountability.

This is not hypothetical hand-wringing. The UN Secretary-General is calling for a legally binding international treaty prohibiting fully autonomous weapons systems, those that act without “meaningful human control”, with the goal of completing the treaty by 2026. Over 120 nations support the initiative, and the International Committee of the Red Cross endorses the call as well.

However, the three countries with the most influence over whether the treaty takes effect, namely the United States, Russia, and Israel, are resisting it. All three have operational AI weapon systems currently deployed and do not want those systems bound by international law. The gap between the urgency of the ethical problem and the pace of international governance therefore continues to grow.

The “Oppenheimer Moment” Question

Researchers at the U.S. Army War College have warned of an “Oppenheimer moment”: the point at which we cross a boundary from which there is no going back. Watching the detonation of the first nuclear device, Robert Oppenheimer recalled a line from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.” By the time he spoke it, the threshold had already been crossed.

Has that moment already arrived for autonomous lethal weaponry? Some say yes: fully autonomous lethal systems are already in use, the threshold has been passed, and the debate should be about how to regulate them, not how to prevent them.

Others believe this is a fight worth having, with guardrails: AI-enabled weapons, used responsibly, could mean fewer civilian casualties, because machine targeting can be far more precise than human decision-making. Some recent AI-assisted operations have indeed demonstrated markedly better precision than traditional bombing. Precision and ethics, however, are separate issues. A machine may kill the correct person more effectively, but it is still a machine making a lethal decision about another human’s life.

The Verdict: A Transformation We Can’t Reverse

AI in combat is no longer something that will arrive at some point in the future. The technology exists today, and it is being used to kill people. We have seen autonomous drones carry out killing operations, AI drive targeting and decision-making for large-scale strikes, and humanoid robots tested as potential soldier units.

This isn’t a question of when AI will change the way we go to war; it already has. The question is whether an ethical and legal framework can be established in time to control these technologies, or whether the first major catastrophic threshold will be crossed before any controls exist. And whether the nations with the largest militaries will ultimately agree to limit technologies that give them such significant advantages over their competitors.

History suggests that each generation of major weapons technology (gunpowder, aerial bombing, nuclear arms) eventually produced a regulatory framework, however imperfect: the Biological and Chemical Weapons Conventions, and the Nuclear Non-Proliferation Treaty, which is clearly under strain. But history also shows that it typically takes a horrific demonstration of the consequences of a technology’s unregulated use before such frameworks are created.

What makes AI unique is the speed at which it operates. These systems iterate or improve themselves in weeks. They spread over months. Regulatory frameworks develop over many years, sometimes decades. Therefore, the gap between what machines are capable of doing and what human-created institutions can reasonably regulate represents one of the most hazardous spaces in the world today.

Disclaimer: This article is for informational and educational purposes only. It reflects analysis based on publicly available geopolitical developments and does not constitute prediction or professional advice.


