Lethal AI Drones Are No Longer Fiction, But A Dangerous Fact

Drones are already incredibly lethal weapons, and traditionally, a human operator has always been required to pull the trigger.


In August 2020, an AI-driven “pilot” developed by a relatively unknown 11-person company went undefeated against a veteran fighter pilot in a simulated dogfight organized by DARPA, the US military’s defence research arm. Less than a year later, the United States Air Force awarded contracts to three defence industry juggernauts, Boeing, General Atomics and Kratos, to create artificially intelligent drones expected to be ready by 2023. The work falls under a little-known program called ‘Skyborg’, which envisions fleets of intelligent mini-fighter drones flying alongside human pilots like loyal wingmen, anticipating needs, carrying out dangerous missions, or even flying ahead in a scouting capacity. It may not seem like much, but an unmanned intelligent drone would wreak havoc on any battlefield. Even the best-trained pilots can handle only about 9 Gs (nine times the force of gravity) in difficult manoeuvres, and then only with specialized equipment and for very short periods of time.

An unmanned intelligent drone, unrestricted by a fragile human pilot or a comparatively slow ground operator, would outperform humans time and time again, and only get better at it. In 2018, the Boston-based Future of Life Institute launched a petition against lethal autonomous weapons, signed by over 150 leading AI firms and nearly 2,400 scientists around the world. But for the world’s militaries the calculation is simple: the time is now, the only alternative is being left behind, and the first to master AI stands to conquer the world. In many senses, it is already too late.

New arms race

During the Cold War, the world’s superpowers raced to see who could stockpile more nuclear missiles. Today, an unspoken race between militaries marks a contest to see who will reach the holy grail of applied military technology: artificial intelligence. But here’s the deal: limited artificial intelligence has been around for a while now. In fact, it’s quickly permeating nearly every aspect of modern life. It may not be the self-conscious, runaway artificial intelligence of science fiction, but modern learning machines have quickly established that, in most cases, they can do the job better than humans, and keep getting better. That promises significant growth for nearly every industry, but it also flips the world of military technology and defence on its head. Where major strategic advantages once required significant spending and the brightest minds, a small start-up in India or China with a few dozen coders can now quickly produce a piece of software that renders the offensive edge of a superpower obsolete.

For a country like the United States, that is a serious risk, and there is little it can do to counter it. For one thing, its domestic private sector will never be able to compete with Beijing’s state sponsorship of AI development, worth some $5 billion a year, especially amid growing alarm among leading ethicists and scientists who warn that creating near-sentient lethal autonomous weapons puts humanity and civilization itself at risk. Some countries have no such qualms. If anything, nations like China and Russia see a strategic advantage in being able to go where others may not follow. Russian President Vladimir Putin certainly gets that. “Artificial intelligence is the future, not only for Russia but for all humankind… Whoever becomes the leader in this sphere will become the ruler of the world,” he said in an address to over a million students in September 2017.

Fear of Missing Out (FOMO)

This triggers a chain reaction, as more nations feel pressured to push the edge of military AI while increasingly overlooking key issues such as safety mechanisms, algorithmic bias and ethics. For lethal autonomous weapons, this is a problem. Drones are already incredibly lethal weapons, and traditionally, a human operator has always been required to pull the trigger. Take the feared MQ-9 Reaper, responsible for the killing of Iranian General Qassem Soleimani, or the 563 drone strikes carried out by the Obama administration around the world.

Unconscious prejudice

Even with traditional drones, algorithmic bias was already a major factor in identifying targets. For instance, the Obama administration considered every military-aged male in a warzone to be a legitimate target. An investigative report by The Intercept revealed that algorithms used by US Homeland Security to identify possible security risks and create no-fly lists were deeply biased by design, basing their decisions, growth and learning on a demographic sample created by someone with inherent prejudices to begin with. In a nutshell, this all but guarantees civilian casualties and false positives against targets that are otherwise innocent. With military applications of artificial intelligence, though, there is often little room for revising or correcting an algorithm when a target is killed in error; after all, the dead don’t speak. As such, bias by design can act as a self-fulfilling prophecy over time.
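To make that feedback loop concrete, here is a deliberately simplified toy simulation, not a model of any real screening system: two hypothetical groups carry exactly the same true risk, but the algorithm starts from a skewed prior, and because its flags are never reviewed or removed, the disparity simply accumulates. All names, rates and numbers are invented for illustration.

```python
import random

# Toy illustration (all numbers invented) of "bias by design" becoming a
# self-fulfilling prophecy: both groups have identical true risk, but the
# screening model flags group A far more often, and flags are never corrected.

random.seed(0)
TRUE_RISK = 0.01                      # identical true risk in both groups
FLAG_RATE = {"A": 0.30, "B": 0.05}    # biased prior baked into the model

watchlist = {"A": 0, "B": 0}          # flags accumulate; nothing is ever removed
actual_threats = {"A": 0, "B": 0}

for _ in range(10):                   # ten years of screening
    for group in ("A", "B"):
        for _ in range(10_000):       # people screened per group per year
            if random.random() < FLAG_RATE[group]:
                watchlist[group] += 1         # false positives stay forever
            if random.random() < TRUE_RISK:
                actual_threats[group] += 1

print("watchlisted   :", watchlist)        # roughly 30,000 vs 5,000
print("actual threats:", actual_threats)   # roughly 1,000 vs 1,000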

Gazing into the abyss

But automation has another tragic consequence: removing humans from the loop. While ethicists stand unanimously against a machine taking a life, military representatives and defence planners talk about safeguards and getting more humans involved. On October 31, 2019, the US Department of Defense’s Defense Innovation Board published a draft report recommending principles for the ethical use of artificial intelligence. One of them suggested that a human operator should be able to look into the ‘black box’ and understand the “kill-chain process.”

What it failed to mention is that even the best computer scientists today aren’t sure how an AI reaches the conclusions it does. Instead, they design its reality, giving it pats on the head for getting a task right and punishing it when it gets things wrong. In a virtual environment, this process can be sped up to an incredible degree. The end result? An evolving AI that learns from the mistakes of its forebears, burning through hundreds of generations in minutes as it tries to solve the same task again and again, until it formulates the most efficient way of doing so.

A computer scientist may be able to figure out the input and output of the entire machine learning process, and even tweak it for greater efficiency, accuracy or faster learning; but figuring out its thought process remains in the realm of the impossible for one simple reason: for the time being, artificial intelligences don’t think as we do, and can’t be taught to share their acquired knowledge. With ineffective safeguards in place, and an ongoing global AI arms race in which each country feels it is falling behind its competitors, the prospects for putting the genie back in the bottle look grim.
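As a rough illustration of that reward-and-punishment loop, here is a minimal, self-contained sketch of tabular Q-learning on a toy “corridor” task. It is purely illustrative: the environment, rewards and hyperparameters are invented, and real military systems are vastly more complex. The point is that training produces a table of numbers encoding a competent policy while explaining nothing, in human terms, about how a “decision” is reached.

```python
import random

# A minimal sketch of reward-driven ("pats on the head") learning:
# tabular Q-learning on a six-position corridor. Everything here is an
# invented illustration, not a model of any real system.

random.seed(0)

N_STATES = 6            # positions 0..5; position 5 is the goal
ACTIONS = [-1, +1]      # step left or right
EPISODES = 500          # "generations" of trial and error
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# The value table starts at zero: the agent knows nothing about the task.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def greedy(state):
    """Pick the best-known action, breaking ties at random."""
    best = max(Q[state])
    return random.choice([i for i, q in enumerate(Q[state]) if q == best])

def step(state, move):
    """Apply a move; the only reward ('pat on the head') is at the goal."""
    nxt = min(max(state + move, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(EPISODES):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit what has been learned so far.
        a = random.randrange(2) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, ACTIONS[a])
        # Nudge the estimate toward the reward plus the best value of the
        # next state; repeated over many episodes, the policy improves.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt

# The learned policy heads straight for the goal, but the table of numbers
# that encodes it says nothing about "why" in human terms.
print([greedy(s) for s in range(N_STATES - 1)])
```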

This article was originally published at TRT World