
Americans have become accustomed to images of Hellfire missiles raining down from Predator and Reaper drones to hit terrorist targets in Pakistan or Yemen. But that was yesterday’s drone war.

A revolution in unmanned aerial vehicles is unfolding, and the U.S. has lost its monopoly on the technology.

Some experts believe the spread of semi-autonomous weapons will change ground warfare as profoundly as the machine gun did.

A new video released by nonprofit The Future of Life Institute (FLI) highlights the risks posed by autonomous weapons or ‘killer robots’ – and the steps we can take to prevent them from being used. It even has Elon Musk scared.

Its original Slaughterbots video, released in 2017, was a short Black Mirror-style narrative showing how small quadcopters equipped with artificial intelligence and explosive warheads could become weapons of mass destruction. Initially developed for the military, the Slaughterbots end up being used by terrorists and criminals. As Professor Stuart Russell points out at the end of the video, all the technologies depicted already existed, but had not been put together.

Now the technologies have been put together, and lethal autonomous drones able to locate and attack targets without human supervision may already have been used in Libya.

Experts in the AI and Big Data sphere consider October 2021 a dark month. Their pessimism isn’t fueled by rapidly shortening days or chilly weather in much of the country, but by grim news from Facebook about the effectiveness of AI in content moderation.

This is unexpected. The social media behemoth has long touted tech tools such as machine learning and Big Data as answers to its moderation woes. As CEO Mark Zuckerberg told CBS News, “The long-term promise of AI is that in addition to identifying risks more quickly and accurately than would have already happened, it may also identify risks that nobody would have flagged at all—including terrorists planning attacks using private channels, people bullying someone too afraid to report it themselves, and other issues both local and global.”

Artificial intelligence (AI) is a force for good that could play a huge part in solving problems such as climate change. Left unchecked, however, it could undermine democracy, lead to massive social problems and be harnessed for chilling military or terrorist attacks.

That’s the view of Martin Ford, futurist and author of Rule of the Robots, his follow-up to Rise of the Robots, the 2015 New York Times bestseller and winner of the Financial Times/McKinsey Business Book of the Year, which focused on how AI would destroy jobs.

In the new book, Ford, a sci-fi fan, presents two broad movie-based scenarios.

This week, the European Parliament, the body responsible for adopting European Union (EU) legislation, passed a non-binding resolution calling for a ban on law enforcement use of facial recognition technology in public places. The resolution, which also proposes a moratorium on the deployment of predictive policing software, would restrict the use of remote biometric identification unless it is used to fight “serious” crime, such as kidnapping and terrorism.

The approach stands in contrast to that of U.S. agencies, which continue to embrace facial recognition even in light of studies showing the potential for ethnic, racial, and gender bias. A recent report from the U.S. Government Accountability Office found that 10 branches including the Departments of Agriculture, Commerce, Defense, and Homeland Security plan to expand their use of facial recognition between 2020 and 2023 as they implement as many as 17 different facial recognition systems.

Commercial face-analyzing systems have been critiqued by scholars and activists alike throughout the past decade, if not longer. The technology and techniques — everything from sepia-tinged film to low-contrast digital cameras — often favor lighter skin, encoding racial bias in algorithms. Indeed, independent benchmarks of vendors’ systems by the Gender Shades project and others have revealed that facial recognition technologies are susceptible to a range of prejudices exacerbated by misuse in the field. For example, a report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects.
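The core method behind audits like Gender Shades is disaggregated evaluation: rather than reporting a single overall accuracy figure, the benchmark measures error rates separately for each demographic subgroup and examines the gap between the best- and worst-served groups. The sketch below illustrates that idea in Python with pandas on made-up data; the column names and numbers are hypothetical, and this is not the project’s actual code.

```python
# Minimal sketch of a disaggregated (Gender Shades-style) audit on hypothetical data:
# compute a classifier's error rate per demographic subgroup rather than one aggregate score.
import pandas as pd

# Each row: one image, the subgroup it belongs to, the ground-truth label,
# and the label a vendor's system predicted (all values invented for illustration).
results = pd.DataFrame({
    "subgroup":  ["lighter_male", "lighter_female", "darker_male", "darker_female"] * 3,
    "true":      [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "predicted": [1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0],
})

# Error rate per subgroup; the disparity between groups is what the audits highlight.
per_group = (
    results.assign(error=lambda df: (df["true"] != df["predicted"]).astype(int))
           .groupby("subgroup")["error"]
           .mean()
           .sort_values()
)
print(per_group)
print(f"Disparity (worst - best): {per_group.max() - per_group.min():.2f}")
```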

In one scenario Turchin describes, an AI could act as a quiet bioterrorist. Since we have become heavily dependent on computers, and computer-dependent systems are by default dispersed and impersonal, the task of eradicating humanity could be divvied up among gazillions of little micro-players, who may or may not know what they are doing. “For example, the AI could pay employees in a DNA-synthesizing company to do small jobs and then combine their work into creating a biological weapon, such as a virus able to kill a lot of people,” says Turchin.

“How are you certain you are not in a multilevel simulation, where your behavior is being monitored by humans who are testing your benevolence levels?”

Turchin addresses this in one of his letters to the young, future humanity-killing AI. “There are several instrumental reasons why exterminating humanity is not in your interest,” the Russian transhumanist writes. “Cooperating with humans will only increase your chances of accomplishing any goal you may have.”

But while science fiction provides military planners with a tantalizing glimpse of future weaponry, from exoskeletons to mind-machine interfaces, the genre is always about more than flashy new gadgets. It’s about anticipating the unforeseen ways in which these technologies could affect humans and society – and this extra context is often overlooked by the officials deciding which technologies to invest in for future conflicts.

Imagined worlds

Like my colleague David Seed, who has studied how fiction impacts on real-life threat assumptions about nuclear terrorism, I’m interested in how science fiction informs our sense of the future. This has given me the opportunity to work with members of the armed forces, using science fiction to query assumptions and generate novel visions of the future.

The Israeli military is calling Operation Guardian of the Walls the first artificial-intelligence war. The IDF established an advanced AI technological platform that centralized all data on terrorist groups in the Gaza Strip onto one system, enabling the analysis and extraction of intelligence.


The IDF used artificial intelligence and supercomputing during the last conflict with Hamas in the Gaza Strip.

Oh, joy. You can take the drone out of 2020, but you can’t take the 2020 out of the drone.


A “lethal” weaponized drone “hunted down a human target” without being told to for the first time, according to a UN report seen by the New Scientist.

The March 2020 incident saw a KARGU-2 quadcopter autonomously attack a human during a conflict between Libyan government forces and a breakaway military faction led by the Libyan National Army’s Khalifa Haftar, the Daily Star reported.

The Turkish-built KARGU-2, a deadly attack drone designed for asymmetric warfare and anti-terrorist operations, targeted one of Haftar’s soldiers while he tried to retreat, according to the paper.