By: Ali H. Askar


For thousands of years, warfare has been essentially the same: one group of people goes to another group's territory and fights over it. Whether fought with spears or rockets, the essence of war has not changed, only the technology. The Chinese general Sun Tzu's The Art of War, written around 500 BCE, is still taught in military colleges today because it deals in universal truths about warfare, not temporary technical developments. The crossbow was once outlawed by the Pope as a heinous weapon because it could kill armored knights; both crossbows and knights are long gone from countries' military inventories, yet the basics of war (training, secrecy, surprise, deception, attacking enemy weaknesses) remain constant.

Or do they? Perhaps we are on the brink of a generational change in warfare. If war is conducted by drones, computers, and robots, will there be a revolution in warfare comparable to the emergence of firearms in the late Middle Ages, or of air combat in World War One?

Some people have suggested that uncrewed war machines, such as drones and robots, might lower the threshold for warfare: by shooting down a drone, you are not killing anyone. Equally, it is possible to argue that such wars would be easier to end through peace talks, because the loss of robots is simpler to write off than the sinking of a navy's flagship, with all the loss of life and prestige that entails.

The US Department of Defense opened 2017 with a demonstration in California of a swarm of 103 micro-drones capable of collective decision-making, adaptive formation flying, and self-healing. Drones this small would be very difficult for conventional anti-aircraft weapons to target.
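The "self-healing" behaviour described above can be pictured with a toy model: the swarm shares one formation plan, and when a drone is lost, the survivors re-assign themselves to the vacant slots. The sketch below is purely illustrative (the slot layout, drone names, and greedy nearest-slot rule are invented here, and bear no relation to the actual demonstration software).

```python
# Hypothetical sketch of self-healing formation flying: surviving drones
# greedily claim the nearest unclaimed slot in a shared formation plan.

def reassign(positions, slots):
    """positions: {drone_id: (x, y)}; slots: list of (x, y) formation points."""
    assignment = {}
    free = list(slots)
    for drone, pos in sorted(positions.items()):
        # each surviving drone claims the closest remaining slot
        slot = min(free, key=lambda s: (s[0] - pos[0]) ** 2 + (s[1] - pos[1]) ** 2)
        free.remove(slot)
        assignment[drone] = slot
    return assignment

# Five slots in a line; drone "c" has been lost, so four drones re-form.
slots = [(i, 0) for i in range(5)]
swarm = {"a": (0.1, 0.2), "b": (1.2, 0.1), "d": (3.1, 0.0), "e": (4.0, 0.3)}
plan = reassign(swarm, slots)  # each drone takes the slot nearest to it
```

Real swarms solve this as a distributed assignment problem under communication constraints; the point here is only that no individual drone is a single point of failure.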

Submarine drones are already in production. It is quite possible that large crewed platforms, like aircraft carriers, are as obsolete as the mounted knight; they certainly seem far more vulnerable than they once were.

Two issues confront the "killer robot" projections beloved of science fiction.

First, the ethical question: can we give an AI the power of life and death? Most countries have armies, but their soldiers and the people who give the orders operate within a legal framework. Although that framework is flawed in practice, war criminals do face legal sanction. How do you prosecute a machine that kills? Do you put the software developer on trial?

Second, the practical difficulties: as every computer user knows, software flaws, crashes, data loss, and corruption, as well as hacking and viruses, are widespread. In the 1987 film RoboCop (recently remade), a law-enforcement robot suffers a software glitch during a demonstration and kills a member of staff who is pretending to be a criminal. Can AI really be intelligent enough to take such serious decisions? Today's robots struggle to walk, make tea, and build Lego, tasks that small children manage easily.

Hacking Systems — the Danger of Cyberattacks

Modern societies are extremely dependent on networked, computerized infrastructure. Food, for example, is delivered to supermarkets "just in time," in quantities that can be sold quickly, especially perishable items. Unexpected crises leave shelves bare: there is no overcapacity in the system, because overcapacity goes against the ethos of modern economics. Societies are therefore vulnerable to disruption from cyberattacks on the computerized nervous system our countries run on. If you could shut down, or credibly threaten to shut down, a major piece of infrastructure such as the electricity grid, you could force a government to agree to your terms without anything more military than a laptop. Malware is far easier and safer to procure than Kalashnikovs or weapons-grade uranium, and may well be more effective.

This is a definite security issue for most governments, and even the Pentagon has proved not to be safe from hacking, suffering various security breaches. The most famous was by Gary McKinnon, who has Asperger's syndrome and who, while searching the Pentagon's systems for information on UFOs, left insulting messages for US security in their files.

Cyberwarfare will likely be a significant part of future conflict. We may see covert teams of hackers defending their countries' networks and attacking those of their adversaries. Because satellite systems are so important for both military and civilian communications, they will be targets. This may lead to warfare in space, as missiles or killer satellites attempt to knock out opposing satellites.

Big Data To Monitor And Prevent Conflict

Although many people think of AI and Big Data surveillance as tools to track and neutralize terrorists (for example, through facial recognition on ubiquitously mounted CCTV cameras), there is another possible route. AI systems fed with masses of data from multiple sources are being proposed to predict and reduce conflict. Organizations including the U.S. Defense Department, the United Nations, and the CIA have all created big data initiatives recently, with the objective of anticipating "political crises, disease outbreaks, economic instability, resource shortages, and natural disasters" early enough to apply remedies. That is surely better than having to send armies, aid convoys, or both.

The U.N.’s Global Pulse initiatives use masses of public data, scraping and analyzing details from tweets, blog posts, market information, and numerous other sources to track “human well-being and emerging vulnerabilities in real-time, in order to better protect populations from shocks.”
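The kind of signal extraction Global Pulse describes can be pictured with a toy example: counting crisis-related keywords in a stream of public posts to build a simple per-day vulnerability indicator. Everything here is invented for illustration (the keyword list, the posts, and the crude word-matching rule); real systems use far richer language models and many more data sources.

```python
# Toy sketch: turn a stream of (date, text) posts into a daily count of
# crisis-related terms, a crude stand-in for a "vulnerability" signal.
from collections import Counter

CRISIS_TERMS = {"shortage", "outbreak", "blackout", "flood", "prices"}

def daily_signal(posts):
    """posts: list of (date_string, text). Returns crisis-term hits per day."""
    signal = Counter()
    for day, text in posts:
        # count words that match a crisis term, ignoring trailing punctuation
        hits = sum(1 for word in text.lower().split()
                   if word.strip(".,!?") in CRISIS_TERMS)
        signal[day] += hits
    return signal

posts = [
    ("2017-03-01", "Fuel shortage again, prices doubling overnight"),
    ("2017-03-01", "Nice weather today"),
    ("2017-03-02", "Cholera outbreak reported after the flood"),
]
signal = daily_signal(posts)  # both days score 2 crisis-term hits
```

A rising curve in such a signal would prompt analysts to look closer; the value lies in catching trouble before it reaches the point of armies or aid convoys.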

Wars may come to be conducted by drones, robots, AI surveillance systems, and cyberhackers. But it is probably too early to write off the ordinary soldier: it is soldiers who have fought wars since before Sun Tzu's time. And although he wrote 2,500 years ago, knowing nothing of computers and drones, he understood the stealthy, covert undermining of an enemy: "Subtle and insubstantial, the expert leaves no trace; divinely mysterious, he is inaudible. Thus he is master of his enemy's fate."
