In late April the Guardian newspaper reported that the RAF had begun controlling operations of some of its Reaper unmanned air systems over Afghanistan from a new facility at RAF Waddington in Lincolnshire.
Shortly thereafter, the Ministry of Defence confirmed figures showing that the UK had around 500 aerial drones in its inventory. Both announcements added fuel to a vigorous debate about the morality of armed, autonomous vehicles and their employment.
As the UK does not participate in US counter-terrorism air strikes, the debate (when not muddled by conflation with those attacks) centres on the argument that allowing machines to ‘decide’ to kill represents a new weakening of the rules of armed conflict.
General discussions of the utility and cost-effectiveness of autonomous aircraft have become deeply intertwined with questions about the morality of pre-emptive attacks against individuals. Unfortunately, this debate is not focused on the critical issues that unmanned combat air vehicles (UCAVs) and the like will bring to the fore. In effect, the problem is one of the fundamental questions of war and self-defence; such questions are simply being laid bare by the circumstances of man versus machine.
Though the occasion is modern technology, the underlying questions have remained the same since it first became possible to send a projectile beyond one's own sight, to kill whomsoever happened to be in the wrong place at the wrong time. Yet again, technology displays its ability to remind us of the moral and ethical questions we have failed to answer.
At its heart, the UCAV debate is about the immediacy of self-defence. Killing out of sight dulls one's regard for the consequences of one's actions, because self-defence is most easily grasped as a response to imminent personal danger. But what of the consequences of one's defensive actions for innocent bystanders? The heart of the self-defence debate is the ambiguity created by repercussions for a theoretical third person. The distance of such a third person gives rise to a very wide range of emotional responses within a group or population.
Can it be moral to cause a person's death to prevent a killing?
If so, how can causing an unintended death be excused by the rationale of saving a life?
It is this moral problem that drives the debate about the use of lethal, autonomous machines, just as it drives arguments about the morality of any form or arena of conflict in which non-combatants are present. The impossibility of assigning binary moral judgments and binary fates to three or more parties remains the crux of the matter.
But it is a mistake to believe that autonomous vehicles represent some fundamentally new moral landscape in this debate.
THE LINE IT IS DRAWN
All land on Earth is claimed as the legal jurisdiction of at least one country. All of the air above that land is similarly the sovereign territory of someone. Finally, all of the littoral waters of the planet, and the skies above them, are the property of a state.
Entering such an environment, or deploying forces or launching weapons into it, is an act of aggression designed to do some form of harm to its owner. Whenever an armed, autonomous vehicle is deployed into foreign territory the intention is to cause or facilitate destruction. In all cases, physical separation from the destructive act is mere semantics.
Can a machine legitimately defend itself against a human? The question hinges on whether one believes that the machine is a genuine barrier to harm befalling oneself or others. It is simply an extension of the logic of killing in defence of oneself or others; the remoteness of the weapon is a mechanical irrelevancy.
This is the central conceptual problem with drones: they allow the construction of enormously abstract definitions of self-defence. Such definitions depend on predicted threats emerging across vast geographic and temporal distances. Further, they permit action without the political cost and forewarning that deploying conventional military forces entails.
Once deployed, these autonomous weapon systems are glorified missiles, reliant upon the sophistication of a difference engine to decide where and when to explode. Whether they find and engage targets is irrelevant; logically and morally, it is the intention behind their launch or deployment that governs the morality of their use.
To demonstrate the point, here's a little thought experiment. Consider a cruise missile equipped to identify targets of opportunity within the broad geographical confines of a military installation. Once identified, such targets would be prosecuted with small, precision-guided munitions.
Such a system would be a missile, yes? Not a UCAV? Yet all such a weapon requires is the means to bring its expensive power-plant and sensors home to be reused, and it would be the very definition of a UCAV. In short, present definitions of drones and missiles turn on how financially favourable their operating model is, not on the finely balanced difference between the ‘determined’ fate of a launched projectile and the ‘decided’ fate of a munition that selects its own targets.
Whenever the deployment of an autonomous weapon system is hostile in intent from the outset - as when penetrating another's sovereign territory - the distinction between vehicle and missile is synthetic. Even in the global commons of the oceans and skies, the decision to switch from the peacetime prohibition on hostile action to a state of war is made the moment a weapon is launched in the hope that it functions correctly and attacks only those one intends.
Please don’t misunderstand me: this is not an argument for excusing the inevitable deaths of innocent people that will come to pass at the hands of autonomous vehicles.
It is merely a plea not to let the technical and practical complexity of the situation obscure reality. Responsibility for these deaths must not be transferred from where it belongs - with those who choose to employ such weapon systems. Yes, I understand and am comfortable with the command-responsibility implications of that statement.
Autonomous vehicles are not ‘people’; they cannot be legal ‘persons’ because they cannot be punished for their actions - they cannot take ‘responsibility’. They are glorified missiles, mines and torpedoes, nothing more. Responsibility for their use must remain precisely where it does for other weapons, 'smart' or otherwise.
Decisions as to where and when to employ these, or any other, weapons are an embodiment of how a nation or society answers the moral questions of war and self-defence. This ‘moral logic’ is what determines the justification for, and morality of, armed force. As such, the legal consequences of that logic must rest with those who took it upon themselves to proffer an answer. Such is the price of authority: responsibility.
Society must stop being distracted by the ‘how’ of death and war; it blinds us to the far more important questions of why, when and where a genuine argument can be made for self-defence - and where that argument is too abstract.
UCAVs and their brethren have the potential to make killing safe and easy. As that happens, it will be all the more important to be clear about what we are killing for.