Lethal Robots and Law Enforcement
As everyone across the country and around the world knows by now, on Thursday, July 7, a gunman opened fire on police and protesters at a Black Lives Matter rally in Dallas, TX, killing five officers. After a tense standoff, the Dallas police used a bomb disposal robot armed with an explosive device to kill the cornered suspect in the interest of protecting the lives of police and civilians alike. This situation is unprecedented and raises profound legal and ethical questions.
Police often use lethal force, and laws both permit and limit its use. State laws generally allow lethal force against a suspect who poses an “imminent threat” to the officer or to innocent third parties. The use of force is further constrained by the requirement that it be “proportional and necessary.” Absent an imminent threat and a proportional and necessary use of force, a suspect is entitled by right to Due Process through our judicial system.
But what happens when police outsource the use of deadly force to a robot? Does the method of killing the suspect matter so long as the suspect posed an “imminent threat” and the use of force was “proportional and necessary”? The answer will depend on the facts of each case. This killing was most likely within the bounds of the law, but the law has suddenly become outdated. The era of drones in policing is rapidly approaching, and we will need to carefully tailor our laws to keep both our society and our law enforcement safe.
A new precedent for using a weaponized drone for domestic policing?
In this instance, the circumstances were so extraordinary that it is difficult to imagine there were true Due Process concerns – absent extreme measures, there no doubt would have been additional bloodshed. The real concern, to my mind, is mission creep. Police departments routinely adapt new technology to their law enforcement purposes. If this instance has set a new precedent in policing, and if police departments across the country start to use robots or drones as a matter of routine (beyond their intended purpose of bomb disposal), allowing those machines to have lethal capabilities becomes far more problematic. We already see what happens when police become disconnected from the communities they serve. Armed robots would no doubt widen the divide between communities and police, decrease situational awareness, and literally “dehumanize” the entire police-civilian interaction. The consequences could be profound.
What About Security? Hacking Weaponized Machines
If one thing has been made repeatedly clear over the last year, it is that our connected world is extremely vulnerable to cyber threats. We must ensure that robots employed by law enforcement cannot be turned against police or innocent civilians by bad actors. Given the current state of cybersecurity in our nation’s institutions, this issue takes on even greater urgency as we ponder entering a world of lethally capable machines before ensuring that they (and other critical networks) cannot be hacked. This will require more than a technical solution; it will also require smart, evolving policy that keeps pace with the ever-changing technical landscape.
We are no doubt going to see more and more applications of automation and robotics in policing and in all aspects of our lives. We as a society need to have a serious conversation about the ethics and legality of giving these machines the ability to deal out lethal force, whether fully autonomous or under the remote control of a human. While this instance of lethal force delivered by a robot arose from an extreme circumstance, we cannot allow one reasonable application of the technology to lull us into thinking these issues do not need to be addressed with great urgency. The longer this conversation is delayed, the more likely we will be forced to confront the issue only after things have taken a frightening turn.
Tim LeFebvre, Esq.