Strict-ish liability? An experiment in the law as algorithm

Some researchers in the US recently conducted an ‘experiment in the law as algorithm’. (One of the researchers involved with the project was interviewed by Ars Technica, here.) At first glance, this seems like quite a simple undertaking for someone with knowledge of a particular law and mathematical proficiency: laws are clearly defined rules, which can be broken in clearly defined ways. This is most true for strict liability offences, which require no proof of a mental element of the offence (the mens rea). An individual can commit a strict liability offence even if she had no knowledge that her act was criminal and had no intention to commit the crime. All that is required under strict liability statutes is that the act itself (the actus reus) is voluntary. Essentially: if you did it, you’re liable – it doesn’t matter why or how. So, for strict liability offences such as speeding, it would seem straightforward enough to create an algorithm that could compare actual driving speed with the legal speed limit and adjudicate liability accordingly.
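
As a minimal sketch of that naive picture – assuming nothing more than a stream of speed samples and a posted limit, with all values invented for illustration – the check might look like this in Python:

```python
# Naive 'law as algorithm': every sensor sample above the posted
# limit counts as a violation. Limit and samples are hypothetical.

SPEED_LIMIT = 55  # assumed posted limit, mph

def count_violations(samples):
    """Count samples strictly above the limit."""
    return sum(1 for speed in samples if speed > SPEED_LIMIT)

samples = [53, 54, 56, 57, 55, 58, 54]  # invented sensor readings, mph
print(count_violations(samples))  # -> 3
```

Even this one-liner quietly takes a position on a legal question: it treats every individual sample as a separate offence.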

This possibility of law as algorithm is what the US researchers aimed to test with their experiment. They imagined the future possibility of automated law enforcement, especially for simple laws like those governing driving. To conduct their experiment, the researchers assigned a group of 52 programmers the task of automating the enforcement of driving speed limits. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over an hour-long commute. The programmers, working independently, each wrote a program that computed the number of speed limit violations and issued mock traffic tickets.

Despite the seemingly clear-cut nature of what it means to break the speed limit, the experiment demonstrated that even relatively narrow and straightforward ‘rules’ can be problematically indeterminate in practice. Even though the programmers worked with quantitative data for both vehicle speed and the speed limit, the number of tickets issued varied from none to one per sensor sample above the speed limit. The results demonstrated significant deviation in number and type of tickets issued during the course of the commute, based on legal interpretations and assumptions made by programmers untrained in the law.

It is perhaps surprising that assumptions would bias an algorithm designed to indicate the frequency and magnitude of speeding offences. What assumptions could be involved when deciding whether the actual driving speed X is greater than the limit of Y? However, the researchers point out that laws were not created with automated enforcement in mind, and that even seemingly simple laws have subtle features that require programmers to make assumptions about how to encode them. For example:

An automated system […] could maintain a continuous flow of samples based on driving behavior and thus issue tickets accordingly. This level of resolution is not possible in manual law enforcement. In our experiment, the programmers were faced with the choice of how to treat many continuous samples all showing speeding behavior. Should each instance of speeding (e.g. a single sample) be treated as a separate offense, or should all consecutive speeding samples be treated as a single offense? Should the duration of time exceeding the speed limit be considered in the severity of the offense? [p.11]
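
To make that interpretive choice concrete, here is a hedged Python sketch – not the researchers’ code, and with an invented limit and sample stream – of two defensible readings of the same data:

```python
# Two readings of the same sample stream: one ticket per speeding
# sample, or one ticket per consecutive 'episode' of speeding.
# Limit and samples are hypothetical.

SPEED_LIMIT = 55  # assumed limit, mph

def tickets_per_sample(samples):
    """One ticket for every individual sample over the limit."""
    return sum(1 for s in samples if s > SPEED_LIMIT)

def tickets_per_episode(samples):
    """One ticket per consecutive run of samples over the limit."""
    tickets, in_episode = 0, False
    for s in samples:
        speeding = s > SPEED_LIMIT
        if speeding and not in_episode:
            tickets += 1
        in_episode = speeding
    return tickets

samples = [54, 56, 57, 55, 58, 59, 60, 54]  # invented readings, mph
print(tickets_per_sample(samples))   # -> 5 tickets
print(tickets_per_episode(samples))  # -> 2 tickets
```

Both programs are faithful to the statute as written; they simply encode different answers to a question the statute never poses.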

When we manually enforce laws relating to speeding – or even when we use speed cameras – we know that these mechanisms capture only a fraction of the total number of instances of speeding. There is also usually a ‘buffer zone’ of a few miles per hour within which a driver might technically be speeding but would not get picked up. Particularly when police officers use speed guns to measure drivers’ speeds, there is room for discretion that cannot be built into an algorithm. As the researchers say, bias can be encoded into the system, but once encoded, the code is unbiased in its execution. The researchers conclude that discretion after the fact may actually be important even for the simplest of offences, like speeding. Offences requiring a mental element in addition to commission of the prohibited act are likely to be even harder to encode effectively ex ante:
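
That buffer zone is a good example of the encoded-bias point: an automated system can represent an officer’s tolerance only as a fixed parameter, which is then applied without exception. A hypothetical sketch, with an invented tolerance value:

```python
# An officer's informal tolerance, encoded as a fixed buffer. Once
# encoded, it is applied identically to every sample: the bias is
# chosen up front, the execution is unbiased. Values are hypothetical.

SPEED_LIMIT = 55  # assumed limit, mph
BUFFER_MPH = 5    # hypothetical tolerance a human officer might allow

def is_ticketable(speed):
    """Ticket only samples beyond limit + buffer."""
    return speed > SPEED_LIMIT + BUFFER_MPH

print(is_ticketable(58))  # -> False: technically speeding, inside the buffer
print(is_ticketable(61))  # -> True: over the buffer, ticketed every time
```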

The question arises, then: What is the societal cost of automated law enforcement, particularly when involving artificially-intelligent robotic systems unmediated by human judgment? Our tradition of jurisprudence rests, in large part, on the indispensable notion of human observation and consideration of those attendant circumstances that might call—or even mandate—mitigation, extenuation, or aggravation. When robots mediate in our stead either on the side of law enforcement or the defendant—whether for reasons of frugality, impartiality, or convenience—an essential component of our judicial system is, in essence, stymied. Synecdochically embodied by the judge, the jury, the court functionary, etc., the human component provides that necessary element of sensibility and empathy for a system that always, unfortunately, carries with it the potential of rote application, a lady justice whose blindfold ensures not noble objectivity but compassionless indifference. [p. 28]

This, perhaps, is an unsurprising view when considering complex offences that require that the offender acted with intention, knowledge, or recklessness. But it also raises interesting questions for strict liability. Might it be the case that strict liability statutes are enacted not only under the assumption, but perhaps even in the hope, that not all violations will be picked up? Is the lower resolution of manual law enforcement actually preferable for less serious offences? The answer will depend in part on the seriousness of the offence in question and the justifications for the attendant sanctions: Deterrence? Retribution? Generation of revenue?

There is, of course, an important difference between seeing the algorithm as inadequate because it gets something factually wrong and seeing it as inadequate because some discretion might be preferable. For example, the discretion involved in deciding how offences should be delineated as a driver meanders above and below the speed limit is something we might wish to preserve. Further, the experiment demonstrated that hilly terrain caused the vehicle to exceed the speed limit even with the cruise control set at the speed limit. A driver’s inability to control her speed with such precision provides justification for a buffer zone. Thus, despite the conceptual simplicity of what it means to break the speed limit, the experiment in law as algorithm at least raises the possibility that, in some cases, strict-ish liability is actually what we optimally want.

6 Comments on this post

  1. This is all very well, but your post neglects two things:

    1. Law is an embedded constituent of our own communities and therefore the very notion of algorithm is not subject to the kind of platono-kantian abstractology to which you subject it.

    2. You haven’t solved free will and until you do that your whole argument is moot!

  2. In denying that language is an algorithm we deny our own community constituted non-abstract humanity. This constitutes an anti-proto-kantian approach to philosophy which feeds upon stripto-enlightenment canards.

    1. Anthony Drinkwater

      I guess that you studied with Jean-Baptiste Botul, professor Sandel. It must have been inspiring to be in the presence of the author of such classics as “La Vie Sexuelle d’Emanuel Kant”. Cf my unpublished monograph “Some thoughts on Botulism and contemporary algorithms, with particular reference to “soft” approaches to hegemony”.

  3. Language: what definition is being used, and does a language alone always enable a complete reflection of all ongoing community constituted humanity? You appear to conflate the answer to that with free will, so it seems to require clarification, and this is not a Wittgensteinian response to Kant.

    Although it may ready people for those issues in life, illustrating the tone of the DVLA advertising campaign when they amalgamated data to automate the enforcement process in various responses regarding free will still does not fully illuminate the potential breadth of complexity contained within free will in any given community.

  4. This was especially interesting, given that last year at We Robot in Miami some authors presented a paper on the problems of removing humans from the loop and turning law enforcement over to a computer (Confronting Automated Law Enforcement). This year, the authors took the question a step further: in an experiment, they looked at what happens when you convert laws into algorithms.
