Oxford Uehiro Prize in Practical Ethics: What, if Anything, is Wrong About Algorithmic Administration?

This essay received an honourable mention in the undergraduate category.

Written by University of Oxford student Angelo Ryu.

 

Introduction

The scope of modern administration is vast. We expect the state to perform an ever-increasing number of tasks, including the provision of services and the regulation of economic activity. This requires the state to make a large number of decisions in a wide array of areas. Inevitably, the scale and complexity of such decisions stretch the capacity of good governance.

In response, policymakers have begun to implement systems capable of automated decision making. For example, certain jurisdictions within the United States use an automated system to advise on criminal sentences. Australia uses an automated system for parts of its welfare program.

Such systems, it is said, will help address the costs of modern administration. It is plausibly argued that automation will lead to quicker, more efficient, and more consistent decisions – that it will ward off a return to the days of Dickens’ Bleak House.

So far, these systems have mostly relied on fixed algorithms. In other words, they use a predetermined formula, which does not change, to make decisions. Recent advancements in technology, however, have made it feasible for the state to implement more flexible systems, such as those which rely on machine learning.

Fixed algorithms will result in the consistent application of rules. They do not, however, have the capacity to be flexible in response to unforeseen problems. That is, fixed algorithms work by considering a set of predetermined factors. Here, an ancient problem underlies a modern dilemma. The problem, as raised by Aristotle, is that a legislator cannot predict every possible application of the laws they enact. Thus, the legislator cannot ensure that the law, as applied, will conform to the reasons which justify its implementation.
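
To illustrate, here is a minimal sketch of what a fixed algorithm amounts to: a predetermined formula over a fixed set of factors. The rule, the factors, and the thresholds below are entirely hypothetical, not drawn from any real programme.

```python
# A hypothetical fixed eligibility rule: the factors and thresholds are
# set in advance and never change, whatever cases later arise.
def welfare_eligible(income: float, dependants: int) -> bool:
    return income < 20_000 or (dependants >= 2 and income < 30_000)

# An unforeseen circumstance, say a one-off insurance payout inflating this
# year's income, is invisible to the rule: there is simply no factor for it.
print(welfare_eligible(income=35_000, dependants=3))  # False
```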

A possible response is to develop systems capable of adapting to new circumstances. This is made possible with dynamic algorithmic systems, like machine learning. In this essay, however, I raise two issues with such systems in the context of public administration. First, they are inconsistent with procedural justice. Second, they are incompatible with the traditional avenues of judicial review.

Procedural Justice

Two models of procedural justice

Recent scholarship has thrown light on the risk that algorithms may perpetuate social injustice.[1] This is a matter of substantive justice, since it is an evaluative claim about the justice of outcomes. It therefore lies beyond the scope of this essay. Less explored, though, is the compatibility of algorithms with procedural justice. This aspect of justice relates to evaluative claims about the process by which a decision is made.

There are two distinct views of procedural justice in the literature. From an instrumentalist perspective, the value of procedural justice lies in promoting better decisions. It is therefore consequentialist. On this view, the benefits of making the correct decision must be weighed against the process costs.[2] The aim is to reach an optimal balance.[3] Here, there is little difficulty with algorithms as a matter of procedural justice, since machines are better than people at consistently applying rules.

Non-instrumentalists are in the opposite camp. They argue that the value of procedural justice cannot be reduced to the correctness of the ultimate decision. In English law, there is a prominent 18th-century case, known as Dr Bentley’s Case, which offers a striking defence of this view.[4] What makes this judgement remarkable is how the court justified the requirements of procedural justice. The judges pointed to Genesis 3:11, stating that even God did not pass judgement on Adam before calling upon him to present a defence. The point, of course, is that God allowed Adam a chance to present a defence despite His omniscience. Since God must already have known of Adam’s guilt, it is commonly argued that His purpose must have been non-instrumental. Specifically, the orthodox view grounds this purpose in the importance of human dignity. For example, Laurence Tribe and Jeremy Waldron suggest that it is needed to avoid the disrespect of treating citizens as mere ‘things’[5] or as ‘a rabid animal’.[6] In short, human dignity requires that we have a say in what happens to us, as opposed to being things which are acted upon. Algorithms, then, are more likely to raise concerns from the non-instrumentalist perspective.

A new approach

I generally adopt the non-instrumentalist view that procedural justice has value independent of the outcome. But, in my view, it would be wrong to ground its value in human dignity alone. To see why, let us return to Adam and God. Before passing judgement, God asks Adam whether he had ‘eaten of the tree’. Following Dr Bentley’s Case, this has commonly been interpreted as a request for a defence. However, I argue it is better understood as an opportunity for confession. What, though, is the good of confession? A plausible answer, I suggest, is that participating in an honest and considered confession brings us closer to God. By placing faith in His unending mercy, we do our part in a process leading to absolution.

This view of confession has something important to tell us about the good of procedural justice. For it, too, affords us an opportunity to draw closer to those charged with the power of decision. It promotes an important, and valuable, way in which we interact with administrative officials.

This is a novel defence of the value of procedural justice. It, in turn, gives us a new way to think about the downsides of implementing algorithms to perform public tasks. One problem with algorithmic administration, I argue, is its inability to secure the moral good which comes from procedural justice.

Algorithms and procedural injustice 

When administrative decision making works well, both officials and stakeholders work together to promote the common good. Part of what it means to have a just procedure, then, is the ability to contribute to a certain type of joint activity. In this way, it is something which officials and citizens do together. This activity broadly takes two forms. First, it might involve deliberating as to the right policy to implement. Second, it might involve the adjudication of a dispute.

To see what makes this good, it might be helpful to consider the opposite of procedural justice. This, I suggest, is procedural arbitrariness. By this I mean a process which is utterly indifferent to the reasons for or against a decision. It obtains, in my view, when a decision making process lacks the minimum procedures necessary for it to be responsive to reasons. Importantly, this is different from procedures which encourage the wrong result. An arbitrary procedure does not necessarily promote the consideration of wrong reasons; it is simply indifferent to proper reasons.

There are, of course, many reasons why procedural arbitrariness is wrong. For one, decisions which turn on the whims of an official, or on some other irrelevant consideration, make for poor governance. It is also wrong, however, because of its indifference towards those whose interests are at stake. In so doing it contributes to a sense of alienation from the administration. In short, it is a perversion of the good of procedural justice, which brings together stakeholders and officials. Faceless bureaucrats, working within the depths of a faceless building, making decisions about the fate of citizens without notice or reason: this is Kafka’s image of procedural arbitrariness.

We are now in a position to evaluate the procedural justice of algorithmic administration. It is wrong because it fails to secure the good of procedural justice; indeed, it may even alienate those whose interests are at stake. In an important sense, the use of algorithmic systems as primary decision makers cannot contribute to a sense of closeness between officials and stakeholders. We may be close to our phones, but not in the way we are close to friends, colleagues, or family. Our phones tell us things, and we tell our phones to do things. But we do not do things together with our phones. What is missing is that sense of mutuality, of shared intention, of teamwork. So too with algorithmic administration. It lacks the ability to promote the joint activity of administration between the polity and its officials. In this way it is procedurally unjust.

Judicial Review 

In response, though, one might point to the possibility of judicial review. If the courts could effectively supervise the administration, it is plausible that they might be capable of curing the procedural deficiencies of algorithmic administration. So long as we have our day in court, one might say, these procedural concerns might be remedied. This brings us to the second problem with algorithmic administration: such systems confound the traditional avenues of judicial review.

Law is, at bottom, a conceptual enterprise. It consists of the human application of concepts to new situations. We begin with pre-set concepts. We assign labels to those concepts. We then apply those concepts to the world to help make our decisions. It is, in this sense, a top-down approach.

In some ways, this is similar to how machine learning algorithms work. A system based on machine learning looks at data and ‘learns’ from new situations. To do so, it might adjust the weighting of various nodes in its neural network. That is, it might learn that certain factors have more, or less, predictive power. It might also learn that certain factors, which it did not initially consider, have predictive power. For example, if we were building a machine learning algorithm to classify animals, it would learn to look for things such as eyes, noses, and the like. By doing so, it learns to construct higher-level features, which help it analyse the information it receives.
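
To make this concrete, the sketch below trains a deliberately simple logistic model by gradient descent on synthetic data. It is a toy illustration, not any deployed system: the weight on the genuinely predictive factor grows, while the weight on a noise factor stays near zero.

```python
# A minimal sketch of learned weighting: two input factors, only one of
# which actually predicts the label. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))       # two factors per case
y = (X[:, 0] > 0).astype(float)     # only factor 0 matters

w = np.zeros(2)                     # start with no factor favoured
for _ in range(200):                # repeated exposure to the data
    p = 1 / (1 + np.exp(-X @ w))    # current predictions (logistic model)
    grad = X.T @ (p - y) / len(y)   # how each weight contributed to error
    w -= 0.1 * grad                 # adjust weights to predict better

print(w)  # large weight on factor 0; near-zero weight on the noise factor
```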

The difficulty, however, is the inability of such systems to label these features. Although an algorithmic system might learn to construct a higher-level feature which looks for eyes or noses, it lacks the ability to call them eyes or noses. Of course, when the algorithm is used to identify pictures of animals, we can ‘cheat’ by looking at which sets of image pixels the algorithm is considering. We can see that the algorithm, when considering a feature it has constructed, is looking at the part of the animal which we call a ‘nose’. But such a workaround is difficult to do with other types of tasks.
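
One common way of carrying out this kind of inspection is occlusion analysis: mask one region of the input at a time and watch how the output moves. The sketch below is a toy illustration; the ‘model’ is a hypothetical stand-in whose output depends on a single fixed patch, where a real audit would substitute the trained classifier itself.

```python
# Occlusion analysis on a toy 'classifier' whose output depends only on
# one fixed 4x4 patch (rows and columns 8-11), standing in for a learned
# feature such as a 'nose' detector.
import numpy as np

rng = np.random.default_rng(0)

def model(img):
    return img[8:12, 8:12].mean()

img = rng.random((28, 28))
baseline = model(img)

# Blank out one 4x4 patch at a time and record how much the output shifts.
sensitivity = np.zeros_like(img)
for i in range(0, 28, 4):
    for j in range(0, 28, 4):
        occluded = img.copy()
        occluded[i:i + 4, j:j + 4] = 0.0
        sensitivity[i:i + 4, j:j + 4] = abs(baseline - model(occluded))

# The most sensitive patch marks the region the model is 'looking at'.
print(np.unravel_index(sensitivity.argmax(), sensitivity.shape))
```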

This may present a difficult challenge to our current system of judicial review. Say, for example, we implement a machine learning system for criminal punishment, based on the premise that the length of a criminal sentence should depend on the likelihood of recidivism. We can program a system which considers various factors within a data set to come up with a recommended sentence. But as it is fed more data, it adjusts the weighting of these factors. It may even create new features as it discovers other relevant factors. It will be difficult, however, to review whether such a new feature involves the consideration of a wrongful factor, such as race.
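
A stylised sketch may make the worry concrete. Everything below is synthetic and the variable names are hypothetical: the protected attribute is never an input to the model, yet a correlated proxy carries its influence, and nothing in the learned weights announces that fact.

```python
# Proxy discrimination in miniature: 'race' is withheld from the model,
# but a correlated 'postcode' variable lets it in indirectly.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
race = rng.integers(0, 2, n)               # protected attribute; never an input
postcode = race + rng.normal(0, 0.3, n)    # proxy, strongly correlated with race
prior_record = rng.normal(0, 1, n)         # a legitimate factor

# Historical labels reflect differential enforcement as well as conduct.
y = (prior_record + 0.8 * race + rng.normal(0, 1, n) > 0).astype(float)

X = np.column_stack([prior_record, postcode])
w = np.zeros(2)
for _ in range(300):                       # the same logistic updates as above
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

# Feature 1 earns substantial weight, but the weights alone do not reveal
# that it stands in for race; showing that requires data on the protected
# attribute itself, which a reviewing court may not have.
print("learned weights:", w)
print("proxy-race correlation:", round(np.corrcoef(postcode, race)[0, 1], 2))
```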

In this way, dynamic algorithms present unique challenges to our current framework for judicial review. First, such algorithms are unable to label their higher-level features, which prevents them from giving reasons for their decisions. Second, they present unique difficulties with respect to judicial competence. The review of algorithmic processes will likely require specialised knowledge in computer science. But judges, at least currently, are not experts in computer science. All this, I suggest, hampers the possibility of effective judicial review.

Conclusion

The technical capabilities of dynamic algorithms are rapidly expanding. It appears likely, given current trends, that governments will increasingly rely on algorithms to conduct public tasks. Such algorithmic administration promises significant upsides, including efficiency and consistency. But it also raises many ethical concerns, such as the capacity of algorithms to advance substantively unjust outcomes. In this essay, I have suggested two other reasons for concern, which have been less explored. First, such algorithmic systems fail to secure the good of procedural justice. Second, they are difficult to square with the traditional avenues of judicial review.

[1] See, e.g., Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018); Solon Barocas and Andrew Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671

[2] Richard Posner, ‘An Economic Approach to Legal Procedure and Judicial Administration’ (1973) JLS 399 at 401

[3] Adrian Vermeule, ‘Optimal Abuse of Power’ (2014) Nw U L Rev 673 at 676

[4] R v The Chancellor of Cambridge (1722) 93 E.R. 698, 704

[5] Laurence Tribe, American Constitutional Law (2nd ed, Mineola 1988) at 666

[6] Jeremy Waldron, ‘How the Law Protects Dignity’ (2012) CLJ 200 at 210
