
Don’t write evil algorithms

Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the holding company Alphabet that merely wants employees to “do the right thing”. Regardless of what one thinks about the actual behaviour and ethics of Google, it seems that it got one thing right early on: a recognition that it was moving in a morally charged space.

Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages; it scaled up thanks to MapReduce; and it uses algorithms for choosing adverts, driving cars and selecting shades of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.

Can we make and use algorithms more ethically?

The algorithmic world

The basic insight is that the geosphere, ecosphere, anthroposphere and technosphere are getting deeply entwined, and algorithms are becoming a key force in regulating this global system.

The word “algorithm” (loosely defined here as a set of rules that precisely defines a sequence of operations to reach a certain goal) may be the fashionable way of saying “software” right now, but it applies just as well to mathematical methods, formal institutions and social praxis. There is an algorithm for becoming a UK citizen. However, it is the algorithms in our technology that leverage our power at an accelerating pace. When technology can do something better than humans, it usually does it far better, and it can also be copied endlessly or scaled up.
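To make that definition concrete, here is a minimal sketch in Python: Euclid’s algorithm for the greatest common divisor is exactly a short set of rules that precisely defines a sequence of operations reaching a definite goal.

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
        # until b is zero; the remaining a is the greatest common divisor.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))  # prints 21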

Some algorithms enable new activities (multimedia is impossible without FFT and CRC), change how activities are done (data centres happen because virtualization and MapReduce make them scale well), or enable faster algorithmic development (compilers and libraries).

Algorithms used for decision support are particularly important. Logistics algorithms (routing, linear programming, scheduling, and optimization) affect the scope and efficiency of the material economy. Financial algorithms affect the scope and efficiency of the economy itself. Intelligence algorithms (data collection, warehousing, mining and network analysis, but also methods for combining human expert judgement), statistics gathering and risk models affect government policy. Recommender systems (“You May Also Enjoy…”) and advertising influence consumer demand.

Since these algorithms are shared, their properties will affect a multitude of decisions and individuals in the same way, even when those individuals think they are acting independently. Actions caused by algorithms produce spillover effects from the groups that use them onto other stakeholders. We sometimes outsource moral decisions to them. And algorithms have a multitude of non-trivial failure modes: machine learning can create opaque bias or sudden emergent misbehaviour, human over-reliance on algorithms can cause accidents or large-scale misallocation of resources, some algorithms produce systemic risks, and others embody malicious behaviours.

In short, algorithms – whether in computers or as a formal praxis in an organisation – matter morally because they have significant and nontrivial effects.

The Biosphere Code

This weekend I contributed to a piece of manifesto writing, producing the Biosphere Code Manifesto. The Guardian has a version on its blog. Admittedly, it is not as dramatic as Marinetti’s Futurist Manifesto but perhaps more constructive:

Principle 1. With great algorithmic powers come great responsibilities

Those implementing and using algorithms should consider the impacts of their algorithms.

Principle 2. Algorithms should serve humanity and the biosphere at large

Algorithms should be considerate of human needs and the biosphere, and facilitate transformations towards sustainability by supporting ecologically responsible innovation.

Principle 3. The benefits and risks of algorithms should be distributed fairly

Algorithm developers should consider issues relating to the distribution of risks and opportunities more seriously. Developing algorithms that provide benefits to the few and present risks to the many is both unjust and unfair.

Principle 4. Algorithms should be flexible, adaptive and context-aware

Algorithms should be open, malleable and easy to reprogram if serious repercussions or unexpected results emerge. Algorithms should be aware of their external effects and be able to adapt to unforeseen changes.

Principle 5. Algorithms should help us expect the unexpected

Algorithms should be used in such a way that they enhance our shared capacity to deal with shocks and surprises – including problems caused by errors or misbehaviours in other algorithms.

Principle 6. Algorithmic data collection should be open and meaningful

Data collection should be transparent and respectful of public privacy. In order to avoid hidden biases, the datasets which feed into algorithms should be validated.

Principle 7. Algorithms should be inspiring, playful and beautiful

Algorithms should be used to enhance human creativity and playfulness, and to create new kinds of art. We should encourage algorithms that facilitate human collaboration, interaction and engagement – with each other, with society, and with nature.

The aim is to explore and critically discuss the ways in which the algorithmic revolution impacts the world. Or: how do we turn algorithms into a force for good?

Ethics of the principles

The first principle simply urges us to recognize the power of algorithms and that wielding it carries moral weight. The wrong choice of discount rate makes a system ignore the future and focus on instant gratification. Making certain data, such as legal records, easily searchable has powerful social implications. Just because something is merely code doesn’t mean it is less dangerous than a material device.
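A minimal sketch of how strongly that choice matters, using standard exponential discounting (the numbers are purely illustrative): raising the discount rate from 1% to 10% makes a payoff thirty years away shrink from about three quarters of its face value to almost nothing, so an optimizer with the higher rate will rationally neglect it.

    def present_value(payoff: float, years: int, rate: float) -> float:
        # Standard exponential discounting: payoff / (1 + rate)**years.
        return payoff / (1.0 + rate) ** years

    for rate in (0.01, 0.05, 0.10):
        pv = present_value(100.0, 30, rate)
        print(f"rate {rate:.0%}: 100 units in 30 years is worth {pv:.2f} today")
    # rate 1%: 100 units in 30 years is worth 74.19 today
    # rate 5%: 100 units in 30 years is worth 23.14 today
    # rate 10%: 100 units in 30 years is worth 5.73 today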

The second principle tries to link “good” with serving humanity and the biosphere. Much of the initiative for the code came from the Stockholm Resilience Centre, and I guess it shows. In many ways the issue is similar to the one in the AI open letter: pushing for beneficial rather than just capable systems. Algorithms represent simple forms of autonomy, and applied blindly they are likely to produce adverse outcomes. It is perhaps more obvious for dumb algorithms than for putative superintelligence that they can be profoundly mis-aimed.

The third principle is a justice principle, arguing for compensation or inclusion of those affected by an algorithm: externalities should be internalized.

The fourth principle is more of a prudential principle than an ethical one. Still, unyielding algorithms pose many of the most worrisome social issues of new technology: forcing people to disclose private information, manipulating what social personas they can have, forcing humans to act non-autonomously.

The fifth principle is more about designing flexible, adaptive and context-aware human use of algorithms than the algorithms themselves. Many large technological and ecological disasters occur because human institutions misuse or misunderstand the algorithms, producing maladaptive behaviour. The context is not just the material world but the human world of incentives, vested interests and gaming of systems.

The sixth principle is back in the land of normal information ethics: privacy, responsible disclosure, avoiding bias and opaqueness, achieving transparency and openness. Still, algorithms go beyond mere data: they are active processes that shape information, knowledge and action. There are interesting issues here to consider about what meaningful openness actually is when dealing with potentially unpredictable algorithms – what is informed consent vis-à-vis algorithms?

The seventh principle may seem out of place. But aesthetic value is a form of value too, and often a profound driver of human action. Creativity is needed to invent and harness algorithms in new ways. Algorithms never occur in a vacuum: they are embedded in the world of soft human interaction. Assuming they are independent of the world and autonomous is very much a human choice of how to interpret them.

What is the point?

Could a code like the Biosphere Code actually do anything useful? Isn’t this yet another splashy “wouldn’t it be nice if everybody were moral and rational” appeal, whether in engineering, politics or international relations?

I think it is a first step towards something useful.

There are engineering ethics codes for software engineers. But algorithms are created in many domains, including by non-engineers. We cannot and should not prevent people from thinking, proposing, and trying new algorithms: that would be like attempts to regulate science, art, and thought. But we can as societies create incentives to do constructive things and avoid known destructive things. In order to do so, we should recognize that we need to work on the incentives and start gathering information.

Algorithms and their large-scale results must be studied and measured: we cannot rely on theory, despite its seductive power, since there are profound theoretical limits on our ability to predict behaviour in the world of algorithms, as well as obvious practical limitations. Algorithms exist in a human and biosphere context, and they are an active part of what is going on. An algorithm can be totally correct and yet be misused in a harmful way because of its framing.

But even in the small, if we can make one programmer think a bit more about what they are doing and choose a better algorithm than they otherwise would have, the world is better off. In fact, a single programmer can have a surprisingly large impact.

I am more optimistic than that. Recognizing algorithms as the key building blocks that they are for our civilization, what peculiarities they have, and learning better ways of designing and using them has transformative power. There are disciplines dealing with parts of this, but the whole requires considering interdisciplinary interactions that are currently rarely explored.

Google got that part right. It is up to the rest of us to figure out how not to be evil (or do the right thing) with our algorithmic power.


(This post is an expanded version of an earlier post on my blog)


8 Comments on this post

  1. Only a liberal would write such a code, which treats ‘algorithm’ as morally unproblematic. Replace ‘algorithm’ by ‘market mechanism’ and ‘market forces’, and you will see how politically loaded it is. This is not the first attempt to rephrase liberal principles in technological language: many were produced in the early days of ‘The Internet’, as it was then called. The Biosphere Code deserves critical and sceptical examination, and one good way to do that is to consider how it could be rewritten for differing ideologies.

    1. Hahaha! I think I was the most right-wing person at that meeting. Of course market mechanisms are algorithms!

      Conservatives would of course stress the importance of “if it ain’t broke, don’t fix it” – there are costs to changing agreed algorithms, protocols and platforms. Socialists may recognize algorithms as means of production and argue that they need to be under public control. Fascists would of course stress the national interest in having the government and companies deeply aligned in their algorithms. And anarchists would point out that anyone should be allowed to code, but use of the code belongs to the community. And so on.

      The real question is what these ideological filters overlook. In particular, I think many of them are far too optimistic about our ability to predict the consequences of running algorithms – whether software or government programs or market mechanisms.

  2. I gave some quick thought to alternatives, and here are a few suggestions. The speed with which they came to mind indicates that the original Biosphere Code was flawed: if it had been seriously analysed before publication, it would not be so easy to think of alternatives.

    1. Veto right: a natural person should have the right to veto any algorithm, meaning that no other natural or legal person may use it in any way that has consequences for the person who vetoed it.

    2. Right of appeal: any natural person should be able to appeal any outcome of any algorithm, if that person is personally affected by that outcome. That would mean, for instance, that anyone who was dissatisfied with their blog’s rank on Google searches could take Google to court, probably after a simpler appeals procedure.

    3. Freedom of choice: consumers of products that are at least partly the result of operating an algorithm, e.g. search engines, should be allowed a free choice of algorithm. That would for instance compel search engine companies to offer alternative versions with alternative results – say, a women’s Google, an Islamic Google, and a Marxist Google.

    4. Predictability and transparency: the outcome of the algorithm under specified inputs must be public, and consistent. That is a precondition for rule 3. In practice it would preclude commercial secrecy about the algorithm.

    5. Design for outcome: an algorithm must be designed to produce at least some specified outputs regardless of the input. The state would specify such constraints, derived from values. For instance, all financial algorithms could be required to redistribute wealth and income.

    6. Design for innovation: one obvious flaw in the Biosphere Code is that it does not address innovation. The default outcome for all algorithms should be innovative, rather than conservative. However, it would be acceptable for conservatives to use alternative conservative algorithms, if no-one other than conservatives were affected by their use.

    1. Huh? I can instantly come up with alternatives to anything, but that doesn’t tell us anything about the quality of the original.

      Also, your alternatives actually look very problematic. The veto right, for example: what if I veto long division, alphabetical sorting, or compound interest as being applied to me? Or appeal the result of the sorting? How do you run a search engine if somebody demands that it gather data through an exponentially slow algorithm? Would encryption be allowed by your point 4? And why do you think your point 6 is not covered by principle 7?

      I am confident that one can make a far better set of algorithm design/handling principles than our current set. Some of your points are relevant – I think we need to have proper appeal processes for all functions in our societies. But developing such principles (and the practices that actually implement them) requires a very broad discussion. Starting out by putting a particular political view into a principle (like the end of your point 5) is unlikely to get everybody on board.

  3. What one could expect from Principle 5 is the formalization and verification of the safety of the algorithm. For example, proving that a program is “thread safe” is hard. It is possible to formally define the term “data race” and search the execution space of a specific run of a program to see whether it contains such a race, in time proportional to the size of the trace. However, many recent developments in metaprogramming and self-modifying algorithms make proofs about a given piece of source code even harder, because the machine that helps construct and verify the proof cannot explore all the possible execution spaces. For big algorithms that are ecologically complex, long and automated, human debuggers will have a hard time convincing other human debuggers (programmers, politicians, authorities, regulators) of the correctness of such evolving systems.

    It may help to think of the correctness of an algorithm as a goal that we seek to optimize in conjunction with other goals – for example, all the other principles you’ve listed above, plus computational steps, memory, energy costs, etc. There will be several trade-offs, for example between the probability that an algorithm produces an unexpected outcome and its speed.

    The analogy between markets and algorithms can be illuminating. Market mechanisms are algorithms. Human debuggers developed this thing called economics to describe and convince other human debuggers of the correctness of market mechanisms. It is still a developing experiment. Economists, regulators and other humans agree on higher-level principles about market mechanisms. We would need many such high-level principles on the trade-offs we expect to see when these ecologically complex systems are running, embroiled in our everyday life.
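    A minimal sketch of the kind of hazard meant here (illustrative Python, with the read-modify-write on a shared counter deliberately split into steps so the dangerous interleaving is visible):

        import threading

        counter = 0

        def worker(iterations: int) -> None:
            global counter
            for _ in range(iterations):
                tmp = counter   # read
                tmp += 1        # modify
                counter = tmp   # write: another thread may have written in between

        threads = [threading.Thread(target=worker, args=(1_000_000,)) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        print(counter)  # often less than 2000000: updates were lost in the race

    Verifying the absence of such interleavings across all possible schedules is exactly the hard search problem described above.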

    1. Yes, proving something is safe is hard, even when it is a well-specified algorithm. We can and should check it as far as possible, but in the end the only real way is testing – there have to be more usage-hours than a given mean time between failures for us to be confident that the system is better than that limit, and some risks are only detected by having experience with both the algorithm and the environment, seeing how they could interact in a bad way.
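      A back-of-envelope sketch of that limit, under the simplifying (and purely illustrative) assumption of exponentially distributed failures: observing zero failures during t usage-hours only justifies claiming a mean time between failures of roughly t/3 at 95% confidence.

          import math

          def demonstrated_mtbf(failure_free_hours: float, confidence: float = 0.95) -> float:
              # With exponential failure times, P(no failure in t) = exp(-t / MTBF),
              # so a one-sided lower confidence bound is t / ln(1 / (1 - confidence)).
              alpha = 1.0 - confidence
              return failure_free_hours / math.log(1.0 / alpha)

          print(demonstrated_mtbf(3000.0))  # ~1001: 3000 failure-free hours
                                            # support an MTBF claim of ~1000 hours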

      In the end, our algorithm usage is subject to learning, and we can actually develop “meta-algorithms” for improving it. The principles may be a first crude stab in this direction. Of course, often we just want informal procedures rather than iron-clad rules/steps.

      1. “But so far, the few who have understood the work have struggled to explain it to anyone else. “Everybody who I’m aware of who’s come close to this stuff is quite reasonable, but afterwards they become incapable of communicating it,” says one mathematician who did not want his name to be mentioned. The situation, he says, reminds him of the Monty Python skit about a writer who jots down the world’s funniest joke. Anyone who reads it dies from laughing and can never relate it to anyone else.”

        http://www.nature.com/news/the-biggest-mystery-in-mathematics-shinichi-mochizuki-and-the-impenetrable-proof-1.18509

  4. The authors of the Biosphere Code did put a ‘particular political view’ into their proposed principles. It’s called liberalism. It is characteristic of liberals that they present liberalism as a neutral framework, rather than as a specific political demand, or set of demands. That’s how the Code comes across, as evasive and propagandistic. On initial reading it seems harmless, but when you think about what it means, it is clear that it will allow morally unacceptable outcomes to continue, and could make things worse.

    The Biosphere Code was written by an elite, without participation or input from the weak, the dispossessed, the disadvantaged, the underclass, the poor, the persecuted minorities. Its stance is generally pro-business, which is not surprising given the background of its authors. My alternative principles are motivated by a desire to protect the weak and prevent injustice, and the Biosphere Code is not. So there is no point in trying to “get everybody on board”, because the issues are politicised, polarised, and disputed from the start. There are many principles and values on this planet, and many different attitudes. The fact of moral difference precludes compromise, and in practice precludes a shared human social life.

    So I think it’s wrong to be dismissive of alternative proposals. Anders Sandberg for instance uses as a reductio ad absurdum a possible veto on alphabetical sorting. In fact alphabetical sorting is known to have discriminatory effects (jstor.org/stable/30033639). So do other allegedly neutral methods of selection. The disadvantaged are the best placed to know which principles and methods disadvantage them, and a veto right would enable them to disable the mechanisms that harm them, including algorithms. Of course some individuals will demand absurd vetoes, but elite contempt for their choices is misplaced. My proposed veto right only applies to the extent that the individual is personally affected.

    However, I also understand that right-wing individuals would see such prohibitions as ‘political correctness gone mad’. I understand that individualised general veto rights would have huge negative impacts on the economy. I understand that my alternative principles will hurt business. They would for instance lead to bankruptcy for Google and other general search engines. They would block automated trading, and possibly cause a global economic collapse. Nevertheless these are the kind of policies that must be followed, if injustice and disadvantage and oppression are to be eliminated.

    The Biosphere Code will not have any such effects: business and banking could sign up to most of it, and everything would still be much the same. That’s what I mean about its pro-business stance. Its general failures include the biased authorship, lack of non-elite representation, failure to address the outcome of algorithms, and failure to offer alternative algorithms, or any procedure which might facilitate them. I don’t know how deliberate these omissions are, however, and I have tried to avoid an accusatory tone in this comment.
