Don’t write evil algorithms
Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the holding company Alphabet that merely asks employees to “do the right thing”. Regardless of what one thinks about Google’s actual behaviour and ethics, it seems the company got one thing right early on: a recognition that it was moving in a morally charged space.
Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages; it scaled up thanks to MapReduce algorithms; and it uses algorithms for choosing adverts, driving cars and selecting nuances of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
Can we make and use algorithms more ethically?
The algorithmic world
The basic insight is that the geosphere, ecosphere, anthroposphere and technosphere are getting deeply entwined, and algorithms are becoming a key force in regulating this global system.
The word “algorithm” (loosely defined here as a set of rules that precisely defines a sequence of operations to reach a certain goal) may be the fashionable way of saying “software” right now, but it applies just as well to mathematical methods, formal institutions and social praxis. There is an algorithm for becoming a UK citizen. However, it is the algorithms in our technology that leverage our power at an accelerating pace: when technology can do something better than humans, it usually does it far better, and it can be copied endlessly or scaled up.
Some algorithms enable new activities (multimedia would be impossible without the fast Fourier transform and cyclic redundancy checks), change how activities are done (data centres exist because virtualization and MapReduce make them scale well), or speed up the development of further algorithms (compilers and libraries).
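To make the MapReduce point concrete, here is a minimal sketch of the map/shuffle/reduce pattern behind it – plain Python rather than any real framework, with function names of my own invention. Each stage can run on many machines at once, which is what makes the pattern scale:

```python
from collections import defaultdict

# A toy MapReduce word count: each stage is trivially parallelisable,
# which is what lets the same pattern scale across a data centre.

def map_phase(document):
    # Emit (word, 1) pairs; each document can be mapped on a separate machine.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Group values by key; in a real system this is the network shuffle step.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Combine each key's values independently; again parallelisable per key.
    return {key: sum(values) for key, values in groups.items()}

documents = ["the cat sat", "the cat ran"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```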
Algorithms used for decision support are particularly important. Logistics algorithms (routing, linear programming, scheduling and optimization) affect the scope and efficiency of the material economy; financial algorithms affect the scope and efficiency of the economy itself. Intelligence algorithms (data collection, warehousing, mining and network analysis, but also methods for combining human expert judgement), statistics gathering and risk models affect government policy. Recommender systems (“You May Also Enjoy…”) and advertising influence consumer demand.
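As a small illustration of the logistics case, here is a toy transport problem solved with linear programming via SciPy’s linprog; the warehouses, costs and capacities are invented for the example:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example: ship goods from two warehouses to two shops at
# minimum cost. Decision variables x = [w1->s1, w1->s2, w2->s1, w2->s2].
cost = np.array([4.0, 6.0, 5.0, 3.0])   # cost per unit on each route

# Supply limits: each warehouse can ship at most its stock.
A_ub = np.array([[1, 1, 0, 0],          # warehouse 1 holds 40 units
                 [0, 0, 1, 1]])         # warehouse 2 holds 30 units
b_ub = np.array([40.0, 30.0])

# Demand: each shop must receive exactly what it ordered.
A_eq = np.array([[1, 0, 1, 0],          # shop 1 needs 25 units
                 [0, 1, 0, 1]])         # shop 2 needs 35 units
b_eq = np.array([25.0, 35.0])

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=[(0, None)] * 4)
print(result.x, result.fun)             # optimal shipments and total cost
```

A solver like this quietly decides which routes carry traffic and which do not – multiply that by every shipping firm using similar software and the aggregate effect on the material economy is large.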
Since these algorithms are shared, their properties will affect a multitude of decisions and individuals in the same way, even when those individuals think they are acting independently. Actions caused by algorithms spill over from the groups that use them onto other stakeholders. We sometimes outsource moral decisions to algorithms. And algorithms have a multitude of non-trivial failure modes: machine learning can create opaque bias or sudden emergent misbehaviour, human over-reliance on algorithms can cause accidents or large-scale misallocation of resources, some algorithms produce systemic risks, and others embody malicious behaviours.
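The opaque-bias failure mode is easy to demonstrate. The sketch below uses made-up hiring data: the model is never shown the protected attribute, yet it absorbs the historical bias through a correlated proxy variable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                 # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)               # true ability, identical across groups
# Historical hiring decisions were biased against group 1:
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# A model trained *without* the group column can still absorb the bias
# through a correlated proxy (a stand-in for something like postcode):
proxy = group + rng.normal(0.0, 0.3, n)
X = np.column_stack([skill, proxy])
weights, *_ = np.linalg.lstsq(X, hired.astype(float), rcond=None)
scores = X @ weights

print("mean score, group 0:", scores[group == 0].mean())
print("mean score, group 1:", scores[group == 1].mean())  # systematically lower
```

Nothing in the code mentions the group, and nobody decided to discriminate; the bias is an emergent property of the data and the fitting procedure – which is exactly what makes it opaque.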
In short, algorithms – whether in computers or as formal praxis in an organisation – matter morally because they have significant and nontrivial effects.
The Biosphere Code
This weekend I contributed to a piece of manifesto writing, producing the Biosphere Code Manifesto. The Guardian has a version on its blog. Admittedly, it is not as dramatic as Marinetti’s Futurist Manifesto, but it is perhaps more constructive:
Principle 1. With great algorithmic powers come great responsibilities
Those implementing and using algorithms should consider the impacts of their algorithms.
Principle 2. Algorithms should serve humanity and the biosphere at large
Algorithms should be considerate of human needs and the biosphere, and facilitate transformations towards sustainability by supporting ecologically responsible innovation.
Principle 3. The benefits and risks of algorithms should be distributed fairly
Algorithm developers should take issues relating to the distribution of risks and opportunities more seriously. Developing algorithms that provide benefits to the few and present risks to the many is both unjust and unfair.
Principle 4. Algorithms should be flexible, adaptive and context-aware
Algorithms should be open, malleable and easy to reprogram if serious repercussions or unexpected results emerge. Algorithms should be aware of their external effects and be able to adapt to unforeseen changes.
Principle 5. Algorithms should help us expect the unexpected
Algorithms should be used in such a way that they enhance our shared capacity to deal with shocks and surprises – including problems caused by errors or misbehaviors in other algorithms.
Principle 6. Algorithmic data collection should be open and meaningful
Data collection should be transparent and respectful of public privacy. In order to avoid hidden biases, the datasets which feed into algorithms should be validated.
Principle 7. Algorithms should be inspiring, playful and beautiful
Algorithms should be used to enhance human creativity and playfulness, and to create new kinds of art. We should encourage algorithms that facilitate human collaboration, interaction and engagement – with each other, with society, and with nature.
The aim is to explore and critically discuss the ways in which the algorithmic revolution impacts the world. Or: how do we turn algorithms into a force for good?
Ethics of the principles
The first principle simply urges us to recognize the power of algorithms, and that wielding it carries moral weight. The wrong choice of discount rate makes a system ignore the future and focus on instant gratification. Making certain data, such as legal records, easily searchable has powerful social implications. Just because something is merely code does not mean it is less dangerous than a material device.
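A minimal worked example of the discount-rate point, with invented numbers – the present value of a payoff t steps away is payoff × γ^t, so a modest change in γ decides whether the future counts at all:

```python
# How much does a planner value a payoff of 100 arriving 50 steps from now?
# Present value = payoff * gamma**t for a per-step discount factor gamma.
payoff, t = 100.0, 50

for gamma in (0.99, 0.95, 0.80):
    print(f"gamma = {gamma}: present value = {payoff * gamma**t:.4f}")

# gamma = 0.99: ~60.5    -- the future still counts
# gamma = 0.95: ~7.7     -- the future is faint
# gamma = 0.80: ~0.0014  -- the system is effectively blind beyond the present
```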
The second principle tries to link “good” with serving humanity and the biosphere. Much of the initiative behind the code came from the Stockholm Resilience Centre, and I guess it shows. In many ways the issue is similar to the one in the AI open letter: pushing for beneficial rather than merely capable systems. Algorithms represent simple forms of autonomy, and applied blindly they are likely to produce adverse outcomes. That they can be profoundly mis-aimed is perhaps more obvious for dumb algorithms than for a putative superintelligence.
The third principle is a justice principle, arguing for compensation or inclusion of those affected by an algorithm: externalities should be internalized.
The fourth principle is more a prudential principle than an ethical one. Still, unyielding algorithms pose many of the most worrisome social issues of new technology: forcing people to disclose private information, manipulating what social personas they can have, forcing humans to act non-autonomously.
The fifth principle is more about designing flexible, adaptive and context-aware human use of algorithms than about the algorithms themselves. Many large technological and ecological disasters occur because human institutions misuse or misunderstand their algorithms, producing maladaptive behaviour. The context is not just the material world but the human world of incentives, vested interests and gaming of systems.
The sixth principle is back in the land of normal information ethics: privacy, responsible disclosure, avoiding bias and opacity, achieving transparency and openness. Still, algorithms go beyond mere data: they are active processes that shape information, knowledge and action. There are interesting issues to consider about what meaningful openness actually is when dealing with potentially unpredictable algorithms – what does informed consent mean vis-à-vis algorithms?
The seventh principle may seem out of place. But aesthetic value is a form of value too, and often a profound driver of human action. Creativity is needed to invent and harness algorithms in new ways. Algorithms never occur in a vacuum: they are embedded in the world of soft human interaction. Assuming they are independent of the world and autonomous is very much a human choice of how to interpret them.
What is the point?
Could a code like the Biosphere Code actually do anything useful? Isn’t this yet another splashy “wouldn’t it be nice if everybody were moral and rational” appeal, of the kind made in engineering, politics and international relations?
I think it is a first step towards something useful.
There are engineering ethics codes for software engineers, but algorithms are created in many domains and by many non-engineers. We cannot and should not prevent people from thinking up, proposing and trying new algorithms: that would be like trying to regulate science, art and thought. But as societies we can create incentives to do constructive things and avoid known destructive things. To do so, we should recognize that we need to work on those incentives and start gathering information.
Algorithms and their large-scale results must be studied and measured: we cannot rely on theory alone, despite its seductive power, since there are profound theoretical limits on our ability to predict what algorithms will do (Rice’s theorem implies that no general method can decide nontrivial behavioural properties of arbitrary programs), as well as obvious practical limitations. Algorithms exist in a human and biosphere context and are an active part of what is going on there. An algorithm can be totally correct and yet be misused in a harmful way because of how it is framed.
But even in the small, if we can make one programmer think a bit more about what they are doing and choose a better algorithm than they otherwise would have, the world is better off. In fact, a single programmer can have a surprisingly large impact.
I am more optimistic than that. Recognizing algorithms as the key building blocks of our civilization that they are, understanding their peculiarities, and learning better ways of designing and using them have transformative power. There are disciplines dealing with parts of this, but the whole requires considering interdisciplinary interactions that are currently rarely explored.
Google got that part right. It is up to the rest of us to figure out how not to be evil (or do the right thing) with our algorithmic power.
(This post is an expanded version of an earlier post on my blog.)