
trust

It is not about AI, it is about humans

Written by Alberto Giubilini

We might be forgiven for asking so frequently these days whether we should trust artificial intelligence. Too much has been written about the promises and perils of ChatGPT to escape the question. Upon reading both enthusiastic and concerned accounts of it, there seems to be very little the software cannot do. It can provide or fabricate a huge amount of information in the blink of an eye, reinterpret it and organize it into essays that seem written by humans, produce different forms of art (from figurative art to music, poetry, and so on) virtually indistinguishable from human-made art, and so much more.

It seems fair to ask how we can trust AI not to fabricate evidence, plagiarize, defame, serve anti-democratic political ends, violate privacy, and so on.

One possible answer is that we cannot. This could be true in two senses.

In a first sense, we cannot trust AI because it is not reliable. It gets things wrong too often; there is no way to tell whether it is wrong without ourselves doing the kind of research the software was supposed to do for us; and it can be used in unethical ways. On this view, the right attitude towards AI is one of cautious distrust. What it does might well be impressive, but it is not epistemically or ethically reliable.

In a second sense, we cannot trust AI for the same reason that we cannot distrust it, either. Quite simply, trust (and distrust) is not the kind of attitude we can have towards tools. Unlike humans, tools are just means to our ends. They can be more or less reliable, but not more or less trustworthy. In order to trust, we need to have certain dispositions – or ‘reactive attitudes’, to use some philosophical jargon – that can only appropriately be directed at humans. According to Richard Holton’s account of ‘trust’, for instance, trusting someone requires a readiness to feel betrayed by them [1]. Or perhaps we can talk, less emphatically, of a readiness to feel let down.


Trust and Institutions

Last week I attended part of a fascinating conference on Trust, organized by the Blavatnik School of Government in Oxford. In her opening paper, Katherine Hawley raised many interesting questions, including those of whether trustworthiness is a virtue and whether it can be a virtue of institutions.

Oxford Martin School Seminar: Robert Rogers and Paul Van Lange on Social Dilemmas

In a joint event on November 15th, Prof Robert Rogers and Prof Paul van Lange presented their scientific work related to social dilemmas.

Social dilemmas are situations in which private interests conflict with collective interests. This means that people facing a social dilemma have to decide whether to prioritise their own short-term interests or the long-term interests of a group. Many real-life situations are social dilemmas. For example, as individuals we would (economically) benefit from using public motorways without paying the taxes that maintain them, but if everyone acted on that self-interest, no motorways would be built and society as a whole would be worse off. In the academic literature, the three most prominently discussed types of social dilemma are the Prisoner’s Dilemma, the Public Goods Dilemma, and the Tragedy of the Commons. All three have been modelled as experimental games, and research in fields such as psychology, neuroscience, and behavioural economics uses these games to investigate the conditions under which people are willing to cooperate with one another in social dilemmas rather than maximise their self-interest. The ultimate goal of such research is to be able to recommend ways of resolving social dilemmas in society.
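To make that structure concrete, here is a minimal Python sketch of a two-player Prisoner’s Dilemma, the first of the experimental games mentioned above. The payoff values are illustrative assumptions chosen only to exhibit the dilemma; they are not figures from the seminar.

# A minimal sketch of the Prisoner's Dilemma as an experimental game.
# The payoff numbers below are illustrative assumptions, not data
# from the Rogers/van Lange seminar.

PAYOFFS = {
    # (my_move, other_move) -> (my_payoff, other_payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: best for the group
    ("cooperate", "defect"):    (0, 5),  # the "sucker" outcome
    ("defect",    "cooperate"): (5, 0),  # temptation to free-ride
    ("defect",    "defect"):    (1, 1),  # mutual defection: bad for everyone
}

def play(move_a: str, move_b: str) -> tuple[int, int]:
    """Return the payoffs for players A and B given their moves."""
    return PAYOFFS[(move_a, move_b)]

if __name__ == "__main__":
    # Whatever the other player does, "defect" pays more than "cooperate"...
    for other in ("cooperate", "defect"):
        print(f"vs {other}: cooperate -> {play('cooperate', other)[0]}, "
              f"defect -> {play('defect', other)[0]}")
    # ...yet mutual defection leaves both players worse off than
    # mutual cooperation -- the defining structure of a social dilemma.
    print("mutual cooperation:", play("cooperate", "cooperate"))
    print("mutual defection:  ", play("defect", "defect"))

Running the sketch shows why the game counts as a dilemma: defecting earns the individual more against either move the other player might make, yet mutual defection (1, 1) leaves both players worse off than mutual cooperation (3, 3).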


Lying in the least untruthful manner: surveillance and trust

When I last blogged about the surveillance scandal in June, I argued that the core problem was the reasonable doubts we have about whether the oversight is functioning properly, and that the secrecy makes these doubts worse. Since then a long list of new revelations has arrived. To me, what matters is not so much whether foreign agencies are secretly paid to spy, the doubts about internal procedures, or how deeply software can peer into human lives, but how these revelations give the lie to many earlier denials. In an essay well worth reading, Bruce Schneier points out that this pattern of deception severely undermines our trust in the authorities, and that this is an important social risk: democracies and market economies require us to trust politicians and companies to an appropriate extent.
