
AI

Video Series: Is AI Racist? Can We Trust it? Interview with Prof. Colin Gavaghan

Should self-driving cars be programmed in a way that always protects ‘the driver’? Who is responsible if an AI makes a mistake? Will AI used in policing be less racially biased than police officers? Should a human being always take the final decision? Will we become too reliant on AIs and lose important skills? Many interesting…

World funds: implement free mitigations

The future is uncertain and far away. That means not only that we don’t know what will happen, but also that we don’t reason about it as if it were real: stories about the far future are morality tales, warnings or aspirations, not plausible theories about something that is actually going to happen.

Some of the best reasoning about the future assumes a specific model and then explores the ramifications of that assumption. Assuming that property rights will be strictly respected in the future can lead to worrying consequences if artificial intelligence (AI) or uploads (AIs modelled on real human brains) become possible. These scenarios combine stupidly huge economic growth with the simultaneous obsolescence of humans as workers – unbelievable wealth for (some of) the investing class and penury for the rest.

This may sound implausible, but the interesting thing is that there are free mitigation strategies that could be implemented right now.