
A Sad Victory

I recently watched the documentary AlphaGo, directed by Greg Kohs. The film tells the story of the development of AlphaGo, a computer Go program built by DeepMind, and follows the match between AlphaGo and Lee Sedol, an 18-time world Go champion.

Go is an ancient Chinese board game, once counted among the four essential arts of aristocratic Chinese scholars. The goal is to end the game having captured more territory than your opponent. What makes Go a particularly interesting game for AI to master is, first, its complexity. Compared to chess, Go has a larger board and many more alternatives to consider per move: a typical position offers about 20 legal moves in chess but about 200 in Go, and the number of possible configurations of the board exceeds the number of atoms in the observable universe. Second, Go is a game in which intuition is believed to play a big role. When professionals are asked why they played a particular move, they often answer with something to the effect of 'it felt right'. It is this intuitive quality that leads some to consider Go an art, and Go players artists. For a computer program to beat human Go players, then, it would have to mimic human intuition (or, more precisely, mimic the results of human intuition).
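The scale of that complexity gap can be made concrete with a back-of-the-envelope calculation. The figures below (branching factors of roughly 20 and 200, game lengths of roughly 80 and 150 plies, and 10^80 atoms in the observable universe) are common ballpark estimates, not exact counts:

```python
import math

# Rough game-tree size: branching factor raised to game length (in plies).
chess = 20 ** 80     # ~20 legal moves per position, ~80 plies per game
go = 200 ** 150      # ~200 legal moves per position, ~150 plies per game
atoms = 10 ** 80     # commonly cited estimate of atoms in the observable universe

print(f"chess game tree: ~10^{math.log10(chess):.0f}")  # ~10^104
print(f"go game tree:    ~10^{math.log10(go):.0f}")     # ~10^345
print(go > atoms)  # True: even this crude estimate dwarfs the atom count
```

Even on these rough numbers, the Go game tree is hundreds of orders of magnitude larger than that of chess, which is why brute-force search alone was never going to be enough.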


Cross Post: Biased Algorithms: Here’s a More Radical Approach to Creating Fairness


Written by Dr Tom Douglas


Our lives are increasingly affected by algorithms. People may be denied loans, jobs, insurance policies, or even parole on the basis of risk scores that they produce.

Yet algorithms are notoriously prone to biases. For example, algorithms used to assess the risk of criminal recidivism often have higher error rates for minority ethnic groups. As ProPublica found, the COMPAS algorithm – widely used to predict re-offending in the US criminal justice system – had a higher false positive rate for black people than for white people; black people were more likely to be wrongly predicted to re-offend.
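The disparity ProPublica reported can be stated precisely using the standard confusion-matrix definition of the false positive rate. The sketch below uses made-up labels for two hypothetical groups, not COMPAS data, purely to illustrate the metric:

```python
def false_positive_rate(predictions, outcomes):
    """FPR = FP / (FP + TN): the share of people who did NOT re-offend
    but were nonetheless predicted to re-offend (1 = predicted/actual re-offence)."""
    fp = sum(1 for p, o in zip(predictions, outcomes) if p and not o)
    tn = sum(1 for p, o in zip(predictions, outcomes) if not p and not o)
    return fp / (fp + tn)

# Hypothetical risk-score outputs and actual outcomes for two groups.
group_a_pred = [1, 1, 0, 1, 0, 0, 0, 1]
group_a_real = [1, 0, 0, 1, 0, 0, 0, 0]  # 2 false positives among 6 non-re-offenders
group_b_pred = [1, 0, 0, 0, 1, 0, 0, 0]
group_b_real = [1, 0, 0, 0, 1, 0, 0, 0]  # every prediction correct

print(false_positive_rate(group_a_pred, group_a_real))  # 0.333...
print(false_positive_rate(group_b_pred, group_b_real))  # 0.0
```

A gap like this means members of group A bear a higher risk of being wrongly flagged, even if the algorithm's overall accuracy is the same for both groups.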



Should PREDICTED Smokers Get Transplants?

By Tom Douglas

Jack has smoked a packet a day since he was 22. Now, at 52, he needs a heart and lung transplant.

Should he be refused a transplant to allow a non-smoker with a similar medical need to receive one? More generally: does his history of smoking reduce his claim to scarce medical resources?

If it does, then what should we say about Jill, who has never touched a cigarette, but is predicted to become a smoker in the future? Perhaps Jill is 20 years old and from an ethnic group with very high rates of smoking uptake in their 20s. Or perhaps a machine-learning tool has analysed her past Facebook posts and Google searches and identified her as being at 'high risk' of taking up smoking: she has an appetite for risk, an unusual susceptibility to peer pressure, and a large number of smokers among her friends. Should Jill's predicted smoking count against her, were she to need a transplant? Intuitively, it shouldn't. But why not?


Cross Post: Common Sense for A.I. Is a Great Idea. But it’s Harder Than it Sounds.

Written by Carissa Veliz. Crossposted from Slate. Click here to read the full article.

At the moment, artificial intelligences may have perfect memories and be better at arithmetic than us, but they are clueless. It takes a few seconds of interaction with any digital assistant to realize one is not in the presence of a…

Video Series: Is AI Racist? Can We Trust it? Interview with Prof. Colin Gavaghan

Should self-driving cars be programmed in a way that always protects 'the driver'? Who is responsible if an AI makes a mistake? Will AI used in policing be less racially biased than police officers? Should a human being always take the final decision? Will we become too reliant on AIs and lose important skills? Many interesting…

World funds: implement free mitigations

The future is uncertain and distant. That means not only that we don't know what will happen, but also that we don't reason about it as if it were real: stories about the far future are morality tales, warnings, or aspirations, not plausible theories about something that is actually going to happen.

Some of the best reasoning about the future assumes a specific model, and then goes on to explore the ramifications and consequences of that assumption. Assuming that property rights will be strictly respected in the future can lead to worrying consequences if artificial intelligence (AI) or uploads (AIs modelled on real human brains) are possible. These scenarios lead to stupidly huge economic growth combined with simultaneous obsolescence of humans as workers – unbelievable wealth for (some of) the investing class and penury for the rest.

This may sound implausible, but the interesting thing about it is that there are free mitigation strategies that could be implemented right now.