
machine learning

Reflective Equilibrium in a Turbulent Lake: AI Generated Art and The Future of Artists

Stable Diffusion image, prompt: "Reflective equilibrium in a turbulent lake. Painting by Greg Rutkowski" by Anders Sandberg – Future of Humanity Institute, University of Oxford

Is there a future for humans in art? Over the last few weeks the question has been loudly debated online, as machine learning made a surprise charge into picture-making. One AI-generated image even won a prize at a state art fair. But artists complain that AI art is really a rehash of their own work, a form of automated plagiarism that threatens their livelihoods.
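For readers unfamiliar with how such images are produced, here is a minimal sketch using the open-source Stable Diffusion model via Hugging Face's diffusers library. The prompt is the one from the caption above; the checkpoint name and sampler settings are illustrative assumptions, not details from the original image:

```python
# Minimal text-to-image sketch with Stable Diffusion via the diffusers
# library. Checkpoint and settings are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("Reflective equilibrium in a turbulent lake. "
          "Painting by Greg Rutkowski")

# Iterative denoising turns random latent noise into an image matching the prompt.
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("turbulent_lake.png")
```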

How do we ethically navigate the turbulent waters of human and machine creativity, business demands, and rapid technological change? Is it even possible?


Should PREDICTED Smokers Get Transplants?

By Tom Douglas

Jack has smoked a packet a day since he was 22. Now, at 52, he needs a heart and lung transplant.

Should he be refused a transplant to allow a non-smoker with a similar medical need to receive one? More generally: does his history of smoking reduce his claim to scarce medical resources?

If it does, then what should we say about Jill, who has never touched a cigarette, but is predicted to become a smoker in the future? Perhaps Jill is 20 years old and from an ethnic group with very high rates of smoking uptake in their 20s. Or perhaps a machine-learning tool has analysed her past Facebook posts and Google searches and identified her as at ‘high risk’ of taking up smoking—she has an appetite for risk, an unusual susceptibility to peer pressure, and a large number of smokers among her friends. Should Jill’s predicted smoking count against her, were she to need a transplant? Intuitively, it shouldn’t. But why not?
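The kind of tool this thought experiment imagines is easy to sketch. Below is a purely illustrative risk model, a logistic regression over the three behavioural signals just mentioned; every feature name, number, and label is hypothetical:

```python
# Hypothetical smoking-uptake risk model of the kind the post imagines.
# All features, data, and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [risk_appetite, peer_pressure_susceptibility, fraction_of_smoker_friends]
X = np.array([
    [0.9, 0.8, 0.7],
    [0.2, 0.3, 0.1],
    [0.7, 0.9, 0.6],
    [0.1, 0.2, 0.0],
])
y = np.array([1, 0, 1, 0])  # 1 = took up smoking

model = LogisticRegression().fit(X, y)

# Jill scores high on all three signals.
jill = np.array([[0.8, 0.9, 0.8]])
print(model.predict_proba(jill)[0, 1])  # predicted probability of uptake
```

Whatever probability the model prints, the post's question stands: should it count against Jill?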


Video Series: Is AI Racist? Can We Trust it? Interview with Prof. Colin Gavaghan

Should self-driving cars be programmed in a way that always protects ‘the driver’? Who is responsible if an AI makes a mistake? Will AI used in policing be less racially biased than police officers? Should a human being always take the final decision? Will we become too reliant on AIs and lose important skills? Many interesting…

Hide your face?

A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in a given category or not. In essence, facial appearance is used to predict personality traits, types, or behaviors. The company claims to have already sold the technology to a homeland security agency to help identify terrorists. That does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.
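Faception's actual pipeline is proprietary, but the generic technique described, a binary classifier that signals whether a face belongs in one category or not, can be sketched. Here the face embeddings and labels are random stand-ins:

```python
# Generic face-category classifier sketch. Faception's real method is
# unknown; the embeddings and labels below are random stand-ins.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for 128-dimensional face embeddings from a pretrained network.
embeddings = rng.normal(size=(200, 128))
labels = rng.integers(0, 2, size=200)  # 1 = "in category", 0 = not

clf = SVC(probability=True).fit(embeddings, labels)

new_face = rng.normal(size=(1, 128))
print(clf.predict_proba(new_face)[0, 1])  # confidence that the face "fits"
```

Note that, trained on random stand-ins like these, the confidence score is noise however precise it looks, which is exactly the snake-oil worry.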


Asking the right questions: big data and civil rights

Alastair Croll has written a thought-provoking article, Big data is our generation’s civil rights issue, and we don’t know it. His basic argument is that the new economics of collecting and analyzing data has led to a change in how it is used. Once it was expensive to collect, so only data needed to answer particular questions was collected. Today it is cheap to collect, so it can be collected first and then analyzed – “we collect first and ask questions later”. This means that the questions asked can be very different from the questions the data seem to be about, and in many cases they can be problematic. Race, sexual orientation, health or political views – important for civil rights – can be inferred from apparently innocuous information provided for other purposes – names, soundtracks, word usage, purchases, and search queries.
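A toy version of the inference Croll describes: a classifier trained on bag-of-words search queries ends up predicting a sensitive attribute (pregnancy status, in this invented example) that was never directly provided. All data is fabricated for illustration:

```python
# Toy illustration of inferring a sensitive attribute from innocuous data.
# Queries and labels are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

queries = [
    "prenatal vitamins morning sickness",
    "fantasy football draft rankings",
    "maternity jeans sale",
    "playoff scores tonight",
]
labels = [1, 0, 1, 0]  # 1 = pregnant, a fact the user never disclosed

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(queries), labels)

# A new, apparently innocuous query yields a sensitive prediction.
print(model.predict(vec.transform(["morning sickness remedies"])))
```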

The problem, as he notes, is that in order to handle this new situation we need to link what the data is with how it can be used. And this cannot be done just technologically; it requires societal norms and regulations. What kinds of ethics do we need to safeguard civil rights in a world of big data?

Croll states:

…governments need to balance reliance on data with checks and balances about how this reliance erodes privacy and creates civil and moral issues we haven’t thought through. It’s something that most of the electorate isn’t thinking about, and yet it affects every purchase they make.

This should be fun.
