Web/Tech

Music Streaming, Hateful Conduct and Censorship

Written by Rebecca Brown

Last month, one of the largest music streaming services in the world, Spotify, announced a new ‘hate content and hateful conduct’ policy. In it, they state that “We believe in openness, diversity, tolerance and respect, and we want to promote those values through music and the creative arts.” They condemn hate content that “expressly and principally promotes, advocates, or incites hatred or violence against a group or individual based on characteristics, including, race, religion, gender identity, sex, ethnicity, nationality, sexual orientation, veteran status, or disability.” Content that is found to fulfil these criteria may be removed from the service, or may cease to be promoted, for example, through playlists and advertisements. Spotify further describe how they will approach “hateful conduct” by artists: 

We don’t censor content because of an artist’s or creator’s behavior, but we want our editorial decisions – what we choose to program – to reflect our values. When an artist or creator does something that is especially harmful or hateful (for example, violence against children and sexual violence), it may affect the ways we work with or support that artist or creator.

An immediate consequence of this policy was the removal from featured playlists of R. Kelly and XXXTentacion, two American R&B artists. Whilst the 20-year-old XXXTentacion has had moderate success in the US, R. Kelly is one of the biggest R&B artists in the world. As a result, the decision not to playlist R. Kelly attracted significant attention, including accusations of censorship and racism. Subsequently, Spotify backtracked on their decision, rescinding the section of their policy on hateful conduct and expressing regret for the “vague” language of the policy, which “left too many elements open to interpretation.” Consequently, XXXTentacion’s music has reappeared on playlists such as Rap Caviar, although R. Kelly has not (yet) been reinstated. The controversy surrounding R. Kelly and Spotify raises questions about the extent to which commercial organisations, such as music streaming services, should take explicit moral stances.
Continue reading

Ethical AI Kills Too: An Assessment of the Lords Report on AI in the UK

Hazem Zohny and Julian Savulescu
Cross-posted with the Oxford Martin School

Developing AI that does not eventually take over humanity or turn the world into a dystopian nightmare is a challenge. It also has an interesting effect on philosophy, and in particular on ethics: suddenly, many of the millennia-long debates on the good and the bad, the fair and the unfair, need to be settled and programmed into machines. Does the autonomous car in an unavoidable collision swerve to avoid killing five pedestrians at the cost of its passenger’s life? And what exactly counts as unfair discrimination or a privacy violation when “Big Data” suggests an individual is, say, a likely criminal?
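To see why these debates must now be settled rather than merely rehearsed, it helps to notice how little room a program leaves for ambiguity. Here is a minimal sketch, in Python, of what encoding one candidate answer might look like; the scenario, names, and numbers are invented for illustration, and no real autonomous vehicle is built this way:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One available action and the deaths it is expected to cause."""
    action: str
    expected_deaths: int

def least_harm(outcomes):
    """A crude utilitarian rule: pick the action expected to kill the
    fewest people. Whether a passenger counts the same as a pedestrian,
    and how to handle uncertainty, are left entirely unresolved."""
    return min(outcomes, key=lambda o: o.expected_deaths)

# The collision from the text: stay on course (five pedestrians die)
# or swerve (the passenger dies).
dilemma = [Outcome("stay on course", 5), Outcome("swerve", 1)]
print(least_harm(dilemma).action)  # -> swerve
```

The single `min` call quietly commits the machine to a whole moral theory; a different theory would mean different code.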

The recent House of Lords Artificial Intelligence Committee report puts the ethics of AI front and centre. It engages thoughtfully with a wide range of issues: algorithmic bias, the monopolised control of data by large tech companies, the disruptive effects of AI on industries, and its implications for education, healthcare, and weaponry.

Many of these are economic and technical challenges. For instance, the report notes Google’s continued inability to fix its visual identification algorithms, which, it emerged three years ago, could not distinguish between gorillas and black people. For now, the company simply does not allow users of Google Photos to search for gorillas.

But many of the challenges are also ethical – in fact, central to the report is that while the UK is unlikely to lead globally in the technical development of AI, it can lead the way in putting ethics at the centre of AI’s development and use.

Continue reading

Should PREDICTED Smokers Get Transplants?

By Tom Douglas

Jack has smoked a packet a day since he was 22. Now, at 52, he needs a heart and lung transplant.

Should he be refused a transplant to allow a non-smoker with a similar medical need to receive one? More generally: does his history of smoking reduce his claim to scarce medical resources?

If it does, then what should we say about Jill, who has never touched a cigarette, but is predicted to become a smoker in the future? Perhaps Jill is 20 years old and from an ethnic group with very high rates of smoking uptake in their 20s. Or perhaps a machine-learning tool has analysed her past Facebook posts and Google searches and identified her as ‘high risk’ for taking up smoking: she has an appetite for risk, an unusual susceptibility to peer pressure, and a large number of smokers among her friends. Should Jill’s predicted smoking count against her, were she to need a transplant? Intuitively, it shouldn’t. But why not?
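For concreteness, here is a toy sketch, in Python, of the kind of score such a machine-learning tool might produce. The three inputs are the traits mentioned above; the weights, threshold, and numbers are invented for illustration, not taken from any real system:

```python
import math

def smoking_risk(risk_appetite, peer_susceptibility, smoker_friend_share):
    """Toy logistic model: map three traits (each scaled 0-1) to a
    probability of taking up smoking. A real tool would learn the
    weights from historical data; these are made up."""
    z = (-3.0 + 1.5 * risk_appetite + 1.2 * peer_susceptibility
         + 2.0 * smoker_friend_share)
    return 1.0 / (1.0 + math.exp(-z))

# 'Jill' scores high on all three inputs, so the model flags her.
p = smoking_risk(0.9, 0.8, 0.7)
print(f"predicted probability of uptake: {p:.2f}")   # ~0.67
print("high risk" if p > 0.5 else "low risk")        # high risk
```

The point of the sketch is that the output is a prediction about what Jill might do, not a record of anything she has done, which is exactly what makes counting it against her feel wrong.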

Continue reading

Scrabbling for Augmentation

By Stephen Rainey


Around a decade ago, Facebook users were widely playing a game called ‘Scrabulous’ with one another. It was effectively Scrabble, so close, in fact, that it led to a few legal issues.

Alongside Scrabulous, the popularity of Scrabble-assistance websites grew. Looking over the shoulders of work colleagues, you could often spy a Scrabulous window, with one for scrabblesolver.co.uk open beside it. The strange phenomenon of easy, online Scrabulous cheating seemed pervasive for a time.

The strangeness of this can hardly be overstated. Friends would routinely try to convince one another that they were superior wordsmiths, each deploying an algorithmic anagram solver. The ‘players’ themselves would do nothing but feed data into the automatic solvers. As Charlie Brooker reported back in 2007,

“We’d rendered ourselves obsolete. It was 100% uncensored computer-on-computer action, with two meat puppets pulling the levers, fooling no one but themselves.”
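The solvers doing the real work are not sophisticated. Here is a minimal sketch, in Python, of a rack solver of the kind sites like scrabblesolver.co.uk offer; the word list is a toy stand-in for a full dictionary:

```python
from collections import Counter

def solve_rack(rack, dictionary):
    """Return every dictionary word buildable from the rack's letters,
    longest (roughly highest-scoring) first."""
    tiles = Counter(rack.lower())
    playable = [
        word for word in dictionary
        # A word fits if it needs no letter more often than the rack has it.
        if not Counter(word.lower()) - tiles
    ]
    return sorted(playable, key=len, reverse=True)

words = ["quartz", "quart", "raze", "zeta", "tar", "rat", "art"]
print(solve_rack("aqrtzue", words))
# ['quartz', 'quart', 'raze', 'zeta', 'tar', 'rat', 'art']
```

A dozen lines and a word list, and the ‘superior wordsmith’ is fully automated.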

Back to the present, and online Scrabble appears to have lost its sheen (or lustre, patina, or polish). But in a possible near future, I wonder if some similar issues could arise. Continue reading

How Social Media Distorts Our Perceptions of Groups

We know that groups are internally diverse. For any group you care to pick out (Brexit supporters, feminists, tea drinkers), we know intellectually that they will disagree among themselves about a great deal. When people identify as a group member, they may feel pressure to conform to the group view, but there are countervailing pressures in the other direction which limit the effects of group conformity. Disputes internal to groups are often as heated as, or more heated than, those between them. Continue reading

The Clickbait Candidate

By James Williams (@WilliamsJames_)
Note: This is a cross-post with Quillette magazine.

While ‘interrobang’ sounds like a technique Donald Trump might add to the Guantanamo Bay playbook, it in fact refers to a punctuation mark: a disused mashup of interrogation and exclamation that indicates shock, surprise, excitement, or disbelief. It looks like this: ‽ (a rectangle means your font doesn’t support the symbol). In view of how challenging it seems for anyone to articulate the fundamental weirdness of Trump’s proximity to the office of President of the United States, I propose that we resuscitate the interrobang, because our normal orthographic tools clearly are not up to the task.

Yet even more interrobang-able than the prospect of a Trump presidency is the fact that those opposing his candidacy seem to have almost no understanding of the media dynamics that have enabled it to rise and thrive. Trump is perhaps the most straightforward embodiment yet of the dynamics of the so-called ‘attention economy’: the pervasive, all-out war over our attention into which all of our media have now been conscripted. He is one of the geniuses of our time in the art of attentional manipulation.

If we ever hope to have a societal conversation about the design ethics of the attention economy—especially the ways in which it incentivizes technology design to push certain buttons in our brains that are incompatible with the assumptions of democracy—now would be the time. Continue reading

The Chinese pleasure room: ethics of technologically mediated interaction

The author of the webcomic Left Over Soup proposed a sexual equivalent (or parody?) of Searle’s Chinese Room argument, posing some interesting questions about what sex, consent and relationships mean when the interaction is technologically mediated:

Continue reading

Hide your face?

A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in a given category or not. Basically, facial appearance is used to predict personality traits, types, or behaviors. The company claims to have already sold technology to a homeland security agency to help identify terrorists. It does not surprise me at all: governments are willing to buy remarkably bad snake-oil. But even if the technology did work, it would be ethically problematic.
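Part of what makes the snake-oil easy to sell is how easy the mechanics are to assemble. Here is a sketch of the generic pipeline, in Python with scikit-learn assumed; the ‘embeddings’ are random vectors and the labels arbitrary, which is precisely the worry:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for face embeddings. In a real pipeline these would come
# from a face-recognition network; here they are pure noise, and the
# category labels are arbitrary.
X_train = rng.normal(size=(200, 128))
y_train = rng.integers(0, 2, size=200)  # 1 = "in the category"

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_face = rng.normal(size=(1, 128))
print(clf.predict_proba(new_face))  # confident-looking numbers from noise
```

The classifier will emit authoritative-looking probabilities whether or not the category is learnable from faces at all, and nothing in the output warns the buyer.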

Continue reading

Guest Post: Does Humanity Want Computers Making Moral Decisions?

Albert Barqué-Duran
Department of Psychology
CITY UNIVERSITY LONDON

A runaway trolley is approaching a fork in the tracks. If the trolley is allowed to run on its current track, a work crew of five will be killed. If the driver steers the trolley down the other branch, a lone worker will be killed. If you were driving this trolley, what would you do? What would a computer or robot driving this trolley do? Autonomous systems are coming whether people like it or not. Will they be ethical? Will they be good? And what do we mean by “good”?
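That last question is not rhetorical: different answers to what we mean by “good” compile into different behaviour. A toy illustration in Python, with everything invented for the example and nothing resembling a real control system:

```python
def utilitarian(deaths_if_stay, deaths_if_steer):
    """Minimise total deaths, whoever ends up causing them."""
    return "steer" if deaths_if_steer < deaths_if_stay else "stay"

def deontological(deaths_if_stay, deaths_if_steer):
    """Never actively redirect harm onto someone, whatever the tally."""
    return "stay"

for rule in (utilitarian, deontological):
    print(rule.__name__, "->", rule(deaths_if_stay=5, deaths_if_steer=1))
# utilitarian -> steer
# deontological -> stay
```

Two defensible moral rules, one fork in the tracks, two different actions: choosing which function to ship is itself the ethical decision.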

Many agree that artificial moral agents are necessary and inevitable. Others say that the idea of artificial moral agents intensifies their distress with cutting-edge technology. There is something paradoxical in the idea that one could relieve the anxiety created by sophisticated technology with even more sophisticated technology. A tension exists between the fascination with technology and the anxiety it provokes. This anxiety could be explained by (1) all the usual futurist fears about technology on a trajectory beyond human control, and (2) worries about what this technology might reveal about human beings themselves. The question is not what technology will be like in the future but rather what we will be like, and what we are becoming, as we forge increasingly intimate relationships with our machines. What will be the human consequences of attempting to mechanize moral decision-making?

Continue reading

Virtually reality? The value of virtual activities and remote interaction

By Hannah Maslen    

The Oxford Martin School recently held a two-day symposium on virtual reality and immersive technologies. The aim was to examine a range of technologies, from online games to telepresence via a robot avatar, and to consider the ways in which such technologies might affect our personal lives and our interactions with others.

These sorts of technologies reignite traditional philosophical debates concerning the value of different experiences – could a virtual trip to Rome ever be as valuable (objectively or subjectively) as a real trip to Rome? – and conceptual questions about whether certain virtual activities, say, ‘having a party’ or ‘attending a concert’, can ever really be the activity that the virtual environment is designed to simulate. The prospect of robotic telepresence presents particular ethical challenges pertaining to moral responsibility for action at a distance and ethical norms governing virtual acts.

In what follows, I introduce and discuss the concern that virtual experiences and activities are to some extent deficient in value, especially where this relates to the formation and maintenance of close personal relationships. Continue reading
