We’ve come a long way, as a species. And we’re better at many things than we ever were before – not just slightly better, but unimaginably, ridiculously better. We’re better at transporting people and objects, we’re better at killing, we’re better at preventing infectious diseases, we’re better at industrial production, agricultural and economic output, we’re better at communications and sharing of information.
But in some areas, we haven’t made such dramatic improvements. And one of those areas is parenting. We’re certainly better parents than our own great-great-grandparents, if we measure by outcomes, but the difference is of degree, not kind. Why is that?
Channel 4 was censured by Ofcom this week for cutting to a light-hearted sponsorship advert just after viewers had watched the particularly graphic and disturbing rape scene in the film The Girl with the Dragon Tattoo. The Phones 4 U sponsorship ad was thought to be especially inappropriate for that moment as it features a couple apparently having sex, during which the woman pauses and asks the camera, ‘I’m faking it, can I upgrade?’ Ofcom received 17 complaints about the timing of the advert and this week concluded that ‘the juxtaposition of a light-hearted sponsorship credit featuring a woman during sex with a disturbing and distressing rape scene in a film was clearly unsuitable… In Ofcom’s view this clearly had the potential to be offensive to viewers’.
The timing was clearly unfortunate, but to say that the juxtaposition was offensive is a stronger claim. Of course, the psychological effect of being immersed in a violent scene one moment and then confronted with the same(ish) subject matter presented trivially will not do much for the viewer’s aesthetic experience. But the regulator seemed to suggest not only that the juxtaposition detracted from the viewer’s enjoyment, but also that it was in some way wrong.
Advocates of even the mildest gun control reform in the US were dealt a serious blow yesterday, as the Senate failed to enact an expansion of background checks for gun purchases online and at gun shows. Some have been quick to gloat over the result, while others were taken aback that the Senate could so blatantly ignore the will of the American people. A number of polls have indeed shown massive support for background checks on gun purchases (upwards of 90%) – according to one survey, the proposal is even more popular than kittens. This level of support predates the Sandy Hook massacre. Political analysts will go to great lengths to explain how such a popular measure was voted down (the strength of the National Rifle Association’s lobbying efforts plays a large part, no doubt), but we can also ask whether it should have been – in particular, independent of the merits of the bill, whether politicians were wrong to flout the will of the people.
Some researchers in the US recently conducted an ‘experiment in the law as algorithm’. (One of the researchers involved with the project was interviewed by Ars Technica, here.) At first glance, this seems like quite a simple undertaking for someone with knowledge of a particular law and mathematical proficiency: laws are clearly defined rules, which can be broken in clearly defined ways. This is most true for strict liability offences, which require no proof of a mental element of the offence (the mens rea). An individual can commit a strict liability offence even if she had no knowledge that her act was criminal and had no intention to commit the crime. All that is required under strict liability statutes is that the act itself (the actus reus) is voluntary. Essentially: if you did it, you’re liable – it doesn’t matter why or how. So, for strict liability offences such as speeding it would seem straightforward enough to create an algorithm that could compare actual driving speed with the legal speed limit, and adjudicate liability accordingly.
This possibility of law as algorithm is what the US researchers aimed to test out with their experiment. They imagined the future possibility of automated law enforcement, especially for simple laws like those governing driving. To conduct their experiment, the researchers assigned a group of 52 programmers the task of automating the enforcement of driving speed limits. A late-model vehicle was equipped with a sensor that collected actual vehicle speed over an hour-long commute. The programmers (without collaboration) each wrote a program that computed the number of speed limit violations and issued mock traffic tickets.
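The researchers’ own code isn’t reproduced here, but the core of such a program can be sketched in a few lines of Python. The sensor data format and the rule that one continuous run above the limit counts as one violation are illustrative assumptions, not the researchers’ specification – and discretionary choices exactly like these are where 52 independent implementations can diverge:

```python
def count_violations(samples, limit_mph):
    """samples: list of (timestamp_s, speed_mph) readings from the sensor.

    Counts each continuous run of readings above the limit as a single
    violation (an assumed ticketing rule; per-sample ticketing is another
    defensible reading of the same statute).
    """
    violations = 0
    in_violation = False
    for _, speed in samples:
        if speed > limit_mph:
            if not in_violation:
                violations += 1  # new run above the limit begins
                in_violation = True
        else:
            in_violation = False  # dropped back under the limit
    return violations

# Invented commute data: two separate runs above a 30 mph limit.
commute = [(0, 28.0), (1, 31.5), (2, 33.0), (3, 29.0), (4, 36.0)]
print(count_violations(commute, 30.0))  # -> 2
```

Even this toy version shows how ‘if you did it, you’re liable’ hides judgment calls: whether a momentary sensor spike, a single long run, or every over-limit sample earns a ticket is left to the programmer, not the statute.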
Covertly filming shocking animal abuse in the meat industry (and other industries involving animals) is a common tactic of animal welfare charities such as the Humane Society, Mercy for Animals, Animal Aid, and PETA. The footage is generally obtained by workers for the charities who gain employment at slaughterhouses, farms, laboratories and the like; and it has been instrumental in prosecuting abusers and applying pressure on meat producers to improve welfare standards, as the New York Times reported at the weekend.
The same article also reports a disturbing response to this practice by several US states:
They proposed or enacted bills that would make it illegal to covertly videotape livestock farms, or apply for a job at one without disclosing ties to animal rights groups. They have also drafted measures to require such videos to be given to the authorities almost immediately, which activists say would thwart any meaningful undercover investigation of large factory farms.
Those who flout this so-called ‘ag-gag’ legislation may, among other things, be placed on a ‘terrorist registry’.
The most recent St. Cross Ethics Seminar took place on February 28th, 2013, and was led by Kyle Edwards, currently a DPhil candidate at Oxford. Her informative and compelling presentation was entitled "Methods of Legitimation: How Ethics Committees Decide Which Reasons Count."
(A podcast of the seminar is located here: http://media.philosophy.ox.ac.uk/uehiro/HT13STX_KE.mp3)
By Charles Foster
When you click ‘Like’ on Facebook, you’re giving away a lot more than you might think. Your ‘Likes’ can be assembled by an algorithm into a terrifyingly accurate portrait.
Here are the chances of an accurate prediction: Single vs. in a relationship: 67%; Parents still together when you were 21: 60%; Cigarette smoking: 73%; Alcohol drinking: 70%; Drug-using: 65%; Caucasian vs. African American: 95%; Christianity vs. Islam: 82%; Democrat vs. Republican: 85%; Male homosexuality: 88%; Female homosexuality: 75%; Gender: 93%.
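The study’s actual model isn’t described here, but the general idea behind such predictions can be sketched with a crude stand-in: score each page by the log-odds of a binary trait among the users who liked it, then classify a new user by summing the scores of their Likes. All names and data below are invented for illustration, and the real system would use far more data and a more sophisticated model:

```python
import math

# Invented training data: (set of liked pages, binary trait label).
train = [
    ({"PageA", "PageB"}, 1),
    ({"PageA", "PageC"}, 1),
    ({"PageC", "PageD"}, 0),
    ({"PageB", "PageD"}, 0),
]

def page_scores(data, smoothing=1.0):
    """Log-odds of the trait among likers of each page, with smoothing
    so that pages liked only by one class don't yield infinite scores."""
    pages = set().union(*(likes for likes, _ in data))
    scores = {}
    for page in pages:
        pos = sum(1 for likes, y in data if page in likes and y == 1)
        neg = sum(1 for likes, y in data if page in likes and y == 0)
        scores[page] = math.log((pos + smoothing) / (neg + smoothing))
    return scores

def predict(likes, scores):
    """Classify a user as trait-positive if their summed score is positive."""
    return int(sum(scores.get(p, 0.0) for p in likes) > 0)

scores = page_scores(train)
print(predict({"PageA", "PageB"}, scores))  # -> 1
```

The unsettling point is how little this requires: no survey answers, no posts – just which buttons you clicked, aggregated across millions of users.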
Would you trust a minister of finance explaining how he just fixed the latest euro-zone deal if he came out of the summit chambers tipsily waving a glass of wine? No? What about if he gave a press conference after an all-night session? Most likely nobody would even notice.
Yet 24 hours without sleep has (roughly) the same effect on decision-making as a 0.1% blood alcohol content (six glasses of wine in an hour). You would not be allowed to drive at this alcohol level, but you are apparently allowed to make major political decisions.
The example is from a blog essay (in Swedish) by Andreas Cervenka, where he asks the sensible question: can we trust sleep-deprived political leaders?
New York City contemplates using aerial drones for surveillance purposes, while North Korea buys thousands of cameras to spy on its impoverished population. Britain has so many cameras they cease being newsworthy. The stories multiply – it is trivial to note we are moving towards a surveillance society.
In an earlier post, I suggested surrendering on surveillance might be the least bad option – of all likely civil liberty encroachments, this seemed the least damaging and the hardest to resist. But that’s an overly defensive way of phrasing it – if ubiquitous surveillance and lack of privacy are the trends of the future, we shouldn’t just begrudgingly accept them, but demand that society get the most out of them. In this post, I’m not going to suggest how to achieve enlightened surveillance (360-degree surveillance would be a small start, for instance), but just outline some of the positive good we could get from it. We all know the negatives; but what good could come from corporations, governments and neighbours being able to peer continually into your bedroom (and efficiently process that data)? In the ideal case, how could we make it work for us?