The Samaritans have launched a controversial new app that alerts Twitter users when someone they ‘follow’ on the site tweets something that may indicate suicidal thoughts.
To use the app, named ‘Samaritans Radar’, Twitter members must visit the Samaritans’ website and choose to activate the app on their device. Having entered one’s Twitter details on the site to authorize the app, Samaritans Radar then scans the tweets of the users that one ‘follows’, and uses an algorithm to identify phrases that suggest the tweeter may be distressed. For example, the algorithm might identify tweets that involve phrases like “help me”, “I feel so alone” or “nobody cares about me”. If such a tweet is identified, an email is sent to the user who signed up to Samaritans Radar asking whether the tweet should be a cause for concern; if so, the app then offers advice on what to do next.
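The Samaritans have not published Radar’s actual algorithm, so the following is only a minimal sketch of the kind of phrase-matching the description above suggests; the phrase list and function names are illustrative, not the app’s real ones:

```python
# Illustrative sketch only: the Samaritans have not published Radar's
# actual algorithm. This shows a naive keyword-matching approach of the
# kind described above.

# Hypothetical list of flagged phrases (illustrative, not the real list).
DISTRESS_PHRASES = [
    "help me",
    "i feel so alone",
    "nobody cares about me",
]

def may_indicate_distress(tweet_text: str) -> bool:
    """Return True if the tweet contains any flagged phrase."""
    text = tweet_text.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)

def scan_timeline(tweets):
    """Yield the tweets that would trigger an alert email."""
    for tweet in tweets:
        if may_indicate_distress(tweet):
            yield tweet
```

A real system would presumably be more sophisticated than bare substring matching (and, as the controversy suggests, raises privacy questions however it is implemented), but this conveys the basic mechanism: a fixed vocabulary of distress cues matched against each incoming tweet.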
You do not have a right not to be offended, insulted or verbally abused. You do not have that right because it might be right to offend, insult or verbally abuse you. You might believe stupid things, or even sensible things, and take offence at any and all critiques, rebuttals and refutations. You might be a pompous prig, a sanctimonious sop, an officious orifice. Even if you are not these things, there would be very little wrong in telling you you are. After all, you are not a six-year-old child: you’re an adult. You can take it.
What of someone expressing their detestation of you, their hatred of you, wishing you ill, wishing you dead?
Happy internet slowdown day! Here are some apropos practical ethics questions for all to discuss as we sit patiently, waiting for the internet to load. What kind of internet ought we to have? Should sovereign nations decide for themselves what kind of internet they will have, or is this an international issue, requiring cooperation between nations? What do particular internet companies owe their competitors, and more vaguely, the internet? What right does an individual or social entity have to know about or to police the storage and usage of data about that individual or social entity? What right does an individual or corporation have to access data or restrict access to data at certain speeds?
These kinds of questions are of massive practical importance to big internet companies like Google, which finds itself embroiled in an ongoing antitrust dispute with various entities in Europe, and the American cable company Comcast, which might stand to profit from a change in current net neutrality regulations.
And yet interestingly – and unsurprisingly, I suppose, given the power of moral language – much of the debate surrounding this issue is cast in moral, rather than practical, terms.
As I write this, at least 1,474 people have died in the recent outburst of violence in Gaza. The vast majority (1,410) of those are Palestinians. Throughout the last weeks, those of us who are open-minded enough to consume different types of news will have read very, very different assessments of what is happening. Some express the opinion, quite popular in other contexts, that we don’t measure ethics by counting dead bodies. A group of medical doctors published an open letter in The Lancet denouncing the aggression in Gaza by Israel. The Washington Post published an opinion piece with the title “Moral Clarity in Gaza”, which proclaimed that the situation is very clear: it is Hamas’ fault, and Israel is only exercising its rights. The New York Times made an attempt at impartiality by letting three experts on each side publish their views of what is going on. A group of prominent international law experts wrote a joint declaration calling on the international community to, among other things, use its power to stop the violence, and encouraged the UN Security Council to exercise its responsibilities and refer the situation in Palestine to the Prosecutor of the International Criminal Court. And so on. The disagreements run abysmally deep. Imprudent as it might feel to open one’s mouth about a topic as fraught as this, as someone working on ethics I feel compelled to think about what ethics can do in this situation.
The purpose of this blog is, as you know, to comment on ethics in the news. It is written here just above: “Practical Ethics – Ethics in the News”. In this post, I am going to diverge from this purpose and address a somewhat different topic. Numerous recent events that have been reported in the news raise the following question: what is the ethics of news? What should it be? Below, I outline what I perceive to be a very problematic tension between the reality that journalists work in and the ethical ideals that they subscribe to, and that we as consumers expect of them. I finish by speculating about what we can do about this, on the ethical side of things.
Packets of cigarettes carry pictures showing purchasers what their lungs or their arteries will look like if they carry on smoking. Consumers International and the World Obesity Federation are now suggesting that some foods should bear similar images.
Assume for the sake of argument that the practice would be effective in discouraging the purchase of health-truncating foods. If the images work by telling consumers something about what they are buying that they would not otherwise know, surely there can be no coherent objection to them. Knowledge of that sort is always good – assuming that the consumer has a real choice as to whether to buy the bad product or a better one.
If they work by pushing to the forefronts of consumers’ minds information that their grosser appetites conveniently suppress when they are wandering down the mall, there may be an argument against them. This would presumably be on the broad basis that the images manipulate the person away from being what they authentically are (a fructose-guzzling cardiac-cripple-in-waiting) towards something else. This argument would assert that there’s a sort of ethical imperialism at work: that those who would stamp pictures of limbless diabetics on junk sweet packs are tyrannously seeking to impose an arbitrary normative idea of the good life.
I have little sympathy with this second view. If anyone says in a normative voice that it’s good to be diabetic, they’re insane. If anyone says in an empirical voice that it’s better to be diabetic than non-diabetic, they’re misinformed. If anyone says in the voice of a hedonistic utilitarian that the overall pleasure gained by the consumption of lard outweighs the detriments, I’d invite them to get thin, do all the Munros, and then revisit their original judgment. Anyone who thinks that they’re more authentically themselves by being ill might have a point once their illness is long-standing and has truly become a defining characteristic. But before the illness is triggered, aren’t they more themselves without clogged arteries or the need to inject insulin five times a day?
If the packaging proposal is adopted, some interesting questions arise. Should good foods be branded with pictures of the condition you’ll be in or the advantages you’ll have if you eat them? Aphrodisiac oysters would display the beaming visages of satisfied sexual partners. Green tea would show lean centenarians on trampolines. Or perhaps those good foods should show the things that they’ll spare you: prostate-preserving tinned tomatoes might show an unoccupied midnight toilet.
Perhaps other, wider concerns should feature. Tins of palm oil should show dead orangutans. Milk should show the mournful face of a calf-less cow alongside the pictures of healthy, non-osteoporotic bone-scans.
While it’s easy to multiply absurdities, the proposal is basically a very good thing. It’s a good thing for at least some of the reasons that the notion of informed consent to medical treatment is endorsed. If you’re keen on informed consent to treatment, a fortiori you’ll be keen on food package images. In fact, I suggest, you should be more keen on those images. They’re more important.
This week, a landmark ruling from the European Court of Justice held that a Directive of the European Parliament entailed that Internet search engines could, in some circumstances, be legally required (on request) to remove links to personal data that have become irrelevant or inadequate. The justification underlying this decision has been dubbed the ‘right to be forgotten’.
The ruling came in response to a case in which a Spanish gentleman (I was about to write his name but then realized that to do so would be against the spirit of the ruling) brought a complaint against Google. He objected to the fact that if people searched for his name in Google Search, the list of results displayed links to information about his house being repossessed in recovery of social security debts that he owed. The man requested that Google Spain or Google Inc. be required to remove or conceal the personal data relating to him so that the data no longer appeared in the search results. His principal argument was that the attachment proceedings concerning him had been fully resolved for a number of years and that reference to them was now entirely irrelevant.
A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.
Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even when the human participants practiced, accuracy only increased to 55%.
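To make the pattern-recognition step concrete, here is a toy illustration, assuming (as the description above suggests) that each clip is reduced to a vector of facial-movement features and a classifier then separates real from faked pain. The nearest-centroid rule and the feature values below are purely illustrative stand-ins; they are not the study’s actual method, which used automated facial action coding and machine learning:

```python
# Toy illustration only: the study reduced each video to measured
# facial-movement features and trained a classifier on them. The
# feature vectors and the nearest-centroid rule here are stand-ins.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(real_features, faked_features):
    """Return one centroid per class, learned from labeled clips."""
    return centroid(real_features), centroid(faked_features)

def classify(features, real_centroid, faked_centroid):
    """Label a new clip 'real' or 'faked' by its nearest centroid."""
    d_real = distance_sq(features, real_centroid)
    d_faked = distance_sq(features, faked_centroid)
    return "real" if d_real <= d_faked else "faked"

# Hypothetical feature vectors (e.g. intensities of a few facial
# movements) for clips of genuine and simulated pain.
real_clips = [[0.9, 0.2, 0.1], [0.8, 0.3, 0.2]]
faked_clips = [[0.3, 0.8, 0.7], [0.2, 0.9, 0.6]]
real_c, faked_c = train(real_clips, faked_clips)
```

The point of the sketch is simply that the machine classifies from measured movement patterns rather than holistic impressions, which is one plausible explanation of why it outperforms human observers.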
The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:
In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.
The possibility of using this technology to detect when someone’s emotional expressions are genuine or not raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues: