A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in a given category or not. In essence, facial appearance is used to predict personality traits, types, or behaviors. The company claims to have already sold its technology to a homeland security agency to help identify terrorists. This does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.
A trial to see if it is possible to regenerate brains in patients who have been declared clinically dead has been approved. Reanima Advanced Biosciences aims to use stem cells, peptide injections, and nerve stimulation to trigger regeneration in brain-dead patients. The primary outcome measure is “reversal of brain death as noted in clinical examination or EEG”, which at least scores high on ambition. The study accepts healthy volunteers, but they need to be brain dead due to traumatic brain injury, which might discourage most people.
Is there any problem with this? Continue reading
Scott Alexander has a thoughtful piece about who gets to set the default in disagreements about what is reasonable. He describes a couples therapy session where one partner, bored with his sex life, goes kinky clubbing, to the anger of his strongly monogamous partner. Yet both want to stay together, at least for the sake of the kids. Assuming the answer is an either-or situation where one has to give up on their demand (likely not the ideal response in an actual couples therapy setting), the issue seems to boil down to who has the unreasonable demand.
It resonated with another article I came across in my news feed today: What It’s Like to Be Chemically Castrated. The article is an interview with a man who wanted to be chemically castrated in order to manage his sex addiction and save his 45-year marriage. Is this an unreasonable intervention?
Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the holding company Alphabet that merely wants employees to “do the right thing”. Regardless of what one thinks about the actual behaviour and ethics of Google, it seems that it got one thing right early on: a recognition that it was moving in a morally charged space.
Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages; it scaled up thanks to MapReduce algorithms; and it uses algorithms to choose adverts, drive cars, and select nuances of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
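The core idea behind PageRank is simple enough to sketch in a few lines: a page is important if important pages link to it, which can be computed by power iteration. The toy link graph and damping factor below are illustrative assumptions of mine, not Google’s actual implementation:

```python
# Minimal power-iteration sketch of the PageRank idea.
# The link graph and damping value are illustrative, not Google's real ones.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page gets a base share from random jumps
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # a page splits its rank evenly among the pages it links to
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += damping * share
            else:
                # dangling page: distribute its rank evenly to everyone
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy_web)
```

In this toy graph, page “c” ends up ranked above “b” because it collects links from both “a” and “b”, while “b” is linked only from “a”; the total rank mass stays at 1 throughout.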
Can we make and use algorithms more ethically?
Today, I noticed two news stories: BBC Future reported on the Korean work on killer robots (autonomous gun turrets that can identify, track and attack), and BBC News reported on the formation of a campaign to ban sex robots, clearly modelled on the existing campaign to stop killer robots.
Much of the robot discourse is of course just airing hopes and fears about the future, projected onto futuristic devices. But robots are also real things increasingly used for real applications, potentially posing actual threats and affecting social norms. When does it make sense to start a campaign to stop the development of robots that do X?
By Ben Levinstein and Anders Sandberg
Almost everybody agrees factory farming is morally outrageous, with billions of animals living lives that are likely not worth living. One possible solution to this moral disaster is to make in vitro meat technologically and commercially viable. In vitro meat is biologically identical to real meat but cultured in a tank: one day it may become cheaper, more efficient, and safer than conventional meat. On animal welfare grounds, then, in vitro meat seems like a clear win, as it has the potential to eliminate or greatly reduce the need for factory farms. However, there is a problem…
by Anders Sandberg and Ben Levinstein
Over the past week we have been absorbed in the intense final work phase just before the deadline of a big, complex report. The profanity density has been high, mostly aimed at Google, Microsoft and Apple. Not all of it was deserved, but it brought home the point that designing software carries moral implications. Continue reading
That people in all cultures around the world use plant drugs to heal, intoxicate, or enhance themselves is well known. What is less well known – at least to me – is that many cultures give drugs to their dogs to improve hunting success. A new paper in the Journal of Ethnopharmacology by B.D. Bennett and R. Alarcón reviews the plants used in lowland Ecuador, Peru and elsewhere.
They find a wide variety of drugs used. Some are clearly medicinal or just hide the dog’s scent. Others are intended as enhancers of night vision or smell. Some are psychoactive and intended to influence behaviour – make it walk straight, follow game tenaciously, be more alert, understand humans, or “not become a vagrant”. Several drugs are hallucinogenic, which may appear bizarre – how could that possibly help? The authors suggest that in the right dose they might create synaesthesia or other forms of altered perception that actually make the dogs better hunters by changing their sensory gating. Is drugging dogs OK? Continue reading
A recent series of papers has constructed a biochemical pathway that allows yeast to produce opiates. It is not quite a sugar-to-heroin home brew yet, but putting together the pieces looks fairly doable in the very near term. I think I called the news almost exactly five years ago on this blog.
People, including the involved researchers, are concerned and think regulation is needed. It is an interesting case of dual-use biotechnology. While making opiates may be somewhat less frightening than making pathogens, it is still a problematic use of biotechnology: millions of people are addicted, and making it easier for them to get access would worsen the problem. Or would it?
Jonathan Moreno presented a special lecture on the 18th about “Mind Wars”, the military applications of neurotechnology. Here are some of my notes and comments inspired by this stimulating lecture. Continue reading