The Economist has a leader, “For life, not for an afterlife”, in which it argues that Elon Musk’s stated motivation to settle Mars – making humanity a multi-planetary species less likely to go extinct – is misguided: “Seeking to make Earth expendable is not a good reason to settle other planets”. Is it misguided, or is The Economist’s reasoning misguided? Continue reading
Kuwait is planning to build a complete DNA database not just of citizens but of all other residents and temporary visitors. The stated motivation is antiterrorism (the universal motivation!) and fighting crime. Many are outraged, from local lawyers and a UN human rights committee to the European Society of Human Genetics, and think it will not be very helpful against terrorism (how does having the DNA of a suicide bomber help after the fact?). Rather, there are reasons to worry about misuse in paternity testing (Kuwait has strict adultery laws) and in the politics of citizenship (which provides many benefits): citizenship is strictly circumscribed to paternal descendants of the original Kuwaiti settlers, and there is significant discrimination against people with no recognized paternity, such as the Bidun minority. Plus, and this might be another strong motivation for many of the scientists protesting against the law, it might dampen public willingness to donate genomes to research databases, where they actually do some good. Obviously it might also put off visitors – would foreign heads of state, for example, accept leaving their genome in the hands of another state? Not to mention the risk of discovering adultery in ruling families – there is a certain gamble in doing this.
Overall, it seems few outside the Kuwaiti government are cheering for the law. When I recently participated in a panel discussion on genetic privacy organised by the BSA at the Wellcome Collection, the question “Would anybody here accept mandatory genetic collection?” raised only one or two hands in the large audience. When would mandatory collection of genetic information make sense? Continue reading
A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs in a given category or not: facial appearance is used to predict personality traits, types, or behaviours. The company claims to have already sold technology to a homeland security agency to help identify terrorists. That does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.
A trial to see whether it is possible to regenerate the brains of patients who have been declared clinically dead has been approved. Reanima Advanced Biosciences aims to use stem cells, injections of peptides, and nerve stimulation to cause regeneration in brain-dead patients. The primary outcome measure is “reversal of brain death as noted in clinical examination or EEG”, which at least scores high on ambition. The study accepts healthy volunteers, but they need to be brain dead due to traumatic brain injury, which might discourage most people.
Is there any problem with this? Continue reading
Scott Alexander has a thoughtful piece about who gets to set the default in disagreements over what is reasonable. He describes a couples-therapy session where one partner, bored with his sex life, goes kinky clubbing, to the anger of his strongly monogamous partner. Yet both want to stay together, at least for the sake of the kids. Assuming the answer is an either-or situation where one of them has to give up on their demand (likely not the ideal response in an actual couples-therapy setting), the issue seems to boil down to who is making the unreasonable demand.
It resonated with another article I came across in my news feed today: What It’s Like to Be Chemically Castrated. The article is an interview with a man who wanted to be chemically castrated in order to manage his sex addiction and save his 45-year marriage. Is this an unreasonable intervention?
Google is said to have dropped the famous “Don’t be evil” slogan. Actually, it is the holding company Alphabet that merely wants employees to “do the right thing”. Regardless of what one thinks about the actual behaviour and ethics of Google, it seems that it got one thing right early on: a recognition that it was moving in a morally charged space.
Google is in many ways an algorithm company: it was founded on PageRank, a clever algorithm for finding relevant web pages, scaled up thanks to MapReduce algorithms, and uses algorithms for choosing adverts, driving cars and selecting shades of blue. These algorithms have large real-world effects, and the way they function and are used matters morally.
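To give a flavour of the kind of algorithm at stake, here is a minimal PageRank sketch using power iteration – an illustration of the published idea on a toy three-page web, not Google’s actual implementation (the page names and parameters are made up for the example):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank. links: dict mapping each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # every page gets a baseline share from random jumps
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # a page passes its rank evenly along its outgoing links
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # dangling page: spread its rank evenly over all pages
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# hypothetical three-page web: a links to b and c, b to c, c back to a
toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(toy_web)
```

On this toy web, page c ends up ranked highest because it receives links from both a and b – a small demonstration of how a seemingly neutral recursive definition of “importance” quietly encodes value judgements.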
Can we make and use algorithms more ethically?
Today I noticed two news stories: BBC Future reported on the Korean work on killer robots (autonomous gun turrets that can identify, track and attack), and BBC News reported on the formation of a campaign to ban sex robots, clearly modelled on the existing campaign to stop killer robots.
Much of the robot discourse is of course just airing hopes and fears about the future, projected onto futuristic devices. But robots are also real things increasingly used for real applications, potentially posing actual threats and affecting social norms. When does it make sense to start a campaign to stop the development of robots that do X?
By Ben Levinstein and Anders Sandberg
Almost everybody agrees that factory farming is morally outrageous, with billions of animals living lives that are likely not worth living. One possible solution to this moral disaster is to make in vitro meat technologically and commercially viable. In vitro meat is biologically identical to real meat but cultured in a tank: one day it may become cheaper, more efficient and safer than conventional meat. On animal welfare grounds, then, in vitro meat seems like a clear win, as it has the potential to eliminate or greatly reduce the need for factory farms. However, there is a problem…
by Anders Sandberg and Ben Levinstein
Over the past week we have been consumed by the intense final work phase just before the deadline of a big, complex report. The profanity density has been high, mostly aimed at Google, Microsoft and Apple. Not all of it was deserved, but it brought home the point that designing software carries moral implications. Continue reading