Reflective Equilibrium in a Turbulent Lake: AI-Generated Art and the Future of Artists
by Anders Sandberg – Future of Humanity Institute, University of Oxford
Is there a future for humans in art? Over the last few weeks the question has been loudly debated online, as machine learning made a surprise charge into picture-making. One AI-generated image even won a prize at a state fair art competition. But artists complain that AI art is really a rehash of their own work, a form of automated plagiarism that threatens their livelihoods.
How do we ethically navigate the turbulent waters of human and machine creativity, business demands, and rapid technological change? Is it even possible?
Your eyes will be discontinued: what are the long-term responsibilities for implants?
by Anders Sandberg
What do you do when your bionic eyes suddenly become unsupported and you go blind again? Eliza Strickland and Mark Harris have an excellent article in IEEE Spectrum about the problems caused when the bionics company Second Sight ran into financial trouble. Patients with its Argus II eyes found that upgrades could no longer be made and broken devices could not be replaced. What kind of responsibility does a company have for the continued function of devices that become part of people?
Pandemic ethics: Never again – will we make Covid-19 a warning shot or a dud?
by Anders Sandberg
The Covid-19 pandemic is not the end of the world. But it certainly is a wake-up call. When we look back on the current situation in a year’s time, will we collectively learn the right lessons, or quickly forget as we did with the 1918 flu? Or even dismiss it as mere hype, like Y2K?
There are certainly plenty of people saying this is the new normal, and that things will never be the same. But historically we have adapted to trauma rather well. Maybe too well – we have a moral reason to ensure that we do not forget the harsh lessons we are learning now.
The goodness of being multi-planetary
The Economist has a leader “For life, not for an afterlife”, in which it argues that Elon Musk’s stated motivation to settle Mars – making humanity a multi-planetary species less likely to go extinct – is misguided: “Seeking to make Earth expendable is not a good reason to settle other planets”. Is it misguided, or is the Economist‘s reasoning misguided? Continue reading
DNA papers, please
Kuwait is planning to build a complete DNA database not just of citizens but of all other residents and temporary visitors. The stated motivation is antiterrorism (the universal motivation!) and fighting crime. Many are outraged, from local lawyers and a UN human rights committee to the European Society of Human Genetics, and they doubt it will be very helpful against terrorism (how does having the DNA of a suicide bomber help after the fact?).

Rather, there are reasons to worry about misuse in paternity testing (Kuwait has strict adultery laws) and in the politics of citizenship (which provides many benefits): citizenship is strictly limited to paternal descendants of the original Kuwaiti settlers, and there is significant discrimination against people with no recognized paternity, such as the Bidun minority. In addition, and this may be another strong motivation for many of the scientists protesting against the law, it might undermine public willingness to donate genomes to research databases, where they actually do some good. Obviously it might also deter visitors: would foreign heads of state, for example, accept leaving their genome in the hands of another state? Not to mention the risk of discovering adultery in ruling families – there is a certain gamble in doing this.
Overall, it seems few outside the Kuwaiti government are cheering for the law. When I recently participated in a panel discussion on genetic privacy organised by the BSA at the Wellcome Collection, and the audience was asked “Would anybody here accept mandatory genetic collection?”, only one or two hands went up in the large audience. When would mandatory collection of genetic information make sense? Continue reading
Hide your face?
A start-up claims it can identify whether a face belongs to a high-IQ person, a good poker player, a terrorist, or a pedophile. Faception uses machine learning to generate classifiers that signal whether a face belongs to a given category or not: facial appearance is used to predict personality traits, types, or behaviors. The company claims to have already sold technology to a homeland security agency to help identify terrorists. This does not surprise me at all: governments are willing to buy remarkably bad snake oil. But even if the technology did work, it would be ethically problematic.
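To make the kind of system under discussion concrete, here is a minimal sketch of a generic “does this face belong to category X or not?” classifier. It is not Faception’s actual pipeline, whose data, features, and models are not public; the dataset, target category, and model below are stand-in assumptions chosen only to show the face-in, label-out structure.

# Illustrative sketch only: a generic binary face classifier.
# NOT Faception's pipeline; dataset, features, category, and model are stand-ins.
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)  # downloads LFW faces
X = faces.data                               # flattened grayscale face images
y = (faces.target == 0).astype(int)          # arbitrary stand-in binary "category"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Pixels -> low-dimensional face features (PCA) -> linear classifier.
model = make_pipeline(PCA(n_components=100, whiten=True, random_state=0),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

Note that even a toy pipeline like this will confidently output a label for any face it is shown; the confident output says nothing about whether the category in question is actually predictable from faces at all.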
Crosspost: Bring back the dead
A version of this post was originally published at The Conversation.
A trial to see whether it is possible to regenerate the brains of patients who have been declared clinically dead has been approved. Reanima Advanced Biosciences aims to use stem cells, injections of peptides, and nerve stimulation to cause regeneration in brain-dead patients. The primary outcome measure is “reversal of brain death as noted in clinical examination or EEG”, which at least scores high on ambition. The study accepts healthy volunteers, but they need to be brain dead due to traumatic brain injury, which might discourage most people.
Is there any problem with this? Continue reading