Evolutionary psychology has recently gained some public attention in Finland, as the University of Turku has announced that it will establish the discipline as a permanent study module from the beginning of autumn 2014. The university reports that it is among the first universities in Europe to provide studies in this discipline.
Evolutionary psychology (EP) is a debated discipline, and its institutionalisation adds some weight to the debate. A thorough discussion of its “pros and cons” is beyond this entry – instead, I am interested in the manner in which this relatively young and multidisciplinary discipline is debated.
Most debaters seem to have a strong opinion about EP. It is seen either as the grand theory answering all the questions of humanity, or as pseudoscience without the slightest scientific basis. Obviously, neither of these extreme positions is sensible.
By Kimberly Schelle & Nadira Faulmüller
Horizon 2020, the European Union’s largest research programme ever, running from 2014 to 2020, includes the call to pursue ‘Responsible Research and Innovation’ (RRI). RRI stands for a research and innovation process in which all societal actors (e.g. citizens, policy makers, business, and researchers) work together to align the outcomes with the values, needs, and expectations of European society. In a recently published paper on the importance of including the public’s and patients’ voices in bioethical reasoning, the authors describe, albeit in other words, the value of the RRI approach to bioethical issues:
“A bioethical position that fails to do this [exchange with the public opinion], and which thus avoids the confrontation with different public arguments, including ones perhaps based in different cultural histories, relations and ontological grounds […], not only runs the risk of missing important aspects, ideas and arguments. It also arouses strong suspicion of being indeed one-sided, biased or ideological—thus illegitimate.”
At some point, most people will have questioned the necessity of the existence of mosquitoes. In the UK at least, the things that might prompt us into such reflection are probably trivial; in my own case, the mild irritation of an itchy and unsightly swelling caused by a mosquito bite will normally lead me to rue the existence of these blood-sucking pests. Elsewhere though, mosquitoes lead to problems that are far from trivial; in Africa the Anopheles gambiae mosquito is the major vector of malaria, a disease that is estimated to kill more than 1 million people each year, most of whom are African children.
On the 22nd of October 1707, more than 1400 British sailors died when a British naval fleet sank in stormy weather off the Isles of Scilly. The disaster was later attributed to failings in navigation and sailors’ difficulty in determining their location at sea. This was a perennial problem at the time, and had persisted despite intense scientific research. Seven years later, the UK government passed the Longitude Act, offering 20,000 pounds (more than 2 million pounds in today’s money) to anyone who could develop a method for reliably determining longitude at sea. The longitude prize was eventually won by John Harrison, a self-educated Lincolnshire clockmaker.
Yesterday, 300 years after the original Longitude Act, the UK Technology Strategy Board launched a £10 million prize competition, a new ‘Longitude Prize’. The money will be awarded to a scientist or group of scientists who come up with a solution to one of a set of major global challenges – inadequate food supply, lack of clean water for everyone, antibiotic resistance, spinal cord injury, dementia, and the large carbon impact of air flight.
The new Longitude Prize is the latest in a long series of innovation inducement competitions. These competitions have offered monetary rewards for solving problems as diverse as the development of butter substitutes, the first trans-Atlantic air flight, reusable aircraft for space flight, and an alternative fertilizer to bird poo. One novel feature of the 2014 Longitude Prize is that it seeks public input into the specific challenge to be targeted. Public voting will decide which of the six global challenges above is to be the focus of the prize.
But are innovation prizes an effective or appropriate way to solve major global scientific challenges?
It is often asserted that emerging cognitive science – especially work in psychology (e.g., that associated with work on automaticity, along with work on the power of situations to drive behavior) and cognitive neuroscience (e.g., that associated with unconscious influences on decision-making) – threatens free will in some way or other. What is not always clear is how this work threatens free will. As a result, it is a matter of some controversy whether this work actually threatens free will, as opposed to simply appearing to threaten free will. And it is a matter of some controversy how big the purported threat might be. Could work in cognitive science convince us that there is no free will? Or simply that we have less free will? And if it is the latter, how much less, and how important is this for our practices of holding one another morally responsible for our behavior?
Back in 2010, I blogged about Craig Venter’s creation of the first synthetic organism, Synthia, a bacterium.
Now, in 2014, the next step has been taken by a team at Johns Hopkins University: the use of synthetic biology in yeast, which, whilst still a simple organism, has a cell structure similar to that of humans (and other more complex organisms): a nucleus, chromosomes, and organelles. The engineered yeast has been reproduced through more than 100 generations, passing on its new DNA.
The pace is breathtaking. Moore’s law describes a phenomenon in computing, whereby computer capacity has (so far) doubled every two years. Kurzweil uses Moore’s law to predict the ‘singularity’: a state in which humans no longer control, or even comprehend, the progress that technology continues to make.
It’s difficult to measure scientific progress in the same way as computer power, but it’s clear that leaps in progress are now measured in years, not decades. Yet still we wait until technology is upon us before we act.
Things I’ve learned (so far) about how to do practical ethics
I had the opportunity, a few months back, to look through some old poems I’d written in high school. Some, I thought, were pretty good. Others I remembered thinking were good when I wrote them, but they now seem embarrassingly bad: pseudo-profound, full of clichés, marked by empty rhetoric instead of meaningful content. I’ve had a similar experience today with my collection of articles here at the Practical Ethics blog. And oh, the things I have learned!
Here are just a few of the lessons that have altered my thinking, or otherwise informed my views about “doing” practical ethics — particularly in a public-engagement context — since my very first blog post appeared in 2011:
A study published last week (and summarized here and here) demonstrated that a computer could be trained to detect real versus faked facial expressions of pain significantly better than humans. Participants were shown video clips of the faces of people actually in pain (elicited by submerging their arms in icy water) and clips of people simulating pain (with their arms in warm water). The participants had to indicate for each clip whether the expression of pain was genuine or faked.
Whilst human observers could not discriminate real expressions of pain from faked expressions better than chance, a computer vision system that automatically measured facial movements and performed pattern recognition on those movements attained 85% accuracy. Even when the human participants practiced, their accuracy only increased to 55%.
The authors explain that the system could also be trained to recognize other potentially deceptive actions involving a facial component. They say:
In addition to detecting pain malingering, our computer vision approach may be used to detect other real-world deceptive actions in the realm of homeland security, psychopathology, job screening, medicine, and law. Like pain, these scenarios also generate strong emotions, along with attempts to minimize, mask, and fake such emotions, which may involve dual control of the face. In addition, our computer vision system can be applied to detect states in which the human face may provide important clues about health, physiology, emotion, or thought, such as drivers’ expressions of sleepiness and students’ expressions of attention and comprehension of lectures, or to track response to treatment of affective disorders.
The possibility of using this technology to detect whether someone’s emotional expressions are genuine raises interesting ethical questions. I will outline and give preliminary comments on a few of the issues:
You can get experienced meditators to produce, on demand, feelings of timelessness and spacelessness. Tell them ‘Try to be outside time’, and ‘try not to be in the centre of space’, and they will.
These sorts of sensations tend to happen together – so strikingly so that Walter Stace proposed, as one combined element of mystical experience, ‘non-spatial-and-non-temporal’.1
Why should that be? asked an Israeli research group in a recent and fascinating paper. And was the generation of these sensations related to alterations in the sense of the body?