In his recent seminar (a recording of which can be found here), Australian philosopher Tony Coady seeks to criticize the entrenched dichotomy of ‘emotion’ and ‘reason’. He argues that this rigid division is outdated and unsophisticated, and that its persistence is limiting the quality of both philosophical debate and wider scientific investigation.
Coady opens his talk by noting the derogatory accusations of ‘appealing to emotion’ that have been levelled at opponents in the enhancement debate. He contends that this simply continues a long philosophical tradition of separating reason from emotion and elevating the former, from Plato’s allegory of the Charioteer (reason) harnessing his Horses (the passions), to the Christian conception of a conflict between the higher desires of the Spirit and the desires of the Flesh, which must be tamed. Coady claims that this view of reason, which he terms rationalism, has been the dominant paradigm in Western philosophical thought.
Having established the philosophical separation of emotion and reason, Coady then moves to findings from the fields of neuroscience and experimental psychology. Even though the empirical research thus far suggests that emotion and reason cannot be easily demarcated from each other, Coady believes that researchers in these fields have continued to wrongly adhere to this dichotomy. Antonio Damasio’s influential book, Descartes’ Error: Emotion, Reason, and the Human Brain, for example, provides evidence that the brain areas processing emotion and reasoning overlap significantly, and that emotional input is vital for organized reasoning.
Despite this, Coady highlights that there is still a view, particularly in moral psychology, that emotional input in decision-making constitutes ‘moral errors’, and that only a calculative, utilitarian account of moral decision-making is rational and thus optimal. Baron and Ritov, for example, claim that “decisions made on the basis of deontological principles usually lead to results that are not as good as the best that could be achieved”. Joshua Greene, even more radically, seems to suggest that all non-utilitarian decisions are simply moral errors. Coady attributes this preoccupation with finding ‘errors’ and ‘biases’ to the extremely successful work of Kahneman and Tversky, who spent much of their careers identifying and explaining cognitive biases. He notes, however, that while such cognitive ‘errors’ can be tested against the ‘hard’ mathematical background of probability theory, there is no comparable framework for morality.
Coady points to a study by Bartels and Pizarro regarding utilitarian preferences in psychopathic individuals both as a counter-example to utilitarianism as the ‘correct’ moral system, and as an example of the methodological flaws that he feels are rife in experimental moral psychology. The study subjected a group of participants to a battery of personality assessments, and then asked them to choose between utilitarian and non-utilitarian options for a series of moral dilemmas. The findings indicated that participants who endorsed utilitarian views also scored higher on measures of psychopathy, Machiavellianism, and life meaninglessness – suggesting that, counter-intuitively, the individuals least prone to ‘moral errors’ also displayed traits generally associated with immoral agents.
Coady also calls attention to wider flaws within the study, and in the field in general – the differences between decision-making in real-life and laboratory scenarios, the treatment of utilitarianism as a single, homogeneous ethical theory, and the inability to capture the subtleties of thought and motivation behind a decision. With regard to the latter, in particular, Coady notes that there is nothing stopping someone from endorsing a consequentialist option for non-consequentialist reasons, and that a multiple-choice layout is unable to detect this.
In the final part of his talk, Coady sets out his view of emotions and how they relate to cognition and reason, and makes a case for trusting our emotions as much as our reason. Emotions, for him, are first and foremost associated with bodily sensations and reactions – but they are also more than just these. Emotions, he contends, are also defined by their cognitive appropriateness, and can be inappropriate if there is a mismatch between the emotion and its object. Fear, for example, is defined by the presence of something perceived to be dangerous, such as an axe-wielding murderer. Fear, or a phobia, of a small, harmless spider, on the other hand, might be judged inappropriate.
Furthermore, emotions, like reason, motivate and stimulate reactions to the world around us – the appropriate response to fear is to run away, for example. Emotions can also help to guide us in how to live a good life. Some emotions, for instance, are inherently undesirable – hate, spite and envy – while others are more desirable – kindness, love and admiration – and this distinction can aid us in choosing how to live.
There is also reason to think that we should trust our emotions much as we trust our cognitive faculties. Sensory perception, memory and inferential capabilities, for instance, are also fallible and can lead us astray. Why then, Coady asks, do we seem to instinctively trust our cognition, when any proof of its reliability seems doomed to circularity? He refers to work by Linda Zagzebski, who argues that trust in our basic cognitive faculties is implicit in anything that we can understand as a rational enterprise. She and Coady both believe that this basic self-trust can and should be extended to our desires and emotions. This is not to advocate blind belief in our emotions – rather, we should engage in constant self-reflection and trust that this will improve the appropriateness of our emotions. There might, however, be some reasons to trust our emotions less than our reason – if there is significant cultural diversity or individual variation in emotional faculties, or if emotions are less stable and less accurate than cognition. Nevertheless, there are significant issues with our cognitive faculties as well.
Coady concludes his talk with two pieces of wisdom – that the findings of neuroscientific and psychological investigations into morality should perhaps be interpreted more carefully, and that the enhancement project should be wary of attempting to enhance reason or emotion in isolation, as the two seem to be inextricably entangled.
I tend instinctively to favour a rational perspective that is biased towards a pro-social view, provided that view is genuinely inclusive. This is why, for example, I am opposed to multiculturalism (a culturally relativist view that tolerates the inclusion of socially exclusionist perspectives in the name of “respecting cultural traditions”) but in favour of liberal pluralism (a more demandingly principled perspective that embraces diversity at the level of individuals, as long as those individuals respect the rights of all other individuals). I certainly experience my ethical positions emotionally, but that emotion is anchored to the insistence that ethical views should make sense, should be motivated by a desire for stable social systems incorporating equality of rights and opportunities, and should be resistant to superstitious intrusions into rational ethical deliberation.