
Guest Post: Are dilemmas really useful for analysing moral judgment?

Pedro Jesús Pérez Zafrilla.

Lecturer in Moral Philosophy.

Department of Moral Philosophy.

(University of Valencia)

The development of the neurosciences has had a major impact on the field of philosophy, and Spanish philosophy is no exception. In particular, the Valencia School led by Adela Cortina has played a leading part in the momentum of neuroethics in Spain. Our research has tackled various areas, such as human enhancement, free will and moral psychology. My intention in this post is to present briefly a critique concerning cognitive psychology. Specifically, I want to argue that moral dilemmas are not an appropriate method for analysing moral judgment: in my opinion, dilemmas misrepresent the way in which people form their moral judgments.

Dilemmas are tragic situations with only two possible and incompatible responses. The subject has to choose one and reject the other; there is no possibility of seeking alternative responses. In fact, the word “dilemma” comes from the Greek lêmma, which means “anything received or taken” (a premise), and the prefix di-, meaning “two”. Thus, a “dilemma” means a choice between two.

Dilemmas such as those used in the field of cognitive psychology are arbitrary representations composed by scientists from a few variables. The objective is to create a tragic situation with only two possible solutions. Sometimes psychologists modify variables within a dilemma (such as the trolley problem) in order to provoke different responses in the individuals facing these new but equally tragic versions.

Nevertheless, a more careful examination of the way in which subjects make moral judgments shows that the dilemmatic method of cognitive psychology does not appropriately tackle moral evaluation. In real life, subjects do not make moral judgments facing laboratory dilemmas nor do they choose between two concrete pre-set options. The field of moral evaluation corresponds rather to what we might call problems. These can have multiple solutions based on a comprehensive set of circumstances which people take into account.

Etymologically, the term “problem” comes from the Greek problema, meaning “thing put forward”. This term is derived from the prefix pro- (forward) and the verb ballein (to throw). A problem is that which lies before us and requires a response. But a problem does not stipulate that there should be only two possible solutions. Problems are always open situations, whose solution is not given in advance.

The central idea is that the response to a problem is not reduced to a choice between preselected alternatives. In the face of a moral problem, various alternative courses of action are open depending on the circumstances the subject is experiencing. This is because a person morally evaluates according to the context surrounding them. If the context varies, the alternative courses of action open to the person, and the decision they finally take, will vary too. Thus, problems are solved on the basis of individuals’ values and reflection on the facts. The response to a problem will be the one most appropriate to the given circumstances and to those things most valued by the subject. In addition, it is necessary in moral life to be familiar with the greatest number of circumstances in order to find the best solution to problems, because moral reality is full of nuances and cannot be confined to abstract dilemmas devised by psychologists in a laboratory.

An example may illustrate the moral problem and the complexity of moral evaluation: the president of a company asks a manager to participate in illegal transactions to increase profits. In exchange, the manager will receive additional payment; but if she refuses, she will be dismissed. To reduce this scenario to a dilemma would be erroneous for two fundamental reasons. Firstly, because in real life the motives that would lead the manager to make a decision might be various: she might reject the proposal because of her moral principles, or because she does not believe that the boss’s plan can succeed; or perhaps she would accept the proposal out of fear of losing her job, or because she would like to earn more money. But above all, it would be a mistake to think that the alternative courses of action are in reality reduced to two opposites (collaborate and commit a crime, or do not collaborate and lose her job). Other options are possible: the manager could threaten to report everything she knows to the police if the president goes ahead with his illegal plans or if he fires her. The manager’s decision will be influenced by various factors, such as her moral principles, her economic and family situation, her position in the company, the information received, or the trust she has in the president, among others. In the end, the decision she takes will depend on which aspects weigh more in her evaluation of the situation, because a decision is always taken after an evaluation of the circumstances. The factors people take into account when they evaluate reality are varied, which means that the possible courses of action are also varied.

The same applies to the trolley problem. If the footbridge problem arose in reality, the decision whether or not to throw the heavy person onto the track would not be determined by physical contact with that person. Other variables absent from the formulation of the dilemma would exert an influence, such as the agent’s fear of being accused of murder, or how well he knew the heavy person or the workmen on the track. To think that these and other factors are mere post hoc confabulations, and that the factor of physical contact alone influences the decision, is simply to fail to recognise the complexity of moral judgment. This could also be applied to the situations presented by Hauser relating to, for example, drinking juice from a sterilised used urinal. Perhaps a person would not drink from there in normal conditions, but in a different context she might not mind doing it; for example, if she had been lost in the desert for two days without water.

So in moral life subjects form their judgments on what is correct and make their decisions based on various aspects of reality, because moral judgment is always conducted within an open and rationally considered context. In the face of this, the dilemmas employed in cognitive psychology force subjects to choose between two equally tragic situations in which neither option is morally better. These situations are created by limiting the variables that the person can take into account, thus forgetting the richness of the moral world. That is why the dilemmatic method, which mutilates reality in order to fit it into an arbitrary scheme, is mistaken.

Finally, there are two other aspects open to criticism when considering dilemmas in the field of cognitive psychology. On the one hand, the dilemmas represent imaginary situations. On the other hand, the dilemmas are presented to the subjects in a laboratory, an atmosphere remote from the taking of real decisions. Both facts provoke a completely arbitrary response in the subject, thus the response lacks moral sense.

For all these reasons, the use of dilemmas to analyse moral judgments is problematic. This method employed by cognitive psychology is reductionist and incapable of recognising the complexity of moral reality. This circumstance should lead us to reconsider the normative value of cognitive psychology.


Cortina, Adela (2011). Neuroética y neuropolítica. Sugerencias para la educación moral. Madrid: Tecnos.

Gracia, Diego (2000). “La deliberación moral. El papel de las metodologías en la ética clínica”. In J. Sarabia and M. De los Reyes (eds.), Comités de ética asistencial. Madrid: Asociación de Bioética Fundamental y Clínica, pp. 21-41.


3 Comments on this post

  1. It’s truistic that morally charged situations we face in real life are more complex and have more extraneous factors (or ‘variables’) than constructed dilemmas. So?

    The post has not made clear precisely why presenting people with choices between narrowly defined dilemmas does not tell us something about moral judgement. Sure, people’s responses in the trolley dilemma don’t tell us about what they would do in a real trolley case where fear of being arrested or anxiety in the moment or a host of other factors might influence their decision. But the point of presenting people with trolley cases is not to tell us about how people act when presented with real life runaway trolleys! They’re supposed to tell us something about how people respond to different factors (which are highlighted in the abstract trolley case) in sacrificial dilemmas. What trolley cases and other hypothetical dilemmas do is abstract from most considerations that might inform our judgements in real concrete scenarios, like ‘how much do I like this particular workman’, and show us the different responses people make to this or that choice (will you sacrifice 1 to save 5? how about if you merely cause the 1 to be sacrificed indirectly? how about if you have to sacrifice the 1 through physical contact? and so on).

    This kind of abstraction from certain features of concrete cases, to pose choices (‘dilemmas’) between isolated features of scenarios, is in fact totally mundane and routinely plays a useful role in ordinary life. Let’s say we need to solve a problem by making a choice (between restaurants, cities to move to, jobs to take, favourite music albums or anything). There will invariably be lots of features that have some influence on our decisions, but abstracting from other considerations to pose a choice between a couple of different variables or options can be enlightening. For example: “Would you rather live in a city that was too noisy or one that was sometimes too boring?” “OK, you prefer a place that has pizza to a place that only serves curry. But what if the place didn’t serve pizza, but did serve salads?” etc. Real life concrete decisions may well be influenced by many factors, but we can perfectly legitimately bring into relief our own judgements about different features and their relative weightings, as well as learn about what things other people are responding to in their judgements, by drawing out specific features in hypothetical choices. This isn’t “forgetting the richness of the… world”; it’s just drawing attention to choices between particular features.

    “the dilemmas are presented to the subjects in a laboratory, an atmosphere remote from the taking of real decisions. Both facts provoke a completely arbitrary response in the subject, thus the response lacks moral sense.”

    I’m sensitive to concerns about ecological validity and indeed I work on this myself. But the assertion above (“Both facts provoke a completely arbitrary response in the subject”) is just baseless. To know whether the responses people give to choices made in the lab differ from choices made in more or less similar scenarios in real life one needs to actually *do* empirical psychology. One can’t simply presume that responses are “completely arbitrary.”

    “it is necessary in moral life to be familiar with the greatest number of circumstances in order to find the best solution to problems.”

    That’s true. Being a competent moral agent necessitates a creative capacity to think up and seek out different possibilities. But this doesn’t count against the utility and validity of asking individuals to make choices between discrete options with isolated features in order to discern how they respond to those discrete options. Part of moral agency is being able to creatively think up alternatives, and part of moral agency (a large part) is being able to weigh and make choices between distinct options.

  2. It seems that there are three criticisms. I am not sure that I am convinced.

    (Criticism 1 and 2) Unlike psychologists’ dilemmas, real moral judgments involve more than (1) two options and (2) a few factors.

    The methods of science and statistical analysis require that scientists control for as many factors as possible and narrow the options in order to increase confidence in finding a signal in the data (if a signal exists). These methods and analyses *do* create experimental settings that are rare or abnormal, but it does not follow that studying such rare phenomena fails to yield important knowledge about the phenomena. If it did, then we would have to reject the deliverances of particle physics in light of the rare conditions into which it places particles. So it is not clear that these criticisms can be taken seriously without casting doubt on even the most respected science.

    (Criticism 3) Real moral judgments (3) take place in the real world; not in the lab.

    This involves both an empirical claim and a key concept: moral judgment. Until we can agree on what we mean by ‘moral judgment’ and then carefully test the claim that such phenomena cannot happen in lab settings, it is unclear how seriously to take the criticism.

    Empirical claims
    While I’m at it, it seems worth noting that all of these criticisms assume certain empirical claims. So until these empirical claims are supported by compelling evidence, the criticisms appear to be merely hypothetical. Take the hypothesis that moral judgment involves reflection about the facts. If this hypothesis is true, then we would expect that moral judgments would not change dramatically when the facts remain the same but the order of the facts changes. However, this prediction fails to be borne out even among philosophers (Schwitzgebel and Cushman 2015) — and philosophers might be the participants who are most likely to reflect when making judgments (Byrd 2014, Livengood et al 2010). There might be ways to account for this finding while preserving the claim about reflection on facts. But if this account involves empirical claims, then it too might be only hypothetical.

    What do the psychologists say?
    Experimental psychologists who use moral dilemmas are already thinking through these very criticisms. As a result, experimental psychologists are developing better experimental designs and methods of analysis that can dissociate the effects of multiple factors on judgments made in the lab or online — e.g., process dissociation (Conway 2013, Conway and Gawronski 2013, Jacoby 1991).


    Byrd, N. (2014). Intuitive And Reflective Responses In Philosophy. University of Colorado. Retrieved from PhilPapers.

    Conway, P. (2013). The Process Dissociation of Moral Judgments: Clarifying the Psychology of Deontology and Utilitarianism. Electronic Thesis and Dissertation Repository.

    Conway, P., & Gawronski, B. (2013). Deontological and utilitarian inclinations in moral decision making: A process dissociation approach. Journal of Personality and Social Psychology, 104(2), 216–235.

    Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language, 30(5), 513–541.

    Livengood, J., Sytsma, J., Feltz, A., Scheines, R., & Machery, E. (2010). Philosophical temperament. Philosophical Psychology, 23(3), 313–330.

    Schwitzgebel, E., & Cushman, F. (2015). Philosophers’ biased judgments persist despite training, expertise and reflection. Cognition, 141, 127–137.

  3. I agree with Nick. As a psychologist studying moral dilemmas, I think there is a ton that we can learn about real-world decision-making.

    At their core, I see dilemmas as asking the age-old question “When is it acceptable to cause harm?”

    Clearly, causing harm is a regular feature of human history — right now, for example, the United States and other countries are conducting airstrikes against ISIS targets in Syria. Police officers arrest and detain people (sometimes roughly); parents spank their children; surgeons slice open patients; generals order their soldiers to advance upon the enemy; and many, many other examples abound of cases where people decide to cause harm to another person. Note that all such cases are effectively dilemmas in that people must decide whether or not to cause harm (regardless of how many other action possibilities exist).

    What we are learning from dilemma research is that most normal people (i.e., who are not psychopaths) experience a visceral emotional reaction to the thought of causing harm — for example, a surgeon may feel squeamish at the thought of slicing open someone. People may also heuristically apply moral rules — either of these processes should lead them to advocate avoiding causing harm.

    However, to the degree that people engage in cognitive operations about outcomes, they may decide that causing harm is necessary and worthwhile. For example, a surgeon may realize that unless they slice open their patient, the patient will die.

    Now, there are also other reasons people may accept causing harm–for example, for self-interest, for vengeance against a perpetrator, or simply because they don’t mind/enjoy inflicting suffering.

    Scientists studying dilemmas have been making huge mistakes by lumping different reasons for accepting harm together (e.g., Kahane, 2015), but these are dissociable concepts.

    Using better techniques, and considering the newest theorizing that has moved beyond Greene’s simplistic dual-process model, can be very informative to real people who face real-world analogues of laboratory dilemmas.
