
Taj’s Choice

In a story reminiscent of the film ‘Sophie’s Choice’, Taj Mohammed, a refugee in Afghanistan, told the BBC that he chose to sell his six-year-old daughter Naghma to pay off a debt to a distant relative. To keep his family alive, he had taken out a loan of $2,500. By the time the relative demanded the money back, Taj’s three-year-old son and his uncle had just died from the cold, and he had no means of repaying the debt. That was when he took advice and offered his young daughter in lieu of the money.

How might a moral philosopher counsel Taj, and of what value might that advice be? Two of the dominant schools are deontology and consequentialism, and each might counsel a different course of action. A stereotypical deontologist might insist on the application of principles; presumably ‘never sell your children’ (rather than ‘always honour your debts’). By contrast, a stereotypical consequentialist might insist that Taj do whatever has the best consequences, and honouring the debt, so that he remains in good standing with his family and can prevent them all from starving in the cold, may be exactly that course of action.

Some philosophers have tried to understand this tension between deontological and consequentialist prescriptions experimentally. Taj’s situation is also reminiscent of an experimental scenario devised and researched by Joshua Greene and his collaborators. ‘Crying baby’ is a dilemma presented to experimental subjects: ‘whether to smother one’s own baby in order to prevent enemy soldiers from finding and killing oneself, one’s baby, and several others’ (Greene et al, 2008, pp 1147-48). Dilemmas of this type have been termed ‘high conflict’ by Koenigs et al (2007, pp 909-10) on the basis that they show little consensus among subjects, who also take longer to decide on them.

Greene et al have suggested that these scenarios pit consequentialist reason against deontological emotion (and that the former should be preferred as being more ‘rational’). However, I’m not going to address this approach. I think a different analysis is appropriate.

The basis for an alternative analysis lies in the theory of ‘bounded rationality’ identified by Herbert Simon (1955, 1956). Simon pointed out that there are two distinct challenges that any organism, including a human, faces when navigating the world. The first is uncertainty in the environment: we simply do not always know what the consequences of our actions will be. The second is cognitive limitation: some calculations rapidly become so complicated that they require more cognitive capacity than we possess. Where these conditions prevail, a tradeoff may have to be made. Calculating consequences is not always possible, and heuristics (more straightforward rules of behaviour) may be adopted instead.

My suggested explanation for the different courses of action recommended by our stereotypical deontologist and consequentialist is that each is a different type of guide to behaviour. Principles can be seen as heuristic guides to behaviour that do not depend on consequences: do not lie, do not steal, do not kill. Attention to consequences is also a heuristic guide, just one that does depend on outcomes. The consequences we generally consider important are not ‘all things considered’ consequences, but rather ones that normally correlate with good outcomes (e.g. more people being alive).
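Simon’s tradeoff can be made a little more concrete in code. The following is a minimal sketch, assuming invented options, probabilities, and utilities (none of it drawn from the cited studies or from Taj’s actual situation): a fully informed agent computes expected utilities over every possible outcome, while a bounded agent simply excludes principle-violating options and accepts the first acceptable one. The two procedures demand very different amounts of information and computation, and, as here, they can recommend different choices.

```python
# A toy sketch with invented options, probabilities, and utilities.
OPTIONS = {
    # name: (violates_a_principle, [(probability, utility), ...])
    "option_a": (True,  [(0.9, 40), (0.1, -100)]),
    "option_b": (False, [(0.5, 30), (0.5, -10)]),
    "option_c": (False, [(0.2, 60), (0.8, -20)]),
}

def expected_utility(option):
    """Full consequentialist calculation: weigh every outcome by its probability."""
    _, outcomes = OPTIONS[option]
    return sum(p * u for p, u in outcomes)

def optimise():
    """Unbounded agent: evaluate every option's expected consequences, pick the best."""
    return max(OPTIONS, key=expected_utility)

def follow_principle():
    """Bounded agent: exclude options that violate a principle and accept the first
    acceptable one, a satisficing rule that needs no probabilities at all."""
    for option, (violates, _) in OPTIONS.items():
        if not violates:
            return option
    return None

if __name__ == "__main__":
    print("Optimising choice:", optimise())          # requires full knowledge of outcomes
    print("Principled choice:", follow_principle())  # requires only the rule
```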

Principles and consequences will often coincide (murder, for example, breaches the injunction not to kill and also has bad consequences). However, as neither is necessarily an optimising strategy, there will be situations where fate (in Taj’s case), or philosophers, devise scenarios that confound both means of decision making. Where this happens, greater cognitive resources need to be dedicated to deciding what to do.

So where does this leave individuals such as Taj, faced with invidious choices? Perhaps the answer is not to be either a strict (stereotypical) deontologist or consequentialist, but rather to dedicate more resources to the problem through reflection and consultation with others. In other words, to use the skills of philosophical reasoning rather than the dogma of a particular school.

References:

Greene, Joshua D., Sylvia A. Morelli, Kelly Lowenberg, Leigh E. Nystrom, and Jonathan D. Cohen. “Cognitive load selectively interferes with utilitarian moral judgment.” Cognition 107, no. 3 (2008): 1144-1154.

Koenigs, Michael, Liane Young, Ralph Adolphs, Daniel Tranel, Fiery Cushman, Marc Hauser, and Antonio Damasio. “Damage to the prefrontal cortex increases utilitarian moral judgements.” Nature 446, no. 7138 (2007): 908-911.

Simon, Herbert A. “Rational Choice and the Structure of the Environment.” Psychological Review 63, no. 2 (1956): 129-138.

Simon, Herbert A. “A behavioral model of rational choice.” The Quarterly Journal of Economics 69, no. 1 (1955): 99-118.
