Neuroscience’s Role in Moral Judgment
The following paper was my midterm examination submission for a graduate-level Philosophy of Neuroscience course.
Joshua Greene’s argument for neuroscience’s explanatory power and Selim Berker’s argument for neuroscience’s normative insignificance stake out opposing positions on neuroscience’s relevance to modern ethics. At stake in this debate is neuroscience’s potential to illuminate our moral intuitions and to serve as a guide for understanding moral truth. Greene defends neuroscience’s capacity to explain what underpins our moral intuitions, while Berker challenges its capacity to establish normative ethical principles. Neuroscience might provide evidence that discounts deontology as a valuable moral perspective, but I will contend that Greene owes us a parallel evolutionary account of the consequentialist alternative. I will first use Greene’s fMRI findings, his opinion piece, and Berker’s summary of Greene’s additional ideas to outline Greene’s overall argument against deontology. I will then draw on that outline of Greene’s conclusions to elucidate Berker’s critical response. Finally, I will argue that Greene’s evolutionary critique of deontology necessitates a comparable exploration of consequentialism’s evolutionary history.
Joshua Greene’s Argument for Neuroscience’s Explanatory Capacities
In this section, I will present the four dilemmas Greene uses to characterize two types of moral dilemma, explain how Greene links those two types to consequentialism and deontology, elucidate Greene’s experimental hypothesis and his evolutionary explanation of the results, and then situate his argument against moral realism in the context of those results.
Greene’s opinion piece and fMRI study work through paired moral dilemmas that are best examined together. Reading the ‘foreign aid’ dilemma from the opinion piece alongside the classic trolley problem from the fMRI study provides a cohesive picture of what Greene deems impersonal moral phenomena. Similarly, reading the ‘injury’ dilemma from the opinion piece alongside the footbridge variation of the trolley problem constructs a fuller picture of what Greene labels personal moral phenomena. Personal phenomena or scenarios are situations in which an autonomous person’s ultimate moral decision has the power to inflict serious physical harm on another person or persons. Impersonal phenomena or scenarios do not meet this criterion (Berker 303).
In the impersonal foreign aid dilemma, the moral subject (let us call them Charlie) receives a letter from a well-known international aid organization. The letter requests that Charlie send a $200 donation, explaining that it will supplement the organization’s efforts to provide much-needed medical aid to impoverished people across the globe. Charlie must decide whether to send the donation or keep the money for personal use. Greene thinks most people would consider keeping the money for personal use allowable (Greene 848). The foreign aid scenario pairs with the classic, impersonal version of the trolley problem. Here, a runaway trolley is headed down a track toward five people, whom it will hit and kill. However, Charlie may throw a lever that switches the trolley onto an alternative track, where it will kill one individual instead. Charlie must decide whether to switch the trolley’s path. Greene believes most people would find flipping the switch, and thereby killing one individual, allowable (Berker 296).
In the personal dilemma involving injury, Charlie is driving down a rural road when they hear someone shout for help. Charlie sees that the individual is covered in blood. The individual explains that they had an accident while hiking and requests a ride to the nearest hospital. Charlie’s initial intuition is to assist the injured person, who will likely need an amputation without immediate medical attention. However, Charlie also recalls that their car has new leather upholstery and fears that the individual’s blood will stain it. Charlie must decide whether to abandon the person on the side of the road to spare the upholstery. Greene argues that most people would find abandoning the injured individual unallowable (Greene 848). The injury scenario pairs with the personal, “footbridge” variation of the trolley problem. Here, Charlie stands on a footbridge above the trolley tracks next to a large individual. The trolley is approaching and will hit and kill five people. However, Charlie may push the large individual off the footbridge to stop the trolley and save the five; the large individual would die. Charlie must decide whether to push them. Greene believes most people would find pushing the large individual off the footbridge unallowable (Berker 297).
Greene harnesses consequentialist and deontological perspectives to examine why our moral intuitions about the allowability of impersonal moral phenomena (such as the foreign aid and trolley scenarios) conflict with our moral intuitions about the allowability of personal moral phenomena (such as the injury and footbridge scenarios). A consequentialist perspective is a normative (related to or capable of establishing an evaluative standard) ethical theory that privileges the outcomes of one’s decisions as the basis for morality. For instance, pulling the switch in the trolley dilemma might be considered a consequentialist judgment since saving five lives yields a better aggregate outcome than saving one life (Berker 298). A deontological perspective is a normative ethical theory that appeals to a set of fundamental rules of right and wrong to determine the allowability of one’s decisions; a decision made via deontological ethics does not take the consequences of the decision into account. For example, declining to push the large individual off the footbridge because one thinks it is wrong to push people off bridges might be considered a deontological judgment, because one appeals to a standard of right and wrong regardless of whether pushing the individual might serve some greater good.
Greene hypothesizes that we tend to respond to impersonal phenomena with cognitive processes that generate consequentialist judgments and to personal phenomena with emotional responses that generate deontological judgments (Berker 301). He characterizes cognitive or rational processes as slow, neutral, and not specific to a given situation or subject. He characterizes emotional processes as fast, valenced (positive or negative), and specific to a given situation or subject (Greene et al. 2107). Greene uses fMRI brain scans to test this hypothesis. As predicted, brain areas associated with cognitive processes, such as the dorsolateral prefrontal cortex, were more engaged when subjects were presented with impersonal moral dilemmas. Conversely, brain areas associated with emotional processes, such as the medial prefrontal cortex, showed higher activity when subjects were presented with personal moral dilemmas (Greene et al. 2107). Furthermore, the minority of subjects who elected to push the large individual off the footbridge in the personal footbridge dilemma had longer reaction times. Greene explains that these individuals had delayed responses because they had to use the rational parts of their brains to suppress the intuitive, initially activated emotional parts (Greene et al. 2107). Greene posits that we should discount our deontological intuitions because they are based on these quick, emotional reactions rather than on deliberate, rational consideration.
Greene uses an evolutionary approach to further explain why the brain-imaging results should lead us to discount our emotion-based deontological judgments. He cites natural selection’s capacity to favor altruistic instincts, explaining that our present altruistic tendencies do not reflect our modern environment but rather the primitive environment in which they evolved (Greene 849). Acting altruistically in close-proximity moral dilemmas likely gave our ancestors a survival advantage, so Greene believes we developed a cognitively entrenched heuristic (a quick means of reasoning based on mental shortcuts) that allowed us to make rapid moral judgments. This evolved heuristic is why we are primed to respond altruistically when presented with up-close, emotionally stimulating moral dilemmas such as the injury scenario. However, Greene suggests these evolved snap judgments are unreliable in our modern environment because moral judgments are no longer confined to close-proximity ethical contexts (Greene 849).
To reiterate Greene’s findings thus far: his experiments found that we tend to respond emotionally to personal phenomena, and he suggests personal phenomena typically invoke deontological judgments. Thus, he argues that deontological judgments are based on emotional factors. He then argues against deontological judgments on the grounds that their emotional basis is a morally irrelevant factor. His arguments for the moral irrelevance of emotion center on the lack of slow, reason-based deliberation in emotional processes and on emotion’s evolutionary ties to close-proximity moral contexts.
He then uses emotion’s moral irrelevance to challenge our adherence to moral realism (the view that there are genuine, objective moral facts on which we base our decisions) as a means of extrapolating moral truth. Greene argues that our inclination toward moral realism rests on the same evolved tendency to recognize close-proximity perceptual phenomena and make quick, emotion-based judgments about them (Greene 849). For example, Charlie might see a group of individuals torturing a stray cat and tell them that torturing cats is wrong. Charlie is not expressing a personal belief that it is wrong or reiterating a group’s belief that it is wrong. Rather, they perceive a cat being tortured and make a snap judgment based on the supposed inherent wrongness of torturing cats (Greene 849). Greene does not think we can extrapolate moral truths about inherent wrongness from such scenarios, given that evolution shaped our judgments about personal phenomena to be emotional and personal rather than objective.
To synthesize Greene’s arguments: he finds that our responses to personal phenomena invoke our emotions and argues that these emotions form the basis for our subsequent deontological judgments about moral allowability. Yet he thinks emotions are morally irrelevant factors in moral decision-making because they operate rapidly, independent of deliberate reason, and are the product of an evolutionary environment incongruent with modernity (Greene 849). Thus, he argues that we should discount our emotion-laden deontological judgments and privilege a deliberate, rational, consequentialist consideration of our moral decisions’ potential effects. Greene thinks neuroscience can challenge our allegiance to deontology and moral realism by providing data that explain what underpins our moral decisions besides a fallacious appeal to ‘objective’ moral truths.
Selim Berker’s Counterargument to Neuroscience’s Normative Potential
In this section, I will enumerate Berker’s three empirical concerns, explain his defense of emotion’s moral relevance and his critique of emotion as the distinguishing factor between consequentialist and deontological judgments, relay one of his contentions about Greene’s evolutionary argument, and piece together his broader argument against neuroscience’s normative capacity.
Berker raises three methodological concerns about Greene’s fMRI experiments. First, Greene posits that consequentialist judgments are linked to cognitive processes while deontological judgments are linked to emotional processes. Yet Berker notes that activity in the posterior cingulate (a brain area associated with emotion) predicted individuals’ responses to characteristically consequentialist dilemmas, such as pulling the lever in the original trolley dilemma (Berker 307). This finding is not consistent with Greene’s hypothesis and suggests that the experiment’s results are not univocal, although this concern does not by itself dismantle the hypothesis. Greene responds by conceding that the posterior cingulate’s predictive role might challenge the hypothesis, but maintaining that most moral judgments will involve some sort of affective basis (Berker 307-308).
Second, Greene and his colleagues compiled the average time it took subjects to make uncharacteristic responses (responses that went against the hypothesis) and the average time it took them to make characteristic responses (responses that conformed to it) (Berker 308). However, Berker points out that lumping together the response times for every uncharacteristic response and for every characteristic response, without respect to each individual question, does not yield a fruitful comparison (Berker 308). In fact, Berker argues that it creates a problematic outlier effect within the results: the characteristic response times for a particularly disturbing moral scenario in Greene’s experiment (such as whether it is appropriate to hire someone to rape your partner) are much faster than the characteristic response times for other, less striking questions, so a single extreme item can drag down the pooled characteristic average and manufacture a difference that may not exist within any individual dilemma (Berker 309).
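To make this aggregation worry concrete, here is a minimal sketch in Python; the response times are invented purely for illustration and are not Greene’s data:

```python
# Hypothetical response times in seconds; all numbers are invented for
# illustration and are not drawn from Greene's experiment.
dilemmas = {
    # A mundane dilemma: characteristic and uncharacteristic responses
    # take about the same amount of time.
    "mundane": {"characteristic": [6.0, 6.2, 5.8],
                "uncharacteristic": [6.0, 6.1, 5.9]},
    # A disturbing dilemma (e.g., the hired-rape case): characteristic
    # refusals are made almost instantly.
    "disturbing": {"characteristic": [1.0, 1.2, 0.8],
                   "uncharacteristic": [6.0, 6.1, 5.9]},
}

def mean(xs):
    return sum(xs) / len(xs)

# Pooled comparison (the move Berker criticizes): lump every characteristic
# response together and every uncharacteristic response together.
pooled_char = mean([t for d in dilemmas.values() for t in d["characteristic"]])
pooled_unchar = mean([t for d in dilemmas.values() for t in d["uncharacteristic"]])
print(f"pooled: characteristic {pooled_char:.1f}s vs uncharacteristic {pooled_unchar:.1f}s")
# -> pooled: characteristic 3.5s vs uncharacteristic 6.0s (a large apparent gap)

# Per-dilemma comparison: the gap exists only for the extreme dilemma.
for name, d in dilemmas.items():
    print(f"{name}: {mean(d['characteristic']):.1f}s vs {mean(d['uncharacteristic']):.1f}s")
# -> mundane: 6.0s vs 6.0s (no difference); disturbing: 1.0s vs 6.0s
```

The pooled averages suggest a general speed difference between characteristic and uncharacteristic responses, but the per-dilemma breakdown shows the entire effect coming from one outlier scenario, which is precisely Berker’s worry.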
Third, Greene distinguishes between personal and impersonal cases in a manner that Berker believes fails to track which dilemmas elicit deontological judgments and which elicit consequentialist ones (Berker 311). Berker presents the Lazy Susan case to demonstrate that not all personal cases lead to deontological conclusions. Here, a trolley is set to hit five people lying on a Lazy Susan. One can push the Lazy Susan so that the five people are moved out of the trolley’s path, but doing so will kill one bystander in the process. This dilemma counts as personal on Greene’s criterion because it is an instance of an autonomous person’s ultimate moral decision having the power to inflict serious physical harm on another person. Yet it corresponds to a consequentialist moral judgment when one decides to maximize the aggregate lives saved and kill the innocent bystander (Berker 311). Greene thinks personal moral dilemmas should coincide with deontological moral judgments, which would mean that deontological judgments coincide with instances of bodily harm. Berker denies that all deontological judgments respond to bodily harm and notes that some consequentialist judgments might, which muddles deontology’s link to personal moral dilemmas (Berker 311).
Berker then redirects his attention to enumerating and addressing arguments that fail to establish Greene’s conclusion. First, he challenges Greene’s assumption that cognitively based decisions are good and emotionally based decisions are bad. Berker suggests that Greene’s argument needs a better explanation of why emotion-based intuitions are less reliable than reason-based intuitions, given that our emotions often seem to track moral truths (Berker 316). For example, Berker might argue that Charlie’s repulsion in the aforementioned tortured-cat scenario led them to discern a moral truth of the form ‘people should not torture cats because it is wrong’. Furthermore, Berker argues that the earlier empirical worry about the posterior cingulate lighting up for consequentialist decisions also plays a role here. If Greene’s reason for granting consequentialist intuitions distinguished normative force is that they are based on good reason rather than bad emotion, then it is logically problematic for an emotional part of the brain to predict a consequentialist judgment. Berker asks what specifically distinguishes consequentialist judgments from deontological judgments if both might involve emotion (Berker 317).
Berker then addresses Greene’s evolutionary account of deontology’s derivation. Here, Berker takes Greene’s argument to mean that emotion-driven deontological intuitions stemmed from a close-proximity primitive environment that is incompatible with our modern environment; thus, deontological intuitions cannot have legitimate normative force in our modern environment (Berker 318). Berker thinks this argument rests on a false dichotomy: that deontological moral intuitions are a product of evolution while consequentialist intuitions are a product of reason. He thinks consequentialist intuitions are just as likely to be products of evolutionary history and thus susceptible to the same scrutiny (Berker 319).
Finally, Berker considers Greene’s central argument that deontological intuitions lack normative force because they respond to emotions, and emotions respond to morally irrelevant, personal factors. Berker thinks classifying emotional factors as morally irrelevant is itself a normative intuition that is not derivable from the experiment’s data (Berker 294); the claim was made after the fact, without respect to the empirical process. He argues that, even if we grant the moral irrelevance argument, it does not show why we should prefer consequentialist intuitions, because the third empirical worry still has not been addressed: the distinction between personal and impersonal does not sufficiently link deontological judgments to personal moral dilemmas.
Berker concludes that neuroscientific data will not tell us how to regard our moral intuitions. According to Berker, Greene’s experiments can determine whether deontological or consequentialist judgments tend to stimulate more emotion-focused brain areas. If we accept Greene’s results, then we see that deontological judgments coincide with personal dilemmas and are more emotion-driven. Yet these neuroscientific results do not provide evidence for the moral irrelevance of emotional factors in judgment; that explanation is constructed after the fact, independent of the empirical findings. Berker maintains that if we are to derive normative significance from the moral relevance of the factors underlying consequentialist and deontological judgments, we must discount the neuroscientific process, because it cannot in and of itself determine moral relevance (Berker 326).
The Need for an Evolutionary Account of Consequentialist Intuitions
Having outlined Greene’s and Berker’s positions, I will expound on Berker’s critique of Greene’s evolutionary account, specifically his claim that consequentialist intuitions are just as likely to be products of evolution as deontological intuitions. My claim is that consequentialist intuitions likely have evolutionary roots similar to those of deontological intuitions and are thus subject to the same evolutionary scrutiny. My reason is that rational deliberation also played a role in primitive personal moral dilemmas.
To elucidate the need for my contention, I will reiterate that Greene’s findings link consequentialist decisions with slow, rational deliberation and deontological decisions with rapid, emotional judgment. As Berker points out, Greene’s argument eliminates the normative potential of deontological intuitions based on their evolutionary history but fails to consider the evolutionary history of consequentialist intuitions. Thus, I align with Berker in suggesting that a strong argument against deontological intuitions requires some statement addressing consequentialist intuitions’ potential entanglement in evolutionary history.
If Greene scrutinizes emotional judgments’ place in evolutionary history to discount deontology, then I propose a parallel argument that scrutinizes rational judgments’ place in evolutionary history to discount consequentialism. First, I will argue that deliberate consideration also played a primitive role in what Greene considers personal moral contexts. Let us consider a hypothetical example. A group of archaic Homo sapiens is hunting a mammoth, a dangerous animal and an integral resource. The group knows that no member can take the mammoth down independently; they must stay together. Well into the hunt, they must decide whether continuing along a riverbank or crossing the river will lead them to the mammoth, and the group is polarized about which direction to pursue. Then one group member notices a mammoth footprint at the water’s edge and signals to the rest of the group that they should continue together along the riverbank instead of splitting up. They subsequently locate the nearby mammoth, eat, and survive another day (Norman 688-689). Under the assumption that splitting up would render the other group members vulnerable to harm from the mammoth or other externalities, this instance fits the personal moral dilemma criterion: it is a situation in which an autonomous person’s ultimate moral decision has the power to inflict serious physical harm on another person.
I concede that this example is open to scrutiny on the grounds that it may not align with Greene’s criterion for a personal moral dilemma and may not be an instance of deliberate reason. I will address the former concern first. Greene’s criterion for personal moral dilemmas is not explicitly stated in his opinion piece, so I am operating under Berker’s characterization of what constitutes a personal dilemma for Greene. If I were instead to work with the characterization implicit in Greene’s opinion piece, the criterion would be more ambiguous, and we might count any face-to-face situation where someone’s life is at risk as a personal moral dilemma. Under the criterion Berker spells out, however, I maintain that the example still fits. The individual deciding the path of action is autonomous, and they face a moral dilemma in which leaving the group would render its members defenseless against the mammoth. If this is not convincing, we might also consider that choosing the wrong path will likely result in a lack of food and potentially starvation.
To address the latter concern, we might appeal to Greene’s suggestion that deliberate reasoning is slower and not valenced (that is, it does not immediately yield a positive or negative response). On this definition, the example fits the criterion of deliberate reasoning: the primitive moral agent likely had no immediate reaction prior to seeing the footprints, had to weigh the conflicting opinions within the group, and then had to facilitate a decision that would increase the group’s chance of survival. If we accept this reasoning, the example also fits the consequentialist criterion, because the moral agent, in deciding to rein in the group and proceed as a clan, is prioritizing the best aggregate outcome (the most lives saved).
Greene might raise another objection: that my argument does not show natural selection favored slow, deliberate reasoning. However, I contend that rational consideration is an essential component of group-based thinking and cooperation. Primitive societies relied on group cooperation for survival. But what happens when one group member thinks the group should cross the river? This poses a threat to the group’s collective survival. Invoking logical faculties to interpret the dissenting view and realign the wayward group member plausibly allowed the group to survive another day.
I believe I have at least opened the door to the possibility that reason-grounded consequentialist intuitions have an evolutionary history of their own. If deontological intuitions should be discounted because their affective basis evolved in a primitive environment incongruent with our modern environment, then Greene ought to further explore whether consequentialist intuitions’ rational basis also evolved in such an environment. Without further probing into this gray area, I find it hard to privilege the consequentialist view.
References
Berker, S. (2009). The Normative Insignificance of Neuroscience. Philosophy & Public Affairs, 37(4), 293–329. https://doi.org/10.1111/j.1088-4963.2009.01164
Greene, J. (2003). From neural “is” to moral “ought”: what are the moral implications of neuroscientific moral psychology? Nature Reviews Neuroscience, 4(10), 846–850. https://doi.org/10.1038/nrn1224
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, 293(5537), 2105–2108. https://doi.org/10.1126/science.1062872
Norman, A. (2016). Why We Reason: Intention-Alignment and the Genesis of Human Rationality. Biology & Philosophy, 31, 685–704. https://doi.org/10.1007/s10539-016-9532-4