The Third Culture

Neuroscience and the Humanities

Testing Kant

As philosophers go, Immanuel Kant might be both the most complicated and the simplest thinker to read and understand. What I mean is that, for all the complicated syntax and drawn-out arguments, his central concept of reasoned action is quite simple. Abstrusely named the Categorical Imperative, Kant’s central idea is in reality a rather basic logical approach to moral behavior: all humans are rational, therefore a person should be able to rationally universalize every action. If a behavior cannot be universalized, then it is not moral. As Karen Sanders writes in her book Ethics and Journalism,

“If we are rational beings, and Kant says we are, then these principles – as precepts of practical reason – would be seen to be the right and in fact the only grounds of action. Categorical imperatives tell us how we ought to act irrespective of our inclinations. They are compelling because they describe the structure of reason in action.”

From the brain-science perspective, the operative question here is: can one actually make purely rational decisions? If such a question has an answer, one could at least validate the possibility of Kant’s construct. (This is a role I believe brain science can and should play in philosophical inquiry: it can test whether a philosophical ideal can actually be realized, challenging or supporting claims like “all humans are rational beings.”)

The question of the possibility of rational human thought is one angle of approach to the work of Joshua Greene, formerly of Princeton and now at Harvard. Greene has been interested in the neural substrates of solving moral dilemmas. There is, for example, the famous trolley dilemma, in which one stands at a switch where a runaway trolley can be diverted from one track to another. On the track the trolley is currently headed down are five trolley workers; on the other, there is a single worker. The subject of the dilemma is the only one with the power to change the trolley’s course. So which is the more rational choice (a question Kant conveniently leaves up to his readers)? Of course, there is no answer to that question. For Kant, the question is, paradoxically, answered on an individual basis, yet with the requirement that the answer be rationally universalizable.

Yet even if there is no answer to such a question, functional brain scanning can begin to show us how we solve such dilemmas. That is to say, do we attempt to answer them in a rational manner? Greene found that when people made decisions that involved flipping the switch in the dilemma described above, they used much the same areas of the brain as when they made amoral decisions, such as which mutual fund to buy: higher-level processing areas such as the prefrontal cortex. However, if the decision was more personal (pushing a man off a footbridge to stop the trolley, for example), the emotional centers of the brain were activated (much of this research is summarized in the recent Discover special Discover Presents the Brain).

If such findings are generalizable, and they appear to be, then the answer to my original question is a mixed one. When the dilemma is impersonal, involving a switch on a track, it could be argued that, as Sanders wrote, the subject’s acts are “reason in action.” However, when a more personal decision is being made, subjects always fold their “inclinations” into the decision. This basic split between the personal and the impersonal is as predictable as it is troubling. The most pressing moral dilemmas we face in daily life do not involve tracks and switches, and often involve some mixture of personal and impersonal features. Much of the Cold War, for example, could be seen as a conflict between the “rational” plays of the governments, often built upon the supra-rational conceits of game theory, and the chillingly emotional threat of atomic war. Anyone who believes that a coldly utilitarian approach was taken to the impact of the bomb has not read his or her history. In the course of daily life, then, I would argue that there are few if any true moral dilemmas that are impersonal; in fact, strap a brain scanner to a man as he actually stands at the trolley switch looking at the lone worker he is about to condemn to death, and I would guess his limbic system would be quite active.

Science writers traditionally have been loath to criticize moral or philosophical programs, even when evidence suggests criticism is warranted. Yet I firmly believe the evidence here (and not just Greene’s; much research from the emerging field of neuroeconomics supports a two-system cognitive/emotional model of decision making) leaves Kantian practical reason looking like a null hypothesis the data have already rejected. If, as Kant says, we must divorce our passions from decision-making, then neuroamputation of the nucleus accumbens and the amygdala should be required of all our politicians and clergymen.
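To make that two-system picture concrete, here is a minimal, purely illustrative sketch in Python. The weighting scheme, the “personalness” scale, and the numbers are my own inventions for the sake of the example; they are not drawn from Greene’s data or from any published neuroeconomics model.

# A toy dual-process decision model (illustrative only).
# The weights and the "personalness" scale are invented; they do not
# come from Greene's experiments or any published neuroeconomics study.

def dual_process_choice(utilitarian_gain, emotional_aversion, personalness):
    """Mix a deliberative signal and an emotional one into a single choice.

    utilitarian_gain   -- net lives saved by acting (e.g., 5 - 1 = 4)
    emotional_aversion -- strength of the gut-level "don't do this" response
    personalness       -- 0.0 (flip a distant switch) up to 1.0 (push with your own hands)
    """
    deliberative = (1.0 - personalness) * utilitarian_gain
    emotional = personalness * emotional_aversion
    return "act" if deliberative > emotional else "refrain"

# Impersonal switch case: the deliberative signal dominates, so the model "acts."
print(dual_process_choice(utilitarian_gain=4, emotional_aversion=6, personalness=0.2))
# Personal footbridge case: the emotional signal dominates, so the model "refrains."
print(dual_process_choice(utilitarian_gain=4, emotional_aversion=6, personalness=0.9))

The point of the toy is only this: a single parameter for how personal the act feels can flip the outcome even when the utilitarian arithmetic never changes, which is roughly the pattern Greene’s scans suggest.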

The drive among computational neuroscientists and neuro-engineers to reverse engineer the human brain provides an interesting window into Kant; I wonder what Kant would have thought of computers. Ironically, it is the unique synthesis of emotion and rationality (or, more accurately, the extremely numerous and complicated set of inputs behind each action output) that has made it so difficult for the rational computer to model human behavior. And since popular culture now pictures pure rationality as something strangely computerized, embodied by robots (every input mapped to its rational, “universalized” output), such a model of humanity seems almost perverse.

Computational neuroscientists hold that, because the brain is at bottom a mesh of interconnected circuits, an input (say, a moral dilemma) will trigger a context-dependent set of neurons and produce a predictable result. We know this must be true, that behavior springs solely from the interactions among the neurons in our brains, and yet the model of how it works must be nearly infinitely complex, so complex that many believe the largest barrier to modeling the brain is computational power.
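As a cartoon of that premise, and nothing more, one might caricature the “mesh of interconnected circuits” as a tiny feedforward network: the dilemma is encoded as a handful of features, a hidden layer stands in for the context-dependent population of neurons that fires, and the output is the behavioral result. The features, the random weights, and the network itself are invented for illustration; the real brain performs this kind of mapping at a scale that swamps any computer we currently have.

import math
import random

random.seed(0)  # arbitrary seed so the illustration is repeatable

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A toy "dilemma" encoded as made-up features:
# [lives_at_stake, personalness, certainty_of_outcome]
dilemma = [5.0, 0.9, 1.0]

# One small hidden layer stands in for the context-dependent set of neurons.
hidden_weights = [[random.uniform(-1, 1) for _ in dilemma] for _ in range(4)]
output_weights = [random.uniform(-1, 1) for _ in range(4)]

hidden = [sigmoid(sum(w * x for w, x in zip(ws, dilemma))) for ws in hidden_weights]
decision_signal = sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

print("act" if decision_signal > 0.5 else "refrain")

Scaled up by many orders of magnitude, with learned rather than random weights, this is the kind of mapping the reverse-engineering projects are after.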

So if researchers have shown that pure rationality does not exist, that affect cannot be divorced from decision-making, it seems clear that half of Kant’s categorical imperative is in trouble. Yet if it is possible to reverse engineer the brain, that reopens the door to what I believe is the more lasting and compelling philosophical question in Kant’s imperative: is there such a thing as a universalizable act in a real human context? Once we model a brain, and I believe we will, eventually, Kant may find he has a second, more complex vocabulary to speak with: that of the third culture.

June 25, 2007 | computational neuroscience, Decision Making, Philosophy