I haven’t had time to read everything yet, but I have been anxiously awaiting this research’s release ever since a friend at Weill Cornell tipped me off to the project. Here is the essence, and then a link that will take you to several different Nature pieces, depending on how deep you want to get into it, from a news piece to the actual paper:
In the latest case study, neuroscientists describe how they implanted electrodes in the brain of a 38-year-old man who had been in a minimally conscious state for more than six years following a serious assault. By electrically stimulating a brain region called the central thalamus, they were able to help him name objects on request, make precise hand gestures, and chew food without the aid of a feeding tube. The thalamus is involved in motor control, arousal and in relaying sensory signals — from the visual systems, for example — to the cerebral cortex, the part of the brain involved in consciousness.
Neuroesthetics is part of a rapidly growing family of applied subfields that includes other hot topics such as neuroethics, neuroeconomics, and neurolaw. What these subfields all have in common is that they attempt to directly investigate practical topics through a detailed understanding of the neural processes underlying human behavior. Through clever experimental design, such experiments can approximate the experiences of our daily lives, getting us closer to understanding what role emotion plays in our decisions, what beauty is, and how we know whether a patient truly is a vegetable.
However, moving neuroscience research into the practical domain requires a leap of faith generally discouraged in scientific inquiry—the belief in near-total external validity. That is to say, the claim that a research paradigm explains, for example, the effect a piece of art has on the brain requires the belief that the research program attempting to answer such a question actually approximates real, uncontrolled human experiences. Such a necessity comes with the appropriation of practical vocabulary—‘art,’ ‘law,’ ‘financial decision-making.’ In short, when we use such words, we must be careful that we are, in fact, talking about them.
I have written in a previous post about the importance and placement of conceptual art within the neuroesthetic context, and of the ability, for example, of a telegram or chemically preserved animal to make conceptual statements in a neurally meaningful way. In this way, the everyday can become art because it is metaphor made concrete.
This capacity for meaning-making also makes conceptual art especially susceptible to effects of context. Part of the beauty of conceptual art is that if you scattered many of the pieces across an industrial waste site and asked someone to go on an art-hunt, it would be difficult if not technically impossible for them to identify the pieces without some background in art history. However, when placed within MOMA’s white-box walls, for example, these industrial constructions—Richard Serra—or austere white canvases that, from afar, may appear untouched, forgotten—Robert Ryman—become sites for meaning and intellectual inquiry. This is not to say there is no aesthetic value to these pieces; there is surely an indescribable beauty and grace, for example, to the experience of moving in and around Richard Serra’s sculptures. However, it is difficult to believe that such pieces would be equally successful “in the wild” as they are in MOMA. Such a finding would have to be the criterion for universality.
So here comes the neuroscience: if we are to believe that there is some way to understand reactions to art by understanding the brain (or to understand the brain by understanding art), how are we to incorporate context-specific reactions?
The answer, in fact, is that some studies have already attempted to approximate such an effect. In a landmark neuroeconomic study, for example, De Martino et al. from University College London showed that willingness to gamble for money depended on whether the sure option was framed as a gain (keep 20 of 50 pounds) or a loss (lose 30 of 50 pounds), and that this dependence had neural correlates that could be used to predict risky behavior.
Let’s think for a minute about what this finding means. Specifically, it means that our decision-making patterns are at least in part dependent on the context in which we find ourselves at the time of our decision. That is to say, even though the same outcome was presented in both conditions (ending up with twenty of the original fifty pounds), the framing of the choice changed both the neural substrates and the behavior that resulted. More generally, it strongly suggests that our means of perception depend on what exactly we are perceiving. Though this may seem like a simple observation, it is by no means self-evident in standard academic literature, especially the decision sciences literature (for more on the neural effects of everyday context, see this recent article in the New York Times).
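A toy calculation makes the framing equivalence concrete. The sketch below is not De Martino et al.’s actual model; the loss weight of 2.25 is a conventional illustrative figure from prospect theory, not a fitted parameter. It shows, first, that the two frames describe the identical outcome, and second, how a loss-averse valuation nonetheless pulls them apart:

```python
START = 50  # pounds handed to the subject at the start of a trial

def outcome(frame):
    """Final wealth, in pounds, under the sure option in each frame."""
    return {"gain": 20, "loss": START - 30}[frame]

# The two descriptions pick out the same state of the world:
assert outcome("gain") == outcome("loss") == 20

def subjective_value(frame, loss_weight=2.25):
    """Prospect-theory-flavored valuation: a loss looms larger than an
    objectively equivalent gain (loss_weight is illustrative only)."""
    if frame == "gain":
        return 20                  # feels like gaining 20 pounds
    return -30 * loss_weight       # feels like losing 30, amplified

# The loss frame feels worse, so the sure option looks less attractive
# there and the gamble looks relatively better:
assert subjective_value("loss") < subjective_value("gain")
```

On a valuation like this, the behavioral finding follows naturally: subjects gamble more when the sure option is described as a loss, even though nothing about the world has changed.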
So how does this relate to conceptual art? Though the neural mechanisms would likely be entirely different, the bottom line is that there is little room for argument: a large proportion of conceptual art would have dissociable neural mechanisms depending on the context in which the pieces were presented to the viewer. Much as the gambler’s neural activity differed based on the framing of the task, so too would a Rauschenberg viewer’s neural activity depend on the knowledge that they were observing an object from which they should derive intellectual satisfaction—and I seriously doubt the average male art lover has the same neural activation while standing before a urinal in the airport that he does while standing before Duchamp’s. If a network similar to the “risky gambling predictor” network could be identified for conceptual art appreciation, it would be a compelling testament to the power of context in art, especially if such an effect existed independent of the observed object actually being art—that is to say, if such a network were active simply because subjects believed they were viewing art.
While Ramachandran uses the ethology example of a bird seeing a stick with red stripes and treating it as we would a Picasso to explain art’s capacity to evoke universal aesthetic responses, it would seem to be true that conceptual art specifically opposes such a rule, or, more exactly, turns it around—the stick is not art until it sits upon the wall as art. This is not a weakness in the art, but rather a weakness in a theory that understands the art through universal maxims that apply consistently across contexts. Our behavior, in fact, is wholly context dependent—we speak differently in cathedrals, get quiet when we approach ground zero, and, yes, practice mental masturbation inside the MOMA, all without consciously deciding to do so.
As the fields making up applied neuroscience explode in every direction, it is crucial that we keep two things in mind. First, we must not overstate what the findings can actually say. Good research leads to more questions than answers, and this is never more true than when actually attempting to explain the biggest questions, such as “what is consciousness?” or, “when can you pull the plug on a patient?” The same holds true for understanding the nebulous world of visual art. Second, and more central here, is that we must always be cognizant of the effect of context on any life experience. The whole program of “explaining art” with neuroscience fundamentally attempts to fit a square peg—the objective understanding of neuronal relationships—into a round hole—the subjective world of art appreciation.
As such, further research into the nature of this subjectivity—through means such as better understanding framing effects and context impacts—is crucial before we can make any overarching claims about what it actually means to view art.
I have spoken to a number of artists and art lovers about Zeki’s neuroesthetics ‘abstraction and concept formation’ hypothesis—the hypothesis that led him to assert that all great art is perceptually ambiguous in one way or another. This type of absolute statement, I have found, is what opens Zeki up to the ridicule of many entrenched artist types, and, as a corollary, to the closing of minds to a set of ideas that could prove highly useful to the program of art theory. Indeed, I will argue here that it is this myopic reading of neuroesthetics—as opposed to the theory itself—that is the greatest weakness in neuroesthetics’ current form, and that perceptually unambiguous art forms such as photorealism can be equally successful as artistic paradigms partaking in the process of concept formation that Zeki describes.
In 1999, Italian artist Maurizio Cattelan debuted a full-room installation entitled La Nona Ora (The Ninth Hour). The scene: a wax sculpture of the pope—painstaking in its realism—lay to one side of the room, crushed under the weight of an equally realistic meteorite that seemed to have come crashing through a skylight. Glass was scattered about the floor. The scene was upsetting to many, and on December 21st, 2000, the story goes, while the installation was on view at Warsaw’s Zachęta National Gallery, two Polish politicians ordered the meteor’s removal. The figure of the pope, they decreed, was to be put on its feet.
It is not surprising that such a piece would attract the ire of the Polish parliament. Rife with irony, Cattelan’s installation suggests several probable readings, all of which are critical of either the entire program of religion or the Catholic church as an institution. Such a piece, on display in a country that is almost entirely Catholic, can only last so long.
Cattelan’s La Nona Ora is emblematic of a whole brand of conceptual art that finds a middle ground between representational affective aesthetics and pure conceptual conclusion-making; one might think of Damien Hirst as its poster boy. And yet these works—especially Hirst’s preserved animals—are, like Duchamp’s Fountain, entirely unambiguous in their representation. So how to explain their staggering impact on the brain if visual ambiguity is so central to visual art?
I would posit that it is the photorealistic quality of La Nona Ora that makes the work successful in the process of concept formation. As opposed to Duchamp’s Fountain, in which the object is incidental to the concept, La Nona Ora is emotionally compelling because of the intricate detailing of the scene. That is to say, La Nona Ora’s effectiveness is predicated upon the viewer’s belief, if only for a split second, that before him is the pope, crushed to death. Given this necessity, Cattelan’s use of photorealism is apt, as the brain’s visual processing areas easily identify both the pope—especially given the singular ‘pope staff,’ a textbook example of Biederman’s theory of object recognition—and the meteor. Of course, this immediate visual interpretation soon comes into cognitive conflict with other inputs. This reflects a more extreme version of the informational conflicts that mark our daily lives in the modern visual landscape, in which the immediate reaction—whether emotional/limbic, visual, or otherwise—is moderated by higher-level processing areas (most often the anterior cingulate cortex). This moderating effect serves to help us maximize the effectiveness of the cognitive system that is providing the correct information, focusing our attention upon areas of importance.
The most commonly cited experimental paradigm for demonstrating cognitive control is the Stroop task. In its most basic form, subjects are shown names of colors (‘green’ or ‘red’) displayed in ink colors that either align or conflict with the word; trials where word and ink color do not align are considered ‘incongruent.’ The subject, asked to report either what the word reads as or what color it is displayed in, has to cognitively control the processing areas dealing with reading and color perception. This moderating activity has been shown to take place in the anterior cingulate cortex, and ACC activation on a conflict trial predicts faster reaction times on subsequent trials. Though admittedly in an entirely different world of conflict, the cognitive conflict taking place in that crucial moment, when the viewer first sees the crushed pope, is, in effect, a conflict between the visual information and the knowledge that the visual information is highly improbable. The strength of the conflict would be directly related to the realism of the scene.
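The trial structure just described reduces to a few lines of logic. This is only a hypothetical illustration of the congruency scheme, not any lab’s actual implementation: congruency is a property of the stimulus, and the instructed task determines which stimulus dimension counts as the correct answer.

```python
from dataclasses import dataclass

@dataclass
class StroopTrial:
    word: str   # what the stimulus reads, e.g. "green"
    ink: str    # the ink color it is displayed in, e.g. "red"

    @property
    def congruent(self):
        # Congruent trials: word and ink color agree, so both
        # processing streams yield the same answer and there is
        # no conflict to resolve.
        return self.word == self.ink

def correct_response(trial, task):
    """task is 'name_ink' or 'read_word'. On incongruent trials the
    two tasks demand different answers -- the conflict the ACC is
    thought to monitor."""
    return trial.ink if task == "name_ink" else trial.word

t = StroopTrial(word="green", ink="red")    # an incongruent trial
assert not t.congruent
assert correct_response(t, "name_ink") == "red"
assert correct_response(t, "read_word") == "green"
```

The point of the sketch: only when `congruent` is false do the two responses diverge, which is why incongruent trials are the ones that slow subjects down and drive ACC activity.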
The resolution of this cognitive conflict—I am seeing something that I perceive as real, but I know it is not real—leads to an engagement with abstract concepts instead: “I am not seeing the death of the pope, I am seeing art,” and then, more to the point, “I am viewing a metaphor.” The need to resolve cognitive conflict—to find an alternate meaning aside from the literal crushing of the pope—is serving the role here that Zeki reserves for ambiguity by pushing the viewer to, as he puts it, “subordinate the particular to the general.” This is a central point, as the presence of intense cognitive conflict is what differentiates this photorealistic installation from a drawing of the same scene.
In this way, I would argue that Zeki has needlessly limited his neurally-rooted definition of art. Just as Michelangelo, by leaving works unfinished, took advantage of the brain’s predilection for imaginative completion, so too have photorealistic conceptual artists taken advantage of the potential for conflict between perception and cognition. In other words, it is the absolute lack of visual ambiguity that allows these pieces to succeed. Were there any question as to the realism of the scene—think of the perfectly placed glass shards and the angle of incidence from the skylight to the pope—it would begin to fall apart. Works like Cattelan’s might be considered the most outwardly manipulative of the brain in that they are only successful because of the distinct order in which we process information.
This powerful impact of photorealistic art puts a thorn in the side of Zeki’s claim that all great art is visually ambiguous. In fact, it seems, manipulation of unambiguous visuals can be more successful than pure abstraction in causing concept formation in viewers—few viewers of La Nona Ora, I would imagine, left it without some conceptual conclusion. Instead, Zeki’s more general claim—that art is simply a process of concept formation that goes from the level of the object to the level of the idea, connecting two disparate processing areas—seems far more fundamentally accurate and less limiting. Of course, it also is nothing new—artists have been describing the difference between art and representation in terms of metaphorical abstraction (terms like “the sublime” come to mind) for as long as art has been written about. Nonetheless, this basic assumption should be focused on and should, I believe, form the foundation of future neuroimaging work in the art world. The acceptance of this assumption—that art is always on some level the creation of a physical metaphor—makes it easier to pose questions for study.
By turning away from the “artistic toolbox” conception of universal artistic laws—to which I believe exceptions could be continuously produced—and instead founding a program of inquiry upon a single, generally accepted principle, I believe future research will tell us far more about why we feel the way we do about art, and why the jump from object to metaphor is so emotionally powerful. Yet it is also important to keep in mind that science, even as it can tell us why we feel so strongly about art, will never hold court over the definition of what art is. Though deserving a seat at the table, work such as Zeki’s can only go so far. How, for example, might he position 19th century realism, a realism rooted less in metaphorical abstraction and more in the skill of the brush? What about the beloved tradition of realist landscape painting? The continuous stream of artists who have made it their task to capture the quiet everyday moment? When do crafts become art, and who is a neuroscientist to draw that line?
As such, it is crucially important that any scientific approach to understanding art take careful measures to account for the importance of the intellectual climate of the day. We live in a time of ideas, and so concept formation is “it”—what of the bygone eras of appreciation of meticulous craftsmanship? Surely the appreciators of this art were no less cerebral than modern art lovers. And even if they were, what of it? I have argued, previously, that concept-driven art is Darwinian, uniquely representative of the human intellect. However, this understanding brings us no closer to defining art.
Far more interesting, I believe, is a research agenda that draws connections between what we generally consider art and other forms of representation, or that begins to strip away the influence of context on the experience of art. Roger Ulrich’s famous 1984 study in Science, for example, showed that a view of trees outside a hospital window significantly sped recovery from surgery; how does this relate to the appreciation of the painted landscape? Questions such as this make up a program I find far more fulfilling than trying to scan our way to a set of artistic principles, a process that, let’s be honest, is tenuous at best.
Instead, a program that relates art to the everyday—instead of setting it apart—will allow us to see more of the sublime around us, and could truly have the power to reframe the artistic experience within the human context.
When Semir Zeki coined the phrase “neuroesthetics” to describe his research into the neural foundations of art appreciation, the art world made an ugly face, or else simply turned away as if not hearing. Zeki, in one of his seminal pieces on neuroesthetics, wrote in the July 2001 edition of Science that “by probing into the neural basis of art, neurological studies can help us to understand why our creative abilities and experiences vary so widely. But it can only do so by first charting the common neural organization that makes the creation and appreciation of art possible.” This assertion, that such a neural framework not only existed but in some ways defines the popular appreciation of art, is naturally disconcerting to artists, as it attempts to scientifically answer a question (what is art?) that most artists believe unanswerable—and certainly not within an fMRI scanner.
In that Science piece, Zeki went on to lay out his two “supreme artistic laws,” constancy and abstraction. Let’s focus on abstraction. Zeki believes that all great art is essentially abstract in one way or another. His argument is as follows:
“But abstraction, a key feature of an efficient knowledge-acquiring system, also exacts a heavy price on the individual, for which art may be a refuge. The abstract ‘ideal’ synthesized by the brain from many particulars can lead to a deep dissatisfaction, because the daily experience is that of particulars. Michelangelo left three-fifths of his sculptures unfinished (see the figure on this page), but he had not abandoned them in haste. He often worked on them for years, because, Giorgio Vasari tells us, ‘time and again the sublimity of his ideas lay beyond the reach of his hands.’ I would put it differently–Michelangelo realized the hopelessness of translating into a single work or a series of sculptures the synthetic ideals formed in his brain. Critics have written in emotional and lyrical terms about these unfinished works, perhaps because, being unfinished, the spectator can finish them and thus satisfy the ideals of his or her brain. This is only qualitatively different from finished works with the inestimable quality of ambiguity–a characteristic of all great art–that allows the brain of the viewer to interpret the work in a number of ways, all of them equally valid. In art, Schopenhauer wrote, ‘something, and the ultimate thing, must always be left over for the imagination [the brain] to do.”’
So, Zeki argues, with the help of a few esteemed art-world friends, a central component of the artistic experience is one of imaginative completion. Such an idea is not new, and the only controversial element in it is the nature of the claim—that all great art is abstract.
A similar claim is made by another third culture legend, neuroscientist V.S. Ramachandran. In his book A Brief Tour of Human Consciousness, he describes an experiment in which newborn gull chicks treated a stick, abstractly colored to resemble the mother’s beak, as if it were the beak itself. In fact, when the scientist—ethologist Niko Tinbergen—used a stick with three red stripes (instead of the mother’s one red spot) the chick reacted even more intensely than it did to the real mother, even though the stick resembled the mother only in the most abstract sense.
Just as this study was about to be filed away in the annals of Random Perceptual Ethology, Ramachandran made a striking assertion:
“Maybe the neurons’ receptive field embodies a rule such as ‘the more red contour the better’…And a message from this beak-detecting neuron goes to the emotional limbic centers in the chick’s brain, giving it a big jolt and the message: ‘here is a superbeak’…All of which brings me to my punch-line about art. If herring gulls had an art gallery, they would hang a long stick with three red stripes on the wall; they would worship it, pay millions of dollars for it, call it a Picasso, but not understand why—why they are mesmerized by this thing even though it doesn’t resemble anything. That is all any art lover is doing when buying contemporary art: behaving exactly like those gull chicks” (Ramachandran, p. 47).
Art, then, is a form of mental masturbation, a creation whose purpose (or effect, anyways) is to stimulate the imagination to achieve a more sublime and subjective form of object recognition than that of everyday life. Or, as Ramachandran himself poetically writes, “art is foreplay before the final climax of recognition.” I think that is a persuasive argument, given that one allows the recognition to encompass subjective emotional realities as well, such as the dark foreboding feeling one gets when seeing Van Gogh’s “Wheat Field with Crows” that could only be the result of the metaphorical abstraction of the image at hand.
That argument is all well and good until about 1917, when Marcel Duchamp took a urinal and placed it in a gallery and everything changed. No longer were we that chick, staring at an object without knowing why we were entranced. Duchamp rebelled against that, turned it on its head, and gave us the climax and the object as one—art is subjective, and so if I say this is art, it is art. In so birthing conceptual art, he made the concept concrete, subjugating the object to the idea instead of the other way around. Later artistic expressions made similar but more direct claims, as when Rauschenberg sent a telegram to the Clert Gallery in response to a request for a portrait that read “This is a portrait of Iris Clert if I say so.” Though one could potentially claim that Zeki’s principle might come into play with the physicality of the urinal, it is impossible to say the same for Rauschenberg’s telegram.
In the context of Ramachandran and Zeki’s claims, there are two potential conclusions about conceptual art. The first is that conceptual pieces are in fact not art but rather a challenge to the cultural dominance of art. Such a conclusion has been voiced many times. Indeed, the practice of finding the neural commonalities in artistic practice might help to explain why conceptual art is so hated by so many but so passionately loved by an art-literate crowd; that is to say, the commonalities would conveniently explain the mass appeal of non-conceptual art and the mass indifference toward conceptual art.
Yet, as an enjoyer of conceptual art, I find this conclusion insufficient. There is also more to it, I think, from a neurobiological standpoint. Conceptual art, clearly, is all about the physical presentation of ideas (this is also true of much non-conceptual art, but conceptual art is uniquely only about this) as opposed to the artful presentation of a represented object. In many ways, the cognitive capacities required for the appreciation of conceptual art are unique, in that they almost exclusively engage the most recently evolved, and as such most distinctively human, parts of our brain: especially the prefrontal cortex and its associated high-level processing areas, which handle abstract, symbolic content. Art itself is a concept: an animal could appreciate a piece without knowing why, but only humans can attach to it the word “art.” In other words, even while enjoying a piece of art, a chick could never understand its conceptual relevance, because it cannot form concepts in the first place.
Research indicates that all animals have the ability, to varying degrees, to determine the ‘what’ within the visual field, and recent studies seem to suggest that this ability is often quite developed and complex in animals we consider rather basic, as in Joe Tsien’s new paper in which he describes the rodent brain’s hippocampal ‘nest’ cells that fire when the rodent sees a nest, regardless of what the nest physically looks like. What non-human animals lack, however, is the ability to consciously connect that ‘what’ with a ‘why,’ which is really what conceptual art is all about. Again, the rodent’s hippocampal nest cells may fire, but all this means to the rodent is that, as Tsien himself points out, the nest has a specific function: “a nest is someplace to curl up in to sleep.” To us, a structure in which we sleep might actually mean something larger: home, comfort, family, nostalgia.
I think it is clear in what ways our ability to produce concepts from static objects like a house connects with the practice of conceptual art. Duchamp’s urinal is not an object to be admired. It is, to use Zeki’s own word, perfectly physically representative of the “particulars” of daily life. In choosing such a simple object Duchamp stripped away all indulgence from the question of ‘what’—just as Rauschenberg did by sending a boring old telegram. In essence, Duchamp, Rauschenberg, and the entire program of pure conceptual art have targeted exclusively the centers of our brain that tell us why we care instead of what we see: the unique human ability we might call ‘metaphor,’ and which neuroscience tells us is simply an enhanced set of connections between disparate parts of the brain. These neural connections unite the parts of our brain that deal with concrete things—our ‘nest’ cells— with the parts of our brain that can process metaphysical concepts. This is our most treasured human capacity—it explains why we cry in the darkened cinema, why we hate communism, fall in love, read poetry.
This is, clearly, an entirely different cognitive process than the traditional representative art experience that Zeki and Ramachandran describe. As such, here is my punchline about art: conceptual art is, in actuality, the “final climax” of human artistic evolution, the artistic record of human cognitive ability. It may not be the type of art that brings the most people the most pleasure in any utilitarian sense (remember, our pleasure comes from more ancient neural structures), but it is certainly the most triumphantly human, the most directly Darwinian, art form yet.
As philosophers go, Immanuel Kant might be both the most complicated and the simplest thinker to read and understand. What I mean is that, for all the complicated syntax and drawn-out arguments, his central concept of reasoned action is quite simple. Though it goes by the forbidding name of the Categorical Imperative, Kant’s central idea is in reality a rather basic logical approach to moral behavior: all humans are rational, therefore a person should be able to rationally universalize every action. If a behavior cannot be universalized, then it is not moral. As Karen Sanders writes in her book Ethics and Journalism,
“If we are rational beings, and Kant says we are, then these principles – as precepts of practical reason – would be seen to be the right and in fact the only grounds of action. Categorical imperatives tell us how we ought to act irrespective of our inclinations. They are compelling because they describe the structure of reason in action.”
From the brain science perspective, the operative question here is, “can one actually make purely rational decisions?” If such a question has an answer, one could at least validate the possibility of Kant’s construct (This is a role I believe brain science can and should play in the understanding of philosophical inquiry—it can validate the potential of the philosophical ideal for realization, challenging or supporting claims like “all humans are rational beings”).
The question of the possibility of rational human thought is one angle of approach to the work of Joshua Greene, formerly of Princeton and now at Harvard. Greene has been interested in the neural substrates of moral dilemma solving. There is, for example, the famous trolley dilemma, in which one is standing at the switch where a trolley’s track splits in two. On one track—the track the car is currently set to take—are five trolley workers. On the other, there is a single trolley worker. The subject of the dilemma is the only one with the power to switch the course of the trolley. So which is the more rational choice (a question Kant conveniently leaves up to his readers)? Of course, there is no answer to that question. For Kant, the question is paradoxically answered on an individual basis with a requirement for rational universal applicability.
Yet even if there is no answer to such a question, functional brain scanning can begin to show us how we solve such dilemmas. That is to say, do we attempt to answer them in a rational manner? Greene found that when people made decisions that involved flipping the switch—the one described above—they used much the same areas of their brain as when they were making amoral decisions like which mutual fund to acquire—higher-level processing areas such as the prefrontal cortex. However, if the decision was more personal—pushing the one worker from a bridge to stop the train, for example—the emotional centers of the brain were activated (much of this research is summarized in the recent Discover special Discover Presents the Brain).
If such findings are generalizable, and they appear to be, then the answer to my original question is a mixed one. When the dilemma is impersonal, involving a switch on a track, it could be argued that, as Sanders wrote, the subject’s acts are “reason in action.” However, when a more personal decision was being made, the subject always folded his or her “inclinations” into the decision. This basic split between the personal and the impersonal is as predictable as it is troubling. The most pressing moral dilemmas we face in daily life do not involve tracks and switches, and often involve some mixture of personal and impersonal features. Much of the Cold War, for example, could be seen as a conflict between the “rational” plays of the governments—often built upon the supra-rational conceits of game theory—and the chillingly emotional threat of atomic war. Anyone who believes that a cold utilitarian approach was taken to the impact of the bomb has not read his or her history. In the course of daily life, then, I would argue that there are few if any true moral dilemmas that are impersonal; in fact, strap a brain scanner to a man as he actually stands at the train switch looking at that lone worker he is about to condemn to death and I would guess his limbic system would be quite active.
Science writers traditionally have been loath to criticize moral or philosophical programs, even when evidence suggests criticism is warranted. Yet I firmly believe the evidence here (and not just Greene—much research from the emerging field of neuroeconomics supports a two-system cognitive/emotional model of decision making) makes Kantian practical reasoning a null hypothesis. If, as Kant says, we must divorce our passions from decision-making, then neuroamputation of the nucleus accumbens and the amygdala should be required for all our politicians and clergymen.
The drive among computational neuroscientists and neuro-engineers to reverse engineer the human brain provides an interesting window into Kant; I wonder what Kant would have thought of computers. Ironically, it is the unique synthesis of emotion and rationality (or, more accurately, the extremely numerous and complicated set of inputs for each action output) that have made it so difficult for the rational computer to model human behavior. And, as we now understand pure rationality, especially in popular culture, as a more strangely computerized phenomenon embodied by robots (every input has its rational “universalized” output), such a model for humanity seems almost perverse.
Computational neuroscientists hold that, given that the brain is simply a mesh of interconnected circuits, an input (say, a moral dilemma) will trigger a context-dependent set of neurons, producing an expected result. We know this must be true, that behavior springs up solely from the interactions between the neurons in our brains, and yet the model of how it works must be nearly infinitely complex, so complex that many believe the largest barrier to modeling the brain is computational power.
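That picture can be caricatured in a few lines. The weights below are hand-picked and purely illustrative, not any real model; the point is only that once the connections are fixed, a given input deterministically yields its output, and that the hard part is doing this with billions of units and unknown weights rather than three units and invented ones:

```python
import math

def neuron(inputs, weights, bias):
    """One unit: weighted sum of its inputs passed through a sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def tiny_brain(stimulus):
    """A fixed mesh of three units: two 'hidden' units feed one output.
    The weights are arbitrary stand-ins for learned connectivity."""
    h1 = neuron(stimulus, [0.9, -0.4], bias=0.1)
    h2 = neuron(stimulus, [-0.7, 0.8], bias=-0.2)
    return neuron([h1, h2], [1.2, 1.1], bias=-1.0)

# Identical input, identical output: behavior "springs up solely from
# the interactions" once the circuit is specified.
assert tiny_brain([1.0, 0.0]) == tiny_brain([1.0, 0.0])
```

Scaling this sketch from three invented units to a brain’s worth of real ones is exactly the computational-power barrier the modelers describe.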
So if researchers have shown that pure rationality does not exist, that affect cannot be divorced from decision making, it seems clear that half of Kant’s categorical imperative is in trouble. Yet if it is possible to reverse engineer the brain, it reopens the door to what I believe to be the more lasting and compelling philosophical question in Kant’s imperative: is there such a thing as a universalizable act in a real human context? Once we model a brain—and I believe we will, eventually—Kant may find he has a second, more complex vocabulary to speak with, that of the third culture.
In 1959, C.P. Snow, a scientist and novelist, published a book called The Two Cultures and the Scientific Revolution, in which he argued that science and the humanities were no longer communicating, and that this failure of communication represented the breakdown of our educational system. He posited, in a later edition, the creation of a third culture, one that would synthesize the two existing cultures, allowing science to speak to the humanistic questions dominating intellectual discourse–what is art, what does it mean to be human, etc. Groups such as The Edge (edge.org) have been championing this third culture for quite some time.
I deeply believe in the power of this third culture not only to add to our intellectual understanding of who we are and why we do what we do, but also to reframe what it means to be educated today. As a high school student ten years ago, I chose the humanities over the sciences, because it seemed I could be only one or the other. I believe, as Snow wrote, that such a choice is a false dilemma, and my educational course has traced a path from the humanities through the social sciences to the admittedly gray area of neurobiology and behavior.
These writings are my contribution to third culture thought. They will deal, for the most part, with the implications of brain/behavior research on our understanding of humanistic concepts such as art, philosophy, and consciousness.