

Losing Our Heads

Neuroscience is a modern obsession worth billions. But is it the best way to understand ourselves?

Ian Gold and Suparna Choudhury

Marvin Minsky, one of the fathers of artificial intelligence, famously said that minds are simply what brains do. Minsky’s credo has become a truism; it not only expresses our commitment to a scientific understanding of the mind but also captures the familiar idea that a deep theory can only come from a science of the brain.

It is also a truism, however, that truisms often start out life as controversies, and so it is with Minsky’s. Until recently, in fact, the brain was of little interest to those concerned with mental life. The agenda for the theory of the mind was set by the 17th-century philosopher and mathematician René Descartes, who believed that the mind was a soul that resided apart from the physical world. According to Descartes, then, there could not be a science of the mind, much less a neuroscience of the mind. Theoretical work had to be done to move from Descartes’s view to Minsky’s statement of contemporary common sense, and that work was done in living memory. The idea that the mind is nothing over and above brain activity is credited in part to two expatriate Brits living in Adelaide—the psychologist U.T. Place and the philosopher Jack Smart—who formulated it in the late 1950s. An anecdote makes clear how far from a truism the idea seemed at the time. On a visit to England in the 1950s, a Melbourne philosopher was questioned by a British colleague about what was going on in the Australian philosophy of mind: “What’s happened to Smart?” the man asked. “I hear he is going about saying that the mind is the brain. Do you think it might be the heat?” The Australian is supposed to have replied: “It’s not that hot.”

As with many philosophical doctrines, the mind-brain identity theory, as it came to be called, had antecedents of long standing. The Greek philosopher Democritus held the universe to be made up only of atoms in a void with no conceptual place for anything like a Cartesian soul. Eventually, thinkers in the Democritean tradition gravitated to the brain as the likely organ of thought. Wilhelm Griesinger’s Mental Pathology and Therapeutics, for example—the standard textbook of psychiatry of the 19th century—opens by grounding mental illness in the brain: “What organ must necessarily and invariably be diseased where there is madness? … Physiological and pathological facts show us that this organ can only be the brain.” Griesinger too was echoing ancient Greek antecedents; a Hippocratic text tells us that “from nothing else but thence [the brain] come joys, delights, laughter and sports, and sorrows, griefs, despondency, and lamentations.”

Twenty-first-century neuroscience has taken the Hippocratic hypothesis to its logical conclusion. Not only do our joys, delights, sorrows and griefs come from the brain but also our taste in films and music, our political leanings, and our susceptibility to religious belief or atheism. Indeed, if recent books about the brain—and the inexhaustible media appetite for brain science—are to be believed, neuroscience will provide an account of all of these and much else besides. It will reveal the secret to conquering anxiety, finding love, enhancing memory, getting organized, coping with your rebellious teen, enhancing creativity, understanding gender differences, developing leadership skills, being a better parent and losing weight. At some point in recent history, it seems, a large swath of human life has telescoped into the space of our skulls.

Illustration by Suharu Ogawa

The idea of the brain as the oracle of the mind is not merely hype directed at the book-buying public. It has been repeatedly claimed that we are at the dawn of a new “neurobiological age,” in the throes of a “neuro-revolution,” and in the midst of a “neuro-turn”—rhetoric that has fuelled neuroscience since the U.S. Congress declared the 1990s the Decade of the Brain. Reflecting this belief, billions of dollars have been invested in mega projects devoted to mapping, simulating and intervening in the human brain. The scale of the research effort continues to expand. The Obama administration’s BRAIN Initiative, for example, funded mostly by the Defense Advanced Research Projects Agency, the National Institutes of Health, and private research institutes and foundations (such as the Kavli Foundation and the Allen Institute for Brain Science), is comparable in its reach, goals and even surrounding controversy to the European Commission’s $1.3 billion Human Brain Project, which, according to its director, will be the “Higgs Boson of the brain.”

Twenty-five years after the inception of the Decade of the Brain, the science of the brain is firmly ensconced at the centre of our search for an understanding of human nature. Neuroscience now occupies something like the place in popular culture once held by psychoanalysis, with the image of the brain scan in place of the picture of the severe Austrian doctor with the cigar. It is appropriate to take stock, and skeptics are beginning to raise questions: Are notions of personhood prematurely being replaced with a reductive “brainhood”? What exactly have we learned about the mind from brain imaging research? What is at stake in reshaping psychiatry and public policy in neuronal terms? And can a successful theory of the mind be exclusively neuroscientific?

By even the most conservative historical accounting, modern neuroscience had existed for more than 60 years by the time Place and Smart published their papers, and a good deal was already known about brain structure and the function of neurons. But it was not until the 1990s that scientists at large began to take seriously the idea that a theory of the brain could explain the human mind.

The chief stumbling block in the preceding decades was a principled lack of interest in neuroscience, motivated by another truism, still familiar today, that the mind is a computer. It was frequently remarked that trying to understand the mind by exploring the brain would be like trying to understand how a computer works by looking at the electrical circuits inside—a hopeless strategy for both computers and minds. A perspicuous characterization of how computers work ignores their physical properties in favour of their logical and mathematical features; a parallel study of the mind, it was argued, would also have to be logical and mathematical. This would be the job of the new discipline known as cognitive science, also invented in the 1950s. The royal road to the mind, according to cognitive scientists, was not the brain but the theory of computation developed by Alan Turing, Claude Shannon’s notion of information and models of the kind invented by Noam Chomsky to explain grammar. To those at the cutting edge, neuroscientists were the engineers of the mind, not its theorists.

In the 1980s, everything changed. For reasons that still call out for systematic historical investigation, neuroscience became the hot field, and cognitive science was old hat. Functional magnetic resonance imaging, the method par excellence for imaging the brain in action, exploded on the scene in the early 1990s, and there is no doubt that this technology helped the so-called neuro-turn take hold. This year alone, fMRI has been used to explore the neural correlates of social life, empathy, guilt, food, money, sexual attraction, aesthetic beauty and the emotional adolescent. Meditation, too, has become a topic of widespread interest thanks in part to compelling images of brain scans thought to demonstrate the effect of mindfulness on the brains of children, psychiatric patients and Buddhist monks. (Multimillion-dollar projects are now under way, funded by the Wellcome Trust, the Bill and Melinda Gates Foundation, and the United States Department of Education, among others.)

The lure of fMRI is, in part, the power of visual information, an idea explored by the anthropologist Joseph Dumit in Picturing Personhood: Brain Scans and Biomedical Identity and more recently by the sociologist Kelly Ann Joyce in Magnetic Appeal: MRI and the Myth of Transparency. Brain scans are sometimes raised to the status of self-portraits, charged with emotional significance and imbued with the objective legitimacy of identities and diagnoses.

Nothing like this is justified. Although fMRI tells us a great deal about which parts of the brain support particular mental functions, it only very rarely reveals something new about how the mind works. Moreover, an image from an fMRI scan does not even “reveal” how the brain works per se. fMRI actually tracks changes in blood flow and oxygenation within the brain as a proxy for neural activity, and a good deal of (sometimes controversial) statistical analysis goes into inferring what the brain itself is doing. Nevertheless, the scans connote certainty and objectivity, and are therefore, as the psychologist Deena Skolnick Weisberg and her colleagues have demonstrated, often misleading. In a 2008 paper, “The Seductive Allure of Neuroscience Explanations,” published in the Journal of Cognitive Neuroscience, they report on an experiment in which both neuroscientific experts and non-experts were given descriptions of psychological phenomena, some of which included “scientific-sounding but empirically and conceptually uninformative” neuroscience. The non-expert participants found the explanations that included the irrelevant neuroscience more satisfying than the equivalent explanations without it, which should come as no surprise to readers of bestseller lists.

The neuroscience community has begun to evaluate the limitations of fMRI more openly in the face of some recent well-publicized statistical scandals. A paper by Ed Vul and his colleagues (originally called “Voodoo Correlations in Social Neuroscience,” then more diplomatically retitled “Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition”) pointed to instances of scientists employing poor methodological standards and overstating their findings. Since then, a number of psychologists, neuroscientists and statisticians have challenged the validity of certain neuro-imaging studies. An audience outside of brain imaging labs has been alerted to the fact that these data are difficult to interpret at best and, at worst, meaningless. An article by Anders Eklund, Thomas Nichols and Hans Knutsson published this year in the eminent journal Proceedings of the National Academy of Sciences reported that fMRI “is 25 years old, yet … its most common statistical methods have not been validated using real data.” The weakness of some fMRI methodologies was recently underscored most clearly by a lovely—and now infamous—dead salmon study undertaken by Craig Bennett and his colleagues. Bennett put a salmon in the scanner—“the salmon was … not alive at the time of scanning,” we are told—and exposed it to pictures of human beings in social situations. When the data were analyzed with a statistical method that was common at the time but did not correct for the many thousands of comparisons involved, parts of the fish’s brain appeared to be responding to the images. In case we missed the point, the authors ask: “Could we conclude from this data that the salmon is engaging in the … task? Certainly not.” When the science itself admits of results like this, it seems churlish to profess indignation about popular writing advocating neuro-parenting or brain diets.
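
To make the statistical point concrete, here is a minimal sketch in Python (not drawn from Bennett’s paper, and using invented scan dimensions) of the multiple-comparisons problem the dead salmon illustrates: if thousands of voxels of pure noise are each tested for “activation” without any correction, dozens will clear a conventional threshold by chance alone, while correcting for the number of tests makes them vanish.

```python
# Illustrative simulation only (hypothetical numbers): why uncorrected
# voxel-wise tests on pure noise can make a dead salmon look "active".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_scans = 60_000, 30                 # invented scan dimensions
noise = rng.normal(size=(n_voxels, n_scans))   # no real signal anywhere

# One-sample t-test per "voxel": is its mean activation different from zero?
t_vals, p_vals = stats.ttest_1samp(noise, popmean=0.0, axis=1)

alpha = 0.001  # a conventional uncorrected voxel-wise threshold
print("uncorrected 'active' voxels:", int((p_vals < alpha).sum()))            # dozens of false positives
print("Bonferroni-corrected voxels:", int((p_vals < alpha / n_voxels).sum()))  # almost always zero
```

Bennett’s actual analysis used standard fMRI software rather than anything this simple, but the arithmetic of false positives is the same.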

This methodological critique of fMRI research is a family quarrel. But the abiding optimism about neuroscience in the face of its shortcomings has begun to raise broader questions about an ideological sea-change. In particular, a number of writers have begun to question the effect of “neuro-centrism” on other sciences of the mind. John C. Markowitz, a clinical psychiatrist writing in the New York Times, recently lamented the fact that, for more than a decade, the National Institute of Mental Health in the United States has favoured neuroscience-related research to such an extent that “clinical research has slowed to a trickle.” He cites a promising study involving treatment for depressed mothers and their children that is unlikely to be replicated because it lacks the “neurosignature” the NIMH now prioritizes. The case is by no means an anomaly.

Skeptical voices have also been raised against the application of neuroscience to important social practices such as how we treat mental disorders, teach children or determine legal responsibility. The anthropologist Emily Martin has summed up the anxiety about this trend in her lament that this “form of reduction … is likely to impoverish the richness of human social life.” And, in Prospect magazine, the neuroscientist Steven Rose reminds us of the more malign applications of neuroscience in the military, and emphasizes the need for more humility in the discipline in light of “a strong neurological determinism that the evidence does not sustain.”

For many neurophiles, of course, the controversies are mostly beside the point. If minds are simply what brains do, then the way to understand the mind must be to understand the brain. What other way could there be? Any failings of contemporary neuroscience are therefore bound to be fleeting. Given enough time, they would argue, a successful neuroscientific theory of the mind is inevitable.

In fact, the inevitability of a neuroscience of the mind is an illusion produced by an attractive philosophical mistake. It is very tempting to think that because the mind is nothing more than the working brain, the science of the mind must be a science of the brain; tempting, but a mistake nonetheless. To see why, think about earthquakes. Earthquakes are nothing more than a very large number of atoms (or quarks or strings; pick your favourite basic particle) moving through complex paths in space and time. Since there is nothing more to earthquakes than moving atoms, a theory of earthquakes is an atomic theory, isn’t it? Not at all. Our best theory of earthquakes is, of course, the theory of plate tectonics, which lumps vast numbers of atoms together into sheets of the earth’s crust called plates and—abstracting away from the individual atoms—models those structures. And (from the point of view of geological outsiders, at any rate) plate tectonics seems to work pretty well—so well, in fact, that we may never need a different kind of theory. Real human science happens to have found a way of understanding earthquakes by thinking about plates. Smaller objects, like atoms, or larger ones, like planets, just do not do the job, at least not for actual human beings at this point in scientific history.

Similar remarks apply to other areas of science. Why is genetics, for instance, a molecular theory? Wouldn’t an atomic theory be more fundamental or more elegant? Wouldn’t a cellular theory be easier to understand? Maybe. Even so, no such theory—if ever it is produced—will catch on simply by virtue of being more fundamental or elegant or easier to understand. Theories stand or fall on how well they explain the domain of interest. Parallel stories could be told about the colour of the sky, the Galápagos tortoises, climate change and just about everything of scientific interest—in fact, everything of scientific interest outside of fundamental physics, all of it explained by theories formulated in terms of things bigger than atoms. In short, from the deliciously simple fact that everything is made of atoms, nothing scientifically useful follows. For reasons that are quite mysterious, real science finds the patterns in the universe in different places and at different scales.

What about the mind? Like everything else, the mind is made up of atoms, but so far, no one thinks an atomic theory of mental life is inevitable. Nor should we think that a neuroscientific theory is inevitable. It is still an open question where in the universe we will find a way into the secrets of the mind. Minds are indeed simply what brains do, but which sciences will best explain what brains do is still up for grabs.

As for the argument that neuroscience as a theory of the mind is actually succeeding, this turns out to be largely unsupported by the current evidence. While some excitement about what contribution neuroscience may make to human self-knowledge seems justified, one must bear in mind what we know and what we do not. Fundamental neurobiology—the study of how individual neurons work and how small collections of neurons interact—has deservedly produced quite a few Nobel prizes. The application of neurobiology to human thought, in contrast, is much more speculative and contentious.

Neuroscience has no theory of human thought of its own—what theory it has is borrowed from cognitive science—and, again, fMRI rarely discovers something about mental function that was not already known from more than a century of modern psychology. Even the application of neuroscience in psychiatry, where it has been most warmly embraced, has failed to deliver the goods. For example, the great achievement of 20th-century neurobiological psychiatry, the “dopamine hypothesis” of schizophrenia—which posited that schizophrenia is caused by overactivity of the neurotransmitter dopamine—was never believed by those most closely associated with it, and has not been borne out by research. And selective serotonin reuptake inhibitors, the next-generation antidepressants used by millions, appear in effect to be placebos.

Modern neuroscience occupies the proverbial blink of an eye in the history of science. We are decades, or perhaps centuries, away from being able to pass judgement on its success or failure. There is, however, at least one important reason to be doubtful that neuroscience on its own will ever tell us everything we want to know about the human mind. Contemporary neuroscience is what the brain looks like through a keyhole. It is the science of the brain in isolation. The brain, however, is not isolated; it is situated. It lives in an environment—first and foremost, in a body, as well as in a physical, social and cultural milieu—and this environment matters to our understanding of what the brain does. A full description of brain function, therefore, will have to be an expansive one that includes neuroscience as well as a characterization of those features of the world—especially the social world—that matter most to the working brain.

In other contexts, this idea is obvious to the point of banality. How coral grows is a fact about coral, but a scientific theory of coral growth must go hand in hand with a theory of the water in which the coral lives. Lung cancer is a disease of the lungs; no doubt about it. But you should run a mile from any doctor who tells you that the chemistry of cigarette smoke is of no scientific interest because it is external to your organs. Yet the science of the brain’s environment is routinely ignored in neuroscientific research. If neuroscience is going to contribute to the theory of the mind, it is going to have to become a much broader church by extending its hand to the social sciences. Neuroscience may form part of a science of human nature but only if it becomes part of the science of the situated brain.

Schizophrenia, again, provides an apt illustration. A biological disease, schizophrenia appears to be caused in part by genetic processes and abnormalities in brain function. Along with bipolar disorder, therefore, schizophrenia is the psychiatric phenomenon that is most likely to be understood by neuroscience. As with cancer, however, environmental factors play a role in the etiology of the disease. For example, insults to the brain in utero or drug abuse may contribute to the development of schizophrenia. Surprisingly, the environmental factors we understand best show that we may have to look far afield from neuroscience, to the social world, to fully understand the brain in schizophrenia.

Childhood adversity (such as severe abuse or the death of a parent), being an immigrant or the child of an immigrant, and living in a big city each roughly double the risk of schizophrenia—an increase comparable to that associated with cannabis abuse. Moreover, these effects seem to be genuinely social, and subtly so. A 2007 study headed by James Kirkbride investigated the incidence of schizophrenia in London neighbourhoods. It found that in more socially cohesive neighbourhoods—marked by greater voter turnout in council elections—schizophrenia was less common. And a 2001 study led by Jane Boydell found that immigrants living in neighbourhoods with other immigrants from the same country were at lower risk of schizophrenia than those living in less homogeneous neighbourhoods. Social commitment and support appear to protect against psychosis.

No one knows what exactly it is about social life that interacts with schizophrenia. Obviously, whatever it is must have a downstream effect on the brain that makes it more vulnerable, and a theory of schizophrenia requires that we understand this neural vulnerability. But it is equally obvious that we need to have a theory of the relevant social phenomena. The brain functions in a culture of people, practices and ideas. The science of schizophrenia is therefore as much in need of sociologists and anthropologists as it is of neuroscientists.

The elision of the environment in contemporary neuroscience also has consequences beyond the science. Neuroscientific models of constructs such as illness, responsibility and selfhood continue to be formulated in individualist terms, as if people were entirely isolated from social context and cultural influence. As sociologists such as Nikolas Rose have argued, this is part of a wider moral imperative to shift responsibility for the health and well-being of the general population onto the individual, consistent with neoliberal values of self-governance and self-management. If the root of many social problems is inside our own skulls, the solution must lie there too, rather than in robust social programs and thoughtful public policy.

Praise for interdisciplinarity is de rigueur in the academy, and many scientists will agree that an interdisciplinary attack on the mind is a good idea. Unfortunately, between the idea and the reality falls the shadow. Interdisciplinarity is by its nature risky, and when money is tight, the healthy skepticism that governs science can become a stifling conservatism. It is to be hoped that when the illusion of inevitability about the future of neuroscience loosens its grip on our thinking, there might be less anxiety about a marriage of neuroscience and social science. Of course, merely putting neuroscience and the social sciences together in a building or in a scientific study will not be enough. Conceptual bridges will have to be built to produce the imaginative engagement that may one day lead to something novel and useful.

There is, finally, some irony in the current disposition to search for a neuroscience of human life inside the skull alone. Evolutionary theory—the central framework of biology—is overwhelmingly concerned with the interaction between organism and environment; neither can be understood in isolation. Strangely, this preoccupation has not yet taken root in the biological study of the brain. As modern neuroscience moves through its second century, we expect it to produce great things. But for neuroscience to fulfill its considerable promise, it must, in the spirit of evolutionary theory, tackle the study of the brain in context. Among the things that 21st-century neuroscience is likely to reveal is just how far from the skull one has to go to understand what lies within.

Ian Gold is a professor of philosophy and psychiatry at McGill University.

Suparna Choudhury is an assistant professor of psychiatry at McGill University.
