The Science of Science

How do we know whether medical research really pays off?

Kwame McKenzie

Mental Health Retrosight: Understanding the Returns from Research (Lessons from Schizophrenia)

Steven Wooding, Alexandra Pollitt, et al.

RAND Corporation

86 pages, PDF

Twice a year I take one for the team: I review grant proposals for the Canadian Institutes of Health Research, Canada’s federal health research funding agency. It takes a week: five days to perform detailed assessments of research proposals, followed by a gruelling two days during which I, along with a group of other researchers, decide which studies we should recommend for funding. If we discuss 45 research proposals between us, only six or seven will receive funding. It is high-pressure, unpaid work that demands to be done well, and it must be scheduled around all our regular duties. Because the academic institutions we belong to value the grants we bring in and the papers we write rather than our service on review panels, it can feel like a thankless task.

Hundreds of senior researchers take this hit every year. They do it because they believe it is fair for your research proposal to be judged by a jury of your peers. They also believe that the system improves research. If the weakest proposals die and the strongest survive, health research will progress by a form of natural selection.

But the stark truth is that it probably does not move science forward as predictably as we think. Health research rarely leads to innovations that improve the country’s health. Targeted research may not identify the next big innovations. Many of the most important health breakthroughs have come about by chance.

For instance, the discovery of penicillin, the first antibiotic and an innovation that revolutionized health care and saved millions of lives, was not a product of focused research. Alexander Fleming was researching bacteria for a completely different experiment and noticed that a mould was stopping his bugs from growing. That mould was producing natural penicillin. It was a chance finding that came about because of cross-contamination between labs. But, as Louis Pasteur famously said, chance favours the prepared mind: Fleming first had to notice that something was different, then had to know how to investigate it. All the research he had done before led to the development of a scientific mind and scientific tools that allowed him to use the gift offered to him by chance.

Illustration by Shantala Robinson

Chance discoveries, however, are difficult for funding panels to bank on. Furthermore, if scientific breakthroughs really are random, then building research skills may matter as much as the research project itself, and studies might be funded for their ability to produce strong researchers rather than for the questions they aim to answer.

Some such studies are celebrated each year at Harvard in a satirical ceremony, the Ig Nobel awards (commonly known as the IGs). The “winning” research generally makes you laugh, then makes you think. Four of the top ten prizes at last September’s ceremony were health related. For example, Brian Crandall and Peter Stahl, working at the State University of New York, won an IG for parboiling a dead shrew, swallowing it without chewing and carefully examining everything excreted over the following days, with the aim of determining which bones dissolve inside the human digestive system and which do not.

Likewise, Masateru Uchiyama led a team in Japan in a study assessing the effect of listening to opera on mice that had undergone heart transplants, while a team from the Netherlands and France published “Beauty Is in the Eye of the Beer Holder,” demonstrating that people who think they are drunk also think they are attractive.

Lastly, the public health IG went to a U.S.-Thai collaboration that described the surgical treatment for an epidemic of penile amputation in Thailand, which they say could be used in all cases, except where the amputated penis had been partially eaten by a duck—I kid you not.

For all their hilarity, the IGs focus the spotlight on a question: which is the better type of scientific study to invest in, chance or design? An emerging group of researchers is interested in precisely this sort of question, in a field they call the “science of science.” Although it does not yet allow us to predict what would be a good bet for the next health innovation, it does help to narrow the odds.

About 900 grants are funded in Canada each year through the process I am involved in, and the average award is $600,000. If this were a business, maximizing the potential return on our investment would be a reasonable way to set priorities. Economists such as James Heckman (a proper Nobel laureate rather than an Ig Nobel laureate) have mapped the rate of return on investment in people, demonstrating that investing in children aged zero to five yields twice the return of investing in school-aged children, and suggesting that the rate of return decreases exponentially with age. Applying that science to science, it could be argued on economic grounds that health research targeted at the early years of life (prevention) may be a better investment than research on most other age groups.

Statistical evidence on the potential for improving health favours researching the social factors that lead to problems over developing treatments for the resulting illnesses. A recent U.S. study demonstrated that only 20 percent of a population’s possible health improvement comes from clinical interventions; the other 80 percent comes from action on social factors such as living conditions, work, income inequality, pollution, and the availability of drugs, alcohol and high-calorie foods. In other words, practical research on prevention may be better than the hope of a cure.

But if we are going to focus on treatment studies (and most funders, including the Canadian Institutes of Health Research, increasingly do), there are particular types of research that may be more likely to yield a quick return. According to the U.S. Department of Health and Human Services, it takes up to 20 years to translate research findings into routine clinical practice, from the laboratory bench to the patient bedside. Research into how we can speed up the implementation of new treatments could be a very effective and efficient way of spending public money. Unfortunately, there is not enough research into the science of science, we are not very good at translating its findings into changes in how we do business, and consequently we have shelves full of proven treatments that we do not use.

Mental Health Retrosight: Understanding the Returns from Research (Lessons from Schizophrenia) offers a useful response to this funding conundrum. The report covers a fascinating project funded by granting agencies in Canada, the United States and the United Kingdom. The aim was to identify research that travelled all the way from a theory or a laboratory finding to an intervention that made a significant contribution to improving health. The authors, led by Steven Wooding, wanted to use the studies they identified to develop methods for predicting which scientists, or groups of scientists, will make an important impact in their field. Put simply, they wanted to use “the idea that we can learn from the past to inform our current and future practice” to develop a formula to help funding bodies decide to whom they should give their money.

One of the most important rules of research is the “rubbish in, rubbish out” rule: if you do not put quality information into your study, the results will not be very good. Cognizant of this, the Retrosight team spent a huge amount of time identifying and assessing bench-to-bedside research. They focused on schizophrenia research because they thought that doing one area well would give them a more solid grasp of the issues. Apart from infectious diseases, mental illness has the biggest impact on the world’s health, and within mental illness, depression, substance misuse and schizophrenia are the leading problems.

“Schizophrenia is a chronic, severe and disabling disorder. It is characterised by symptoms such as hallucinations, delusions, disordered thinking,” they write. It has a particularly large footprint in the world of health and on the lives of those diagnosed and their families because it usually starts between the late teenage years and the mid twenties, and two thirds of people have symptoms on and off for the rest of their lives. People with schizophrenia die, on average, 20 years earlier than the rest of the population because of suicide, smoking, and the effects of the illness and its treatment on their physical health.

Retrosight assembled an international team to examine schizophrenia research in the United States, the United Kingdom and Canada over a 20-year period. The researchers combed through published research to identify the most important new clinical interventions over that time and mapped what they called the “research cloud” for each. The research cloud, a concept the Retrosight team developed, describes all the work conducted to come up with an intervention and all of its research outputs: “Research clouds have the advantage of an approach not focused solely on grants or publications—they seek to embrace the activities of science itself: the inspirations, the experiments, the collaborations, the chance meetings and the unexpected.” Once you have the research cloud, you can measure the payback for each intervention: how much new knowledge it produced; how well it led to the development of better research; whether it improved policy or led to a new product; whether there were clear benefits to the health sector, such as prolonging life or decreasing service costs; and, lastly, whether it led to other social benefits, such as a rise in employment.

From the research clouds the Retrosight team concluded that schizophrenia research has led to “a diverse and beneficial range of academic, health, social and economic impacts.” Clinical research, such as the development of a new form of treatment, had more impact on patient care than lab-based work, such as high-tech scanning studies documenting how the brain functions, or theoretical research that tried to change how we think about setting up services. There were some common themes in the clouds with the biggest impacts, and these could form the basis of a research funder’s formula: when researchers collaborate across disciplinary boundaries, the outcomes and impacts of research are better; lone-wolf researchers are less likely to be productive than those who work in teams; and committed individuals who are motivated by the needs of their patients, and who effectively champion research agendas or get research into practice, produce the highest-impact research.

Mental Health Retrosight is compelling reading, even for someone who is not a science nerd. Its central message fundamentally questions how we decide who gets what in the research funding lottery. If we followed the science of science that it offers and we wanted to maximize our potential to improve health, we would refocus our efforts away from non-clinical or theoretical studies toward studies with a clear clinical intervention in mind. We would focus on teams of researchers rather than individuals and we would give particular weight to people who have a body of work in the field already because building on their existing research cloud is likely to produce better outcomes.

I expect resistance to these three central messages. Some will say that Retrosight is biased toward clinical research. They will argue that a 20-year window for identifying high-impact research is too short. Over 20 years there may be returns from clinical research, such as a new intervention, better outcomes for an illness because of that intervention, and reduced costs (for instance, if the intervention means that you can get treatment as an outpatient rather than an inpatient). But it takes much longer for theoretical and lab-based studies to move from bench to bedside.

Others will argue against the idea that all research needs to be done in teams, because we all know truly exceptional researchers with quirky personalities. They may not be able to work in teams, but their work is important. Although teams are generally a good thing, surely there must be a place for diversity; it is not clear that Einstein was a great team player.

Lastly, they will argue that focusing funding only on teams with a track record makes it difficult for new groups to break into the research world. Funding the “same old same old” may not be a recipe for long-term success if new people with new ideas cannot get funded.

And there will be other critiques. Personally, although I think Mental Health Retrosight is an important study whose findings move us forward, there are some important caveats.

Retrosight focuses on clinical interventions, but in any population only 20 percent of improvement in health comes this way. The other 80 percent is due to social interventions such as clean water, sewage systems, good housing, stable government and the rule of law. These are not clinical interventions per se and would not have been included in the Retrosight study, but they have had a much bigger impact on health over time.

It is also useful to bear in mind that using the past as a lens on the future is fraught with difficulty. The world turns, societies change and our knowledge of science develops over time. Without some measure of the context in which a research cloud was formed, it is difficult to know whether the lessons of the last 20 years are as applicable now as they were then. Unemployment among researchers has increased, many fields have become more specialized, and the ability to work across a variety of areas, and so build a big cloud, has diminished.

There is also the question of what we need to know to move the field forward. Retrosight tells us about the best clinical research that has been done over the last 20 years. But it does not tell us anything about the brilliant ideas that were not funded, or the great research that was funded but then ignored by the research community. In this way the study tells us how to be more efficient at what we are doing, but it does not tell us whether what we are doing is a good idea. If there is money for Retrosight II, it could try to investigate how to identify the best ideas that are out there, rather than the best way to select and fund the ideas that are presented to us. It is as if Retrosight tells us how to make the best choice from the menu that researchers have given us, when what we really need to know is what is in the kitchen. And one thing we need is more knowledge on how to speed the movement of already proven research from the bench to the bedside.

But there are deeper questions that we may want to discuss. The science of science may help us to better select researchers or research projects, it may help us to find the best new ideas, it may even help us to move older ideas from the bench to the bedside. But we are generally trying to achieve more than this in medical science.

Most research does not produce better medical interventions that improve health. It is not clear that this is a problem. Much of the research that is done is for educational purposes. It is done not by people who end up being full-time researchers but by students who may later become clinicians or leave health altogether, by people involved in patient care, and sometimes by patients and families. Most of it will never be part of a fundable research cloud, but it is fundamental to the improvement of health services. Learning the language of research and the discipline of critical thinking makes people more likely to improve the services they offer and better able to understand, analyze and implement research findings.

More than that, research is hope. One of the things we do as humans is try to move things forward, and it is important that society is seen to be doing that. It is important for our mental health to aspire to take on what life throws at us and try to beat it. Sure, finding a cure for cancer would be great, but the fact that we are looking for a cure, whether we find it or not, is essential. We need to be careful that when we investigate the science of science we do not get caught thinking linearly about the needs of researchers and funders, ignoring what research is really for and the breadth of important outcomes linked to scientific effort. Medical research is about more than finding clinical interventions.

Retrosight is a quick and important read. It moves us forward and makes us think but, unlike the IGs, does not make us laugh. And it ends as all good science stories do, by leaving the door open for further funding: on this subject, clearly, we need further research.

Kwame McKenzie is the medical director at the Centre for Addiction and Mental Health and a professor at the University of Toronto.
