As I hope I have shown in these blogs, thousands of “breakthroughs” are attributed to animal models every year. Every day we read about another “breakthrough” that will eventually cure this disease or that. Most of these come from the land known as basic biomedical research. Since most basic biomedical research is performed in universities, the researchers, along with their universities, feel the need to exaggerate every new finding and make claims and promises that only the most naïve and uneducated of our society still believe. Why? Money, of course. The more sensational the claim, the more likely the researcher and the university are to get even more funding. Note that I am not referring to basic research in the hard sciences. I cannot recall ever hearing a claim from a chemistry department that Dr W’s research with the Belousov–Zhabotinsky reaction was going to cure heart disease. Nor can I recall a press release from a physics department claiming that its research on fermions was going to prevent Alzheimer’s. Perhaps there have been some, but if so, they are the exception. No, the claims I am talking about come from one place: basic researchers in the biomedical sciences. And animal models are the bread and butter of basic biomedical research.
The truth about basic biomedical research is very different from the hype we see daily and the revisionist history that links animal models to every medical breakthrough throughout history. An editorial in the current issue of Nature, titled “Must try harder,” addressed a major problem in the basic biomedical research community: the results are not replicable, nor are they translating to human treatments.(1) NOT EVEN REPLICABLE! This is a disgrace. Hence the editorial. If what you are doing in your lab cannot even be replicated by another lab, then something is very, very wrong. The editorial was accompanied, in the same issue, by two articles on the same topic.
According to Ledford, author of the first article: “Between 2008 and 2009, only 18% of drugs in phase II clinical trials succeeded(2). . . . when the biotechnology company Amgen, based in Thousand Oaks, California, tried to reproduce data from 53 published preclinical studies of potential anticancer drugs, it failed in all but six cases.”(3) I think that is a new record for Phase II failures.
The authors of the second article were C. Glenn Begley, a consultant and former vice-president of Hematology and Oncology Research at Amgen, and Lee M. Ellis of the M. D. Anderson Cancer Center. They wrote: “Efforts over the past decade to characterize the genetic alterations in human cancers have led to a better understanding of molecular drivers of this complex set of diseases. Although we in the cancer field hoped that this would lead to more effective drugs, historically, our ability to translate cancer research to clinical success has been remarkably low(4).” (5) They go on to say that one reason the failure rate is so high is that preclinical evaluation of drugs and targets using animal models and in vitro methods does not reflect the intact human patient. They also point out that the lack of reproducibility is another problem. I think these two are linked.
Begley and Ellis: “An enduring challenge in cancer-drug development lies in the erroneous use and misinterpretation of preclinical data from cell lines and animal models. The limitations of preclinical cancer models have been widely reviewed and are largely acknowledged by the field.” Hmmmm. I seem to remember someone saying that mouse models were vital to cancer research (and here). (See here and here for my response.)
Begley and Ellis: “The academic system and peer-review process tolerates and perhaps even inadvertently encourages such conduct [publish as much as you can as fast as you can](6). To obtain funding, a job, promotion or tenure, researchers need a strong publication record, often including a first-authored high-impact publication. Journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete — a ‘perfect’ story. It is therefore tempting for investigators to submit selected data sets for publication, or even to massage data to fit the underlying hypothesis.” (5) I have discussed this issue before.
Another interesting item was found in the comment section below the editorial. Jim Woodgett wrote: “The issue with inaccuracies in scientific publication seems not to be major fraud (which should be correctable) but a level of irresponsibility. When we publish our studies in mouse models, we are encouraged to extrapolate to human relevance. This is almost a requirement of some funding agencies and certainly a pressure from the press in reporting research progress. When will this enter the clinic? The problem is an obvious one. If the scientific (most notably, biomedical community) does not take ownership of the problem, then we will be held to account. If we break the "contract" with the funders (a.k.a. tax payers), we will lose not only credibility but also funding. There is no easy solution. Penalties are difficult to enforce due to the very nature of research uncertainties. But peer pressure is surely a powerful tool. We know other scientists with poor reputations (largely because their mistakes are cumulative) but we don't challenge them. Until we realize that doing nothing makes us complicit in the poor behaviour of others, the situation will only get worse. Moreover, this is also a strong justification for fundamental research since many of the basic principles upon which our assumptions are based are incomplete, erroneous or have missing data. Building only on solid foundations was a principle understood by the ancient Greeks and Egyptians yet we are building castles on the equivalent of swampland. No wonder clinical translation fails so often.” (Emphasis added.)
I have no idea who Jim Woodgett is, but his comments certainly seem consistent with the theme and with what I have said many times before (see here and here and our last two books). I have often pointed out that there is no real standard in basic research. If someone publishes 200 papers and another guy publishes 20, the first guy wins, gets the promotion, the raise, and the prestige. Quality notwithstanding! The only way to judge whether some basic research was competently performed is by replication, and if it cannot be replicated, then this fact should be published in the same journal and that article referenced alongside the original on PubMed or wherever. Considering the large amounts of money involved in all facets of the process, I doubt this will ever happen.
Moreover, animal models lend themselves to sloppy lab procedures and outright falsification of results. Two strains of the same species may respond very differently to the same perturbation, so the researchers can say that the lack of reproducibility is the fault of the model, not their fault. (Never mind that they chose the model.) Further, how a strain responds in one lab may not be how the same strain responds in another, due to differences in environment. Remember, even monozygotic twins raised in the same home do not always suffer from the same diseases. Additionally, the animal model is very, shall we say, flexible in what one can prove. One species may react favorably to a drug while another dies. Which animal model is best? That depends on whether you are the company selling the drug or the company selling the drug’s competitor.
The fact that basic biomedical research promises much and delivers little has not gone unacknowledged. Yet a vast majority of research grants from the NIH go to basic biomedical research.(7) This seems excessive considering that only around 0.004% of the publications in high-ranking journals result in a new class of drugs.(8)
An editorial from Nature: “The readers of Nature should be an optimistic bunch. Every week we publish encouraging dispatches from the continuing war against disease and ill health. Genetic pathways are unravelled, promising drug targets are identified and sickly animal models are brought back to rude health. Yet the number of human diseases that can be efficiently treated remains low — a concerning impotency given the looming health burden of the developed world's ageing population. The uncomfortable truth is that scientists and clinicians have been unable to convert basic biology advances into therapies or resolve why these conversion attempts so often don't succeed. Together, these failures are hampering clinical research at a time when it should be expanding.” (9)
Rothwell wrote in the Lancet in 2006: “Indeed, most major therapeutic developments over the past few decades have been due to simple clinical innovation coupled with advances in physics and engineering rather than to laboratory-based medical research. The clinical benefits of advances in surgery, for example, such as joint replacement, cataract removal, endoscopic treatment of gastrointestinal or urological disease, endovascular interventions (eg, coronary and peripheral angioplasty/stenting or coiling of cerebral aneurysms), minimally invasive surgery, and stereotactic neurosurgery, to name but a few, have been incalculable. Yet only a fraction of non-industry research funding has been targeted at such clinical innovation. How much more might otherwise have been achieved?” (10)
British economists Charles Carter and Bruce Williams said: “It is easy to impede [economic] growth by excessive research, by having too high a percentage of scientific manpower engaged in adding to the stock of knowledge and too small a percentage engaged in using it.” (11) It is equally easy to impede research that would have given society cures and treatments, and animal models impede both the pathway to cures and economic growth.
For more on basic research, see “Is the use of sentient animals in basic research justifiable?”
1. Editorial, Must try harder. Nature 483, 509 (2012).
2. J. Arrowsmith, Trial watch: Phase II failures: 2008-2010. Nat Rev Drug Discov 10, 328 (2011).
3. H. Ledford, Drug candidates derailed in case of mistaken identity. Nature 483, 519 (2012).
4. L. Hutchinson, R. Kirk, High drug attrition rates--where are we going wrong? Nat Rev Clin Oncol 8, 189 (Apr, 2011).
5. C. G. Begley, L. M. Ellis, Drug development: Raise standards for preclinical cancer research. Nature 483, 531 (2012).
6. D. Fanelli, Do pressures to publish increase scientists' bias? An empirical support from US States Data. PLoS ONE 5, e10271 (2010).
7. D. G. Nathan, A. N. Schechter, NIH support for basic and clinical research: biomedical researcher angst in 2006. JAMA 295, 2656 (Jun 14, 2006).
8. W. F. Crowley, Jr., Translation of basic research into useful treatments: how often does it occur? Am J Med 114, 503 (Apr 15, 2003).
9. Editorial, Hope in translation. Nature 467, 499 (Sep 30, 2010).
10. P. M. Rothwell, Funding for practice-oriented clinical research. Lancet 368, 262 (Jul 22, 2006).
11. C. Carter, B. R. Williams, Government Scientific Policy and Growth of the British Economy. Manchester School 32, 197 (1964).