Mouse Models Fail Again

by Dr Ray Greek

The March 27, 2014 issue of Nature contains an article by Steve Perrin discussing why mouse models fail:

Mice take the blame for one of the most uncomfortable truths in translational research. Even after animal studies suggest that a treatment will be safe and effective, more than 80% of potential therapeutics fail when tested in people. Animal models of disease are frequently condemned as poor predictors of whether an experimental drug can become an effective treatment. Often, though, the real reason is that the preclinical experiments were not rigorously designed. . . . Over the past decade, about a dozen experimental treatments have made their way into human trials for ALS. All had been shown to ameliorate disease in an established animal model. All but one failed in the clinic, and the survival benefits of that one are marginal.

I discussed animal models of ALS here.

Perrin then discusses some of his research:

At the ALS Therapy Development Institute (TDI) in Cambridge, Massachusetts, we have tested more than 100 potential drugs in an established mouse model of this disease (mostly unpublished work). Many of these drugs had been reported to slow down disease in that same mouse model; none was found to be beneficial in our experiments (see 'Due diligence, overdue'). Eight of these compounds ultimately failed in clinical trials, which together involved thousands of people. One needs to look no further than potential blockbuster indications such as Alzheimer's and cancer to see that the problem persists across diseases.

The sentiment that animal models have been misused is not unique to Perrin, who further states: “It is astonishing how often such straightforward steps are overlooked. It is hard to find a publication, for example, in which a preclinical animal study is backed by statistical models to minimize experimental noise.” Macleod et al. have been addressing this for years.[1-8]

Before I go any further, the above should prove once and for all that the animal model community is not that concerned about animals or humans. The refrain has always been that they would never use an animal if it were not necessary. We have known for some time now that the fundamentals of good research are usually lacking in animal-based research, and this gives the lie to any sentiment regarding how valuable the lives of animals are. If a scientist cannot even be bothered to fulfill the basic requirements of good research, such as dividing the animals into experimental and control groups, then he cannot be taken seriously when claiming to be doing good science, much less when claiming to be saving human lives. (See [9-13] and Trouble In Basic Research Land.)
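For readers unfamiliar with what those basic requirements look like in practice, here is a minimal sketch, in Python, of the kind of pre-specified sample-size calculation and blinded randomization that critics such as Macleod et al. argue is so often missing from preclinical animal studies. The numbers (effect size, significance level, power) and the function names are illustrative assumptions of mine, not values or code from any of the studies discussed.

```python
import math
import random
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Animals needed per group (two-sample normal approximation)
    to detect a given standardized effect size (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def randomize(animal_ids, seed=None):
    """Randomly allocate animals to treatment and control groups,
    independent of any animal characteristics."""
    ids = list(animal_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

if __name__ == "__main__":
    # Hypothetical study: detect a large effect (d = 1.0) at alpha = 0.05 with 80% power.
    n = sample_size_per_group(effect_size=1.0)
    print(f"Animals needed per group: {n}")  # 16 per group under these assumptions
    groups = randomize(range(1, 2 * n + 1), seed=42)
    print(groups)
```

The point is not these particular numbers but that the sample size and the allocation are fixed, at random, before the experiment begins, which is exactly the kind of rigor Perrin and Macleod et al. report is routinely absent.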

After explaining that animal models have failed to be of predictive value for human response to drugs and disease, Perrin then proposes a solution: use better animal models. Such nonsense is why Andre Menache and I published an article titled Systematic Reviews of Animal Models: Methodology versus Epistemology. That article, along with others on Trans-Species Modeling Theory [14], explains why animal models can never be of predictive value for human response to drugs and disease, regardless of changes in genetics or methodology.

Erika Check Hayden wrote an accompanying article titled Misleading mouse studies waste medical resources. Hayden writes:

Neurobiologist Caterina Bendotti of the Mario Negri Institute for Pharmacological Research in Milan, Italy, agrees that the issues Perrin describes are not unique to his field: “The poor reproducibility of preclinical results, particularly in animal models, goes beyond ALS,” she says.

Hayden continues: “Other researchers say they agree broadly with Perrin, but that they would also like to see the data from his group's experiments, and that it may not be necessary to find a positive animal result to progress to a human trial.” Note what is happening. Animal models have been sold to society as necessary in biomedical research in order to predict efficacy and toxicity before a drug goes to human trials. Now the animal model community is saying it needs to conduct animal studies regardless of whether the studies are predictive. That is basic science research, which by definition is not of predictive value, and which society will not fund under these circumstances.[15]

The use of animal models is now being defended by a series of ad hoc arguments that have no basis in evolutionary biology. And the educated, smart people doing this criticize creationists and purveyors of complementary and alternative medicine (two groups noted for their lack of education and critical thinking skills) for sloppy thinking. This reminds me of the verses in Matthew 7 of the Christian Bible that state:

3 And why beholdest thou the mote that is in thy brother's eye, but considerest not the beam that is in thine own eye?

4 Or how wilt thou say to thy brother, Let me pull out the mote out of thine eye; and, behold, a beam is in thine own eye?

5 Thou hypocrite, first cast out the beam out of thine own eye; and then shalt thou see clearly to cast out the mote out of thy brother's eye.

To whom much is given, much will be required.

(Image courtesy of Wikimedia Commons, PD-1923.)


1.         Macleod, M., Systematic Review and Meta-analysis of Experimental Stroke. International Journal of Neuroprotection and Neuroregeneration, 2004. 1: p. 9-12.

2.         Macleod, M.R., et al., Pooling of animal experimental data reveals influence of study design and publication bias. Stroke, 2004. 35(5): p. 1203-8.

3.         Macleod, M.R., S. Ebrahim, and I. Roberts, Surveying the literature from animal experiments: systematic review and meta-analysis are important contributions. BMJ, 2005. 331(7508): p. 110.

4.         O'Collins, V.E., et al., 1,026 experimental treatments in acute stroke. Ann Neurol, 2006. 59(3): p. 467-77.

5.         Macleod, M., Why animal research needs to improve. Nature, 2011. 477(7366): p. 511.

6.         Landis, S.C., et al., A call for transparent reporting to optimize the predictive value of preclinical research. Nature, 2012. 490(7419): p. 187-191.

7.         Tsilidis, K., et al., Evaluation of Excess Significance Bias in Animal Studies of Neurological Diseases. PLoS Biol, 2013. 11(7): p. e1001609.

8.         van der Worp, H.B. and M.R. Macleod, Preclinical studies of human disease: Time to take methodological quality seriously. Journal of molecular and cellular cardiology, 2011. 51(4): p. 449-50.

9.         Ioannidis, J.P., Why most published research findings are false. PLoS medicine, 2005. 2(8): p. e124.

10.       Prinz, F., T. Schlange, and K. Asadullah, Believe it or not: how much can we rely on published data on potential drug targets? Nature reviews. Drug discovery, 2011. 10(9): p. 712.

11.       Begley, C.G. and L.M. Ellis, Drug development: Raise standards for preclinical cancer research. Nature, 2012. 483(7391): p. 531-533.

12.       Sarewitz, D., Beware the creeping cracks of bias. Nature, 2012. 485(7397): p. 149.

13.       Mobley, A., et al., A survey on data reproducibility in cancer research provides insights into our limited ability to translate findings from the laboratory to the clinic. PLoS One, 2013. 8(5): p. e63221.

14.       Greek, R. and L.A. Hansen, Questions regarding the predictive value of one evolved complex adaptive system for a second: exemplified by the SOD1 mouse. Progress in Biophysics and Molecular Biology, 2013.

15.       Greek, R. and J. Greek, Is the use of sentient animals in basic research justifiable? Philos Ethics Humanit Med, 2010. 5: p. 14.