Disharmony Among Animal Modelers

Dan Engber has published a three-part series on Slate about the use of mice in research. It is long but worth the read. (My thanks to RM for notifying me of the articles.) I summarize some of the points below.

Mark Mattson, Laboratory Chief at the National Institute on Aging, recently discussed the problem of using overweight mice in research.1,2 Mattson et al. pointed out that the overweight mouse is prone to numerous medical problems that can influence experimental results. Discussing Mattson's epiphany, Engber states:

That's the drawback of the modern lab mouse. It's cheap, efficient, and highly standardized—all of which qualities have made it the favorite tool of large-scale biomedical research. But as Mattson points out, there's a danger to taking so much of our knowledge straight from the animal assembly line. The inbred, factory-farmed rodents in use today—raised by the millions in germ-free barrier rooms, overfed and understimulated and in some cases pumped through with antibiotics—may be placing unseen constraints on what we know and learn. "This is important for scientists," says Mattson, "but they don't think about it at all."2

Clif Barry, chief of the Tuberculosis Research Section at the National Institute of Allergy and Infectious Diseases, describes the drug development process as going from test tube to mouse to man and is quoted by Engber as saying: "The bad part of that is that no part of it is predictive." Engber continues:

A new compound that succeeds in the dish might flunk out in the mouse, and something that can cure tuberculosis in a mouse could wash out in people. Take the example of pyrazinamide, one of the front-line drugs in the treatment of tuberculosis. Along with three other antibiotics, it forms the cocktail that remains, despite ongoing research, our only way of defeating the infection. But pyrazinamide didn't make it through the three Ms: It does nothing in the dish—there's no MIC whatsoever—and it has a weak effect in mice. According to Barry, if a compound like that were discovered in 2011, it would never make its way into clinical trials. . . . The fact that nothing gets to humans today without first passing the mouse test, says Barry, "has cost us a new generation of medicines." . . . That's why we've made so little progress using mice to generate new drugs and treatments [against tuberculosis], Barry tells me. In the absence of a clear, granulomatous response upon which to model human disease, the second M has become a massive roadblock in the path to a cure. "The vast majority of the money that we spend in clinical trials based on mouse data is completely wasted," he says. If you ask Clif Barry why we're still using the mouse to study tuberculosis, or Mark Mattson why we continue to test new drugs on obese and sedentary rodents, they'll tell you the same thing: Because that's what we've always done—we're in a rut. But to an outsider—say, a journalist who's trying to understand the place of the mouse in the broad enterprise of biomedicine—that explanation doesn't make sense.2


The doctors who devised the classic treatment [for tuberculosis] 40 years ago didn't need detailed mouse data—they found their cure with a methodical, brute-force approach: a series of human trials that spanned the better part of two decades and tested every possible combination of exposures. "The way those four drugs were put together is incredible. It's never to be seen again." Since that happened, we've had thousands of mouse studies of tuberculosis, yet not one of them has ever been used to pick a new drug regimen that succeeded in clinical trials. "This isn't just true for TB; it's true for virtually every disease," he tells me. "We're spending more and more money and we're not getting more and more drug candidates."2


Here's another way to explain the heavy expense and slow rate of return in biomedicine: Maybe the animals themselves are causing the problem. Assembly-line rats and mice have become the standard vehicles of basic research and preclinical testing across the spectrum of disease. It's a one-size-fits-all approach to science. What if that one size were way too big?2

According to Engber, Charles River Laboratories earns around $700 million a year selling mice: its least expensive mouse goes for about $5, while others sell for as much as $400 apiece.3


Several other pain treatments have failed in spectacular ways upon moving from the cage to the clinic. Drugs designed to block substance-P receptors and sodium channels succeeded in rodent models, but had little, if any, effect in people. According to Mogil, there's really just one commercial analgesic for human patients whose efficacy was first identified and tested in animal models—a derivative of cone snail venom called ziconotide—and it's not a particularly good drug.3

Mice and rats have been shown to differ in activation of genes associated with pain.4

While the problems discussed above are real, these scientists are missing the forest for the trees. All of these dissimilarities are real and important; however, they are not as important as the fact that animals are complex systems with different evolutionary histories. The fact that animals are evolved complex adaptive systems explains all of the above differences and explains why animal models will never predict human response to drugs and disease. Granted, almost every adverse drug reaction can be reproduced in some species or strain, but this is not the same as having positive predictive value. Any of us can make predictions about who will win the ballgame, and we might even be right 50% of the time. However, being correct 50% of the time does not qualify as predictive, and the occasional right "prediction" does not make one a predictive modality or practitioner.

Yet this is what the vivisection activist is asking society to believe. For example, because the White New Zealand rabbit exhibited phocomelia in response to thalidomide, that species supposedly "predicted the congenital anomaly and is a predictive model for birth defects." Neither of these claims is true. (For more on thalidomide, see The History and Implications of Testing Thalidomide on Animals.) In reality, if one wants to evaluate the predictive value of a species, be it for toxicity, birth defects, efficacy, or the mechanism of a disease, the animal model must be tested multiple times and a high positive predictive value and negative predictive value demonstrated. Such testing has been done, and the poor track record of animal models has been confirmed empirically. But the empirical evidence is less important than the theory from evolution and complexity that explains and supports it. Much the same is true of the second law of thermodynamics, which prohibits a perpetual motion machine (PPM). One can always find flaws in a particular machine touted as producing more energy than it consumes, but the second law ends the discussion about PPMs in general. While theories are not laws, complexity theory and the theory of evolution similarly end the prediction discussion.

The use of sensitivity, specificity, and positive and negative predictive values is neither unique to nor confined to medicine or medical research. Despite what the vested interest groups would have you believe, these are commonly used formulas for evaluating a practice, intervention, modality, or anything else that can be described as testing a concept.
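These metrics are simple to compute from a 2x2 table comparing a model's result against the real outcome. A minimal sketch in Python, using hypothetical counts chosen purely for illustration (not real data), shows how an animal model could "get it right" about half the time and still have no predictive value:

```python
# Hypothetical 2x2 contingency table for an animal model "predicting"
# human toxicity. Counts are illustrative only, not real data.
tp = 45  # model positive, humans toxic      (true positives)
fp = 55  # model positive, humans not toxic  (false positives)
fn = 40  # model negative, humans toxic      (false negatives)
tn = 60  # model negative, humans not toxic  (true negatives)

sensitivity = tp / (tp + fn)  # fraction of human positives the model catches
specificity = tn / (tn + fp)  # fraction of human negatives the model catches
ppv = tp / (tp + fp)          # P(humans toxic | model positive)
npv = tn / (tn + fn)          # P(humans not toxic | model negative)

print(f"sensitivity = {sensitivity:.2f}")  # 45/85  = 0.53
print(f"specificity = {specificity:.2f}")  # 60/115 = 0.52
print(f"PPV = {ppv:.2f}")                  # 45/100 = 0.45
print(f"NPV = {npv:.2f}")                  # 60/100 = 0.60
```

With these numbers, every metric hovers near a coin flip: a positive result in the model tells you almost nothing about what will happen in humans, even though the model is "right" in many individual cases.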

At least some in the animal model community are speaking the truth about the value of animal models. In the long run, truth will prevail, but in the meantime vivisection activists will continue to mislead society by using fallacious reasoning and conflating concepts in order to justify their trade and protect their egos.


1. Martin B, Ji S, Maudsley S, Mattson MP. "Control" laboratory rodents are metabolically morbid: why it matters. Proceedings of the National Academy of Sciences of the United States of America. Apr 6 2010;107(14):6127-6133.

2. Engber D. Lab mice: Are they limiting our understanding of human disease? Slate. 2011. Accessed November 17, 2011.

3. Engber D. Black-6 lab mice and the history of biomedical research. Slate. 2011. Accessed November 17, 2011.

4. LaCroix-Fralish ML, Austin J-S, Zheng FY, Levitin DJ, Mogil JS. Patterns of pain: Meta-analysis of microarray studies of pain. Pain. 2011;152(8):1888-1898.
