Durrant et al. recently discovered what appear to be gene-based differences among mouse strains in their ability to survive infection with Aspergillus fumigatus.(1) Numerous other studies have found differences in how strains of the same species respond to perturbations of the system. Studies have also confirmed the clinical impression that monozygotic twins react differently to certain perturbations.(2-5)
Concepts and studies such as these, among others, have led to the current emphasis on personalized medicine. It is therefore odd that studies such as Durrant et al. are cited as supporting the notion that what medical research needs is more genetically modified mouse models. Callaway, writing in Nature: “The new mouse strains have some very visible differences from one another — from variations in fur colour to tail length — and are already yielding clues to genes that help fend off fungal infection, which might not have been easily uncovered with standard lab strains. . . . Steve Brown, director of the Mammalian Genetics Unit at MRC Harwell, UK, says the Collaborative Cross will mesh well with another project — the International Knockout Mouse Consortium — to create thousands of knockout strains collectively lacking nearly every mouse gene (see Nature 474, 262–263; 2011). For instance, a gene knockout that affects a mouse’s sensitivity to diabetes could be linked to other traits of the syndrome, such as altered glucose metabolism, through the Collaborative Cross.”(6)
The notion that scientists could learn what a gene does in humans by finding out what it does in a mouse is based on the outdated, reductionist, linear concept of one gene–one protein. Jura et al., 2006:
Linear models based on proportionality between variables have been commonly applied in biology and medicine but in many cases they do not describe correctly the complex relationships of living organisms and now are being replaced by nonlinear theories of deterministic chaos. Recent advances in molecular biology and genome sequencing may lead to a simplistic view that all life processes in a cell, or in the whole organism, are strictly and in a linear fashion controlled by genes. In reality, the existing phenotype arises from a complex interaction of the genome and various environmental factors. Regulation of gene expression in the animal organism occurs at the level of epigenetic DNA modification, RNA transcription, mRNA translation, and many additional alterations of nascent proteins. The process of transcription is highly complicated and includes hundreds of transcription factors, enhancers and silencers, as well as various species of low molecular mass RNAs. In addition, alternative splicing or mRNA editing can generate a family of polypeptides from a single gene. Rearrangement of coding DNA sequences during somatic recombination is the source of great variability in the structure of immunoglobulins and some other proteins. The process of rearrangement of immunoglobulin genes, or such phenomena as parental imprinting of some genes, appear to occur in a random fashion. Therefore, it seems that the mechanism of genetic information flow from DNA to mature proteins does not fit the category of linear relationship based on simple reductionism or hard determinism but would be probably better described by nonlinear models, such as deterministic chaos.(7)
The notion that scientists could learn what a gene does in humans by finding out what it does in a mouse also ignores the fact that animals are complex systems that undergo alternative splicing, carry modifier genes, and are subject to environmental influences. This brings us to the controlling-variable argument.
For decades, society was told that animals had to be used in medical research in order to control variables. In the physical sciences of chemistry and physics this approach has worked well, but the physical sciences do not, as a rule, study living systems, or at least not at the level of organization where complexity comes into play. Moreover, as knowledge of complexity science and living systems has increased, it has become apparent that not all the variables are in fact being controlled. In addition, because complex systems have levels of organization and emergent properties, controlling all of the relevant variables may not even be possible.
Then there is the fact that even if animals were simple systems, scientists have not proven adept at controlling even the variables that could be controlled. Dirnagl and Lauritzen 2011:
A low reproducibility of experimental cerebrovascular research and problems in the translation of findings from animal experiments to successful treatment strategies in humans have precipitated investigations into the quality of preclinical research. Overall, deficits in the design, conduct, and reporting of preclinical research were found to be prevalent (Dirnagl, 2006; Dirnagl and Macleod, 2009; Fisher et al, 2009; Minnerup et al, 2010; Philip et al, 2009; Sena et al, 2007). In this issue, Vesterinen et al (2011) present a ‘Systematic survey of the design, statistical analysis, and reporting of studies published in the 2008 volume of the Journal of Cerebral Blood Flow and Metabolism’. Not surprisingly, this systematic analysis reveals indicators for deficiencies in original articles published in the Journal of Cerebral Blood Flow and Metabolism. Although this is the first such analysis in a neuroscience journal, it is quite clear that the deficits exposed are not specific to the Journal of Cerebral Blood Flow and Metabolism, one of the leading journals in the cerebrovascular field. In fact, another systematic analysis (Kilkenny et al, 2009) focusing on published biomedical research using laboratory animals in general reported very similar results.(8) (Emphasis added.)
Dirnagl had stated earlier, in 2006:
Over the past decades, great progress has been made in clinical as well as experimental stroke research. Disappointingly, however, hundreds of clinical trials testing neuroprotective agents have failed despite efficacy in experimental models. Recently, several systematic reviews have exposed a number of important deficits in the quality of preclinical stroke research. Many of the issues raised in these reviews are not specific to experimental stroke research, but apply to studies of animal models of disease in general. It is the aim of this article to review some quality-related sources of bias with a particular focus on experimental stroke research. Weaknesses discussed include, among others, low statistical power and hence reproducibility, defects in statistical analysis, lack of blinding and randomization, lack of quality-control mechanisms, deficiencies in reporting, and negative publication bias. Although quantitative evidence for quality problems at present is restricted to preclinical stroke research, to spur discussion and in the hope that they will be exposed to meta-analysis in the near future, I have also included some quality-related sources of bias, which have not been systematically studied. Importantly, these may be also relevant to mechanism-driven basic stroke research. I propose that by a number of rather simple measures reproducibility of experimental results, as well as the step from bench to bedside in stroke research may be made more successful. However, the ultimate proof for this has to await successful phase III stroke trials, which were built on basic research conforming to the criteria as put forward in this article.(9) (Emphasis added.)
Hossmann 2009: “The high incidence and the devastating consequences of stroke call for efficient therapies but despite extensive experimental evidence of neuroprotective improvements, most clinical treatments have failed. The poor translational success is attributed to the inappropriate selection of clinically irrelevant animal models, the inappropriate focus on clinically irrelevant injury pathways and the inappropriate estimation of the length of therapeutic windows.”(10)
Dragunow 2008: "Neurodegenerative disorders are caused by the death and dysfunction of brain cells, but despite a huge worldwide effort, no neuroprotective treatments that slow cell death currently exist. The failure of translation from animal models to humans in the clinic is due to many factors including species differences, human brain complexity, age, patient variability and disease-specific phenotypes. Additional methods are therefore required to overcome these obstacles in neuroprotective drug development. Incorporating target validation using human brain-tissue microarray screening and direct human brain-cell testing at an early preclinical stage to isolate molecules that protect the human brain may be an effective strategy."(11)
For those who doubt that industry questions the results from animal models, consider the following. O’Collins et al., 2006 published a review revealing that of 1,026 putative neuroprotectants studied, the drugs that went on to clinical trials were no more efficacious in animal studies than the ones that were passed over.(12) See also (13, 14).
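The logic of that comparison can be sketched in a few lines. The efficacy values below are invented placeholders, purely to illustrate the kind of check the review performed; they are not figures from O’Collins et al.:

```python
# Illustrative sketch only: the kind of comparison behind the O'Collins
# et al. finding. The efficacy values are invented placeholder data.
from statistics import mean

# Reported efficacy in animal studies (e.g. % reduction in infarct volume)
taken_to_trial = [28.0, 31.6, 24.2, 30.2]   # hypothetical values
passed_over    = [29.3, 27.8, 32.0, 25.7]   # hypothetical values

print(f"mean efficacy, trialled:    {mean(taken_to_trial):.1f}%")
print(f"mean efficacy, passed over: {mean(passed_over):.1f}%")
# If selection for clinical trials tracked preclinical efficacy, the first
# mean should be clearly higher; O'Collins et al. found no such separation.
```

If animal efficacy were driving drug selection, the two distributions should differ sharply; their overlap is the point of the citation.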
There are other problems. Sena et al., 2010:
Using the CAMARADES (Collaborative Approach to Meta-analysis and Review of Animal Data in Experimental Studies) database we identified 16 systematic reviews of interventions tested in animal studies of acute ischaemic stroke involving 525 unique publications. Only ten publications (2%) reported no significant effects on infarct volume and only six (1.2%) did not report at least one significant finding. Egger regression and trim-and-fill analysis suggested that publication bias was highly prevalent (present in the literature for 16 and ten interventions, respectively) in animal studies modelling stroke. Trim-and-fill analysis suggested that publication bias might account for around one-third of the efficacy reported in systematic reviews, with reported efficacy falling from 31.3% to 23.8% after adjustment for publication bias. We estimate that a further 214 experiments (in addition to the 1,359 identified through rigorous systematic review; non publication rate 14%) have been conducted but not reported. It is probable that publication bias has an important impact in other animal disease models, and more broadly in the life sciences . . . Publication bias is known to be a major problem in the reporting of clinical trials, but its impact in basic research has not previously been quantified. Here we show that publication bias is prevalent in reports of laboratory-based research in animal models of stroke, such that data from as many as one in seven experiments remain unpublished. The result of this bias is that systematic reviews of the published results of interventions in animal models of stroke overstate their efficacy by around one third. Nonpublication of data raises ethical concerns, first because the animals used have not contributed to the sum of human knowledge, and second because participants in clinical trials may be put at unnecessary risk if efficacy in animals has been overstated. 
It is unlikely that this publication bias in the basic sciences is restricted to the area we have studied, the preclinical modelling of the efficacy of candidate drugs for stroke. A related article in PLoS Medicine (van der Worp et al., doi:10.1371/journal.pmed.1000245) discusses the controversies and possibilities of translating the results of animal experiments into human clinical trials. (15)
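The arithmetic behind the figures Sena et al. report can be checked directly. A minimal sketch, using only the numbers quoted above (the variable names are my own):

```python
# Checking the arithmetic behind the Sena et al. figures quoted above.
# Variable names are illustrative; the numbers come from the quotation.

# Non-publication rate: an estimated 214 unpublished experiments in
# addition to the 1,359 identified through systematic review.
identified, unpublished = 1359, 214
nonpublication_rate = unpublished / (identified + unpublished)
print(f"non-publication rate: {nonpublication_rate:.0%}")  # ~14%, i.e. about one in seven

# Overstatement of efficacy: reported efficacy fell from 31.3% to 23.8%
# after trim-and-fill adjustment for publication bias.
reported, adjusted = 31.3, 23.8
overstatement = (reported - adjusted) / adjusted
print(f"relative overstatement: {overstatement:.0%}")  # ~32%, "around one third"
```

This is how a 14% non-publication rate translates into the "one in seven experiments" and "overstate their efficacy by around one third" claims in the quotation.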
“We must control the variables” will go down in history as yet another fallacy used by those with a vested interest in animal modelling.
1. Durrant C, Tayem H, Yalcin B, Cleak J, Goodstadt L, Pardo-Manuel de Villena F, et al. Collaborative Cross mice and their power to map host susceptibility to Aspergillus fumigatus infection. Genome Res. 2011 Aug;21(8):1239-48.
2. Bruder CE, Piotrowski A, Gijsbers AA, Andersson R, Erickson S, de Stahl TD, et al. Phenotypically concordant and discordant monozygotic twins display different DNA copy-number-variation profiles. Am J Hum Genet. 2008 Mar;82(3):763-71.
3. Fraga MF, Ballestar E, Paz MF, Ropero S, Setien F, Ballestar ML, et al. Epigenetic differences arise during the lifetime of monozygotic twins. Proc Natl Acad Sci U S A. 2005 Jul 26;102(30):10604-9.
4. Javierre BM, Fernandez AF, Richter J, Al-Shahrour F, Martin-Subero JI, Rodriguez-Ubreva J, et al. Changes in the pattern of DNA methylation associate with twin discordance in systemic lupus erythematosus. Genome Res. 2010 Feb;20(2):170-9.
5. von Herrath M, Nepom GT. Remodeling rodent models to mimic human type 1 diabetes. Eur J Immunol. 2009 Aug 11;39(8):2049-54.
6. Callaway E. How to build a better mouse. Nature. 2011;475(7356):279.
7. Jura J, Wegrzyn P, Koj A. Regulatory mechanisms of gene expression: complexity with elements of deterministic chaos. Acta Biochim Pol. 2006;53(1):1-10.
8. Dirnagl U, Lauritzen M. Improving the quality of biomedical research: guidelines for reporting experiments involving animals. J Cereb Blood Flow Metab. 2011;31(4):989-90.
9. Dirnagl U. Bench to bedside: the quest for quality in experimental stroke research. J Cereb Blood Flow Metab. 2006 Dec;26(12):1465-78.
10. Hossmann KA. Pathophysiological basis of translational stroke research. Folia Neuropathol. 2009;47(3):213-27.
11. Dragunow M. The adult human brain in preclinical drug development. Nat Rev Drug Discov. 2008;7(8):659-66.
12. O'Collins VE, Macleod MR, Donnan GA, Horky LL, van der Worp BH, Howells DW. 1,026 experimental treatments in acute stroke. Ann Neurol. 2006 Mar;59(3):467-77.
13. Macleod MR, Ebrahim S, Roberts I. Surveying the literature from animal experiments: systematic review and meta-analysis are important contributions. BMJ. 2005 Jul 9;331(7508):110.
14. Macleod MR, O'Collins T, Howells DW, Donnan GA. Pooling of animal experimental data reveals influence of study design and publication bias. Stroke. 2004 May;35(5):1203-8.
15. Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010;8(3):e1000344.