Apr 18, 2014

Science Errors in The Scientist


Richard Gallagher, then-editor of The Scientist, stated in a 2008 editorial discussing an article on scientific illiteracy: “You might expect that newly minted science graduates - who presumably think of themselves as scientists, and who I'd thought of as scientists - would have a well-developed sense of what science is. So it's pretty shocking to discover that a large proportion of them don't have a clue.” (Gallagher 2008) The Williams article (Williams 2008) referred to by Gallagher reveals that, of the graduates in science that Williams surveyed:

  • 76% equated a fact with 'truth' and 'proven'.
  • 23% defined a theory as 'unproven ideas', with less than half (47%) recognizing a theory as a well-evidenced exposition of a natural phenomenon.
  • 34% defined a law as a rule not to be broken, and 41% defined it as an idea that science fully supports.
  • Definitions of 'hypothesis' were the most consistent, with 61% recognizing the predictive, testable nature of hypotheses.

Williams states that the students did not understand the differences among laws, theories, and facts, and further did not appreciate the difference between a scientific theory and a hypothesis. Some thought hypothesis and theory were the same thing. (Williams 2008)

Nice article and nice editorial. Too bad writers for the magazine did not read them.

In a news item titled “Tissue on Chips Galore,” The Scientist writer Edyta Zielinska reports that the National Institutes of Health (NIH) will spend $70 million funding projects to develop lab-on-a-chip technology, which will be used to aid the drug development process. Zielinska writes: “The hope is that by incorporating multiple human cell types, these systems will be more predictive than animal models, which fail to predict human toxicity in approximately 30 percent of drugs tested.”

This 30% figure is an oft-repeated myth among science writers and even scientists. It is based on the Olson study (Olson et al. 2000) and is so common that we addressed it in Animal Models in Light of Evolution and I discussed it in a previous blog. Olson et al. coined the term “true positive concordance rate,” which most people call sensitivity. A Google search in 2009 revealed 12 instances of the phrase being used, and all were references to the Olson study. Why invent new terms? The study was funded and conducted by the pharmaceutical industry, ostensibly to show that animal models were good predictors of human responses to their products. This must be set in the context of the fact that Daubert v Dow had been decided by the US Supreme Court in 1993 and had, among other things, set the legal standard that animal models were not predictive of human response to drugs, at least in terms of birth defects. A preliminary report from Olson et al. was published in 1998 (Olson et al. 1998) and indicated what the final report would say, so the study had been underway for a while. But before I explain more about Olson et al., more on Daubert v Dow.

Daubert sued Merrell Dow Pharmaceuticals Inc., a subsidiary of Dow Chemical Company (hence Daubert v Dow), for damages related to Merrell’s drug Bendectin (pyridoxine/doxylamine), which the plaintiffs claimed caused birth defects. In support of their claim, the plaintiffs introduced data from animal models revealing birth defects in the offspring of mothers administered Bendectin. The court ruled that this evidence was inadmissible on the basis that it was not scientifically established that such tests could predict human response. (There were also in vitro tests suggesting that Bendectin might cause birth defects, and they were likewise ruled inadmissible.) The Supreme Court ruling shook the scientific community for several reasons. (See here and here for details.)

Relative to this discussion, however, the pharmaceutical industry had been handed a very mixed verdict. On the one hand, Dow had been vindicated; on the other, Pharma in general could no longer submit animal-based data supporting claims of safety or efficacy (or would at least find it much more difficult to do so). As Pharma and related industries had relied on animal data in legal proceedings, this was a severe blow. This was the environment in which the Olson study was conducted by scientists from Pfizer, AstraZeneca, Pharmacia & Upjohn, Boehringer, Rhone-Poulenc, Abbott, Eli Lilly, Janssen, Monsanto-Searle, Sanofi-Synthe, and Bayer. It was in Pharma’s best interest to show that animal models were in fact predictive, Merrell Dow notwithstanding. And with a little creative reasoning, that is exactly what the study appeared to show, and how it is interpreted to this day, e.g., by The Scientist.

Olson et al. retrospectively studied 150 drugs and actually showed that the sensitivity of animal models was ~0.7, not that the positive predictive value (PPV) or negative predictive value (NPV) was 0.7. (For a brief review of sensitivity etc. see here or my last blog.) Moreover, even if the PPV and NPV had been 0.7, that would not mean that animal tests predict 70% of toxicities, only that results from such tests have a 70% chance of translating to some humans. To make it appear that the PPV was 70%, they invented the phrase “true positive concordance rate” and used it instead of sensitivity when referring to their results. Yet when the paper is mentioned in scientific articles or the lay press, it is usually quoted as saying exactly what The Scientist incorrectly conveyed. And it is widely quoted: a citation search we conducted in 2008 found 114 citations for the Olson study, and I see the study quoted about once per month just in my routine reading. Finally, even if the PPV were 0.7, such a low PPV would be inadequate to qualify the modality as predictive for medical science. There were other problems with the study, and we address some in Animal Models in Light of Evolution. Suffice it to say, the study was disingenuous at best. (Roche uses the 70% figure even now on its website.)
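To make the distinction concrete, here is a minimal sketch in Python of the standard 2x2 screening-test calculations. The numbers are invented for illustration, not taken from Olson et al. It shows how an animal test can have a sensitivity of 0.7 (Olson's “true positive concordance rate”) while its PPV, the figure that actually tells you how much to trust a positive animal result, is far lower:

    # A minimal sketch (invented numbers, NOT Olson's data) of the
    # standard 2x2 screening-test statistics. It shows that "70%
    # sensitivity" and "70% predictive" are very different claims.

    def screening_stats(tp, fp, fn, tn):
        """Return sensitivity, specificity, PPV, and NPV for a 2x2 table."""
        sensitivity = tp / (tp + fn)  # fraction of truly toxic drugs the test catches
        specificity = tn / (tn + fp)  # fraction of safe drugs the test clears
        ppv = tp / (tp + fp)          # chance a positive animal result is a true positive
        npv = tn / (tn + fn)          # chance a negative animal result is a true negative
        return sensitivity, specificity, ppv, npv

    # Suppose 100 of 1,000 drugs are truly toxic to humans. The animal
    # test flags 70 of them (sensitivity 0.7) but also falsely flags
    # 270 of the 900 safe drugs.
    sens, spec, ppv, npv = screening_stats(tp=70, fp=270, fn=30, tn=630)
    print(f"sensitivity = {sens:.2f}")  # 0.70 -- the 'true positive concordance rate'
    print(f"specificity = {spec:.2f}")  # 0.70
    print(f"PPV         = {ppv:.2f}")   # 0.21 -- what a positive result is actually worth
    print(f"NPV         = {npv:.2f}")   # 0.95

Sensitivity is computed down a column of the 2x2 table (all drugs that truly harm humans), while PPV is computed across a row (all drugs the animal test flags), so the two can diverge wildly depending on the false positive rate and on how common true toxicity is. Quoting the ~0.7 sensitivity as if it were a predictive value is precisely the error described above.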

Philosophy of science is not well respected by some vivisection activists or by scientists in general. For example, the Nobel laureate in physics Richard Feynman stated that philosophy of science “is about as useful to scientists as ornithology is to birds.” (Good Reads 2011) This is unfortunate because, without the philosophy of science, bean counting, astrology, and creationism could be passed off as science. (The rejoinder to Feynman, from philosophers of science, is that ornithology would be useful to birds if only they could understand it. Such is apropos here as well.) The above illustrates why scientists, and nonscientists, need an understanding of the fundamentals of statistics and critical thinking in addition to science and the philosophy of science. (For another excellent example of why this knowledge is needed, see this essay from Dario Ringach, in which he makes essentially all of the mistakes one would expect from someone uneducated in critical thinking and philosophy of science.) If scientists and science writers do not understand critical thinking and philosophy of science, they propagate errors throughout both the lay and scientific communities, and these errors can cost lives. The entire notion that animal models are predictive of human response to drugs and disease is an example of this.

Some scientists whine that no one uses animal models as predictive modalities. I have quoted the scientific and lay literature many times to refute this. The Zielinska article in The Scientist is yet another example of people assuming that the animal model is used, and can be relied on, as a predictive model; it is also another example of confused reasoning and scientific illiteracy.

References

Gallagher, Richard. 2008. Why the philosophy of science matters. The Scientist 22 (10):15.

Good Reads. 2011. Richard Feynman. Good Reads 2011 [cited August 7 2011]. Available from http://www.goodreads.com/author/quotes/1429989.Richard_P_Feynman.

Olson, H., G. Betton, D. Robinson, K. Thomas, A. Monro, G. Kolaja, P. Lilly, J. Sanders, G. Sipes, W. Bracken, M. Dorato, K. Van Deun, P. Smith, B. Berger, and A. Heller. 2000. Concordance of the toxicity of pharmaceuticals in humans and in animals. Regul Toxicol Pharmacol 32 (1):56-67.

Olson, H., G. Betton, J. Stritar, and D. Robinson. 1998. The predictivity of the toxicity of pharmaceuticals in humans from animal data--an interim assessment. Toxicol Lett 102-103:535-8.

Williams, James. 2008. What Makes Science 'Science'? The Scientist 22 (10):29.

 

