As I said in my previous blog, a Brazilian weekly is running a series on animal-based research. My interview was the first and an interview with Michael Conn was the second. Most of Conn’s interview focused on ethics, but some of it did address science. His rhetoric is typical: he attributes virtually all medical advances to research that used animals.
He did not offer proof, but, in fairness to him, this was an interview, not a book or article, and interviews are not usually a venue for long scientific essays. He does point out, as he has before, that some humans fail to predict the response of other humans, and that animals therefore will not necessarily always be able to predict human response either. (I am paraphrasing.) But he then goes on to say that animal-based research is nonetheless vital. All of this is pretty routine for Conn.
Conn bases his defense of the predictive value of animal testing on a 2000 study by Olson et al. (Olson et al. 2000). I know this because in the back and forth with the editor of the Brazilian series, he mentioned that Conn quoted the Olson study as refuting my points and because Conn has said exactly that in the past. (That is kind of a dead giveaway.)
In 2007, we published an article in Skeptic magazine titled “Animals and Medicine: Do Animal Experiments Predict Human Response?” (Shanks et al. 2007) We presented the usual arguments that readers of this blog or our recent books will recognize. (The article is available as a back issue from Skeptic or on Amazon.com.) Michael Conn and James Parker, both of Oregon Health & Science University, responded with a letter to the editor published in the next issue, some of which is quoted below:
The authors have simply overlooked the classic study (Olson, Harry, et al., 2000. “Concordance of the Toxicity of Pharmaceuticals in Humans and in Animals.” Regulatory Toxicology and Pharmacology 32, 56-67) that summarizes the results from 12 international pharmaceutical companies on the predictivity of animal tests in human toxicity. While the study is not perfect, the overall conclusion from 150 compounds and 221 human toxicity events was that animal testing has significant predictive power to detect most—but not all—areas of human toxicity.
I then responded with the following published in volume 14, issue 2:
As a coauthor of the article "Animals and Medicine: Do Animal Experiments Predict Human Response?" (Vol. 13 No. 3) I would like to address the concerns raised by Conn and Parker (Vol. 13 No. 4). Our article described the evidence, mainly from evolutionary biology, genetics, and clinical studies, that animal models fail to be predictive for human response. The studies we quoted were not anecdotal but rather legitimate scientific studies. We presented numerous studies, reporting sensitivity and positive predictive values when available, as well as numerous examples. The individual cases we cited were also drawn from the scientific literature and were used as examples of the principle that different species respond differently to the same stimuli, specifically drugs and disease. We also tied all of the above to theory, as one should do in science.
Conn and Parker cited the 2000 Olson et al. study (Regul Toxicol Pharmacol 2000 Aug;32(1):56-67). It was sponsored by the pharmaceutical industry and concluded that animal models had a high concordance with humans. In other words, Olson et al. measured sensitivity but not specificity, positive predictive value, or negative predictive value. All four are needed to make assertions about prediction. Olson et al. even concluded: “This study did not attempt to assess the predictability of preclinical experimental data to humans.” That statement, combined with the absence of the three other values, calls into question Conn and Parker’s position that the study proves animals have “significant predictive power.”
We then addressed the Olson study in more detail in Animal Models in Light of Evolution where we included the entire article in an appendix. We concluded, in part:
2. The study says at the outset that it is aimed at measuring the predictive reliability of animal models. Later the authors concede that their methods are not, as a matter of fact, up to this task. This makes us wonder how many of those who cite the study have actually read it in its entirety.
3. The authors of the study invented new statistical terminology to describe the results. The crucial term here, unqualified at the beginning of the article, is “true positive concordance rate” which sounds similar to “true positive predictive value” (which is what should have been measured, but was not). A Google search on “true positive concordance rate” yielded twelve results (counting repeats), all of which referred to the Olson Study (see Figure 13.3). At least seven of the twelve Google hits qualified the term “true positive concordance rate” with the term “sensitivity”—a well-known statistical concept. In effect, these two terms are synonyms. Presumably, the authors of the study must have known that “sensitivity” does not measure “true positive predictive value,” for later in the middle of the article they qualify the term “true positive concordance” with the term “sensitivity.” In addition to “sensitivity” you would need information on “specificity” and so on, to nail down the crucial concept of “true positive predictive value,” and this the authors did not do! If all the Olson Study measured was sensitivity, its conclusions are largely irrelevant to the great prediction debate. Given the weight placed on the Olson study by friends of predictive modeling, we are left wondering how many of those citing the study got beyond the first page, and of those who did, how many understood elementary statistics.
5. Any animal giving the same response as a human was counted as a positive result. So if six species were tested and one of the six mimicked humans, that was counted as a positive. The Olson Study was concerned primarily not with prediction, but with retrospective simulation of antecedently known human results.
6. Only drugs in clinical trials were studied. Many drugs tested do not actually get that far because they fail in animal studies . . .
7. Even if all the data is good—and it may well be—a sensitivity (i.e., true positive concordance rate) of 70% does not settle the prediction question. Sensitivity is not synonymous with prediction, and even if a true positive predictive value of 70% is assumed, 70% is inadequate when predicting human response. In carcinogenicity studies, the sensitivity using rodents may well be 100%; the specificity, however, is another story. That is the reason rodents cannot be said to predict human outcomes in that particular biomedical context.
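To make the statistical point concrete, here is a small worked example. The numbers are invented purely for illustration (they are not from Olson et al. or any real dataset); they show how a test can have 70% sensitivity while its positive predictive value, the figure that actually matters for prediction, is far lower:

```python
# Hypothetical 2x2 outcome table for an animal test vs. human toxicity.
# All counts are invented for illustration; they are not real data.
tp, fn = 70, 30    # compounds toxic in humans: 70 flagged by the test, 30 missed
fp, tn = 200, 100  # compounds safe in humans: 200 falsely flagged, 100 cleared

sensitivity = tp / (tp + fn)  # 0.70 -- all that a "concordance rate" captures
specificity = tn / (tn + fp)  # ~0.33
ppv = tp / (tp + fp)          # ~0.26 -- chance a positive animal result
                              # actually signals human toxicity
npv = tn / (tn + fn)          # ~0.77

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
      f"ppv={ppv:.2f}, npv={npv:.2f}")
```

With these made-up counts the test "catches" 70% of human toxins (sensitivity 0.70), yet a positive animal result is correct only about a quarter of the time (PPV ≈ 0.26), because specificity is poor. That is why reporting sensitivity alone, as a "true positive concordance rate" does, cannot establish predictive power.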
The fact that the animal model community is still quoting Olson and that Conn is still referencing the study even after our exchange and after the analysis in our book, speaks to the intent of both the individual and the community.
Perhaps Dr Conn would like to debate the Olson study in Portland in front of his colleagues and with his university’s security in force. Actually, I know he does not. I asked for such a debate on CNN.com last year and was refused. When one side asks to take a debate to the scientific literature or to debate specifics in public and the other side refuses, that should settle the issue for rational, unbiased people.
In closing, please allow me to say that while the skeptic community as a whole has either ignored or misrepresented our position on animal-based research (see Response to criticisms from Orac and the video and/or transcript of my debate with Andrew Skolnick, a noted skeptic, science and medical journalist, and the Executive Director of the Commission for Scientific Medicine and Mental Health in Amherst, NY), Michael Shermer allowed us to publish in Skeptic and allowed the follow-up discussion to take place. He is the real deal in the skeptic community!
Olson, H., G. Betton, D. Robinson, K. Thomas, A. Monro, G. Kolaja, P. Lilly, J. Sanders, G. Sipes, W. Bracken, M. Dorato, K. Van Deun, P. Smith, B. Berger, and A. Heller. 2000. Concordance of the toxicity of pharmaceuticals in humans and in animals. Regul Toxicol Pharmacol 32 (1):56-67.
Shanks, Niall, Ray Greek, Nathan Nobis, and Jean Greek. 2007. Animals and Medicine: Do Animal Experiments Predict Human Response? Skeptic 13 (3):44-51.