Dr Ringach, in response to my blog “More Misrepresentations From Dr Ringach,” stated:
If you say you know what the probabilities are you should be able to state them.
That is true as far as it goes, but there is a difference between knowing a probability to six decimal places and knowing that a probability is high or low. Regardless, it appears Dr Ringach wants a lesson in simple statistics, so I will provide one here. I will expand on one study that we referenced in Animal Models in Light of Evolution (p. 224), demonstrating the calculations, then quote from others.
Litchfield (1) studied rats, dogs, and humans in order to evaluate responses to six drugs. Only side effects that could be studied in animals were counted (a limitation that is not insignificant when one is considering the reliability of animal models for predicting human responses). The results are below.
Man
Toxic effects found in man: 53
Toxic effects found in man only: 23

Rat
Toxic effects also found in man: 18
Toxic effects not found in man: 19
False positives: 19; false negatives: 35
Sn = 18/(18+35) = 0.34
PPV = 18/(18+19) = 0.49

Dog
Toxic effects also found in man: 29
Toxic effects not found in man: 24
False positives: 24; false negatives: 24
Sn = 29/(29+24) = 0.55
PPV = 29/(29+24) = 0.55
As the above simple calculations reveal, in this study dogs gave a positive predictive value (PPV) of 0.55 and rats a similar PPV of 0.49. I can go through the equations step by step if Dr Ringach desires. (Actually, I cannot, as the above is everything that is required. There is nothing else to show!) When both species exhibited the same toxicity, the sensitivity rose to 0.7. None of these numbers is acceptable for qualifying as a predictive modality in medical science.
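For anyone who prefers to see the arithmetic run rather than read it, the calculations above can be reproduced in a few lines of Python, using the counts taken directly from the Litchfield data quoted above:

```python
# Sensitivity (Sn) and positive predictive value (PPV) from the
# Litchfield data: 53 toxic effects were found in man in total.
def sn_ppv(true_pos, false_pos, false_neg):
    """Return (sensitivity, positive predictive value)."""
    sn = true_pos / (true_pos + false_neg)
    ppv = true_pos / (true_pos + false_pos)
    return sn, ppv

# Rat: 18 toxic effects also found in man (true positives),
# 19 not found in man (false positives),
# 53 - 18 = 35 human toxic effects missed (false negatives).
rat_sn, rat_ppv = sn_ppv(true_pos=18, false_pos=19, false_neg=35)

# Dog: 29 true positives, 24 false positives,
# 53 - 29 = 24 false negatives.
dog_sn, dog_ppv = sn_ppv(true_pos=29, false_pos=24, false_neg=24)

print(f"Rat: Sn = {rat_sn:.2f}, PPV = {rat_ppv:.2f}")  # Rat: Sn = 0.34, PPV = 0.49
print(f"Dog: Sn = {dog_sn:.2f}, PPV = {dog_ppv:.2f}")  # Dog: Sn = 0.55, PPV = 0.55
```

Specificity and negative predictive value cannot be computed from this particular data set, since the number of true negatives (side effects found in neither species nor man) is not reported.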
More references that allow calculations like the above are given in the book, and, now that I have demonstrated how to perform the calculations, I am sure Dr Ringach will have no trouble looking up the other references we cite and doing the math. All the studies show the same low numbers for sensitivity, specificity, PPV, and NPV, where the numbers needed for the calculations are available. (Some of the studies allow calculation of only some of the statistics.)
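For reference, all four statistics named above come from the standard 2×2 contingency table. A minimal sketch, using hypothetical counts purely to illustrate the definitions (the true-negative count here is invented; it is not from any study cited above):

```python
# Definitions of the four standard predictive statistics from a 2x2 table:
#   tp = condition present, test positive (true positives)
#   fp = condition absent, test positive (false positives)
#   fn = condition present, test negative (false negatives)
#   tn = condition absent, test negative (true negatives)
def predictive_stats(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # how many real cases the test catches
        "specificity": tn / (tn + fp),  # how many non-cases it correctly clears
        "PPV": tp / (tp + fp),          # how often a positive result is right
        "NPV": tn / (tn + fn),          # how often a negative result is right
    }

# Hypothetical counts for illustration only:
stats = predictive_stats(tp=18, fp=19, fn=35, tn=28)
```

A modality can score well on one statistic and poorly on another, which is why the studies that report only some of these numbers still allow only some of the calculations.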
But the above is just toxicity; what about other animal tests? The situation with bioavailability is similar to that with toxicity. Mahmood reported in 2000:
Fifteen drugs were tested and the results of this study indicate that all five approaches predict absolute bioavailability with different degrees of accuracy, and are therefore unreliable for the accurate prediction of absolute bioavailability in humans from animal data. In conclusion, although the above mentioned approaches do not accurately predict absolute bioavailability, a rough estimate of absolute bioavailability is possible using these approaches. (2) (Emphasis added.)
Unreliable for the accurate prediction of absolute bioavailability means a low probability.
Now, for some more of those out-of-context quotes.
Ralph Heywood, former director of Huntington Research Center (UK) said:
. . . the best guess for the correlation of adverse reactions in man and animal toxicity data is somewhere between 5 and 25%. [(3) pp. 57-67]
Five to twenty-five percent correlation does not qualify as predictive but it does qualify as a low probability.
These examples should serve to illustrate that interspecies differences in the absorption of organic compounds do indeed exist and that in specific instances the degree of difference can be appreciable. Of the 38 organic compounds reviewed here, more than one-third of them (e.g., atenolol, indomethacin, ibuprofen, iprindole, linoleamide, lorazepam, nadolol, naproxen, thalidomide, 6-azauridine, phenylbutazone, bumadizone, trazodone, thiopental sodium, pentobarbital sodium, raubasinine) appeared to be differently absorbed by the animal model as compared to the human subjects. To what extent organic environmental contaminants are differentially absorbed by animal models and humans and how such differences affect toxicity and carcinogenicity outcomes has not yet even begun to be aggressively researched. The experience in the drug field, though, does illustrate that species may differ substantially in their capacities to absorb organic compounds, and it is only logical to predict that such interspecies differences will be operational in the field of environmental toxicology of organics as well. [(4) p9, 50]
In other words, there is a low probability that absorption of organic compounds will extrapolate from animal models to humans.
In the April 1, 2010 issue of The Scientist:
Most of the time, in order to test a cancer therapy, researchers simply transplant human cancer tissue into a mouse. But those experiments rarely predict how a human will respond to the same treatment . . . Mouse models that use transplants of human cancer have not had a great track record of predicting human responses to treatment in the clinic. It’s been estimated that cancer drugs that enter clinical testing have a 95 percent rate of failing to make it to market, in comparison to the 89 percent failure rate for all therapies . . . Indeed, “we had loads of models that were not predictive, that were [in fact] seriously misleading,” says NCI’s Marks, also head of the Mouse Models of Human Cancers Consortium, a network of cancer researchers (including Pandolfi) who set goals, meet regularly, and collaborate in their search for a better mouse model. In 2001, researchers at the NCI looked at the xenograft data from 39 cancer drugs that had already successfully completed Phase II trials in humans. Only one of the xenograft models showed a similar response to the cancer drug as the patients who received it (Br J Cancer, 84:1424–31, 2001) . . . Scientists and companies compound the problem by putting too much stock in the results from these troubled models. (5) (Emphasis added.)
I think that qualifies as a low probability, as do the following.
A study of 23 chemicals revealed that only 4 were metabolized the same way in humans and rats (6). Four out of twenty-three equals a low probability that metabolism studies in rats will match humans. In the book we explain why this is true and why it also holds true for other animals. That’s what scientists do: they reason from the specific to the general based on scientific principles and empirical evidence.
Johnson et al. 2001 found that out of 39 anticancer drugs tested on xenograft mice, only one mimicked the response in humans (7).
One out of thirty-nine! I do not need to calculate the simple math as I did above to understand that one out of thirty-nine and the other examples listed here equal a low probability. If Dr Ringach needs this kind of reassurance he can look up the original papers and do the calculations himself. I don’t need to analyze every patent application for a perpetual motion machine either. But then I believe that laws of physics actually exist. (In my saner moments I really cannot believe I am willfully entering into a discussion with someone who does not believe in the laws of physics.)
Drugs known to damage the human fetus are found to be safe in 70% of cases when tried on primates. [(8) p312-13]
But what about diseases as opposed to drug testing?
Every vaccine against HIV that has been effective in monkeys has been ineffective in humans.
Every neuroprotection drug that was effective in animal models has proven ineffective in humans.
I would classify both of the above as low probability. Even if an HIV vaccine is developed tomorrow, and developed entirely on the basis of monkey research, the probability that monkeys predict human response will still be very low. If memory serves me correctly, over fifty vaccines against HIV have worked in monkeys so far. Therefore, even if monkeys lead to an HIV vaccine tomorrow, the calculation will come out to somewhere around one out of fifty. This is the point that Dr Ringach seems to be having trouble with. Occasionally getting the right answer does not mean the modality is predictive or has a high probability of giving society the right answer. If animal-based research were all society had available, maybe one out of fifty would be acceptable, but human-based research exists; it is merely underfunded and, as the name implies, human-based.
Ennever et al.:
. . . of the 20 probable human non-carcinogens with conclusive animal bioassay results, only one, methotrexate, is negative, and the other 19 are positive . . . Thus, the standard interpretation of animal bioassay results provides essentially no differentiation between definite human carcinogens and probable human non-carcinogens. (9)
Nineteen out of twenty wrong means there is a low probability that animal tests predict human carcinogens.
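The Ennever figure can also be expressed as a specificity, since the 19 positives among known non-carcinogens are, by definition, false positives:

```python
# Ennever et al.: of 20 probable human non-carcinogens with conclusive
# animal bioassay results, 19 tested positive in animals (false positives)
# and only 1 (methotrexate) tested negative (a true negative).
true_neg, false_pos = 1, 19
specificity = true_neg / (true_neg + false_pos)
print(specificity)  # 0.05
```

A specificity of 0.05 means that when a chemical is in fact not a human carcinogen, the animal bioassay correctly says so about one time in twenty.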
Nature Biotechnology 2010:
The low predictive value of mouse cancer models for human disease is a major challenge for cancer research. Whereas human tumors develop from individual cells in the context of normal tissue, cancer research mostly relies on models employing xenografts or carrying oncogenic mutations throughout the whole animal or tissue. (10) (Emphasis added.)
Lindl et al. 2005:
According to the German Animal Welfare Act, scientists in Germany must provide an ethical and scientific justification for their application to the licensing authority prior to undertaking an animal experiment. Such justifications commonly include lack of knowledge on the development of human diseases or the need for better or new therapies for humans. The present literature research is based on applications to perform animal experiments from biomedical study groups of three universities in Bavaria (Germany) between 1991 and 1993. These applications were classified as successful in the animal model in the respective publications. We investigated the frequency of citations, the course of citations, and in which type of research the primary publications were cited: subsequent animal-based studies, in vitro studies, review articles or clinical studies. The criterion we applied was whether the scientists succeeded in reaching the goal they postulated in their applications, i.e. to contribute to new therapies or to gain results with direct clinical impact. The outcome was unambiguous: even though 97 clinically orientated publications containing citations of the above-mentioned publications were found (8% of all citations), only 4 publications evidenced a direct correlation between the results from animal experiments and observations in humans (0.3%). However, even in these 4 cases the hypotheses that had been verified successfully in the animal experiment failed in every respect. The implications of our findings may lead to demands concerning improvement of the licensing practice in Germany. (11) (Emphasis added.)
And let's not forget the previously mentioned Contopoulos-Ioannidis et al. study.
The article by Contopoulos-Ioannidis et al. (1) in this issue of the journal addresses a much-discussed but rarely quantified issue: the frequency with which basic research findings translate into clinical utility. The authors performed an algorithmic computer search of all articles published in six leading basic science journals (Nature, Cell, Science, the Journal of Biological Chemistry, the Journal of Clinical Investigation, the Journal of Experimental Medicine) from 1979 to 1983. Of the 25,000 articles searched, about 500 (2%) contained some potential claim to future applicability in humans, about 100 (0.4%) resulted in a clinical trial, and, according to the authors, only 1 (0.004%) led to the development of a clinically useful class of drugs (angiotensin-converting enzyme inhibitors) in the 30 years following publication of the basic science finding. They also found that the presence of industrial support increased the likelihood of translating a basic finding into a clinical trial by eightfold.
Still, regardless of the study's limitations, and even if the authors were to underestimate the frequency of successful translation into clinical use by 10-fold, their findings strongly suggest that, as most observers suspected, the transfer rate of basic research into clinical use is very low. (12) (Emphasis added.)
All of the above examples can be easily multiplied. But to what end? Where does it stop? Even if all of the above studies (that cite or demonstrate the results from experiments) were considered collectively, some would maintain that such results, without a supporting theory to put them into context, would be an inadequate basis for our claims. Our claims being:
1. that the probability of animal models as used in basic research leading to treatments is low; and
2. that animal models cannot predict human response to drugs and disease.
However, even if one does not value empirical evidence, the studies mentioned above and in Animal Models in Light of Evolution, when combined with a working knowledge of evolution, evo devo, complex systems, and genetics in general, do provide both data and context and hence proof that our claims are correct.
Dr Ringach stated:
Your argument that “this is what the entire book is about” is pure nonsense. A deception. A probability is just a number. Where is it? Hiding in 500 pages of half-truths and out-of-context citations?
Actually, probability is a word that represents a concept that can be expressed as a number. Of course, the reader must keep in mind that Dr Ringach has also criticized my coauthor Niall Shanks, who wrote the critically acclaimed Intelligent Design critique God, the Devil, and Darwin, and who has written books about, and is considered an authority on, quantum mechanics, logic, probability, and science, as being the coauthor of a book that:
. . . added one cup of badly interpreted evolutionary biology and a pinch of irrelevant mathematical formulas to confuse the reader.
When students want to (or, more likely, are forced to) learn organic chemistry, they study a book, because the principles of organic chemistry sadly cannot be encapsulated in a paragraph or a formula. Organic chemistry is not the only discipline for which this is true. As I said in my first blog, books are important.
I close with this from Upton Sinclair:
It is difficult to get a man to understand something, when his salary depends upon his not understanding it!
1. J. T. Litchfield, Jr., Clin Pharmacol Ther 3, 665 (Sep-Oct, 1962).
2. I. Mahmood, Drug Metabol Drug Interact 16, 143 (2000).
3. R. Heywood, in Animal Toxicity Studies: Their Relevance for Man, CE Lumley, S. Walker, Eds. (Quay, Lancaster, 1990), pp. 57-67.
4. E. J. Calabrese, Principles of Animal Extrapolation. (CRC Press, 1991).
5. E. Zielinska, The Scientist 24, 34 (April 1, 2010).
6. R. L. Smith, J. Caldwell, in Drug metabolism - from microbe to man, D. V. Parke, R. L. Smith, Eds. (Taylor & Francis, London, 1977).
7. J. I. Johnson et al., Br J Cancer 84, 1424 (May 18, 2001).
8. J. M. Manson, in Developmental Toxicology: Mechanisms and Risks. Banbury Report 26, J. A. McLachlan, R. M. Pratt, C. L. Markert, Eds. (Cold Spring Harbor Laboratory, 1987), pp. 307-322.
9. F. K. Ennever, T. J. Noonan, H. S. Rosenkranz, Mutagenesis 2, 73 (Mar, 1987).
10. M.E., Nature Biotechnology 28, vii (2010).
11. T. Lindl, M. Voelkel, R. Kolar, ALTEX 22, 143 (2005).
12. W. F. Crowley, Jr., Am J Med 114, 503 (Apr 15, 2003).