0.004 Percent

by Dr Ray Greek

Robin Lovell-Badge, head of the Division of Stem Cell Biology and Developmental Genetics at the Medical Research Council National Institute for Medical Research in London, wrote an essay titled “How to distort 0.004% of the statistics,” published by Speaking of Research on May 22. Lovell-Badge begins by explaining that his essay will address the claim that “only 0.004 per cent of all animal experimentation is of any direct benefit to human health.” The 0.004% figure comes from Crowley [1], and since I frequently quote it, I decided to reply to Lovell-Badge. The exact wording, however, does not come from Crowley. More about that later.

Briefly, Contopoulos-Ioannidis et al examined six top science journals in order “[t]o evaluate the predictors of and time taken for the translation of highly promising basic research into clinical experimentation and use.” They published their results in “Translation of highly promising basic science research into clinical applications” in The American Journal of Medicine. [2] Of the 101 basic research papers with highly promising claims published in those journals between 1979 and 1983, 27 led to randomized clinical trials and 5 eventually gave rise to a licensed clinical application. A translation rate of roughly 27% (27 out of 101) is not bad.

But there were some criticisms of the article. Crowley commented:

The article by Contopoulos-Ioannidis et al. (1) in this issue of the journal addresses a much-discussed but rarely quantified issue: the frequency with which basic research findings translate into clinical utility. The authors performed an algorithmic computer search of all articles published in six leading basic science journals (Nature, Cell, Science, the Journal of Biological Chemistry, the Journal of Clinical Investigation, the Journal of Experimental Medicine) from 1979 to 1983. Of the 25,000 articles searched, about 500 (2%) contained some potential claim to future applicability in humans, about 100 (0.4%) resulted in a clinical trial, and, according to the authors, only 1 (0.004%) led to the development of a clinically useful class of drugs (angiotensin-converting enzyme inhibitors) in the 30 [I think that should be 20] years following their publication of the basic science finding. They also found that the presence of industrial support increased the likelihood of translating a basic finding into a clinical trial by eightfold. Still, regardless of the study's limitations, and even if the authors were to underestimate the frequency of successful translation into clinical use by 10-fold, their findings strongly suggest that, as most observers suspected, the transfer rate of basic research into clinical use is very low. [1]

Most people would indeed find a huge difference between the figures of 27% and 0.004%. Lovell-Badge then attempts to explain away the difference.
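For readers who want to check the arithmetic, here is a back-of-envelope reconstruction of the two figures from the counts quoted above (the snippet and variable names are mine; the exact counts are in the original papers). The gap exists because both the numerators and the denominators differ: 27 trials out of 101 promising papers, versus 1 new drug class out of roughly 25,000 papers.

```python
# Back-of-envelope reconstruction of the two figures (my own arithmetic,
# using the counts quoted above from Contopoulos-Ioannidis et al and Crowley).
total_papers   = 25190  # all articles in the six journals, 1979-1983
promising      = 101    # papers making a highly promising clinical claim
trials         = 27     # promising papers that led to a randomized trial
new_drug_class = 1      # papers yielding a new class of drugs (ACE inhibitors)

print(f"{trials / promising:.0%} of promising papers led to a trial")   # 27%
print(f"{new_drug_class / total_papers:.4%} of all papers yielded a new drug class")  # 0.0040%
```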

Lovell-Badge’s main criticism appears to be that Contopoulos-Ioannidis et al used filters to weed out research that made no claim of clinical relevance. The total number of papers evaluated by Contopoulos-Ioannidis et al was 25,190. They filtered these by the words therapy, therapies, therapeutic, therapeutical, prevention, preventative, vaccine, vaccines, or clinical. If an article contained any of these words, the authors investigated it further; if it contained none of them, the authors assumed the paper was not claiming that a clinically relevant intervention might result from the research. This is a very reasonable assumption. Granted, the authors may have missed an occasional paper that went on to change medical care, but given biomedical researchers’ tendency to exaggerate the importance of their research, the absence of all of these terms is a reasonable basis for exclusion. See any number of my previous blogs (for example here, here, and here) for more on exaggerated claims by researchers. It is difficult to imagine a paper that might result in a new development in medical care lacking at least one of these words anywhere in the entire article. I cannot imagine any paper of note that lacked these words in the abstract. Or the title! Finally, using such filters is an accepted standard in this kind of research, so if Lovell-Badge wants to take issue with the practice, he will have to discount an extremely large number of good studies.
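To make the method concrete, here is a minimal sketch of the kind of keyword filter described above. This is my illustration, not the authors’ actual code; the matching details (substrings versus whole words, which parts of each article are searched) are assumptions.

```python
# A minimal sketch of a keyword filter like the one Contopoulos-Ioannidis
# et al describe; an illustration only, not their actual procedure.
FILTER_TERMS = (
    "therapy", "therapies", "therapeutic", "therapeutical",
    "prevention", "preventative", "vaccine", "vaccines", "clinical",
)

def claims_clinical_relevance(article_text: str) -> bool:
    """True if the article mentions any clinically oriented filter term."""
    text = article_text.lower()
    return any(term in text for term in FILTER_TERMS)

# Papers that pass the filter would be investigated further; the rest are
# assumed to make no claim of clinical relevance.
papers = ["...full text of paper 1...", "...full text of paper 2..."]  # placeholder corpus
flagged = [p for p in papers if claims_clinical_relevance(p)]
```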

A secondary criticism seems to revolve around the sentence: “[O]nly 0.004 per cent of all animal experimentation is of any direct benefit to human health.” Those words were not from Crowley and, to the best of my knowledge, not from me. I usually quote Crowley exactly or say something very similar. I assume Lovell-Badge is quoting the media or animal rights activists. In that case, I really do not see much difference between the words “direct benefit to human health” and “clinically useful class of drugs.” If these words were being published in a science journal, I would argue that there is a significant difference between the sentences, but for the vast majority of people they convey the same concept. As Lovell-Badge is supposedly writing for society in general, he should either clarify the differences between the two statements or stop harping on just one of them in hopes of confusing the reader into thinking it was what Crowley actually said. Moreover, Lovell-Badge seems to equate a new drug with a clinically useful new class of drugs. New drugs are developed all the time, mostly me-too drugs, but a new class of drugs is rare and usually a game-changer. Crowley was not claiming that no new drugs were developed but rather that only one paper had led to a new class. This is a big difference, and it bears directly on Lovell-Badge’s criticisms.

Other criticisms are trivial. For example, Lovell-Badge begins the essay by criticizing the arbitrary 20-year interval between publication and assessment used by Contopoulos-Ioannidis et al. (The studies analyzed were published between 1979 and 1983 and the paper was published in 2003, thus allowing at least 20 years for a clinically significant outcome to emerge from each original paper.) There is some truth to the point that the longer a paper has been available, the more chances there will be to develop a clinically useful application, but in real life assessments have to be made about whether a type of research is worth the money being allocated to it, and similar studies have also questioned the rate of return of basic research. [3-6] The 20-year time frame is adequate for these purposes but obviously not all-encompassing. Were it all-encompassing, there would be other problems, as we will see.

Along the same lines, Lovell-Badge points out that there were some papers of clinical significance that Contopoulos-Ioannidis et al missed, and hence the percentage would have been higher had they been included. I am not certain that the papers he refers to led to new classes of drugs, and if they did not, then Crowley and the original paper would still be correct. But regardless: 1) we are talking about differences in figures after the decimal point, not raising the 0.004% figure to 4% or 40% (a quick arithmetic check of this appears below, after point 2). Any analysis will be flawed in some way, which is probably why Crowley states in his essay: “even if the authors were to underestimate the frequency of successful translation into clinical use by 10-fold, their findings strongly suggest that, as most observers suspected, the transfer rate of basic research into clinical use is very low.” [1]

2) Moreover, such hindsight analysis goes both ways. How many papers led to clinical interventions that harmed people, and how many of these harms were discovered only after the Contopoulos-Ioannidis et al and Crowley publications? Scientists are not anxious to point out the basic science research that led to harm, such as work on the mechanisms of entry of the poliovirus, early animal-based research on HIV (including the mechanism of entry of HIV), and the myriad drugs that were cancelled after harming humans in clinical trials. The mechanisms of those drugs had been suggested by basic research using animals, and hence the drugs had proceeded to development. Had these factors been taken into account, the harm:benefit ratio for animal-based basic research would probably have been much greater than 1.0. That is a study that should be funded! And one that should not be performed by academia.
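As promised under point 1, here is a quick check of Crowley’s ten-fold hedge (my own back-of-envelope calculation, not the paper’s): even granting a ten-fold undercount of successes, the translation rate stays minuscule.

```python
# Sensitivity check on Crowley's hedge: a ten-fold undercount of successes
# still leaves the translation rate far from 27% (my own arithmetic).
total_papers = 25190
for successes in (1, 10):
    print(f"{successes:>2} success(es): {successes / total_papers:.3%}")
# Output:
#  1 success(es): 0.004%   (the published figure)
# 10 success(es): 0.040%   (still nowhere near 27%)
```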

Lovell-Badge goes on to criticize the Crowley analysis based on data that was not available at the time and on other factors that, even if correct, would not greatly influence the 0.004% figure. If one wishes to see the red herring fallacy in action, read the Lovell-Badge essay. No matter how he spins it, the final percentage is going to be in the 0.004% area, not the 27% area. This also gets us back to the fact that the study only looked for successes as opposed to harms, something Lovell-Badge ignores. If you want to honestly examine a study or practice, make sure your analysis considers both positive and negative outcomes and is therefore not an example of cherry-picking. Had the study examined both good and bad outcomes from basic research, then just based on what we know about basic research used in drug development, Lovell-Badge’s essay would have needed to be very different.

Lovell-Badge makes the argument that basic research can be divided into research that uses animals and research that does not. I agree. However, he then states that we cannot draw any conclusions about animal experimentation because such research was not separated from human-based and technology-based basic research. This is mathematically untrue. We can draw the following conclusions.

1. Basic research in general has a very low success rate in terms of translating to medications and interventions that benefit humans.

2. We can draw a conclusion based on the one success. The development of angiotensin-converting enzyme inhibitors, the 1 success out of 25,000, relied on nonanimal methods and tissue from animals. But the tissues could have been harvested from humans. Once again we see a breakthrough that used animal tissues but could have used human tissue. This is not an example of animal use (specifically rabbits) being necessary for the discovery! If one is going to defend an ethically contentious practice, one must concede that when other methods could have resulted in the breakthrough, the ethically contentious use is harder to justify.

(I will try to publish a detailed paper on the history of angiotensin-converting enzyme inhibitors, but that will not happen any time soon given my current schedule. Speaking of peer-reviewed detailed papers: if there are so many flaws in Crowley’s analysis, why didn’t Lovell-Badge write a real paper and publish it in a peer-reviewed journal? That is what I do with most of my “big issue” papers. See Google Scholar or PubMed for a list. Perhaps Lovell-Badge would like to debate the role of animal models with me in the peer-reviewed literature? Any serious scholar would jump at the chance, and I know of journals that would publish the debate. Why would a serious scholar refuse such an invitation, opting instead to blog his unconventional thoughts? Hmmm.)

3. Some of the 25,000+ basic research papers examined by Contopoulos-Ioannidis et al were probably human-based, meaning that humans or human tissues were studied. Given our current understanding of evolution and complex systems, it follows that if research using human tissue has a low rate of return, research studying a different species will yield an even lower rate of return. The drug development literature is the best we have for judging animal-based research, and clearly animal models are poor at predicting human response to drugs. (For more on the drug development literature, see my responses to another Lovell-Badge essay in Liars And Statistics, Part I, Part II, and Part III.)

Finally, to hopefully lay to rest the notion that the Crowley paper was fatally flawed: it was peer-reviewed and met the standards of The American Journal of Medicine, the official journal of the Alliance for Academic Internal Medicine. Flawed papers make it through peer review every day, but given the history and standing of The American Journal of Medicine, it is unlikely that a paper as fatally flawed as Lovell-Badge purports Crowley’s article to be would have been published. Neither the author nor the journal has withdrawn the paper, nor has there been any serious criticism of it. Furthermore, the problems that so concern Lovell-Badge do not appear in any of the reviews or editorial comments regarding the Crowley paper. Even Nature Medicine did not question Crowley’s conclusions. One of the co-authors of the Contopoulos-Ioannidis paper, JP Ioannidis, later stated: “There is considerable evidence that the translation rate of major basic science promises to clinical applications has been inefficient and disappointing.” [7] He reiterated this in other publications, and he is not alone in this thinking; many have stated as much (see Is the use of sentient animals in basic research justifiable?). In light of the trivial and/or misleading nature of Lovell-Badge’s comments, the reputation of The American Journal of Medicine is in no danger.

The notion that animal-based basic research is necessary for medical science to advance dates back to a series of papers by Comroe and Dripps. [8-10] They supposedly proved that basic research in general, as well as animal-based basic research, was essential for medical science. Suffice it to say their methodology was fatally flawed, as Grant et al have recently discussed. [3-6] (For more on this, see here, here, here, and here.) Flawed research happens, probably more often than not, as perfect protocols are almost impossible. But flawed research is not synonymous with fatally flawed research. Grant et al explain why the Comroe-Dripps papers were fatally flawed. Despite these fatal problems, Comroe-Dripps is still cited in support of basic research.

This whole discussion boils down to money. As I have stated many times, basic researchers want to have their cake and eat it too. They want the grant money that accompanies applied research but also the cover that comes with basic research, since there is a very low probability that their research will ever contribute anything to human health. Note that I am not opposed to basic research. I am opposed to an ethically contentious practice that claims a high rate of return but in reality offers very little. I am also opposed to fraud.

The physicist Sean Carroll recently stated: “Philosophers are much better than scientists at discovering when certain accepted scientific beliefs are wrong.” [11] As I have stated before, much of what I do I classify as philosophy of science. I am not criticizing the methodology used by animal modelers but rather the underlying assumptions. If animal models have little to no predictive value and basic research using animals offers a very low rate of return, then society in general does not want the practice to continue. [12] Animal-based basic research must be evaluated and treated differently from basic research that does not use animals. The return on investment to the humans funding animal-based basic research is a valid question, much as predictive values are relevant when evaluating the use of animals in drug development.

In the final analysis, rate of return counts. In 1964, John R Platt wrote what would become a classic paper, titled Strong Inference. [13] In it, Platt anticipated some of the discussion regarding basic research that society is having today: “We speak piously of taking measurements and making small studies that will ‘add another brick to the temple of science.’ Most such bricks just lie around the brickyard.”

Given that basic research in general has never been an area with a high yield of technological and medical advances (nor should it be), one should question the value of an ethically contentious practice like animal-based basic research. Yes, I know that basic research has provided the foundations for some very impressive technologies, but all in all that was usually physics- and chemistry-based research, not animal-based research. Given the low yield of animal-based basic research, and given how much of the successful basic research that used animals could instead have used human tissue or even ethically studied intact humans, I find it very difficult to believe that those who defend basic research that uses animals by appealing to a fictitious higher rate of return are being honest or ethical.


1. Crowley WF, Jr.: Translation of basic research into useful treatments: how often does it occur? Am J Med 2003, 114(6):503-505.

2. Contopoulos-Ioannidis DG, Ntzani E, Ioannidis JP: Translation of highly promising basic science research into clinical applications. Am J Med 2003, 114(6):477-484.

3. Grant J, Cottrell R, Cluzeau F, Fawcett G: Evaluating "payback" on biomedical research from papers cited in clinical guidelines: applied bibliometric study. BMJ 2000, 320(7242):1107-1111.

4. Grant J, Green L, Mason B: From Bedside to Bench: Comroe and Dripps Revisited. HERG Research Report No. 30. Health Economics Research Group, Brunel University, Uxbridge, Middlesex UB8 3PH, UK; 2003.

5. Grant J, Green L, Mason B: Basic research and health: a reassessment of the scientific basis for the support of biomedical science. Research Evaluation 2003, 12(3):217-224.

6. Grant J, Hanney S, Buxton M: Academic medicine: time for reinvention: research needs researching. BMJ 2004, 328(7430):48; discussion 49.

7. Ioannidis JP: Materializing research promises: opportunities, priorities and conflicts in translational medicine. J Transl Med 2004, 2(1):5.

8. Comroe JH, Jr., Dripps RD: Ben Franklin and open heart surgery. Circ Res 1974, 35(5):661-669.

9. Comroe JH, Jr., Dripps RD: Ben Franklin and Open Heart Surgery. Cardiovasc Dis 1975, 2(4):361-375.

10. Comroe JH, Jr., Dripps RD: Scientific basis for the support of biomedical science. Science 1976, 192(4235):105-111.

11. RS87 - Sean Carroll on Naturalism.

12. Greek R, Greek J: Is the use of sentient animals in basic research justifiable? Philos Ethics Humanit Med 2010, 5:14.

13. Platt JR: Strong Inference: Certain systematic methods of scientific thinking may produce much more rapid progress than others. Science 1964, 146(3642):347-353.