Animal Rights

# Prediction II

by Dr Ray Greek

The comments by LifeScientist are representative of a group and therefore deserve significant exploration. In lecturing on and discussing this topic with biological scientists, I have found an underappreciation of the statistics we commonly use in medical science. Trained as a physician, I am certainly no exemplar of a great statistics education, but as medical students and residents we do cover what I consider the very basics. In past essays I have linked to the Wikipedia articles on sensitivity, specificity, and positive and negative predictive value. An understanding of these concepts is essential for any analysis of whether a test, research modality, or endeavor of any kind (such as astrology) qualifies as predictive. LifeScientist is correct when he states that there are many levels of prediction, but only if what he means is that the same numerical value derived by calculating the above can be useful for one endeavor while being useless for another. Allow me to clarify.

As we explain much better in Animal Models in Light of Evolution, a gambling method that resulted in winning 51% of the time at the blackjack table in Las Vegas would quickly make one wealthy. (I continue, Dr Ringach, to reference Animal Models in Light of Evolution because all that I am saying has been said better and in more detail in that book. I do not make money from the sales.) So, finding a strategy or method that won 51% of the time would be great! (A PPV of 0.51, if you will, although that is not specifically what it is.) But biomedical science does not tolerate such a low winning rate. Medications are recalled if they harm a relative handful of people, and tests for coronary artery disease, HIV, and so forth must have a very high positive predictive value (PPV) and negative predictive value (NPV).
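The asymmetry is simple arithmetic. A sketch, with entirely hypothetical numbers, of why the same 51% figure is lucrative at the table and useless at the bedside:

```python
# Hypothetical numbers throughout: even-money gambling vs. a PPV of 0.51.

def gambling_profit(wins, losses, bet):
    """Net profit from even-money bets: each win pays +bet, each loss costs -bet."""
    return (wins - losses) * bet

# A 51% win rate over 100,000 ten-dollar hands (51,000 wins, 49,000 losses):
print(gambling_profit(51_000, 49_000, 10))  # → 20000 (a steady profit)

# But a test that flags 1,000 patients positive with a PPV of 0.51
# misdiagnoses nearly half of them:
positives, ppv = 1000, 0.51
false_positives = positives - round(positives * ppv)
print(false_positives)  # → 490 false alarms out of 1,000 positives
```

A 2% edge compounds into wealth when each bet is small and repeatable; a 49% false-positive rate compounds into harm when each "bet" is a patient.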

In the physical sciences it is common to require 100% accuracy in order for something to be said to be predictive. We call these concepts laws of physics. As I have said before, the biological sciences rely more on statistics when determining causes and effects than do the physical sciences. So when using, say, a medical test to diagnose condition X, we do not expect the test to be predictive 100% of the time, as the Second Law of Thermodynamics is, but we do need it to have a very high PPV and NPV. The same is true of research modalities. No one believes that an astrologer can predict the mechanism used by HIV to enter human white blood cells (despite occasionally guessing correctly about other things). But scientists did explore animal models in order to find the answer to the HIV question. The animal models failed, as the mechanisms differ between species (1,2). So the question really becomes: what is the PPV and NPV of an animal test, or of animal models per se? If toxicity testing, for example, has a low PPV and NPV, then it is not predictive. If using animals to predict human response to disease mechanisms like HIV has a low PPV and NPV, then it also fails to qualify as predictive. So what do the data say?

There have been many studies comparing the results of tests in animals with the responses in humans. We reference some of these in the book. The PPV and NPV values fall far short of anything that could reasonably be considered predictive. In this sense, LifeScientist is wrong when he states there are different levels of prediction. In the biomedical sciences, anything without a high PPV and NPV is not predictive. (Many of these studies have analyzed toxicity testing. But this brings up an interesting point. Even if there were no data on disease response in humans and animals, what would knowing the toxicity data tell us about trans-species extrapolation? Quite a lot, actually. The same evolutionary mechanisms that account for the differences in toxicity will also produce different responses to diseases and different disease mechanisms.)

But there should be an easier way than analyzing every use of animals in research for PPV and NPV, and there is (well, easier in some respects). We ought to find the biological equivalent of a law in physics that helps us decide whether animal models should even be expected to be predictive. Fortunately, in biology we have an overarching theory: the Theory of Evolution. This, combined with an analysis of complex systems, allows us to explore the concept of using animals as predictive models for drug and disease response. This is what we do in Animal Models in Light of Evolution. As I have said in previous essays, the Second Law saves everyone a lot of effort in analyzing perpetual-motion-machine applications. Likewise, a proper analysis of complex systems and evolutionary biology reveals that while animals and humans share conserved processes (and we can learn about such processes from studying animals, e.g. the homeobox, the Krebs cycle, and so forth), the results of stimuli or perturbations to the system, such as drugs and diseases, will differ among species and even between sexes, age groups, and ethnic groups (see references in the book).

LifeScientist points out that research cannot be easily divided into basic and applied, and again he is correct. But it can be easily divided into uses of animals as predictive models and uses that make no such pretension. By traditional definitions of research, basic research does not make claims of prediction, while applied research and even translational research does. The current problem for some is that in order to get funding, basic researchers must pass off their research as applicable to humans. (There are many reasons for this, but the fact remains.) As Freeman and St Johnston put it in Dis Model Mech in 2008:

Many scientists who work on model organisms, including both of us, have been known to contrive a connection to human disease to boost a grant or paper. It’s fair: after all, the parallels are genuine, but the connection is often rather indirect. DMM is about something quite different. This new journal is aimed at people who set out with an explicit goal to investigate human disease using model organisms. (3)

No, it’s not fair. This is fraud by any standard dictionary definition. The applicant is promising a relationship (that there is a high probability, in other words a high PPV and NPV, that the animal model will give data applicable to humans) that does not hold up to scrutiny, and is promising this in order to obtain taxpayers’ money. That is fraud. Look it up. It may be true that “everyone does it,” but that does not make it right.

An example of basic researchers claiming human relevance can be found in the book and in my previous citations of Dr Ringach’s research. But even better, we can find it in his own words when he says: “Yes, I am explicitly referring to the restoration of function in humans! . . . To satisfy Dr. Greek, I assert here that the organization of the retina and early visual pathways of old-world monkeys and humans are, for all practical purposes, identical.” This claim, then, is subject to the scrutiny outlined in this and other essays.

When LifeScientist states: “. . . it is hoped or expected that the results of the animal study will predict what will happen when the same treatment is used in humans,” he is describing research that makes the claim of prediction, and that claim can be judged using the simple math described above. Such hope has been tested and found wanting. If by “it is hoped” he means the researcher really wants some cure to come from the research, then fine, so be it. But why a cure for human disease; why not an end to world hunger? By connecting the research with human disease, the researcher is telling society that a reasonable link exists between causation in the animal model and causation in humans, and, based on both empirical evidence and theory, such a link simply does not exist. The researcher might as well hope the research in question will result in an engine that runs on water. There is not enough of a correlation (vis-à-vis a high PPV and NPV) to assume the results from animals will be the same in humans, and hoping does not change that.

I should now briefly explore another aspect of the prediction issue that we explain in depth in the book and that many will consider obvious. In order for a modality to be considered predictive in the scientific sense of the word, it must have a track record. The fact that you predicted the winner of March Madness (the NCAA basketball tournament of 64 or so teams) does not mean that you, or the method you used, are a predictive entity. Sensitivity, PPV, and so forth evaluate a history of tests or claims in order to determine whether the modality per se is predictive. One success does not a predictive modality make, or else astrology would qualify. You may claim that you predicted the ultimate winner, but if so you would be using predict in the lay sense, not the scientific sense, of the word. (Show that you forecast the winner every year but one since 1976, and then you can make such a claim.)
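The point that "predictive" is a property of a track record, not of one lucky call, can be made numerically. A minimal sketch, with invented outcomes, comparing a single guess with a history of guesses:

```python
# Hypothetical data: a one-off correct guess vs. a 40-tournament history.

def hit_rate(outcomes):
    """Fraction of correct calls in a history of True/False outcomes."""
    return sum(outcomes) / len(outcomes)

one_lucky_call = [True]                 # a single correct March Madness pick
long_record = [True] * 3 + [False] * 37  # 40 tournaments, 3 correct picks

print(hit_rate(one_lucky_call))  # → 1.0, but meaningless: n = 1
print(hit_rate(long_record))     # → 0.075, the figure that actually matters
```

A sample of one always yields 100% or 0%; only the long run tells you whether the method itself, rather than chance, is doing the predicting.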

When LifeScientist states, “Even very ‘blue skies’ basic research has some predictive element,” he is simply wrong. Basic or blue-sky research, as defined by Nobel laureates and so forth, does not make predictions about human responses. To state otherwise is either a corruption of language or disingenuous, and either way it has consequences.

I agree that “due to our evolutionary relationship one might expect” certain processes to be conserved (I am paraphrasing). But the ability of animal studies to inform us about humans stops at the hypothesis-generating stage. The homeobox is a good example, as is RNAi. Neither of these is an example of using animals to predict human response to drugs or disease, and that is what Dr Shanks and I address. Both RNAi and the homeobox are examples of finding something new in an animal or other life form and then looking for the same in humans. Animals were used as hypothesis generators. Again (I may have mentioned this before), allow me to state that animals can be used as heuristic devices and as hypothesis generators. This is how they are used vis-à-vis the homeobox and so forth. But that is an entirely different claim from saying they can be used to predict human response to disease and drugs, and such use is how animal-based researchers sell animal use to society (see the book, specifically Appendix 3). Conflating the two very different uses serves only to confuse the issue, and I can think of no reason animal experimenters would stoop to this level except that they know what we are saying is true.