In light of recent posts about the prediction question in animal-based research, I want to point out once again that there is a difference between occasionally getting the right answer and a modality as a whole being predictive of outcomes. In medical science, one judges the predictive ability of a modality with simple statistics: true positives, false positives, false negatives, sensitivity, positive predictive value, and so forth. This post is not an explanation of those terms and calculations. (See Animal Models in Light of Evolution or the links for a more thorough examination.) Rather, I want to point out once again that the way society should choose a modality, be it a research modality or any other modality used to predict outcomes, is by a scientific examination of that modality, and this includes evaluating its past performance.
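To make the terms above concrete, here is a minimal sketch of how sensitivity and positive predictive value are computed from a confusion matrix. The counts are hypothetical, chosen only to illustrate the arithmetic, not drawn from any real study:

```python
# Illustrative only: the counts below are hypothetical, not real trial data.

def sensitivity(tp, fn):
    """Proportion of actual positives that the modality correctly flags."""
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    """Proportion of the modality's positive calls that are actually correct."""
    return tp / (tp + fp)

# Suppose a modality flags 70 of 100 truly harmful drugs (tp=70, fn=30)
# but also flags 60 of 100 harmless drugs (fp=60, tn=40).
tp, fp, fn, tn = 70, 60, 30, 40
print(f"sensitivity: {sensitivity(tp, fn):.2f}")               # 0.70
print(f"PPV:         {positive_predictive_value(tp, fp):.2f}")  # 0.54
```

The point of the sketch is that a modality can "get hits" (a sensitivity of 0.70 here) while still being a poor predictor: when it says "positive," it is right only about half the time. Evaluating a modality means computing these numbers over its track record, not counting its occasional successes.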
Predictions are statements about expected future observations, but in science they are not, as they are in ordinary usage, merely lucky guesses. Scientific predictions are derived from hypotheses. Crudely speaking, a hypothesis is something that explains past and present observations and enables the investigator to form (under suitable conditions) expectations—predictions—about the course of future events. These predictions about the course of future events must be testable—they must be sensitive to the fruits of evidential inquiry (which may, in the biological sciences, involve carefully-controlled experiments, field observations, or some combination of both). (p251)
There is a difference between a modality, such as astrology, that occasionally guesses correctly and a modality, such as gene-based medicine, that can match a drug to a gene and predict effects or side effects. If an astrologer predicted that you were going to get a raise from your boss, I would say that while I sincerely hope the raise happens, if I were you I would not count on it. The same is true of results from animal tests. I sincerely hope that every cure promised on the basis of animal testing actually materializes, but I would not count on it. The reason for my apprehension is that neither astrology nor animal testing has a good track record.
The question society needs to ask is “How should we best proceed?” Should we waste our time reading our horoscopes, taking homeopathic medicine when we are ill, or using animals as predictive models of human disease? Or should we spend the time we would otherwise waste on our horoscope doing something productive, seek advice about our health from board-certified medical doctors who practice science-based medicine, and fund research methods that actually do predict human response to drugs and disease?
Some might respond: why not do both? Because taking homeopathic medicine delays real medical care, and funding animal-based research as a predictive modality takes money away from better research methods and misleads scientists. Choosing nonsense over rational thought has consequences.
Society needs cures and treatments for Alzheimer’s, Parkinson’s, cancer, and many more diseases and conditions. Funding animal-based research under the false assumption that it will predict human response is worse than throwing the money away: if we simply tossed the money, at least scientists would not be misled into endorsing things that end up harming patients.
Note what I am not saying. If you read your horoscope every day for fifty years, there is a high likelihood that on at least one day a prediction will correlate with what actually happened. That does not make the modality known as astrology predictive. Most research money that comes via the NIH goes to animal-based research. (See previous blogs and Animal Models in Light of Evolution.) It should come as no surprise that something of value has come from animal-based research. As the astrology example shows, do almost anything enough times and you will eventually learn something or guess correctly. But that does not mean the modality in question is the best way to get the job done. Furthermore, in some cases the modality is actually counterproductive, as I have also explored in previous blogs.
As this concept seems to be perplexing to some, I will give one more example.
Currently, the US and other countries are looking for oil. The world runs on oil (more or less) and everyone wants more of it. For the purposes of this example, let’s assume there are only two ways of looking for oil: (1) digging giant strip mines that go very deep into the earth, or (2) using technology to search for oil from space and elsewhere in an attempt to pinpoint exactly where it will be. I assure you that if you strip mine often enough and deep enough, you will find oil. Did the strip mining process predict where the oil would be? Absolutely not. Nevertheless, strip mining did occasionally (in my example) result in an oil find. Making use of technology, as oil companies in fact do when they search for oil, does not always result in oil finds, but it does a significant percentage of the time.
Why not do both? Because one works far better than the other and uses and destroys fewer resources. Human-based research in areas like gene-based medicine, pharmacogenomics, evidence-based medicine, autopsies, clinical research, and epidemiology gives results that are applicable to humans far more often than animal-based research does. That is why human-based research is predictive for humans and animal-based research is not. This is almost a tautology. Arguing that human-based research is less predictive for humans than animal-based research says more about the person making the argument than about the argument’s validity.
Note one more thing that I am not saying. If a scientist wants to use animals to search for knowledge for its own sake, that is scientifically viable. If a scientist wants to use animals as heuristic devices, that is also viable. But that is not how animal-based research is sold to society. It is sold as a predictive modality, and the support for this false claim comes in the form of ad hominem attacks and other examples of fallacious reasoning. Science as an enterprise suffers when members of its own community justify their income by heaping bad logic on top of bad science. Making claims that have been falsified is no better than the faith-based statement of the Malaysian minister referred to in my last blog. Neither relies on critical thought; both can be classified as non-science. (By the way, in many debates my scientist opponent has appealed to the Bible as giving humans the right to experiment on animals. That is an odd statement for an atheist, and those who have made such statements do classify themselves as atheists. It is also an odd statement to make in a debate on the scientific aspects of using animals in research. So once again, I stand by my statement.)
My recent blogs can be classified under the philosophy of science and/or critical thought. The silence on this issue from philosophers of science and skeptics is deafening. These communities cannot continue to feel self-righteous for debunking evolution deniers, homeopathy, and anti-vaccine wingnuts while ignoring the literally life-threatening problem of using animals as predictive models. The regulations of the FDA and EPA are based on the myth that animal models can predict human response, as is every NIH grant that goes to an “animal model” of disease X with the promise of extrapolation. These are not insignificant faux pas.
To skeptics and philosophers of science I say this. Correcting uneducated people when they advocate nonsense is appropriate and needed. Nonsense cannot be allowed to go unchallenged. Attacking well-educated scientists who support the anti-vaxxers and so forth is also appropriate and needed. But a more egregious error occurs when your own communities advocate nonsense because of the money involved and use example after example of fallacious reasoning to justify their position. When scientists who advocate using animal models refuse to take the debate to the peer-reviewed scientific literature or to debate the topic in their own universities, the implications are obvious. Reasonable people can come to only one conclusion: “That community is hiding something and therefore cannot be trusted.” By allowing nonsense in their own community to go unchallenged, the scientific and skeptic communities do not help the fight for critical thinking and science as a whole. Why, among other reasons, is this important? Paul Thagard said:
…society faces the twin problems of lack of public concern with the important advancement of science, and the lack of public concern with the important ethical issues now arising in science and technology, for example around the topic of genetic engineering. One reason for the dual lack of concern is the wide popularity of pseudoscience and the occult among the general public. Elucidation of how science differs from pseudoscience is the philosophical side of an attempt to overcome public neglect of genuine science. (Thagard 1998)
I have spoken with countless scientists who admit they agree with my position but refuse to come out because they fear the consequences from their university or government employer. To these people I can only say: stay silent. I am sure someone else will stand up for what is right. There is no need for you to risk your professional reputation and personal comfort for something as meaningless as truth and curing disease.
Thagard, Paul. 1998. Why astrology is not science. In Philosophy of Science: The Central Issues, edited by M. Curd and J. A. Cover. Norton.