The Demarcation Problem
A (Belated) Response to Laudan
The Premature Obituary of the Demarcation Problem
The "demarcation problem," the issue of how to separate science from pseudoscience, has been around since fall 1919—at least according to Karl Popper's (1957) recollection of when he first started thinking about it. In Popper's mind, the demarcation problem was intimately linked with one of the most vexing issues in philosophy of science, David Hume's problem of induction (Vickers 2010) and, in particular, Hume's contention that induction cannot be logically justified by appealing to the fact that "it works," as that in itself is an inductive argument, thereby potentially plunging the philosopher straight into the abyss of a viciously circular argument.
Popper famously thought he had solved both the demarcation and induction problems in one fell swoop, by invoking falsification as the criterion that separates science from pseudoscience. Not only, according to Popper, do scientific hypotheses have to be falsifiable (while pseudoscientific ones are not), but since falsification is an application of modus tollens, and hence a type of deductive thinking, we can get rid of induction altogether as the basis for scientific reasoning and set Hume's ghost to rest once and for all.
As it turns out, however, although Popper did indeed have several important things to say about both demarcation and induction, philosophers are still very much debating both issues as live ones (see, e.g., Okasha 2001 on induction, and Hansson 2009 on demarcation). The fact that we continue to discuss the issue of demarcation may seem peculiar, though, considering that Laudan (1983) allegedly laid the problem to rest once and for all. In a much-referenced paper quite definitively entitled "The Demise of the Demarcation Problem," Laudan concluded that "the [demarcation] question is both uninteresting and, judging by its checkered past, intractable. If we would stand up and be counted on the side of reason, we ought to drop terms like 'pseudoscience' and 'unscientific' from our vocabulary" (Laudan 1983, 125).
At the risk of being counted on the side of unreason, in this chapter I argue that Laudan's requiem for the demarcation problem was much too premature. First, I quickly review Popper's original arguments concerning demarcation and falsification (but not those relating to induction, which is beyond the scope of this contribution); second, I comment on Laudan's brief history of the demarcation problem as presented in parts 2 and 4 of his paper; third, I argue against Laudan's "metaphilosophical interlude" (part 3 of his paper), where he sets out the demarcation problem as he understands it; and last, I propose to rethink the problem itself, building on an observation made by Kuhn (1974, 803) and a suggestion contributed by Dupré (1993, 242). (Also see in this volume, Boudry, chapter 5; Hansson, chapter 4; Koertge, chapter 9; and Nickles, chapter 6.)
Popper (1957) wanted to distinguish scientific theories or hypotheses from nonscientific and pseudoscientific ones, and was unhappy with what he took to be the standard answer to the question of demarcation: science, unlike pseudoscience (or "metaphysics"), works on the basis of the empirical method, which consists of an inductive progression from observation to theories. If that were the case, Popper reckoned, astrology would have to rank as a science, albeit as a spectacularly unsuccessful one (Carlson 1985). Popper then set out to compare what in his mind were clear examples of good science (e.g., Albert Einstein's general theory of relativity) and pseudoscience (e.g., Marxist theories of history, Freudian psychoanalysis, and Alfred Adler's "individual psychology") to figure out what exactly distinguishes the first from the second group. I use a much broadened version of the same comparative approach toward the end of this essay to arrive at my own proposal for the problem raised by Popper.
Popper was positively impressed by the then recent spectacular confirmation of Einstein's theory after the 1919 total solar eclipse. Photographs taken by Arthur Eddington during the eclipse confirmed a daring and precise prediction made by Einstein, concerning the slight degree by which light coming from behind the sun would be bent by the latter's gravitational field. By the same token, however, Popper was highly unimpressed by Marxism, Freudianism, and Adlerianism. For instance, here is how he recalls his personal encounter with Adler and his theories:
Once, in 1919, I reported to [Adler] a case which to me did not seem particularly Adlerian, but which he found no difficulty in analysing in terms of his theory of inferiority feelings, although he had not even seen the child. Slightly shocked, I asked him how he could be so sure. "Because of my thousandfold experience," he replied; whereupon I could not help saying: "And with this new case, I suppose, your experience has become thousand-and-one-fold." (Popper 1957, sec. 1)
Regardless of whether one agrees with Popper's analysis of demarcation, there is something profoundly right about the contrasts he sets up between relativity theory and psychoanalysis or Marxist history: anyone who has had even a passing acquaintance with both science and pseudoscience cannot but be compelled to recognize the same clear difference that struck Popper as obvious. I maintain in this essay that, as long as we agree that there is indeed a recognizable difference between, say, evolutionary biology on the one hand and creationism on the other, then we must also agree that there are demarcation criteria, however elusive they may be at first glance.
Popper's analysis led him to a set of seven conclusions that summarize his take on demarcation (Popper 1957, sec. 1):
1. Theory confirmation is too easy.
2. The only exception to statement 1 is when confirmation results from risky predictions made by a theory.
3. Better theories make more "prohibitions" (i.e., predict things that should not be observed).
4. Irrefutability of a theory is a vice, not a virtue.
5. Testability is the same as falsifiability, and it comes in degrees.
6. Confirming evidence counts only when it is the result of a serious attempt at falsification (this is, it should be noted, somewhat redundant with statement 2 above).
7. A falsified theory can be rescued by employing ad hoc hypotheses, but this comes at the cost of a reduced scientific status for the theory in question.
The problems with Popper's solution are well known, and we do not need to dwell too much on them. Briefly, as even Popper acknowledged, falsificationism is faced with (and, most would argue, undermined by) the daunting problem set out by Pierre Duhem (see Needham 2000). The history of science clearly shows that scientists do not throw a theory out as soon as it appears to be falsified by data, as long as they think the theory is promising or has been fruitful in the past and can be rescued by reasonable adjustments of ancillary conditions and hypotheses. It is what Johannes Kepler did to Nicolaus Copernicus's early insight, as well as the reason astronomers retained Newtonian mechanics in the face of its apparent inability to account for the orbit of Uranus (a move that quickly led to the discovery of Neptune), to mention but two examples. Yet, as Kuhn (1974, 803) aptly noticed, even though his and Popper's criteria of demarcation differed profoundly (and he obviously thought Popper's to be mistaken), they did seem to agree on where the fault lines run between science and pseudoscience: which brings me to an examination and critique of Laudan's brief survey of the history of demarcation.
Laudan's Brief History of Demarcation
Two sections of Laudan's (1983, secs. 2, 4) critique of demarcation are devoted to a brief critical history of the subject, divided into "old demarcationist tradition" and "new demarcationist tradition" (and separated by the "metaphilosophical interlude" in section 3, to which I come next). Though much is right in Laudan's analysis, I disagree with his fundamental take on what the history of the demarcation problem tells us: for him, the rational conclusion is that philosophers have failed at the task, probably because the task itself is hopeless. For me, the same history is a nice example of how philosophy makes progress: by considering first the obvious moves or solutions, then criticizing them to arrive at more sophisticated moves, which are in turn criticized, and so on. The process is really not entirely disanalogous with that of science, except that philosophy proceeds in logical space rather than by empirical evidence.
For instance, Laudan is correct that Aristotle's goal of scientific analysis as proceeding by logical demonstrations and arriving at universals is simply not attainable. But Laudan is too quick, I think, in rejecting Parmenides' distinction between episteme (knowledge) and doxa (opinion), a rejection that he traces to the success of fallibilism in epistemology during the nineteenth century (more on this in a moment). But the dividing line between knowledge and opinion does not have to be (and in fact cannot be) sharp, just as the dividing line between science and pseudoscience cannot be sharp, so that fallibilism does not, in fact, undermine the possibility of separating knowledge from mere opinion. Fuzzy lines and gradual distinctions—as I argue later—still make for useful separations.
Laudan then proceeds with rejecting Aristotle's other criterion for demarcation, the difference between "know-how" (typical of craftsmen) and "know-why" (what the scientists are aiming at), on the ground that this would make pre-Copernican astronomy a matter of craftsmanship, not science, since pre-Copernicans simply knew how to calculate the positions of the planets and did not really have any scientific idea of what was actually causing planetary motions. Well, I will bite the bullet here and agree that protoscience, such as pre-Copernican astronomy, does indeed share some aspects with craftsmanship. Even Popper (1957, sec. 2) agreed that science develops from proto-scientific myths: "I realized that such myths may be developed, and become testable; and that a myth may contain important anticipations of scientific theories."
Laudan makes much of Galileo Galilei's and Isaac Newton's contentions that they were not after causes, hypotheses non fingo to use Newton's famous remark about gravity, and yet they were surely doing science. Again, true enough, but both of those great thinkers stood at the brink of the historical period where physics was transitioning from protoscience to mature science, so that it was clearly way too early to search for causal explanations. But no physicist worth her salt today (or, indeed, shortly after Newton) would agree that one can be happy with a science that ignores the search for causal explanations. Indeed, historical transitions away from pseudoscience, when they occur (think of the difference between alchemy and chemistry), involve intermediate stages similar to those that characterized astronomy in the sixteenth and seventeenth centuries and physics in the seventeenth and eighteenth centuries. But had astronomers and physicists not eventually abandoned Galileo's and Newton's initial caution about hypotheses, we would have had two aborted sciences instead of the highly developed disciplines that we so admire today.
Laudan then steps into what is arguably one of the most erroneous claims of his paper: the above-mentioned contention that the onset of fallibilism in epistemology during the nineteenth century meant the end of any meaningful distinction between knowledge and opinion. If so, I wager that scientists themselves have not noticed. Laudan does point out that "several nineteenth century philosophers of science tried to take some of the sting out of this volte-face [i.e., the acknowledgment that absolute truth is not within the grasp of science] by suggesting that scientific opinions were more probable or more reliable than non-scientific ones" (Laudan 1983, 115), leaving his readers to wonder why exactly such a move did not succeed. Surely Laudan is not arguing that scientific "opinion" is not more probable than "mere" opinion. If he were, we should count him amongst postmodern epistemic relativists, a company that I am quite sure he would eschew.
Laudan proceeds to build his case against demarcation by claiming that, once fallibilism was accepted, philosophers reoriented their focus to investigate and epistemically justify science as a method rather than as a body of knowledge (of course, the two are deeply interconnected, but we will leave that aside for the present discussion). The history of that attempt naturally passes through John Stuart Mill's and William Whewell's discussions about the nature of inductive reasoning. Again, Laudan reads this history in an entirely negative fashion, while I—perhaps out of a naturally optimistic tendency—see it as yet another example of progress in philosophy. Mill's (2002) five methods of induction and Whewell's (1840) concept of inference to the best explanation represent marked improvements on Francis Bacon's (1620) analysis, based as it was largely on enumerative induction. These are milestones in our understanding of inductive reasoning and the workings of science, and to dismiss them as "ambiguous" and "embarrassing" is both presumptuous and a disservice to philosophy as well as to science.
Laudan then moves on to twentieth-century attempts at demarcation, beginning with the logical positivists. It has become a fashionable sport among philosophers to dismiss logical positivism out of hand, and I am certainly not about to mount a defense of it here (or anywhere else, for that matter). But, again, it strikes me as bizarre to argue that the exploration of another corner of the logical space of possibilities for demarcation—the positivists' emphasis on theories of meaning—was a waste of time. It is because the positivists and their critics explored and eventually rejected that possibility that we have made further progress in understanding the problem. This is the general method of philosophical inquiry, and for a philosopher to use these "failures" as a reason to reject an entire project is akin to a scientist pointing out that because Newtonian mechanics turned out to be wrong, we have made no progress in our understanding of physics.
After dismissing the positivists, Laudan turns his guns on Popper, another preferred target amongst philosophers of science. Here, however, Laudan comes close to admitting what a more sensible answer to the issue of demarcation may turn out to be, one that was tentatively probed by Popper himself: "One might respond to such criticisms [of falsificationism] by saying that scientific status is a matter of degree rather than kind" (Laudan 1983, 121). One might indeed do so, but instead of pursuing that possibility, Laudan quickly declares it a dead end on the grounds that "acute technical difficulties confront this suggestion." That may be the case, but it is nonetheless true that within the sciences themselves there has been quite a bit of work done (admittedly, much of it since Laudan's paper) to make the notion of quantitative comparisons of alternative theories more rigorous. These days this is done by way of either Bayesian reasoning (Henderson et al. 2010) or some sort of model selection approach like the Akaike criterion (Sakamoto and Kitagawa 1987). It is beyond me why this sort of approach could not be one way to pursue Popper's eminently sensible intuition that scientificity is a matter of degrees. Indeed, I argue below that something along these lines is actually a much more promising way to recast the demarcation problem, following an early suggestion by Dupré (1993). For now, though, suffice it to say that even scientists would agree that some hypotheses are more testable than others, not just when comparing science with proto- or pseudoscience, but within established scientific disciplines themselves, even if this judgment is not exactly quantifiable. 
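The model-selection idea mentioned above can be made concrete. As a minimal sketch (the observation count, residual sums of squares, and parameter counts below are invented for illustration, not drawn from any real study), here is how the standard least-squares form of the Akaike criterion trades goodness of fit against the number of free parameters, giving exactly the kind of graded, quantitative comparison between rival hypotheses that a degrees-of-scientificity view calls for:

```python
import math

def aic(rss: float, n: int, k: int) -> float:
    """Akaike Information Criterion for a least-squares fit with
    Gaussian errors: AIC = 2k + n * ln(RSS / n), up to an additive
    constant that is the same for all models fit to the same data."""
    return 2 * k + n * math.log(rss / n)

# Hypothetical example: two rival models fit to the same 50 observations.
# Model B has four extra free parameters and fits only marginally better.
n = 50
aic_a = aic(rss=12.0, n=n, k=2)  # simple model
aic_b = aic(rss=11.5, n=n, k=6)  # complex model, slightly lower residuals

# Lower AIC is preferred: the complex model's small gain in fit
# does not pay for its additional parameters.
best = "A" if aic_a < aic_b else "B"
print(best)  # -> A
```

The point of the example is not the particular numbers but the structure of the judgment: a theory that buys a marginally better fit with many extra adjustable parameters is penalized, which is a quantitative echo of Popper's intuition that easy confirmation and ad hoc rescue diminish a theory's scientific standing.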
For instance, evolutionary psychology's claims are notoriously far more difficult to test than similarly structured hypotheses from mainstream evolutionary biology, for the simple reason that human behavioral traits happen to be awful subjects of historical investigation (Kaplan 2002; Pigliucci and Kaplan 2006, chap. 7). Or consider the ongoing discussion about the (lack of) testability of superstring theory and allied families of theories in fundamental physics (Woit 2006; Smolin 2007).
Excerpted from Philosophy of Pseudoscience by Massimo Pigliucci, Maarten Boudry. Copyright © 2013 The University of Chicago. Excerpted by permission of THE UNIVERSITY OF CHICAGO PRESS.