Is Intelligent Design science?

Previous posts have proposed demarcation criteria for science, and evaluated ID in considerable detail against them.

Based on ID’s clear failure to satisfy the necessary criteria of testability and empirical adequacy, ID as a discipline cannot be considered to fall within the realm of science. This is further confirmed by its lack of progressiveness and explanatory power.

This does not mean that ID researchers are not at times satisfying some of the criteria for doing science, but this is not sufficient to rescue ID from being classified as pseudoscience, particularly given their repeated claims of scientific status.

Where can ID go from here? It is possible that ID is a proto-science, and that in time a comprehensive theory of ID will be developed that proposes certain capabilities and motives of an intelligent designer, and possibly a mechanism for the instantiation of design in living organisms. Certainly, for ID to progress and become testable, it must radically change its current approach of denying any knowledge of the designer. Given that most ID proponents are theists, this may be a difficult step for ID to take.

There is an alternative, which is to forsake ID’s quest for scientific legitimacy, and concede that its positive arguments for design are more suitably classified as philosophy of religion, being primarily scientifically sophisticated versions of the teleological argument for design. Of course, this casts aside the mantle of scientific authority that many ID proponents see as important for swaying opinion, and is probably unacceptable for most in the ID community.

References

Behe, Michael J. 1996: Darwin’s Black Box: The Biochemical Challenge to Evolution. New York: Free Press.

Boudry, M., Blancke, S. and Braeckman, J. 2010: How Not to Attack Intelligent Design Creationism: Philosophical Misconceptions About Methodological Naturalism. Foundations of Science, 15(3), 227-244.

Boudry, M., Blancke, S. and Braeckman, J. 2012: Grist to the Mill of Anti-evolutionism: The Failed Strategy of Ruling the Supernatural Out of Science by Philosophical Fiat. Science & Education, 21(8), 1151-1165.

Cleland, C. 2011: Prediction and Explanation in Historical Natural Science. The British Journal for the Philosophy of Science, 62(3), 551-582.

Cleland, C. 2013: Common Cause Explanation and the Search for a Smoking Gun. In Baker, V. (ed.) 125th Anniversary Volume of the Geological Society of America: Rethinking the Fabric of Geology, Special Paper 502, 1-9.

Creation Ministries International, 2015: What We Believe. Accessed 2 December 2015. http://creation.com/about-us#what_we_believe

Darwin, Charles. 1859: The Origin of Species: A Facsimile of the First Edition. Harvard University Press, 1964.

Dembski, William A. 1998: The Design Inference: Eliminating Chance through Small Probabilities. Cambridge; New York: Cambridge University Press.

Dembski, William A. 2005: Specification: The Pattern That Signifies Intelligence. Philosophia Christi, 7(2), 299-343.

Discovery Institute, 2016a: Frequently Asked Questions. [online] Available at: http://www.discovery.org/id/faqs/ [Accessed 19 Jan. 2016].

Discovery Institute, 2016b: Peer-Reviewed Articles Supporting Intelligent Design. [online] Available at: http://www.discovery.org/id/peer-review/ [Accessed 14 Jan. 2016].

Discovery Institute, 2016c: What Is the Science Behind Intelligent Design? [online] Available at: http://www.discovery.org/a/9761 [Accessed 21 Jan. 2016].

Dupré, J. 1993: The Disorder of Things. Cambridge, Mass.: Harvard University Press.

Fitelson, B., Stephens, C. and Sober, E. 1999: How Not to Detect Design: Critical Notice of William A. Dembski’s The Design Inference. Philosophy of Science, 66(3), 472.

Forber, P. and Griffith, E. 2011: Historical Reconstruction: Gaining Epistemic Access to the Deep Past. Philosophy and Theory in Biology, 3(20150929).

Forrest, B. 2000: Methodological Naturalism and Philosophical Naturalism. Philo, 3(2), 7-29.

Geisler, N. and Anderson, J. 1987: Origin Science. Grand Rapids, Mich.: Baker Book House.

Gould, S. 2000: Wonderful Life. New York: W.W. Norton.

Hansson, S. O. 1996: Defining Pseudoscience. Philosophia Naturalis, 33, 169-176.

Kitzmiller v. Dover Area School District. 2005: Memorandum Opinion (United States District Court, M.D. Pennsylvania).

Kurtz, Paul. 1998: Darwin Re-Crucified: Why Are So Many Afraid of Naturalism? Free Inquiry (Spring 1998), 17.

Laudan, L. 1982: Commentary: Science at the Bar – Causes for Concern. Science, Technology and Human Values, 7(41), 16-19.

Luskin, C. 2011: How Do We Know Intelligent Design Is a Scientific “Theory”? [online] Evolution News & Views. Available at: http://www.evolutionnews.org/2011/10/how_do_we_know_intelligent_des051841.html [Accessed 6 Jan. 2016].

Luskin, C. 2013: Straw Men Aside, What Is the Theory of Intelligent Design, Really? [online] Evolution News & Views. Available at: http://www.evolutionnews.org/2013/08/what_is_the_the075281.html [Accessed 12 Jan. 2016].

Mahner, Martin. 2013: Science and Pseudoscience. In Pigliucci, M. and Boudry, M. (eds) Philosophy of Pseudoscience: Reconsidering the Demarcation Problem. University of Chicago Press.

Matzke, N. 2009: But Isn’t It Creationism? In Pennock, R.T. and Ruse, M. (eds) But Is It Science? The Philosophical Question in the Creation/Evolution Controversy, 377-405.

Meyer, Stephen C. 2008: A Scientific History – and Philosophical Defense – of the Theory of Intelligent Design. Religion – Staat – Gesellschaft, 7.

Meyer, Stephen C. 2009: Signature in the Cell. New York: HarperOne.

Meyer, Stephen C. 2013: Darwin’s Doubt: The Explosive Origin of Animal Life and the Case for Intelligent Design. New York: HarperOne.

Orr, H.A. 1996: Darwin v. Intelligent Design (Again). Boston Review, December 1996/January 1997, 28-31.

Pennock, R. 2011: Can’t Philosophers Tell the Difference Between Science and Religion? Demarcation Revisited. Synthese, 178(2), 177-206.

Pigliucci, Massimo. 2013: The Demarcation Problem. In Pigliucci, M. and Boudry, M. (eds) Philosophy of Pseudoscience: Reconsidering the Demarcation Problem. University of Chicago Press.

Prothero, Donald. 2010: Science and Creationism. In Rosenberg, A. and Arp, R. (eds) Philosophy of Biology. Chichester, U.K.: Wiley-Blackwell.

Shermer, M. 2002: In Darwin’s Shadow: The Life and Science of Alfred Russel Wallace: A Biographical Study on the Psychology of History. Oxford University Press.

Thagard, P. 1978: Why Astrology Is a Pseudoscience. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1978(1), 223-234.

Evaluating Intelligent Design 2

Empirical adequacy

Empirical adequacy is determined by the results of testing, and in the historical sciences, by the acceptance of a hypothesis over its competitors because of its superior causal explanation of empirical observations. Given that ID has been assessed as untestable, it cannot even be properly evaluated for empirical adequacy.

It is worth noting here that ID’s paragon example of IC (and CSI), the bacterial flagellum, is strongly disputed to be IC by evolutionary biologists, again suggesting empirical inadequacy.

PMN

PMN requires that an assumption of methodological naturalism be adopted unless there is ‘extraordinary empirical evidence’ that indicates otherwise. This includes hypotheses advanced in the historical sciences.

As discussed earlier, ID is not inherently supernatural, and so satisfies the PMN requirement. However the now familiar lack of detail about the designer’s capabilities, motives and mechanism certainly casts a pall of suspicion on ID, and seems to be a strategy to avoid violating PMN.

Progressiveness

Does ID demonstrate the progressiveness expected of scientific endeavours? Does it develop theories, solve problems, and discard or modify theories as required?

During the almost thirty years since its inception, ID has produced ideas such as Behe’s IC and Dembski’s specified complexity – flawed concepts, as discussed above, but ones which have generated considerable debate. ID has also produced more sophisticated criticisms of evolutionary theory than its creation science predecessor, such as those by Stephen Meyer.

The primary issue, however, is the failure to develop a comprehensive theory, a situation already highlighted, and also acknowledged by Paul Nelson, a current Fellow of the Discovery Institute, at a meeting at Biola University in 2004: ‘Easily the biggest challenge facing the ID community is to develop a full-fledged theory of biological design. We don’t have such a theory right now, and that’s a problem. Without a theory, it’s very hard to know where to direct your research focus’ (Nelson, 2004).

Donald Prothero agreed, stating that ‘they don’t offer any new scientific ideas or a true alternative theory competing with evolution. All they argue is that some parts of nature seem too complex for them to imagine an evolutionary explanation’ (Prothero, 2010, 418).

In the decade or more since Paul Nelson made the statement above, little seems to have changed. Dembski’s ideas have not been further developed, and he has recently announced that he is abandoning further research in ID. There is no evidence that a detailed theory of design has been developed (Luskin, 2013), and so there seems little empirical content to progress with. As a result, the conclusion is that ID does not satisfy the progressiveness indicator.

Explanatory power

Meyer claims that ‘[ID] provides a better explanation than any competing chemical evolutionary model for the origin of biological information’ (Meyer, 2009, 347). Similarly, Luskin states that ‘[ID] is an explanation of many aspects of the natural world, especially many aspects of biological complexity’ (Luskin, 2011).

However we have already noted the lack of any detail regarding the designer’s capabilities, motives and mechanisms. Essentially, the designer is undefined, and provides no more explanation than positing a deity does. This indicator is also not satisfied.

Peer review

The Discovery Institute maintains a list of ID-related peer-reviewed articles (Discovery Institute, 2016b). Many are not genuine peer-reviewed journal articles, but are book chapters, conference papers, and peer-edited and editor-reviewed articles. Most peer-reviewed articles deal with evolutionary science, highlighting particular issues, but do not mention ID. Articles directly supportive of ID are published in the Discovery Institute-funded open access journal BIO-Complexity.

It seems clear that ID practitioners are engaging in the peer-review process, and are contributing to evolutionary science, even if in a critical way. ID has also succeeded in provoking widespread discussion amongst evolutionary scientists, and although most of this is negative, it is also a form of peer-review.

It is concluded that ID satisfies the peer-review indicator.

Borrows knowledge

Unlike the creation science movement, which borrows knowledge from science only when supportive of its views (e.g. ‘young earth’ creationism’s rejection of radiometric dating), the ID community demonstrates willingness to draw on research from a broad range of scientific fields, including areas within biology, physics and chemistry.

Of course, ID does dispute many of the contentions of evolutionary theory, but this is expected as ID represents an alternative hypothesis. Certainly no basic science is disputed.

Claims to be science

The practitioners of a pseudoscience ‘deliberately attempt to create the impression that it is scientific’ (Hansson, 1996), and this is certainly the case for ID. The Discovery Institute has published numerous articles defending this position, and Stephen Meyer advances a detailed case (Meyer, 2009).

Evaluating Intelligent Design 1

Having allowed that ID is not necessarily religious in nature and is capable of transcending its creationist origins, the task is to evaluate ID according to the proposed strategy outlined earlier. The three necessary criteria will be examined in detail, followed by the four indicators.

Testability

ID’s historical claims will need to be evaluated for testability by examining their causal explanations. Its hypotheses about design detection must also be examined for testability.

Historical testability

The basic hypothesis of ID states that ‘certain features of the universe and of living things are best explained by an intelligent cause’. However, this is a very general hypothesis, too vague to generate meaningful causal explanations. According to Elliott Sober, ‘testing the design hypothesis requires that we have information about the goals and abilities the designer would have’ (Sober, 1999).

Yet ID proponents explicitly deny any knowledge of the designer. Michael Behe states that ‘the reasons that a designer would or would not do anything are virtually impossible to know unless the designer tells you specifically what those reasons are’ (Behe, 1996). Similarly, Stephen Meyer says ID ‘does not claim to be able to determine the identity or any other attributes of that intelligence’ (Meyer, 2009, 428). A corollary is that the designer’s mechanism for creation is also unknown.

This lack of knowledge is used by ID proponents as a defence against the existence of apparently sub-optimal designs in nature – how can we know that a designer would not create them when we know nothing about its intentions? This, however, highlights the difficulty of testing such a minimalist version of intelligent design – what empirical observations would we expect from a designer whose attributes are unknown?

According to Behe and Meyer, ‘the conclusion that something was designed can be made quite independently of knowledge of the designer’ (Behe, 1996, 197), as ‘we know from experience only conscious, intelligent agents produce large amounts of specified information’ (Meyer, 2009, 429).

In terms of historical testing, their basic claim is that intelligent design can be reliably detected, and this is empirical evidence of a design event some time in the past. No auxiliary assumptions about the designer are necessary, and this evidence of design is Cleland’s ‘smoking gun’ that selects the design hypothesis over evolution.

This is a dubious claim, as it depends on an unstated assumption that human intelligence acts in a way analogous to designer intelligence, despite the very significant differences in capabilities and the denial of any knowledge about the designer. It is also vague enough to accommodate sub-optimal designs as products of this intelligence, as well as any other empirical evidence that might be available. There is no indication as to how or why the intelligent designer does what it is claimed to do, and so very little is explained – in fact it provides no more explanation than the claim ‘God did it’ does. It has to be concluded that without a more developed theory, ID is untestable as a historical science.

Testability of design detection using CSI

William Dembski’s complex specified information (CSI) is at the heart of ID’s design detection claims. In terms of testability, the claim is that a biological feature that exhibits CSI has been designed by an intelligent agent.

This use of CSI is undermined by a theoretical flaw that makes the claim difficult to substantiate. Both the explanatory filter and Dembski’s more recent simplified approach of ruling out chance hypotheses (which now include regularities) require that even currently unknown natural laws be rejected before design can be concluded. According to Fitelson, rejecting all possible chance and regularity hypotheses requires ‘a kind of omniscience’ (Fitelson et al, 1999) – both in the estimation of probabilities that are poorly defined, and because of these unknown laws. Given PMN’s presumption of naturalistic explanations, it is hard to see how a conclusion of design could be arrived at unless extraordinary empirical evidence confirms it. ID is not necessarily supernatural, as has been discussed, but it is potentially supernatural and so still requires such evidence.

Is the design claim testable, despite this flaw? In principle, a series of tests could be devised that calculate CSI for human-designed and natural artefacts to validate the Dembski formula for human intelligence. However the auxiliary assumption that a designer would act in a way analogous to human intelligence would be required, as with historical testing. It also does not solve Fitelson’s issue, and so it is difficult to see how any tests can validate CSI’s applicability to design in biological organisms.

Testability of irreducible complexity

The first issue regarding testability is that IC is primarily a test of evolution rather than ID. If a truly IC feature were found, evolution would be falsified – but this would not necessarily rule out some alternative natural mechanism.

A further problem is that without any details of the designer and the mechanism, we cannot know that an intelligent designer would build IC structures. Conversely, an absence of IC features cannot falsify ID: if an evolutionary explanation is later found for a feature claimed to be IC, ID proponents can simply claim that they were mistaken about that feature being IC.

In terms of historical testing, an IC feature would certainly favour the ID hypothesis instead of evolution, but again the PMN requirement means that an extraordinary level of evidence is required for a feature to be considered IC.

How is IC determined? Luskin states that IC ‘can be tested for by reverse-engineering biological structures through genetic knockout experiments to determine if they require all of their parts to function’ (Luskin, 2011). But does requiring all of its parts make a structure IC? A definitive verdict would surely require examining all conceivable evolutionary scenarios for constructing the feature, not merely determining whether all parts are currently required. This would need to include the possibility of exaptation, where the function of a biological trait changes during its evolutionary history. H. Allen Orr complicates things further by suggesting that ‘an irreducibly complex system can be built gradually by adding parts that, while initially just advantageous, become—because of later changes—essential’ (Orr, 1996).
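A naive version of Luskin’s knockout criterion can be sketched in code (a hypothetical illustration only – `parts` and `is_functional` are stand-ins for real experimental observations, which no simple predicate can capture):

```python
def knockout_test(parts, is_functional):
    """Naive 'irreducible complexity' check in the sense Luskin describes:
    the system passes if it functions intact, but removing any single part
    destroys its function."""
    full = set(parts)
    if not is_functional(full):
        return False  # the intact system must work to begin with
    # Knock out each part in turn and check whether function is lost.
    return all(not is_functional(full - {p}) for p in parts)

# Toy example: function requires parts 'a' and 'b'; 'c' is redundant.
needs_a_and_b = lambda s: {'a', 'b'} <= s
print(knockout_test(['a', 'b'], needs_a_and_b))       # True: every part essential
print(knockout_test(['a', 'b', 'c'], needs_a_and_b))  # False: 'c' is removable
```

Even where every part proves essential today, such a test says nothing about indirect evolutionary routes – exaptation and Orr’s scenario of initially advantageous parts later becoming essential are precisely the cases it cannot detect.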

Behe himself notes that ‘an exhaustive consideration of all possible roles for a particular component can’t be done’ (Behe, 1996, 111). Rather, Behe claims that it is extremely unlikely that certain features he has identified as irreducibly complex could have evolved gradually, i.e. his is a probabilistic argument: ‘one can not definitively rule out the possibility of an indirect, circuitous route. As the complexity of an interacting system increases, though, the likelihood of such an indirect route drops precipitously’ (Behe, 1996, 40).

However, Behe’s discussions of supposedly IC features such as blood clotting, the bacterial flagellum and the immune system do not perform any probabilistic calculations (Behe, 1996). They describe the systems in some detail and highlight their complexity, as well as the current lack of knowledge concerning their evolution. How the relevant probabilities could be calculated for an unknown process is not discussed, and it is difficult to see how a conclusive decision could be made. These primarily seem to be arguments from incredulity, and so IC’s testability is problematic.

Testability of evolution criticisms

ID’s various criticisms of evolutionary theory listed above also offer no direct positive evidence for ID, and are again tests of evolution rather than ID.

But when comparing historical hypotheses, criticisms of one hypothesis are indirect support for the other. According to Stephen Meyer, ‘since design hypotheses are often formulated as strong claims about intelligence as the best causal explanation of some particular phenomenon, these hypotheses entail counter-claims about the insufficiency of competing materialistic mechanisms’ (Meyer, 2009, 482). This seems reasonable – provided that ID is providing a causal explanation. But we have already concluded that ID provides little explanation at all, and so this claim seems very weak.

ID’s predictions

Stephen Meyer lists ‘a dozen ID-inspired predictions’ (Meyer, 2009, 482-483). Representative examples are briefly examined below.

One prediction is that instances of ‘bad design’ will eventually show evidence of degenerated original designs, or functional reasons for the supposedly bad design – for example, the backward wiring of the retina.

Another is that the fossil record should show evidence of the sudden appearance of major forms of life, and that limits to the amount of change organisms can undergo will be discovered.

A final example is that ‘no undirected process will demonstrate the capacity to generate 500 bits of new information starting from a non-biological source’, which draws on Dembski’s CSI.
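The 500-bit figure appears to derive from Dembski’s universal probability bound of 1 in 10^150: the information content of an event is the negative base-2 logarithm of its probability, which for that bound comes to just under 500 bits. A quick check:

```python
import math

# Information content (in bits) of an event with probability p is -log2(p).
# Dembski's universal probability bound is 1 in 10^150.
p_bound = 10.0 ** -150
bits = -math.log2(p_bound)
print(round(bits, 1))  # prints 498.3 - roughly the 500-bit threshold
```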

These predictions are not testable. Some (e.g. ‘bad design’) are unfalsifiable because they predict an event vindicating ID at an unspecified future point. Others that would in principle falsify ID (e.g. the generation of 500 bits of new information) cannot easily be tested. And some (e.g. concerning the fossil record) are vague enough that in practice they are not falsifiable.

Conclusion

So is ID testable? Without any knowledge of the designer, there are no specific empirical predictions that can genuinely test the theory. Both CSI and IC have significant theoretical flaws that cast doubt on the reliability of design detection, ID’s central claim. There is also little evidence to suggest that these concepts can be used in practice. Criticisms of evolution are tests of evolutionary theory, not ID. So ID’s key tenets are currently not testable, and we have also concluded that ID is not testable as a historical science. ID fails the stated criteria for science at the first hurdle.

What is Intelligent Design?

A brief history of Intelligent design

ID has its roots in the modern ‘creation science’ movement pioneered in the 1960s by Henry M. Morris and his influential book, The Genesis Flood. During the 1970s and 1980s, creationist organisations vigorously promoted ‘young earth’ creationism – a commitment to a literal interpretation of the six days of creation recorded in Genesis.

In 1987, Edwards v. Aguillard ruled that the teaching of creation science in schools was unconstitutional, as it was a religious doctrine. Subsequently, the ID movement was launched, largely as a result of lawyer Phillip E. Johnson’s leadership and publication of Darwin on Trial. It eloquently repeated popular creationist criticisms of Darwinian evolution, and was critical of what Johnson perceived as science’s presumption of naturalism. Crucially, Johnson chose to avoid contentious issues such as interpreting Genesis and the age of the earth, and focused on a more fundamental issue – is life intelligently designed or the result of blind, natural causes? The main intellectual centre of ID quickly became the non-profit think tank The Discovery Institute.

Kitzmiller v. Dover Area School District was another significant trial in which demarcation was a central issue, this time concerning the teaching of ID in schools. Judge John E. Jones III used methodological naturalism as a key demarcation criterion, and concluded that ID was a religious argument, not science. Despite this setback, ID’s influence continued to grow, aided by the popularity of books such as Signature in the Cell (Meyer, 2009) and Darwin’s Doubt (Meyer, 2013).

What is Intelligent design?

The Discovery Institute defines ID as a theory holding ‘that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection’ (Discovery Institute, 2016a). Implicit in this definition is the assumption that this intelligent cause is not itself biologically related to living things, and is ultimately responsible for their existence.

More specifically, the Discovery Institute states that ID ‘is based on observations of nature which the theory attempts to explain based on what we know about the cause and effect structure of the world and the patterns that generally indicate intelligent causes. Intelligent design is an inference from empirical evidence’ (Meyer, 2008). They explicitly state that ID ‘is a scientific theory that employs the methods commonly used by other historical sciences’ (Discovery Institute, 2016c).

There are two important points worth noting about these definitions. Firstly, they are non-religious, and the intelligent cause is not specified. Secondly, ID purports to be based on empirical evidence, and is explicitly claimed to be scientific.

Key tenets

The vast majority of ID literature concerns three main areas: specified complexity, irreducible complexity, and various anti-evolution arguments.

Specified complexity

William Dembski’s The Design Inference (Dembski, 1998) attempted to place design arguments such as William Paley’s on a firmer footing by proposing a theoretical basis for detecting design. His ‘explanatory filter’ is a three-step procedure for deciding upon the best explanation for an observation (such as seeing Paley’s watch lying on a beach).

The filter first considers whether a regularity (essentially a deterministic consequence of natural law) is responsible for the observation. If this can be ruled out, chance is then considered, and if the probability of the observation being a chance occurrence is sufficiently low, design is concluded to be responsible. Dembski calls the probabilistic threshold for selecting design over chance the ‘universal probability bound’, calculating it to be 1 in 10^150. This very large number is estimated as an upper bound on the number of physical events that could have occurred in the universe since the Big Bang. An observation classified as the result of design is said to exhibit specified complexity, or complex specified information (CSI).
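The filter’s three steps can be sketched as a decision procedure (a hypothetical illustration only – the regularity judgement and the chance probability are inputs that Dembski’s framework leaves to the analyst, and the sketch omits his further requirement that the pattern be ‘specified’, i.e. independently describable):

```python
UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's threshold for rejecting chance

def explanatory_filter(explained_by_regularity, chance_probability):
    """Sketch of Dembski's three-step explanatory filter.
    Step 1: attribute the observation to a regularity if a deterministic
    law accounts for it. Steps 2-3: attribute it to chance unless its
    probability falls below the universal probability bound, in which
    case design is concluded (the observation is said to exhibit CSI)."""
    if explained_by_regularity:
        return "regularity"
    if chance_probability > UNIVERSAL_PROBABILITY_BOUND:
        return "chance"
    return "design"

print(explanatory_filter(True, 1.0))      # prints regularity
print(explanatory_filter(False, 0.3))     # prints chance
print(explanatory_filter(False, 1e-200))  # prints design
```

The sketch makes Fitelson’s objection concrete: the `explained_by_regularity` input quietly assumes we can rule out every natural law, including ones not yet known.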

In more recent work (Dembski, 2005), the explanatory filter appears to have been largely discarded. Dembski collapses the regularity and chance categories into a single chance category, and develops a formula for CSI together with a threshold for choosing design over chance.

Irreducible complexity

The concept of ‘irreducible complexity’ (IC) is the claim that evolution is incapable of producing certain complex biological structures. An IC structure is one form of CSI. In the Origin, Darwin wrote ‘if it could be demonstrated that any complex organ existed which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down’ (Darwin, 1859, 189). Michael Behe applied this idea to biochemical systems.

According to Behe, a system is IC if it is ‘a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning’ (Behe, 1996, 39). Behe claims that such a system cannot have been produced by evolution, as its evolutionary precursor, missing one or more of the parts, would be non-functional.

Behe has proposed numerous biochemical systems as IC, including cell cilia, the bacterial flagellum, and the process of blood clotting.

Criticisms of evolutionary theory

ID’s criticisms of evolutionary theory are widely discussed in ID literature, and are also heavily promoted in the creation science movement – often as fatal flaws that threaten to demolish Darwinism. They include familiar arguments about the lack of intermediate fossils in the fossil record, the rapid appearance of complex lifeforms during the Cambrian explosion, the failure to produce a workable theory for abiogenesis, and cellular complexity.

Is ID creation science or religious?

The close links between creation science and ID have been well documented (Matzke, 2009). The Kitzmiller judgement ruled that ‘ID cannot uncouple itself from its creationist, and thus religious, antecedents’ (Kitzmiller v. Dover Area School District, 2005, 136), reasoning that the designer must be supernatural. This was a major factor in concluding ID was not a science.

This approach seems a dubious basis for demarcation. There are non-supernatural alternatives such as intelligent aliens, or that we are part of a post-human computer simulation. Historically, many sciences have disentangled themselves from questionable origins, and ID proponents have made considerable efforts to do so.

Phillip E. Johnson, in founding ID, made clear the differences between ID and creationism. Stephen Meyer states that ID ‘does not offer an interpretation of the book of Genesis, nor does it posit a theory about the length of the Biblical days of creation or even the age of the earth’ and ‘unlike creationism, is not based upon the Bible’ (Meyer, 2008).

Creation science organisations clearly state that science is subservient to Biblical authority (Creation Ministries International, 2015). This is not the case for the Discovery Institute, and ID’s definition has no religious references – it is ‘not a deduction from religious authority’ (Meyer, 2008). It is true that most of its proponents have a belief in some form of supernatural creator, but this is not a fatal objection – after all, most Renaissance scientists shared this belief.

In the interests of an impartial evaluation, ID will be assumed to be non-religious as claimed. Its scientific status will be evaluated on its current characteristics rather than its origins.

Part 4: Evaluating Intelligent Design – part 1

A demarcation strategy proposal

Overview

To make an impartial judgement on the status of ID as a science or pseudoscience, a demarcation strategy must be adopted. It must allow fields widely accepted as science to be classified as such, and must exclude the most obvious cases of pseudoscience.

The proposal presented here acknowledges the difficulties Laudan pointed out with a necessary and sufficient set of demarcation criteria. However it rejects his conclusion that demarcation is a pseudo-problem, and instead takes the view that while there are no clear-cut demarcation boundaries, demarcation is still possible in many instances.

The proposal suggests three minimal necessary criteria for demarcation, coupled with a variable group of indicators. The necessary criteria are not regarded as sufficient, but serve to draw a relatively uncontroversial boundary around science. The more additional indicators that are satisfied by a field, the more firmly it is placed within the realm of science. It may be that some fields satisfy the necessary criteria, but few if any of the indicators, and it may not be possible to come to a firm conclusion in these cases. But this is not a fatal objection: rather it might be that a field is in the early stages of development as a science and its status may become clearer in the future.

Testability

The first and most important proposed criterion is testability. Science is an empirical investigation about the real world, and so scientific theories must be checked against the real world – they must be testable. Testability is closely related to falsifiability – a field can only be falsified if it can be tested. Additionally, the Duhem–Quine thesis tells us that a theory cannot be tested in isolation – theories require a number of auxiliary assumptions, and these are also tested when a theory is tested. Crucially, there should be independent evidence for auxiliary assumptions, to prevent their ad hoc invention to ensure a theory produces the expected results.

Historical or operational science?

Many creationists and ID supporters have distinguished between ‘historical science’ and ‘operational science’. A detailed case was presented by Geisler and Anderson (Geisler and Anderson, 1987). They claimed that historical sciences investigate phenomena that have occurred in the past (such as a creation event) and are not subject to repeatable experiment, unlike operational (or experimental) science.

This distinction has been used to argue that creationism and ID should not be subject to a testability criterion, and that they have equivalent scientific status to evolutionary theory, another historical science.

There is a valid basis for claims of methodological differences between historical and operational sciences. Stephen Jay Gould made the distinction, as have philosophers of science such as Carol Cleland. Of course, it should be noted that evolutionary theory uses both historical and operational methodologies – experimental evolution is an increasingly important field.

Do these differences grant an exemption from testability? The answer must be no. A science’s theories must be confirmed against the real world, and that can only be performed via some form of testing. As Gould stated: ‘We cannot see a past event directly, but science is usually based on inference, not unvarnished observation. The firm requirement for all science – whether stereotypical or historical – lies in secure testability, not direct observation. History’s richness drives us to different methods of testing, but testability is our criterion as well’ (Gould, 2000, 282).

Similarly, Michael Shermer wrote that ‘we cannot rerun the past, alter a variable here or there, and then observe the effects. This does not mean, however, that we cannot make causal inferences from what has already transpired’ (Shermer, 2002, 317).

As Gould and Shermer note, testing in the historical sciences must be performed differently to testing in operational science. Both Forber et al and Cleland argue that multiple competing hypotheses must be considered, and the hypothesis providing the best causal explanation selected. Forber et al argue for multiple independent lines of evidence that converge on a hypothesis’ estimates (Forber et al, 2011).

According to Cleland, predictions made by historical hypotheses are usually too vague to fail, and function more as educated guesses. Instead, ‘hypotheses are accepted and rejected by virtue of their power to explain as opposed to predict the evidence that supports them’. The assumption is that ‘seemingly improbable associations among present-day traces of the past are best explained in terms of a common cause’ (Cleland, 2013). So the process is to compare common cause hypotheses on their ability to explain all the available evidence, hoping for ‘smoking gun’ evidence that discriminates between them.

In the historical sciences, then, testing proceeds by comparing multiple hypotheses to see which provides the best common cause explanation for the available empirical evidence. Is ID a historical science? Certainly, many of ID’s claims concern historical events, particularly those pertaining to criticisms of evolutionary theory. Examples include claims about the lack of intermediate fossils, and Stephen Meyer’s argument about the Cambrian explosion (Meyer, 2013), which attempts to provide a competing explanation to evolutionary theory. However ID also makes claims about being able to detect design in living organisms. While the initial design event may be regarded as a historical claim based far in the past, it is very much a matter of operational science to demonstrate the veracity of design detection techniques.

This means investigating testability will need to consider testing from both a historical and operational point of view, depending on the claims being made.

Empirical adequacy

The second necessary criterion is empirical adequacy. A theory is empirically adequate if its predictions approximate relevant observable aspects of the world. Empirical adequacy is strongly related to testability. If theories are not testable, they will automatically fail to be empirically adequate. Testability is not sufficient, though – theories must be confirmed by successful testing to be empirically adequate.

Of course, testing will not always be successful. Test failures cannot be ignored – they must be acknowledged and eventually resolved. By contrast, pseudosciences are ‘selective in considering confirmations and disconfirmations’ (Thagard, 1978).

Methodological naturalism

The final necessary criterion is the presumption of methodological naturalism (MN) – the principle that ‘all hypotheses and events are to be explained and tested by reference to natural causes and events’ (Kurtz, 1998). MN is widely considered to be a working assumption of science, and science’s successes based on MN are a strong inductive justification for requiring it as a necessary feature of doing science.

An important question is whether MN should permit the investigation of supernatural phenomena. In the context of creation science and ID, this is a pertinent question, as creation science explicitly appeals to the supernatural (Creation Ministries International, 2015). MN was a key demarcation criterion used in numerous court cases to reject creation science and subsequently ID because of their apparent reliance upon the supernatural.

Restricting science to natural phenomena is problematic in several ways. Firstly, the concept of the supernatural is ill-defined. Secondly, there seems to be no a priori reason why supernatural events could not exert a causal influence on the physical, and if we suspected that they did, we would surely want to investigate. Finally, there have been numerous attempts to investigate a variety of supernatural phenomena by scientific means, e.g. Benson et al’s study on the effects of intercessory prayer (Benson et al., 2006), and this implies that the supernatural is amenable to scientific investigation.

Boudry et al argue that rather than considering MN as an intrinsic limitation of science, it should be a ‘provisory and empirically grounded commitment to naturalistic causes and explanations, which in principle is revocable by extraordinary empirical evidence’ (Boudry et al, 2010). They justify this inductively, based on ‘the pattern of consistent success of naturalistic explanations’ (Boudry et al, 2010), and label this Provisory (or Pragmatic) Methodological Naturalism (PMN). PMN does not exclude the supernatural by definition, accepting that supernatural forces ‘would have empirically detectable consequences, and these are in principle open to scientific investigation’ (Boudry et al, 2012).

This proposal suggests adopting PMN as a necessary feature of science. It means that supernatural explanations cannot be excluded from science by philosophical decree, but naturalistic explanations are to be preferred unless ‘extraordinary empirical evidence’ is brought to light.

This has implications for testing in the historical sciences, implying that extraordinary ‘smoking gun’ evidence would be required to prefer a supernatural hypothesis over a naturalistic hypothesis.

Additional indicators

Four optional indicators have been chosen that are characteristic of fields that have undisputed scientific status, such as physics, chemistry and biology. Note that some of the indicators are not uniquely associated with scientific fields, but are also characteristic of other academic fields.

Progressiveness

The first indicator is progressiveness. Thagard states that for a theory or discipline to be pseudoscience it ‘has been less progressive than alternative theories over a long period of time, and faces many unsolved problems … the community of practitioners makes little attempt to develop the theory towards solutions of the problems’ (Thagard, 1978). He defines progressiveness as ‘a matter of the success of the theory in adding to its set of facts explained and problems solved’ (Thagard, 1978).

A progressive field continually identifies anomalies and shortcomings of its theories and actively works to resolve them. It develops, modifies and discards theories as necessary – this is sometimes known as revisability, and also implies that theories are tentative.

If an alternative theory exists, comparative progressiveness becomes important. A pseudoscience’s practitioners may doggedly persist for many years while an alternative theory solves problems they continue to struggle with.

Progressiveness is an indicator rather than a necessary criterion because many theories we regard as scientific have been superseded and are no longer progressive. For example, Newtonian gravity was superseded by general relativity, while steady-state cosmology was superseded by big bang cosmology.

Explanatory power

Theories should explain why things are as they are. Typically, this should be by reference to natural laws, although adopting PMN means this is not absolute.

Peer review

Scientists generally share the results of their research – informally, at conferences, and by publication in peer-reviewed journals. This allows their claims to be scrutinised by the broader scientific community.

Borrowing knowledge

Scientists borrow knowledge from adjacent disciplines relevant to their research. They do not repudiate or ignore knowledge that tends to disconfirm their theories.

Summary of proposal

The proposed demarcation strategy thus consists of evaluating a field against the three necessary criteria of testability, empirical adequacy and PMN, and the optional indicators of progressiveness, explanatory power, peer review, and knowledge borrowing.

Finally, to be considered pseudoscience, a field must both fail the above demarcation test and meet Hansson’s criterion of deliberately masquerading as science.
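The decision procedure above can be expressed as a simple classification function. The following Python sketch is a hypothetical illustration only – the attribute names and the indicator threshold of two are assumptions made for the sketch, not part of the proposal itself:

```python
# Hypothetical sketch of the proposed demarcation test.
# All three necessary criteria (testability, empirical adequacy, PMN) must
# hold for a field to count as science; the optional indicators then grade
# how firmly it sits within science. A field is pseudoscience only if it
# fails the test AND masquerades as science (Hansson's criterion).

from dataclasses import dataclass

@dataclass
class Field:
    name: str
    testable: bool
    empirically_adequate: bool
    respects_pmn: bool
    indicators: dict          # e.g. {"progressiveness": True, ...}
    masquerades_as_science: bool

def classify(field: Field) -> str:
    meets_necessary = (field.testable and field.empirically_adequate
                       and field.respects_pmn)
    if not meets_necessary:
        # Failing the demarcation test alone only makes a field non-science;
        # the masquerading condition is what makes it pseudoscience.
        return "pseudoscience" if field.masquerades_as_science else "non-science"
    satisfied = sum(field.indicators.values())
    # Few or no indicators satisfied: possibly an early-stage proto-science.
    # The threshold of 2 is an arbitrary choice for this sketch.
    return "science" if satisfied >= 2 else "undetermined (possible proto-science)"

astrology = Field(
    "astrology", testable=True, empirically_adequate=False, respects_pmn=False,
    indicators={"progressiveness": False, "explanatory_power": False,
                "peer_review": False, "borrows_knowledge": True},
    masquerades_as_science=True)
print(classify(astrology))  # prints 'pseudoscience'
```

The point of the sketch is the asymmetry it encodes: the necessary criteria act as a hard filter, while the indicators only shade the verdict within science, and pseudoscience requires the additional masquerading condition.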

Evaluating the proposal

The strategy will be briefly evaluated against a field widely thought to be a pseudoscience – astrology. As a comparison, quantum field theory, an undisputed scientific field, is also evaluated.

Astrology

Astrology is somewhat testable, as evidenced by the work of statistician Michel Gauquelin, and more recently of Carlson (Carlson, 1985). However it is not empirically adequate, having failed every rigorous test to which it has been subjected. Given that no mechanism is suggested, the PMN criterion is difficult to assess.

Astrology fails to meet most of the proposed indicators. It is not progressive or explanatory – according to Thagard astrology ‘has changed little and has added nothing to its explanatory power since the time of Ptolemy’ (Thagard, 1978). There is no active research community or peer-reviewed publication record. Astrology does borrow some knowledge from astronomy, and this criterion is satisfied in a weak sense.

Based on the necessary criteria and the indicators, astrology fails the demarcation test, which firmly places it outside the realm of science.

Quantum field theory

Quantum field theory (QFT) is a framework for elementary particle physics. The Standard Model is an extremely successful QFT which has been tested extensively and confirmed to an extraordinary degree of accuracy, thus satisfying the necessary criteria. The Standard Model has progressed significantly over the last 60 years or more, it has broad explanatory power, there is a vast research community generating thousands of peer-reviewed articles, and it draws on many different areas of physics. It is science.

Conclusion

In each of the examples, the proposed demarcation strategy performs as expected. Astrology is decisively classified as non-science, and QFT is clearly shown to be science.

Part 3: What is Intelligent Design?

What is the demarcation problem?

The demarcation problem is a long-standing philosophical issue of how to distinguish (or demarcate) science from non-science. Demarcation dates back to the early Greek philosophers, and has been a central and problematic issue in philosophy of science for the last fifty years or more.

Why is demarcation such a difficult problem? One reason is that science is a heterogeneous, moving target, developing and changing significantly over time. And yet paradoxically, despite diverse views on the nature of science and demarcation criteria, there is broad agreement on most concrete demarcation cases, suggesting that demarcation is achievable.

Often, we also want to know if a field is a pseudoscience – non-science masquerading as science. Science has a privileged position in modern society, and those practising pseudoscience desire these privileges. Of course, they may also genuinely believe they are practising science.

Creation science is one field widely regarded by philosophers of science and scientists as a pseudoscience. Its proponents claim to provide scientific evidence for a Biblical creation account. Intelligent design (ID), the subject of this series of posts, is a recent derivative of creation science that attempts to place itself more firmly within the realm of science.

The importance of demarcation

Obviously, it is important for philosophers of science that they are able to characterise their own subject matter.

However demarcation is more than a philosophical problem, as modern society not only privileges but depends on scientific knowledge in various ways. For example, we need to be able to distinguish effective medical treatments from quack remedies; we have limited public funding for scientific research and would like to allocate funds appropriately; we need to make political decisions on scientific issues such as climate change. Science is also a central subject in our education systems, and decisions must be made as to what topics should be included in science curricula.

Consequently, society needs to make informed judgements about what is and is not science. In all of the above cases, the critical demarcation issue is between science and pseudoscience, and that is the primary focus of this series as applied to ID.

A brief history of demarcation

Aristotle was the first philosopher to attempt to comprehensively describe scientific knowledge, but it was not until the rise of logical positivism in the twentieth century that demarcation became a central concern of philosophy of science. The logical positivists held that a claim was scientific only if it was empirically meaningful, which in turn required empirical verifiability.

Karl Popper rejected verificationism, instead proposing falsifiability as the sole demarcation criterion. Thomas Kuhn thought that science in its ‘normal’ phase resisted falsification, and was characterised by routine puzzle-solving, while Imre Lakatos and Paul Thagard proposed progressiveness as a demarcation criterion.

In the 1970s and 1980s, a variety of multi-criteria demarcation proposals emerged, with criteria including being guided by natural law, testability, tentativeness and falsifiability. In 1983, Larry Laudan famously announced the demise of the demarcation problem, labelling it a ‘pseudoproblem’ that is ‘uninteresting and […] intractable’ (Laudan, 1983). Laudan complained that there are no necessary and sufficient conditions adequate for demarcation, as they invariably misclassify some fields. Work on demarcation subsequently stalled until a recent resurgence of interest, particularly with regard to distinguishing science from pseudoscience.

Hansson noted that a pseudoscience will ‘deliberately attempt to create the impression that it is scientific’ (Hansson, 1996). Pigliucci criticised the desire for a set of necessary and sufficient conditions (Pigliucci, 2013), noting that the demarcation problem is not sharply delineated. Both Pigliucci and Dupré (Dupré, 1993, 242) proposed Wittgenstein’s family resemblances as a basis for demarcation. Rather than a common set of compulsory criteria, family resemblance concepts are connected by overlapping similarities that may not apply to every instance. Pigliucci suggested evaluating disciplines based on their varying degrees of theoretical soundness and empirical support, with no sharp demarcation line.

Similarly, Mahner argued for a variable cluster of indicators that characterise science, requiring a certain number to apply for a field to be regarded as scientific (Mahner, 2013). He suggested there may be up to fifty relevant indicators.

Part 2: A demarcation strategy proposal

Intelligent design: science or pseudoscience?

Society privileges science in various ways, particularly in education, and consequently it is important to be able to distinguish between science and non-science – an issue known as the demarcation problem in philosophy of science.

Some non-sciences such as astrology and homeopathy deliberately masquerade as science, and are known as pseudosciences.

This series of posts considers the status of intelligent design (ID), a derivative of creation science. Creation science claims to be science, but is widely considered to be pseudoscience, primarily because of its insistence on the primacy of the Bible. ID is an attempt to adopt a more scientific approach.

To evaluate ID’s status, a demarcation strategy is proposed and an evaluation based on the strategy conducted.

In keeping with the ethos of the Philosophical Apologist, it was important that the evaluation be as impartial as possible. No final judgement was made until the research for this series was completed.

An outline of this series is shown below.

Part 1:  What is the demarcation problem?

Part 2: A demarcation strategy proposal

Part 3: What is Intelligent Design?

Part 4: Evaluating Intelligent Design – part 1

Part 5: Evaluating Intelligent Design – part 2

Part 6: Is Intelligent Design science?

Part 7: Does it matter if ID isn’t science?