Having allowed that ID is not necessarily religious in nature and is capable of transcending its creationist origins, the task is to evaluate ID according to the proposed strategy outlined earlier. The three necessary criteria will be examined in detail, followed by the four indicators.
ID’s historical claims will need to be evaluated for testability by examining their causal explanations. Its hypotheses about design detection must also be examined for testability.
The basic hypothesis of ID states that ‘certain features of the universe and of living things are best explained by an intelligent cause’. However, this hypothesis is too general and vague to generate meaningful causal explanations. According to Elliott Sober, ‘testing the design hypothesis requires that we have information about the goals and abilities the designer would have’ (Sober, 1999).
Yet ID proponents explicitly deny any knowledge of the designer. Michael Behe states that ‘the reasons that a designer would or would not do anything are virtually impossible to know unless the designer tells you specifically what those reasons are’ (Behe, 1996). Similarly, Stephen Meyer says ID ‘does not claim to be able to determine the identity or any other attributes of that intelligence’ (Meyer, 2009, 428). A corollary is that the designer’s mechanism for creation is also unknown.
This lack of knowledge is used by ID proponents as a defence against the existence of apparently sub-optimal designs in nature – how can we know that a designer would not create them when we know nothing about its intentions? This, however, highlights the difficulty of testing such a minimalist version of intelligent design – what empirical observations would we expect from a designer whose attributes are unknown?
According to Behe and Meyer, ‘the conclusion that something was designed can be made quite independently of knowledge of the designer’ (Behe, 1996, 197), as ‘we know from experience only conscious, intelligent agents produce large amounts of specified information’ (Meyer, 2009, 429).
In terms of historical testing, their basic claim is that intelligent design can be reliably detected, and this is empirical evidence of a design event some time in the past. No auxiliary assumptions about the designer are necessary, and this evidence of design is Cleland’s ‘smoking gun’ that selects the design hypothesis over evolution.
This is a dubious claim, as it depends on an unstated assumption that human intelligence acts in a way analogous to designer intelligence, despite the very significant differences in capabilities and the denial of any knowledge about the designer. The claim is also vague enough to accommodate sub-optimal designs as products of this intelligence, along with any other empirical evidence that might be available. There is no indication as to how or why the intelligent designer does what it is claimed to do, and so very little is explained – in fact it provides no more explanation than the claim ‘God did it’ does. It has to be concluded that without a more developed theory, ID is untestable as a historical science.
Testability of design detection using CSI
William Dembski’s complex specified information (CSI) is at the heart of ID’s design detection claims. In terms of testability, the claim is that a biological feature that exhibits CSI has been designed by an intelligent agent.
This use of CSI is undermined by a theoretical flaw that makes the claim difficult to substantiate. Both the explanatory filter and Dembski’s more recent simplified approach of ruling out chance hypotheses (which now include regularities) require that currently unknown natural laws are rejected in order to conclude design. According to Fitelson, rejecting all possible chance and regularity hypotheses requires ‘a kind of omniscience’ (Fitelson et al., 1999) – both in the estimation of probabilities that are poorly defined, and because of these unknown laws. Given PMN’s presumption of naturalistic explanations, it is hard to see how a conclusion of design could be arrived at unless extraordinary empirical evidence confirms it. ID is not necessarily supernatural, as has been discussed, but it is potentially supernatural and so still requires such evidence.
Is the design claim testable, despite this flaw? In principle, a series of tests could be devised that calculate CSI for human-designed and natural artefacts in order to validate the Dembski formula for human intelligence. However, the auxiliary assumption that a designer acts in a way analogous to human intelligence would still be required, as with historical testing. Nor does this resolve Fitelson’s issue, and so it is difficult to see how any tests could validate CSI’s applicability to design in biological organisms.
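Dembski’s threshold is usually stated as 500 bits of information, corresponding to a probability below roughly 1 in 10^150 under the relevant chance hypothesis. The arithmetic itself is simple, as the following sketch shows (the function names and the uniform-chance protein model are my own, purely illustrative; the hard part the text identifies is hidden in the single `probability` argument):

```python
from math import log2

# Dembski's 'universal probability bound': events whose probability under
# every chance hypothesis falls below ~1 in 10^150 (about 500 bits) are,
# on his account, attributed to design.
UNIVERSAL_BOUND_BITS = 500

def information_bits(probability: float) -> float:
    """Shannon surprisal of an event: -log2(p)."""
    return -log2(probability)

def exhibits_csi(probability: float,
                 bits_threshold: float = UNIVERSAL_BOUND_BITS) -> bool:
    """Toy design inference: flag an event as 'designed' if its probability
    under the single chance hypothesis supplied exceeds the bound.
    The step Fitelson criticises is invisible here: a real inference would
    need this probability under *all* chance and regularity hypotheses,
    including currently unknown natural laws."""
    return information_bits(probability) > bits_threshold

# A naive uniform-chance model of a 130-residue protein, with 20
# equiprobable amino acids per site - exactly the kind of poorly
# defined probability estimate the text questions:
p = 20.0 ** -130
print(information_bits(p))   # ~562 bits
print(exhibits_csi(p))       # True under this (contestable) chance model
```

The sketch makes the structural point concrete: the verdict is entirely determined by the chance model fed in, so the calculation is only as testable as that model.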
Testability of irreducible complexity
The first issue regarding testability is that irreducible complexity (IC) is primarily a test of evolution rather than of ID. If a truly IC feature were found, evolution would be falsified – but this would not necessarily rule out an alternative natural mechanism.
A further problem is that without any details of the designer and the mechanism, we cannot know that an intelligent designer would build IC structures. Nor can IC falsify ID: if an evolutionary explanation is later found for a feature claimed to be IC, ID proponents can simply claim that they were mistaken about this feature being IC.
In terms of historical testing, an IC feature would certainly favour the ID hypothesis instead of evolution, but again the PMN requirement means that an extraordinary level of evidence is required for a feature to be considered IC.
How is IC determined? Luskin states that IC ‘can be tested for by reverse-engineering biological structures through genetic knockout experiments to determine if they require all of their parts to function’ (Luskin, 2011). But does passing such a test mean a structure is IC? A definitive answer would surely involve examining all conceivable evolutionary scenarios for constructing the feature, rather than just determining whether all parts are currently required. This would need to include the possibility of exaptation, where the function of a biological trait changes during its evolutionary history. H. Allen Orr complicates things further by suggesting ‘an irreducibly complex system can be built gradually by adding parts that, while initially just advantageous, become— because of later changes—essential’ (Orr, 1996).
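Orr’s scenario can be made concrete with a toy model (my own construction, not drawn from Orr or Behe): a system built by strictly advantageous steps can end up passing a knockout test for IC, because a later change makes an initially optional part essential.

```python
# Toy model of Orr's scenario. Part A alone performs the function; part B
# is added as a merely advantageous extra; a later modification of A
# makes it depend on B. The final system then fails when either part is
# knocked out, i.e. it looks irreducibly complex - yet it was built
# gradually, with every intermediate step functional.

def functional(parts: set, a_needs_b: bool) -> bool:
    """Does the toy system perform its function?"""
    if 'A' not in parts:
        return False
    if a_needs_b and 'B' not in parts:
        return False  # A has since been modified to depend on B
    return True

# Step 1: A alone does the job.
assert functional({'A'}, a_needs_b=False)
# Step 2: B is added; the system still works without it.
assert functional({'A', 'B'}, a_needs_b=False)
assert functional({'A'}, a_needs_b=False)
# Step 3: A changes in a way that exploits B's presence. A knockout
# experiment on the *final* system now finds every part essential:
final = {'A', 'B'}
assert functional(final, a_needs_b=True)
assert not functional(final - {'B'}, a_needs_b=True)  # knock out B: broken
assert not functional(final - {'A'}, a_needs_b=True)  # knock out A: broken
print("passes the knockout test for IC, yet was assembled gradually")
```

The model illustrates why a knockout experiment establishes only that all parts are required now, not that the structure could not have arisen by an indirect evolutionary route.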
Behe himself notes that ‘an exhaustive consideration of all possible roles for a particular component can’t be done’ (Behe, 1996, 111). Rather, Behe is claiming that it is extremely unlikely that certain features he has identified as irreducibly complex could have evolved gradually, i.e. his is a probabilistic argument: ‘one can not definitively rule out the possibility of an indirect, circuitous route. As the complexity of an interacting system increases, though, the likelihood of such an indirect route drops precipitously’ (Behe, 1996, 40).
However, Behe’s discussions of supposedly IC features such as blood clotting, the bacterial flagellum and the immune system do not perform any probabilistic calculations (Behe, 1996). They describe the systems in some detail and highlight their complexity, as well as the current lack of knowledge concerning their evolution. How the relevant probabilities could be calculated for an unknown process is not discussed, and it is difficult to see how a conclusive decision could be made. These primarily seem to be arguments from incredulity, and so IC’s testability is problematic.
Testability of evolution criticisms
ID’s various criticisms of evolutionary theory listed above also offer no direct positive evidence for ID, and are again tests of evolution rather than ID.
But when comparing historical hypotheses, criticisms of one hypothesis are indirect support for the other. According to Stephen Meyer, ‘since design hypotheses are often formulated as strong claims about intelligence as the best causal explanation of some particular phenomenon, these hypotheses entail counter-claims about the insufficiency of competing materialistic mechanisms’ (Meyer, 2009, 482). This seems reasonable – provided that ID is providing a causal explanation. But we have already concluded that ID provides little explanation at all, and so this claim seems very weak.
Stephen Meyer lists ‘a dozen ID-inspired predictions’ (Meyer, 2009, 482-483). Representative examples are briefly examined below.
One prediction is that instances of ‘bad design’ will eventually show evidence of degenerated original designs, or functional reasons for the supposedly bad design – for example, the backward wiring of the retina.
Another is that the fossil record should show evidence of sudden appearances of major forms of life, and that limits to the amount of change organisms can undergo will be discovered.
A final example is that ‘no undirected process will demonstrate the capacity to generate 500 bits of new information starting from a non-biological source’, which draws on Dembski’s CSI.
These predictions are not testable. Some (e.g. ‘bad design’) are not falsifiable, in that they predict an event vindicating ID at an unspecified future point. Others specify events that would falsify ID (e.g. an undirected process generating 500 bits of new information) but cannot easily be tested in practice. And some (e.g. those concerning the fossil record) are vague enough that in practice they are not falsifiable.
So is ID testable? Without any knowledge of the designer, there are no specific empirical predictions that can genuinely test the theory. Both CSI and IC have significant theoretical flaws that cast doubt on the reliability of design detection, which is ID’s key claim. There is also little evidence to suggest that these concepts can be used in practice. Criticisms of evolution are tests of evolutionary theory, not of ID. So ID’s key tenets are currently not testable, and we have also concluded that ID is not testable as a historical science. ID fails the stated criteria for science at the first hurdle.