A demarcation strategy proposal

Overview

To make an impartial judgement on the status of ID as a science or pseudoscience, a demarcation strategy must be adopted. It must allow fields widely accepted as science to be classified as such, and must exclude the most obvious cases of pseudoscience.

The proposal presented here acknowledges the difficulties Laudan pointed out with a necessary and sufficient set of demarcation criteria. However it rejects his conclusion that demarcation is a pseudo-problem, and instead takes the view that while there are no clear-cut demarcation boundaries, demarcation is still possible in many instances.

The proposal suggests three minimal necessary criteria for demarcation, coupled with a variable group of indicators. The necessary criteria are not regarded as sufficient, but serve to draw a relatively uncontroversial boundary around science. The more additional indicators that are satisfied by a field, the more firmly it is placed within the realm of science. It may be that some fields satisfy the necessary criteria, but few if any of the indicators, and it may not be possible to come to a firm conclusion in these cases. But this is not a fatal objection: rather it might be that a field is in the early stages of development as a science and its status may become clearer in the future.

Testability

The first and most important proposed criterion is testability. Science is an empirical investigation about the real world, and so scientific theories must be checked against the real world – they must be testable. Testability is closely related to falsifiability – a field can only be falsified if it can be tested. Additionally, the Duhem–Quine thesis tells us that a theory cannot be tested in isolation – theories require a number of auxiliary assumptions, and these are also tested when a theory is tested. Crucially, there should be independent evidence for auxiliary assumptions, to prevent their ad hoc invention to ensure a theory produces the expected results.

Historical or operational science?

Many creationists and ID supporters have distinguished between ‘historical science’ and ‘operational science’. A detailed case was presented by Geisler and Anderson (Geisler and Anderson, 1987). They claimed historical sciences investigate phenomena that occurred in the past (such as a creation event) and are not subject to repeatable experiment, unlike operational (or experimental) science.

This distinction has been used to argue that creationism and ID should not be subject to a testability criterion, and that they have equivalent scientific status to evolutionary theory, another historical science.

There is a valid basis for claims of methodological differences between the historical and operational sciences. Stephen Jay Gould made this distinction, as have philosophers of science such as Carol Cleland. It should be noted, though, that evolutionary theory uses both historical and operational methodologies – experimental evolution is an increasingly important field.

Do these differences grant an exemption from testability? The answer must be no. A science’s theories must be confirmed against the real world, and that can only be performed via some form of testing. As Gould stated: ‘We cannot see a past event directly, but science is usually based on inference, not unvarnished observation. The firm requirement for all science – whether stereotypical or historical – lies in secure testability, not direct observation. History’s richness drives us to different methods of testing, but testability is our criterion as well’ (Gould, 2000, 282).

Similarly, Michael Shermer wrote that ‘we cannot rerun the past, alter a variable here or there, and then observe the effects. This does not mean, however, that we cannot make causal inferences from what has already transpired’ (Shermer, 2002, 317).

As Gould and Shermer note, testing in the historical sciences must be performed differently from testing in the operational sciences. Both Forber et al and Cleland argue that multiple competing hypotheses must be considered, and the hypothesis providing the best causal explanation selected. Forber et al argue for multiple independent lines of evidence that converge on a hypothesis’s estimates (Forber et al, 2011).

According to Cleland, predictions made by historical hypotheses are usually too vague to fail, and function more as educated guesses. Instead, ‘hypotheses are accepted and rejected by virtue of their power to explain as opposed to predict the evidence that supports them’. The assumption is that ‘seemingly improbable associations among present-day traces of the past are best explained in terms of a common cause’ (Cleland, 2013). So the process is to compare between common cause hypotheses for their ability to explain all the available evidence, hoping for ‘smoking gun’ evidence that discriminates between them.
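
As a rough illustration of this comparative style of testing, the Python sketch below scores hypotheses by how much of the available evidence each explains and flags ‘smoking gun’ traces that only one hypothesis accounts for. The data structures, the scoring rule and the example hypotheses are assumptions made purely for illustration, not a formalisation offered by Cleland or Forber et al.

```python
# Illustrative sketch only: a toy model of comparing common-cause hypotheses
# by how much of the available evidence each explains, and of identifying
# 'smoking gun' evidence that only one hypothesis accounts for.

from dataclasses import dataclass


@dataclass
class Hypothesis:
    name: str
    explains: set[str]  # present-day traces of the past the hypothesis explains


def best_explanation(hypotheses: list[Hypothesis], evidence: set[str]) -> Hypothesis:
    """Prefer the hypothesis that explains the most available evidence."""
    return max(hypotheses, key=lambda h: len(h.explains & evidence))


def smoking_guns(hypotheses: list[Hypothesis], evidence: set[str]) -> dict[str, str]:
    """Evidence explained by exactly one hypothesis discriminates between them."""
    guns = {}
    for trace in evidence:
        explainers = [h.name for h in hypotheses if trace in h.explains]
        if len(explainers) == 1:
            guns[trace] = explainers[0]
    return guns


# Example: the iridium anomaly as a discriminating trace for the impact
# hypothesis of the end-Cretaceous extinction.
impact = Hypothesis("asteroid impact", {"iridium layer", "mass extinction", "impact crater"})
volcanism = Hypothesis("volcanism", {"mass extinction", "flood basalts"})
evidence = {"iridium layer", "mass extinction"}

print(best_explanation([impact, volcanism], evidence).name)  # asteroid impact
print(smoking_guns([impact, volcanism], evidence))           # {'iridium layer': 'asteroid impact'}
```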

In the historical sciences, then, testing proceeds by comparing multiple hypotheses to see which provides the best common cause explanation for the available empirical evidence. Is ID a historical science? Certainly, many of ID’s claims concern historical events, particularly those pertaining to criticisms of evolutionary theory. Examples include claims about the lack of intermediate fossils, and Stephen Meyer’s argument about the Cambrian explosion (Meyer, 2013), which attempts to provide a competing explanation to evolutionary theory. However, ID also makes claims about being able to detect design in living organisms. While the initial design event may be regarded as a historical claim located far in the past, it is very much a matter of operational science to demonstrate the veracity of design detection techniques.

This means investigating testability will need to consider testing from both a historical and operational point of view, depending on the claims being made.

Empirical adequacy

The second necessary criterion is empirical adequacy. A theory is empirically adequate if its predictions approximate relevant observable aspects of the world. Empirical adequacy is strongly related to testability. If theories are not testable, they will automatically fail to be empirically adequate. Testability is not sufficient, though – theories must be confirmed by successful testing to be empirically adequate.

Of course, testing will not always be successful. Test failures cannot be ignored – they must be acknowledged and eventually resolved. By contrast, pseudosciences are ‘selective in considering confirmations and disconfirmations’ (Thagard, 1978).

Methodological naturalism

The final necessary criterion is the presumption of methodological naturalism (MN) – the principle that ‘all hypotheses and events are to be explained and tested by reference to natural causes and events’ (Kurtz, 1998). MN is widely considered to be a working assumption of science, and science’s successes based on MN are a strong inductive justification for requiring it as a necessary feature of doing science.

An important question is whether MN should permit the investigation of supernatural phenomena. In the context of creation science and ID, this is a pertinent question, as creation science explicitly appeals to the supernatural (Creation Ministries International, 2015). MN was a key demarcation criterion used in numerous court cases to reject creation science and subsequently ID because of their apparent reliance upon the supernatural.

Restricting science to natural phenomena is problematic in several ways. Firstly, the concept of the supernatural is ill-defined. Secondly, there seems to be no a priori reason why supernatural events could not exert a causal influence on the physical, and if we suspected that they did, we would surely want to investigate. Finally, there have been numerous attempts to investigate a variety of supernatural phenomena by scientific means, e.g. Benson et al’s study on the effects of intercessory prayer (Benson et al., 2006), and this implies that the supernatural is amenable to scientific investigation.

Boudry et al argue that rather than considering MN as an intrinsic limitation of science, it should be a ‘provisory and empirically grounded commitment to naturalistic causes and explanations, which in principle is revocable by extraordinary empirical evidence’ (Boudry et al, 2010). They justify this inductively, based on ‘the pattern of consistent success of naturalistic explanations’ (Boudry et al, 2010), and label this Provisory (or Pragmatic) Methodological Naturalism (PMN). PMN does not exclude the supernatural by definition, accepting that supernatural forces ‘would have empirically detectable consequences, and these are in principle open to scientific investigation’ (Boudry et al, 2012).

This proposal suggests adopting PMN as a necessary feature of science. It means that supernatural explanations cannot be excluded from science by philosophical decree, but naturalistic explanations are to be preferred unless ‘extraordinary empirical evidence’ is brought to light.

This has implications for testing in the historical sciences, implying that extraordinary ‘smoking gun’ evidence would be required to prefer a supernatural hypothesis over a naturalistic hypothesis.

Additional indicators

Four optional indicators have been chosen that are characteristic of fields that have undisputed scientific status, such as physics, chemistry and biology. Note that some of the indicators are not uniquely associated with scientific fields, but are also characteristic of other academic fields.

Progressiveness

The first indicator is progressiveness. Thagard states that for a theory or discipline to be pseudoscience it ‘has been less progressive than alternative theories over a long period of time, and faces many unsolved problems … the community of practitioners makes little attempt to develop the theory towards solutions of the problems’ (Thagard, 1978). He defines progressiveness as ‘a matter of the success of the theory in adding to its set of facts explained and problems solved’ (Thagard, 1978).

A progressive field continually identifies anomalies and shortcomings of its theories and actively works to resolve them. It develops, modifies and discards theories as necessary – this is sometimes known as revisability, and also implies that theories are tentative.

If an alternative theory exists, comparative progressiveness becomes important. A pseudoscience’s practitioners may persist doggedly for many years while an alternative theory solves the problems they continue to struggle with.

Progressiveness is an indicator rather than a necessary criterion because many theories we regard as scientific have been superseded and are no longer progressive. For example, Newtonian gravity was superseded by general relativity, while steady-state cosmology was superseded by big bang cosmology.

Explanatory power

Theories should explain why things are as they are. Typically, this should be by reference to natural laws, although adopting PMN means this is not absolute.

Peer review

Scientists generally share the results of their research – informally, at conferences, and by publication in peer-reviewed journals. This allows their claims to be scrutinised by the broader scientific community.

Borrowing knowledge

Scientists borrow knowledge from adjacent disciplines relevant to their research. They do not repudiate or ignore knowledge that tends to disconfirm their theories.

Summary of proposal

The proposed demarcation strategy thus consists of evaluating a field against the three necessary criteria of testability, empirical adequacy and PMN, and the optional indicators of progressiveness, explanatory power, peer review, and knowledge borrowing.

Finally, to be considered pseudoscience, a field must both fail the above demarcation test and meet Hansson’s criterion of deliberately masquerading as science.
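
As a rough illustration of how the strategy might be applied in practice, the Python sketch below treats each necessary criterion as a boolean and counts the optional indicators. The indicator threshold and the example values are illustrative assumptions; the proposal itself specifies no precise cut-off.

```python
# A minimal sketch of the proposed demarcation strategy, treating each
# necessary criterion as a boolean and counting the optional indicators.
# The numerical threshold and the example values are assumptions made for
# illustration; the proposal itself sets no precise cut-off.

from dataclasses import dataclass


@dataclass
class Field:
    name: str
    testable: bool
    empirically_adequate: bool
    respects_pmn: bool            # provisory methodological naturalism
    indicators_satisfied: int     # 0-4: progressiveness, explanation, peer review, borrowing
    masquerades_as_science: bool  # Hansson's criterion


def classify(field: Field) -> str:
    necessary = field.testable and field.empirically_adequate and field.respects_pmn
    if necessary and field.indicators_satisfied >= 3:   # threshold is an assumption
        return "science"
    if necessary:
        return "status unclear (e.g. an early-stage field)"
    if field.masquerades_as_science:
        return "pseudoscience"
    return "non-science"


# Rough example values, anticipating the evaluations later in this post.
astrology = Field("astrology", testable=True, empirically_adequate=False,
                  respects_pmn=False,  # difficult to assess; treated as unmet here
                  indicators_satisfied=1, masquerades_as_science=True)
qft = Field("quantum field theory", testable=True, empirically_adequate=True,
            respects_pmn=True, indicators_satisfied=4, masquerades_as_science=False)

print(classify(astrology))  # pseudoscience
print(classify(qft))        # science
```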

Evaluating the proposal

The strategy will be briefly evaluated against a field widely thought to be a pseudoscience – astrology. As a comparison, quantum field theory, an undisputed scientific field, is also evaluated.

Astrology

Astrology is somewhat testable, as evidenced by the work of statistician Michel Gauquelin and, more recently, Carlson (Carlson, 1985). However, it is not empirically adequate, having failed the rigorous tests to which it has been subjected. Given that no mechanism is suggested, the PMN criterion is difficult to assess.

Astrology fails to meet most of the proposed indicators. It is not progressive or explanatory – according to Thagard astrology ‘has changed little and has added nothing to its explanatory power since the time of Ptolemy’ (Thagard, 1978). There is no active research community or peer-reviewed publication record. Astrology does borrow some knowledge from astronomy, and this criterion is satisfied in a weak sense.

Based on the necessary criteria and the indicators, astrology fails the demarcation test, which firmly places it outside the realm of science.

Quantum field theory

Quantum field theory (QFT) is a framework for elementary particle physics. The Standard Model is an extremely successful QFT that has been tested extensively and confirmed to an extraordinary degree of accuracy, thus satisfying the necessary criteria. The Standard Model has progressed significantly over the last 60 years or more; it has broad explanatory power; there is a vast research community generating thousands of peer-reviewed articles; and it draws on many different areas of physics. It is science.

Conclusion

In each of the examples, the proposed demarcation strategy performs as expected. Astrology is decisively classified as non-science, and QFT is clearly shown to be science.

Part 3: What is Intelligent Design?

What is the demarcation problem?

The demarcation problem is a long-standing philosophical issue of how to distinguish (or demarcate) science from non-science. Demarcation dates back to the early Greek philosophers, and has been a central and problematic issue in philosophy of science for the last fifty years or more.

Why is demarcation such a difficult problem? One reason is that science is a heterogeneous, moving target, developing and changing significantly over time. And yet paradoxically, despite diverse views on the nature of science and demarcation criteria, there is broad agreement on most concrete demarcation cases, suggesting that demarcation is achievable.

Often, we also want to know if a field is a pseudoscience – non-science masquerading as science. Science has a privileged position in modern society, and those practising pseudoscience desire these privileges. Of course, they may also genuinely believe they are practising science.

Creation science is one field widely regarded by philosophers of science and scientists as a pseudoscience. Its proponents claim to provide scientific evidence for a Biblical creation account. Intelligent design (ID), the subject of this series of posts, is a recent derivative of creation science that attempts to place itself more firmly within the realm of science.

The importance of demarcation

Obviously, it is important for philosophers of science that they are able to characterise their own subject matter.

However demarcation is more than a philosophical problem, as modern society not only privileges but depends on scientific knowledge in various ways. For example, we need to be able to distinguish effective medical treatments from quack remedies; we have limited public funding for scientific research and would like to allocate funds appropriately; we need to make political decisions on scientific issues such as climate change. Science is also a central subject in our education systems, and decisions must be made as to what topics should be included in science curricula.

Consequently, society needs to make informed judgements about what is and is not science. In all of the cases above, the critical demarcation issue is between science and pseudoscience – the primary focus of this series, applied to ID.

A brief history of demarcation

Aristotle was the first philosopher to attempt to comprehensively describe scientific knowledge, but it was not until the rise of logical positivism in the twentieth century that demarcation became a central concern of philosophy of science. The logical positivists held that a claim was scientific only if it was empirically meaningful, which in turn required empirical verifiability.

Karl Popper rejected verificationism, instead proposing falsifiability as the sole demarcation criterion. Thomas Kuhn thought that science in its ‘normal’ phase resisted falsification, and was characterised by routine puzzle-solving, while Imre Lakatos and Paul Thagard proposed progressiveness as a demarcation criterion.

In the 1970s and 1980s, a variety of multi-criteria demarcations were proposed. These included being guided by natural law, testability, tentativeness and falsifiability. In 1983, Larry Laudan famously announced the demise of the demarcation problem, labelling it a ‘pseudoproblem’ that is ‘uninteresting and […] intractable’ (Laudan, 1983). Laudan complained that there are no necessary and sufficient conditions adequate for demarcation, as they invariably misclassify some fields. Work on demarcation subsequently stalled until a recent resurgence of interest, particularly with regard to distinguishing science from pseudoscience.

Hansson noted that a pseudoscience will ‘deliberately attempt to create the impression that it is scientific’ (Hansson, 1996). Pigliucci criticised the desire for a set of necessary and sufficient conditions (Pigliucci, 2013), noting that the demarcation problem is not sharply delineated. Both Pigliucci and Dupré (Dupré, 1993, 242) proposed Wittgenstein’s family resemblances as a basis for demarcation. Rather than a common set of compulsory criteria, family resemblance concepts are connected by overlapping similarities that may not apply to every instance. Pigliucci suggested evaluating disciplines based on their varying degrees of theoretical soundness and empirical support, with no sharp demarcation line.

Similarly, Mahner argued for a variable cluster of indicators that characterise science, requiring a certain number to apply for a field to be regarded as scientific (Mahner, 2013). He suggested there may be up to fifty relevant indicators.

Part 2: A demarcation strategy proposal

Intelligent design: science or pseudoscience?

Society privileges science in various ways, particularly in education, and consequently it is important to be able to distinguish between science and non-science – an issue known as the demarcation problem in philosophy of science.

Some non-sciences such as astrology and homeopathy deliberately masquerade as science, and are known as pseudosciences.

This series of posts considers the status of intelligent design (ID), a derivative of creation science. Creation science claims to be science, but is widely considered to be pseudoscience, primarily because of its insistence on the primacy of the Bible. ID is an attempt to adopt a more scientific approach.

To evaluate ID’s status, a demarcation strategy is proposed and an evaluation based on the strategy conducted.

The ethos of the Philosophical Apologist means it was important that the evaluation be as impartial as possible. No final judgement was made until the research for this series was completed.

An outline of this series is shown below.

Part 1:  What is the demarcation problem?

Part 2: A demarcation strategy proposal

Part 3: What is Intelligent Design?

Part 4: Evaluating Intelligent Design – part 1

Part 5: Evaluating Intelligent Design – part 2

Part 6: Is Intelligent Design science?

Part 7: Does it matter if ID isn’t science?