Has neuroscience destroyed the soul?


In a famous 1996 essay called Sorry, But Your Soul Just Died, writer Tom Wolfe predicted that the rapidly advancing field of neuroscience would soon destroy any notion of self or soul. The mind would be shown to be nothing more than brain processes, and the brain alone would be sufficient to generate the mind. It would be the final nail in the coffin for substance dualism, the view that the mind and the body are composed of different substances, where a substance is a fundamental constituent of reality.

What has twenty years of progress shown us? Has Wolfe’s prediction been realised?

According to Julien Musolino, the author of The Soul Fallacy, published in 2015, the answer is an emphatic yes: “the current scientific consensus rejects any notion of soul or spirit as separate from the activity of the brain”.

Musolino is correct in his pronouncement – the current scientific consensus does overwhelmingly reject a soul or spirit separate from the brain. But perhaps quoting the scientific consensus doesn’t properly address Wolfe’s prediction. After all, the consensus in 1996 probably rejected a separate soul or spirit as well. A more appropriate question is: what evidence does neuroscience offer for this rejection? Or is the rejection primarily based on an a priori assumption of materialism?

To answer this question, we need to delve into neuroimaging, the technology used by neuroscience to explore the inner workings of the brain.

How neuroimaging works

One of the most frequently used neuroimaging techniques is functional magnetic resonance imaging (fMRI).

MRI is based on the principle of nuclear magnetic resonance. When certain atomic nuclei are placed in a magnetic field, they absorb and then emit characteristic electromagnetic radiation that can be measured. The frequency of this radiation depends on the nucleus and its environment. In MRI, hydrogen atoms are used as they are abundant in our bodies, being part of water and fat.

When neural activity in an area of the brain increases, neurons require more glucose for energy, and burning more glucose requires more oxygen. Blood flow to the area increases, bringing both glucose and oxygen, which is delivered via haemoglobin in red blood cells.

Haemoglobin has different magnetic properties depending on whether it is oxygenated or not, and these differences can be detected by MRI. This is known as blood oxygenation level dependent (BOLD) imaging.

Immediately after neural activation, blood oxygenation levels fall. It takes the vascular system several seconds to respond by increasing blood flow, which brings more oxygen. Oxygen levels peak after about six seconds before falling back to slightly below the initial levels.

The assumption behind fMRI is that the BOLD signal is linearly correlated with neural activity. During experiments, the subject’s head is placed in the MRI machine’s magnetic field, and the scanner records the BOLD response throughout the brain for the duration of the experiment.
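The delayed, sluggish shape of this BOLD response can be illustrated with a small numerical sketch. A double-gamma function is a common way of modelling the haemodynamic response; the parameter values below are illustrative assumptions, not those of any particular analysis package:

```python
import math

def hrf(t, peak=6.0, under=16.0, ratio=1/6):
    """Toy double-gamma haemodynamic response (arbitrary units).

    Rises slowly, peaks around `peak` seconds after a brief neural
    event, then dips slightly below baseline (the post-stimulus
    undershoot) around `under` seconds.
    """
    if t <= 0:
        return 0.0
    pos = (t / peak) ** peak * math.exp(peak - t)    # main bump
    neg = (t / under) ** under * math.exp(under - t) # undershoot
    return pos - ratio * neg

# Sample the response at 1-second intervals after a neural event
samples = [(t, hrf(t)) for t in range(0, 25)]
peak_time = max(samples, key=lambda s: s[1])[0]
print(peak_time)       # 6 -- the signal peaks ~6 s after the event
print(hrf(16) < 0)     # True -- later it dips below baseline
```

Sampling this curve shows why fMRI struggles with sub-second events: the measurable signal is a smeared, delayed echo of the underlying neural activity.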

How experiments are conducted

In a typical experiment, participants perform an experimental task and a control task while brain activity is recorded. The two tasks share the same cognitive processes, except that the experimental task has at least one additional process. For example, participants might be shown a noun and asked to state a verb that goes with it (the experimental task), or simply asked to repeat the noun (the control task). This is supposed to isolate the cognitive process of selecting an appropriate word from a different category.

The fMRI data of participants from each group is combined, and the brain activity responsible for the additional process is obtained by subtracting the activity from the control task, a technique known as cognitive subtraction.
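The subtraction itself is conceptually simple. Here is a toy sketch over a handful of voxels, with invented activation values (a real analysis fits a statistical model to full 3-D volumes, but the logic is the same):

```python
# Toy cognitive subtraction over a row of 6 voxels. The numbers are
# made-up BOLD-change estimates, purely for illustration.
experimental = [0.2, 0.9, 0.3, 0.8, 0.1, 0.2]  # verb-generation task
control      = [0.2, 0.3, 0.3, 0.2, 0.1, 0.2]  # word-repetition task

# Voxel-wise difference: activity attributed to the extra process
difference = [e - c for e, c in zip(experimental, control)]

# Voxels exceeding a threshold are labelled "active" for that process
threshold = 0.5
active = [i for i, d in enumerate(difference) if d > threshold]
print(active)  # [1, 3] -- voxels implicated in word selection
```

Voxels that respond equally in both tasks cancel out, leaving only those attributed to the additional cognitive process.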

Limitations of fMRI

Despite the widespread use of fMRI, we do not have a complete understanding of the relationship between BOLD signals and neural activity. Numerous factors are involved, and only broad correlations are currently possible. There are also a number of other limitations and caveats that are discussed below.

Not real-time

BOLD signals are at least five seconds behind the neural activity they are measuring. Temporal resolution is also constrained, as sampling frequencies add little information below one second. Neurons work many times faster, and so fMRI is currently of little help in understanding how brains work in real time.


Spatial resolution

fMRI does not measure the activity of individual neurons. Instead, its spatial resolution is limited to voxels – three-dimensional cubes ranging from 1 mm to 5 mm on a side. A single voxel contains millions of neurons and tens of millions of synapses.


Noise

As the BOLD signal is relatively weak, care must be taken to control sources of noise, which include random neural activity, body movement and noise from the scanner. It is difficult to control all background effects, and subjects themselves can have variation in their neural activity from trial to trial.

There are a number of preprocessing steps performed to strip out noise prior to statistical analysis. For example, corrections are made for head motion, which moves voxels.

The data from each subject must also be normalised according to a standard brain “atlas” to eliminate structural variability between brains.
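As a toy illustration of what normalisation involves, the sketch below maps a subject-space coordinate into a hypothetical atlas space with a simple affine transform. Real pipelines estimate full 3-D nonlinear warps; all the numbers here are invented:

```python
# Toy spatial normalisation: map a subject-space coordinate into a
# standard atlas space using a per-axis scale and offset (an affine
# transform). Values are illustrative, not from any real atlas.
def normalise(point, scale, offset):
    return tuple(s * p + o for p, s, o in zip(point, scale, offset))

subject_voxel = (10.0, 20.0, 15.0)  # mm, in this subject's brain
scale = (0.9, 1.1, 1.0)             # per-axis stretch to atlas size
offset = (2.0, -3.0, 0.0)           # translation to atlas origin

print(normalise(subject_voxel, scale, offset))  # coordinate in atlas space
```

Only after every subject's data is warped into the same coordinate frame can activations be meaningfully averaged across brains.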

Statistical analysis

Interpreting raw experimental data to produce neuroimages is a complex statistical process. The infamous dead salmon study illustrated some of the issues. A salmon purchased from a store was shown photographs of people and asked to guess what the people were feeling. The researchers found that when the imaging data was analysed, a small part of the salmon’s brain showed activity in response to the photographs.

This is the multiple comparisons problem – if enough comparisons are performed, at least some of them will return positive results, even if they are false. Because fMRI scans divide the brain into 50,000 or more regions, they are very susceptible to it. Corrections can be made to account for the problem, but at the time of publication, 25-40% of fMRI studies were not doing so. Fortunately, this had dropped to around 10% by 2012, and has hopefully fallen further since, but it shows the complications involved.
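The scale of the problem is easy to simulate. The sketch below generates pure noise for 50,000 “voxels”, counts how many pass an uncorrected p < 0.05 threshold, then applies a Bonferroni correction. This is an illustrative simulation, not a reanalysis of the salmon data:

```python
import random

random.seed(0)
n_voxels = 50_000
alpha = 0.05

# Pure noise: under the null hypothesis, each voxel's p-value is
# uniformly distributed -- no real signal anywhere.
p_values = [random.random() for _ in range(n_voxels)]

uncorrected = sum(p < alpha for p in p_values)
bonferroni = sum(p < alpha / n_voxels for p in p_values)

print(uncorrected)  # roughly 2,500 spurious "activations"
print(bonferroni)   # almost always 0 once corrected
```

With 50,000 tests at p < 0.05, around 5% pass by chance alone; the Bonferroni correction divides the threshold by the number of tests, at the cost of reduced sensitivity to real effects.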

Inhibitory neural activity

There are some indications that inhibitory neural activity may also increase the BOLD response, which obviously casts doubt on interpretation. More research is required in this area.

Cognitive subtraction

Recent research casts doubt on the technique of cognitive subtraction, used to isolate brain areas that contribute to a cognitive task in almost all brain mapping experiments. When subjects engage in an experiment, they suppress certain brain activity, and when they release the suppression, activity shoots up. So some parts of the brain show increased activity for less demanding tasks – a form of cognitive addition rather than subtraction.

Glial cells

A large proportion of the brain’s cells – by some estimates as many as 90%, though more recent counts suggest closer to half – are glial cells, not neurons. For a long time glial cells were regarded merely as insulators for neurons, but research now indicates that astrocytes, a type of glial cell, may be involved in neuron signalling. Astrocytes have as many as 30,000 connections with surrounding cells, far more than neurons. According to researcher Andrea Volterra, “if glia are involved in signalling, processing in the brain turns out to be an order of magnitude more complex than previously expected”. For decades neurons have been the focus of brain research, and if astrocytes prove to be significant, a radical revision would be required. For now, their involvement is debated and being actively researched.

Are neuroimages photographs?

It should be clear from the explanation of fMRI above that neuroimages are not photographs of brain activity, despite similarities in their appearance.

fMRI does not directly measure brain activity, and data is not concurrent with the brain activity it represents. The data is highly preprocessed, and typically is a combination of results across multiple subjects, not a single brain. The end result is a statistical representation of a highly complex system.


The apparent similarity between neuroimages and photographs is problematic when it comes to interpretation by non-specialists.

According to Adina L. Roskies, “photography enjoys a privileged epistemic status”. Photographs are closely tied to reality, and accurately represent many of the qualities of their subjects. Importantly, we have a clear grasp of the causal relationship between photographs and their subjects. We regard a photograph as an objective representation, unaffected by the photographer’s beliefs.

Unfortunately, when non-specialists view neuroimages, they think they are seeing photographs of brain activity, and consequently find them compelling. They wrongly attribute the epistemic status of photographs to neuroimages, and develop an exaggerated concept of what they can tell us about the brain, which becomes part of popular culture.

What do neuroimages tell us?

As noted previously, neuroimages tell us little about how the brain works in real time. Instead, they provide information about which brain areas are correlated with particular mental events or stimuli. This is at a coarse level of millions of neurons, so if activity is occurring on a smaller scale, fMRI may not capture it. Given that it is difficult to discriminate between excitatory and inhibitory activity, we have little idea of what is going on at the level of individual neurons. That requires single-unit recordings, an invasive technique that involves inserting microelectrodes in the brain. For ethical and practical reasons, this can rarely be used, at least on humans.

Importantly, research so far shows that many regions of the brain have fairly general functions – a brain region may be engaged by many different cognitive processes. Specific cognitive processes involve networks of regions – they do not work independently. To determine the function of a particular region requires examining all of the cognitive processes that engage it. To exhaust all the possibilities for each region requires extensive research.

Correlation, dependence and causation

fMRI studies have established correlations between mental functions and areas of the brain, not causation. How might causation be established? Would this refute the idea of an immaterial mind or soul?

Brain damage is one possibility. When a person’s brain is damaged, it seems that their mind is damaged. From our fMRI correlations, we can reliably predict which mental functions will be impaired by damage in different regions of the brain. Cases such as Phineas Gage demonstrate that even our personalities can be radically changed when certain injuries occur. It would seem this establishes a degree of dependence of mental functions on the brain as well as correlations, although rigorous investigation requires the ability to safely deactivate and reactivate regions of the brain.

Doesn’t this dependence demonstrate that the mind is identical with or caused solely by the brain?

Not according to substance dualists, who claim that the mind uses the brain to express its abilities. The interaction between mind and brain means that correlation and even dependence is expected. A damaged brain results in a damaged expression of mind, even though the mind remains intact. Very tentative support for this view can be found in the rare cases of terminal lucidity primarily in Alzheimer’s patients.

What about consciousness?

What does neuroscience’s current progress tell us about the existence of the soul, or on a more philosophical level, about whether the mind and the brain are separate substances? For substance dualists, the mind is the soul, and so if neuroscience can explain the mind in its entirety as brain processes, then the soul is generated by the brain. It cannot be a separate substance.

To explain the mind, neuroscience must explain consciousness.

A science of consciousness must describe and explain the principal features of consciousness, and this involves two different types of data. Third-person aspects of consciousness are the “easy” problems. When a conscious system is observed, there is a range of specific behaviour accompanied by neural phenomena. For example, take someone listening to music. The third-party data involves the music, the effects on the ear and the auditory cortex of the brain, and the responses of the subject. All these must be explained in terms of neural mechanisms. But in addition, there is the “hard” problem of consciousness – the problem of explaining the subject’s subjective experience of listening to music. This is the first-person aspect of consciousness.

Neuroscience is making progress on explaining the third-person data, although the issues involved are anything but “easy”, particularly given the current limitations of fMRI. But presumably technology will eventually improve to the point that we can accurately correlate neural activity with mental functions to the level of single neurons. At this point we would have an incredibly detailed, complex map of what neurons are associated with each cognitive process. We may even be able to demonstrate causality.

What about first-person data? It is difficult to gather first-person data, as it is only indirectly available. We must rely on subjects’ verbal reports, which are hard to report accurately for experiences that are rich in detail.

Even if we can successfully gather first-person data, we are still left with the “hard” problem of explaining how neural activity creates our subjective experiences. This is often known as the explanatory gap, and it is an open question in philosophy whether it can be resolved. Philosophers of mind have proposed numerous strategies to bridge the gap. Proposals range from extremes such as denying we have subjective experiences at all, to views such as panpsychism, the view that consciousness is a universal feature of matter. Substance dualism, of course, does bridge the explanatory gap.


Conclusion

One of the primary technologies used in neuroscience is fMRI – a relatively crude tool whose theoretical basis is not fully understood. Interpretation of fMRI data is also a complicated process. There are further uncertainties surrounding the role of astrocytes, and future research could result in a significant revision of how they interact with neurons and contribute to brain processes. So caution is required when it comes to claims about what neuroscience has proven about the mind.

Neuroscience has made some progress on the “easy” problems of consciousness, establishing a number of correlations between regions of the brain and specific mental functions, but there is much yet to learn. Brain damage and resultant impaired mental functions indicate dependence of the mind on the brain, but dualism does account for this.

The “hard” problem of consciousness – subjective experience – is largely unexplored, and there are severe obstacles in producing an adequate explanatory account. However if the mind is to be explained as processes generated entirely by the brain, such an account is required.

It is clear that currently neuroscience is a very long way from destroying the soul, and any claim to the contrary is vastly overstating its capabilities and achievements.



Countering the simulation argument


The simulation hypothesis

In a wide-ranging interview at California’s Code Conference 2016, billionaire entrepreneur Elon Musk discussed the idea that we might be living in a computer simulation. Based on the previous 40 years of computer games evolution, he claimed that sooner or later “the games will become indistinguishable from reality”. He argued that at this point in the future there will be billions of computers running games that mimic reality. Therefore, the “odds that we are in base reality is one in billions”.

Musk’s basic point is that because there will be billions of instances of highly advanced virtual reality games being run in the future, we are almost certainly part of one of those virtual realities.

The simulation hypothesis was also recently the topic of the 2016 Isaac Asimov Memorial Debate, where possible evidence for this scenario was raised in a fascinating discussion.

Oxford philosopher Nick Bostrom is well known for his simulation hypothesis, discussed in his 2003 paper Are you living in a computer simulation? Bostrom provides some detailed calculations and comes to a similar conclusion – there will eventually be ample computing power available to run simulations at the level of detail required. Bostrom calls this stage of technological development the “posthuman” stage of civilization. He uses a probabilistic argument to show that at least one of three options must hold: i) civilisations like ours go extinct before reaching the posthuman stage, ii) posthuman civilisations are uninterested in running “ancestor-simulations”, or iii) we are living in a simulation. In the absence of knowledge of the future, Bostrom assigns each option a probability of a third. The implication is that if our descendants do reach the posthuman stage and do run ancestor-simulations, we are almost certainly in one now. Of course, if we are in a simulation, they are not our descendants, but rather descendants of the real human race (or whatever species created the simulation). Bostrom also notes the possibility of nested simulations, where our simulation creators may themselves be part of a simulation, and so on.

Putting the argument another way, Bostrom concludes that “unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation”, and estimates the probability of us being in a simulation as one chance in three.
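Bostrom’s bookkeeping can be sketched with a toy calculation. The function below computes the fraction of observers who are simulated, under the simplifying assumption that each ancestor-simulation contains as many observers as one real history; the input values are purely illustrative:

```python
def simulated_fraction(f_posthuman, sims_per_civ):
    """Toy Bostrom-style fraction of observers who are simulated.

    f_posthuman: fraction of civilisations reaching the posthuman stage
    sims_per_civ: average ancestor-simulations each posthuman civ runs
    (each simulation is assumed to hold as many observers as one real
    history -- a simplifying assumption for illustration).
    """
    simulated = f_posthuman * sims_per_civ
    return simulated / (simulated + 1)

# If even 1% of civilisations each run a thousand simulations,
# most observers are simulated:
print(simulated_fraction(0.01, 1000))  # 10/11, about 0.909
# If no civilisation ever runs one, none are:
print(simulated_fraction(0.01, 0))     # 0.0
```

The point of the sketch is that almost any non-negligible rate of simulation-running swamps the single real history, which is what drives Bostrom’s “almost certainly in one now” conditional.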

Philosopher Julian Baggini attributes fascination with the simulation concept to “a common desire for there to be more to life than the interregnum between cradle and grave”, perhaps because of the possibility of virtual immortality.

Simulation and theism

Given the apparent logic of Bostrom and Musk’s case, theists need to take their arguments seriously, particularly because the possibility of being in a simulation seems incompatible with theistic religions (although not necessarily with theism itself).

For example, it implies that human history is simulated, and so there is reason to believe that it bears little or no relationship to real human history (if human history exists at all). Important events in the history of each religion are likely to be false, and the religions themselves invented by the simulation creator. Of course, this also has implications for naturalism, as the development of science and all our scientific knowledge would also be simulated.

The simulation argument also offers little support to theistic intelligent design. Musk and Bostrom are suggesting naturalistic beings are likely to be responsible for the apparent reality we are experiencing. These could be humans themselves (or perhaps our evolved descendants) in the (apparent) future. They could even be aliens. The rapid development of virtual reality (VR) games provides a cogent rationale for this scenario.

It should be noted that the simulation hypothesis does not purport to offer an explanation for our ultimate origins – rather its purpose is to explain the existence of our current apparent reality. In this sense theism can still be accommodated, where God is either the simulation creator himself, or is responsible for creating the simulation creators. There is, however, no equivalent game-playing rationale for why God might create a simulation, although of course this is still a possibility.

Could we know?

Is there any way we could determine if we were part of a simulation? Yes. Obviously, the VR creator could choose to reveal this to us directly, or perhaps they could leave a series of clues that allow us to infer our status. That could even be the point of the game!

Another possibility is detectable flaws in our simulation. It would be unnecessary to model every part of a simulated universe in equal detail. An optimisation would be to make realistic only the “region” containing simulated intelligent life. If we are the primary focus of the game, there would be no need for anything detailed beyond our programmed limits. We might be able to detect subtle breakdowns in virtual reality if we investigate the edges of where we can explore. Of course, as Bostrom notes, our brain states could be retrospectively edited by the simulation creator if they did not wish us to know.

The problem of evil

Baggini thinks there is a virtual problem of evil for the simulation hypothesis. He asks “if people like us created this virtual world, why on earth are the diseases so nasty, the poverty so widespread and the television so awful?” The problem of evil is one of the stronger arguments against theism, so is it effective against the simulation hypothesis?

No. Baggini clearly hasn’t indulged in survival horror games such as the Resident Evil series or The Walking Dead, which make our world look benign in comparison. There is no virtual problem of evil if the creator has no pretensions of goodness and enjoys virtual suffering. Given that the simulation creators are likely to be future humans, it seems entirely possible the evils of this world have been created by them.

Additionally, simulation creators may not realise that we are conscious beings. From their perspective we may just be extremely sophisticated, self-modifying AI programs which cannot experience suffering.

The real issue for the simulation theory

There are two assumptions underlying the simulation hypothesis.

Firstly, it uses an empirical argument based on our technological progress to conclude that our world is quite likely to be a simulation. But if we are in a simulation, doesn’t that undermine the empirical argument, as our technological progress must be simulated? How do we know anything at all about the underlying reality? Bostrom says if this is the case, we do know that the underlying reality permits simulations, and it contains at least one – ours. That means his third option is true. Conversely, if we are in reality, then the empirical argument is valid and one of Bostrom’s three options (extinction, disinterest in simulations, or simulation) holds. So the hypothesis seems reasonable.

Secondly, there is the issue of consciousness.

The crucial difference between the simulation hypothesis and The Matrix is that the latter is a brain-in-a-vat scenario – a simulation of the external world being fed to the brain. In The Matrix, human minds exist – we are conscious and our minds are still responsible for subjective experience. In the simulation hypothesis, human minds are generated by the simulation itself, and so the simulation must generate our consciousness.

Bostrom acknowledges that his argument requires the assumption that “mental states can supervene on any of a broad class of physical substrates”, including a computer, and that “a computer running a suitable program would be conscious”. Basically, the claim is that consciousness can be generated by replicating brain processes in sufficient detail.

This view is known as strong artificial intelligence, or strong AI, and it is a controversial position in philosophy of mind. Bostrom’s conclusions are of little significance unless strong support can be shown for this position.

The Chinese room

The question of whether AI systems are capable of consciousness has been raised since the earliest days of computer systems. Alan Turing, an AI pioneer, dismissed such questions as meaningless, and in 1950 proposed instead a behavioural test which he called the imitation game. This involves a person remotely interrogating another person and a machine, and attempting to determine which is the computer based on their responses. This scenario and variations of it are now known as the Turing Test.

There have been many criticisms of the Turing Test, the best known being John R. Searle’s Chinese Room argument against computers having cognitive states. The Chinese room involves someone who does not read Chinese alone in a room with a set of instructions in English for manipulating Chinese symbols. Questions in Chinese are passed into the room, and the instructions are used to produce Chinese symbols that are the answers to the submitted questions.

The point of this thought experiment is that the person in the room does not understand Chinese, but is capable of passing a Turing Test for understanding Chinese. This is analogous to a computer performing symbol manipulations even to the point of passing a Turing Test, but having no understanding of what it is doing. As Searle puts it, “syntax is not the same as, nor is it by itself sufficient for, semantics”.
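The purely syntactic character of the room can be sketched as a lookup table. Nothing in the following understands Chinese – the entries are invented for illustration, and a real rulebook would need to be astronomically larger:

```python
# A purely syntactic "Chinese room": the rulebook is just a mapping
# from input symbol strings to output symbol strings. The phrases
# are illustrative entries only.
rulebook = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天星期几？": "今天星期二。",  # "What day is it?" -> "It's Tuesday."
}

def chinese_room(question: str) -> str:
    # The room follows its instructions mechanically. An input with
    # no matching rule gets a stock reply -- which is why a rulebook
    # capable of passing a Turing Test would have to be vast.
    return rulebook.get(question, "对不起。")  # default: "Sorry."

print(chinese_room("你好吗？"))  # a fluent-looking reply, zero understanding
```

The output is indistinguishable from that of a Chinese speaker for any covered input, yet the system manipulates the symbols with no grasp of their meaning – Searle’s point that syntax does not yield semantics.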

Searle’s Chinese room argument has been a powerful argument against strong AI that has provoked a great deal of discussion. There is currently no consensus as to the soundness of the argument, but it has not been conclusively refuted. Consequently, it remains an issue for the simulation argument.


Intentionality

The Chinese room analogy points to certain characteristics of consciousness that it seems doubtful a computer system could emulate.

One of these is intentionality – the property of mental states being about things other than themselves. Because the Chinese room has no understanding of Chinese, it lacks intentionality – its internal states are not about Chinese at all. The Chinese room argument shows that this is an issue for computer systems in general. How can electrical signals produced by computer hardware be about things, other than as directed by the external intentionality of the programmer? Of course, the underlying point is that the Chinese room is not conscious with respect to understanding Chinese.

The “hard” problem of consciousness

Intentionality is an unresolved issue for computational theories, but the “hard” problem of consciousness – accounting for the properties of experience known as qualia – is even more so. What is it like to feel pain? What is it like to see the colour red?

How could computer programs have qualia? Again, it is difficult to imagine how electric circuits can give rise to subjective experience.


Strong AI and physicalism

Intentionality and qualia are actually problematic issues for the more general thesis of physicalism – the view that everything real is something physical or supervenes on something physical.

Strong AI is based on computational theories of mind, which regard the mind as a computational system. By default these are physicalist theories, relying as they do upon a purely physical substrate of silicon chips and electrical signals. Physicalism can be rejected in favour of substance dualism when it comes to human minds, but obviously substance dualism is not an option for computational theories.

There are a number of arguments against physicalism, and as strong AI is physicalist, they are equally applicable to it. However, the increasingly popular physicalist solution of panpsychism is not available to strong AI proponents. Panpsychists such as Galen Strawson believe consciousness is somehow a fundamental part of physical reality. Strong AI does not have this option, unless by some astonishing coincidence the computer hardware manages to generate consciousness – and this somehow interacts with executing AI programs. This is the interaction problem for AI!

Similarly, simulation theories can’t rely on panpsychism, as their worlds are entirely virtual. So the remaining options are eliminativism or epiphenomenalism. Eliminativism denies that we have subjective experiences, while epiphenomenalism denies that our minds have any causal powers. Both options seem incompatible with our intuitions about our minds.


Conclusion

Bostrom has calculated that there is a one in three chance we are living in a simulation. Given the theory’s incompatibility with theistic religions, the premises of his argument require careful examination.

Unless strong AI is possible, Bostrom’s case founders. Searle’s Chinese room illustrates the difficulties, and the usual arguments against physicalism apply. If physicalism is true, Bostrom’s probabilities are meaningful, and it seems likely we are part of a simulation. But there are good reasons to think otherwise.


References

Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly 53(211): 243-255.

Bostrom, N. (2016). The Simulation Argument FAQ. [online]. Available at: http://www.simulation-argument.com/faq.html [Accessed 6 Jun. 2016].

Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3: 417-457.

The case against physicalism


What is physicalism?

Physicalism is the view that everything real is ultimately something physical. A more traditional term often used interchangeably is materialism – the view that everything is matter – but this term is used less frequently now that we know forces such as gravity are not strictly material.

Physicalism does not deny that many things such as consciousness seem non-physical. But in the end physicalists usually claim such things supervene in some way on the physical, or can be reduced to physical entities. What does it mean to supervene? If you recall the old dot-matrix printers, think of how they printed pictures as a series of dots. The picture supervenes on the physical dots. If two pictures have an identical dot pattern, the pictures must be identical. Similarly, physicalism entails that apparently non-physical entities supervene on physical properties.
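The dot-matrix analogy can be made concrete with a small sketch. Here the “picture” is purely a function of the dot pattern, so identical patterns yield identical pictures, and any difference between pictures requires some difference in the dots:

```python
# Supervenience, dot-matrix style: the higher-level "picture" is
# wholly fixed by the grid of dots beneath it.
pattern_a = [
    "·█·",
    "███",
    "·█·",
]
pattern_b = list(pattern_a)               # an exact physical duplicate
pattern_c = ["·█·", "█·█", "·█·"]         # one dot differs

def picture(dots):
    # The picture is just a function of the dots; there is no extra
    # ingredient that could let it vary independently of them.
    return "\n".join(dots)

print(picture(pattern_a) == picture(pattern_b))  # True: same dots, same picture
print(picture(pattern_a) == picture(pattern_c))  # False: a dot changed,
                                                 # so the picture changed
```

This is the supervenience claim in miniature: no difference at the higher level without a difference at the physical base.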

It should be noted that there are various physicalist views such as token and type physicalism, and reductive and non-reductive physicalism. There are also different conceptions on what it means for something to be physical (theory-based and object-based). These nuances can get very complicated, and won’t be explored here.

Physicalism and God

Given we normally conceive of God as non-physical, atheism is a consequence of physicalism. Physicalism also rules out substance dualism – the view that our minds or souls are made of a non-physical substance. It’s worth noting that there is a view known as Christian physicalism, which entails that there is no such thing as a non-physical mind, soul or spirit. Of course Christian physicalists are not true physicalists, as they admit the existence of God as a non-physical entity.

Why physicalism?

Why be a physicalist, when entities like minds seem obviously non-physical? According to Daniel Stoljar, “we live in an overwhelmingly physicalist or materialist intellectual culture. The result is that, as things currently stand, the standards of argumentation required to persuade someone of the truth of physicalism are much lower than the standards required to persuade someone of its negation” 1.

Physicalism is a view generally adopted by default. Few people go through the arguments for and against the view. As it turns out, the arguments in favour of physicalism are not very persuasive.

Argument from science

This argument is based on the success of science. Science has been extremely successful, and it has shown us that there is no non-physical stuff. So physicalism is true.

This is a dubious claim. Here’s an analogy by philosopher Edward Feser:

1. Metal detectors have had far greater success in finding coins and other metallic objects in more places than any other method has.
2. Therefore we have good reason to think that metal detectors can reveal to us everything that can be revealed about metallic objects.

This argument is fatally flawed. Metal detectors ignore shape, colour and other aspects of metallic objects and focus on their metallic composition. They don’t reveal everything about metallic objects.

Similarly, science is successful precisely because it focuses on aspects of nature that can be observed in some way, predicted and controlled. Non-material things are not as amenable to scientific investigation, but that doesn’t mean science has shown they don’t exist.

Argument from neuroscience

Doesn’t neuroscience show that the mind is nothing more than brain processes? This view was expressed in a famous essay by Tom Wolfe, Sorry, But Your Soul Just Died, reporting on the progress of neuroscience in 1996 2. Twenty years later, is that what we’ve concluded? No. Neuroscience has not told us how the brain works, or even established conclusively which part of the brain is responsible for each function.

Neuroscience uses brain imaging to investigate the workings of the brain. A common technique is functional magnetic resonance imaging (fMRI), and despite appearances fMRI images are not snapshots of the brain in action. fMRI doesn’t measure brain activity directly, but rather blood-oxygen-level-dependent (BOLD) signals as a proxy for brain activity. The relationship between the BOLD signal and neuronal activity is poorly understood, and the signal lags the underlying activity by around 2-5 seconds, so it isn’t concurrent with actual brain activity.
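The delay can be illustrated with a minimal numpy sketch. It convolves a brief burst of neural activity with a simple single-gamma haemodynamic response function – an illustrative stand-in for the canonical models used in real analyses – and shows the BOLD peak arriving several seconds after the activity itself:

```python
import math
import numpy as np

# Time axis in seconds, sampled every 0.1 s
t = np.arange(0, 30, 0.1)

# A brief burst of neural activity: 1 s "on" starting at t = 0
neural = np.where(t < 1.0, 1.0, 0.0)

# A simple gamma-shaped haemodynamic response function peaking
# ~5 s after onset (an illustrative model, not a fitted one)
hrf = t**5 * np.exp(-t) / math.factorial(5)

# The BOLD signal is (approximately) neural activity convolved
# with the HRF; scale by the sampling interval
bold = np.convolve(neural, hrf)[: len(t)] * 0.1

peak_delay = t[np.argmax(bold)]
print(f"Neural activity starts at t = 0 s; BOLD peaks at t = {peak_delay:.1f} s")
```

Running this, the BOLD peak lands roughly 5 seconds after the neural event – the signal an fMRI scanner records is a delayed, smoothed echo of the activity, not the activity itself.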

Importantly, fMRI images are not pictures of brain activity. They are statistical representations of a highly complex system. Each region interacts with other regions and is part of many experiences. Seeing one area light up in response to a stimulus doesn’t necessarily mean a particular sensation is felt, and it doesn’t capture higher functions such as our sense of self, our intentions, and our voluntary actions.

Interpreting imaging data is also tricky, as illustrated by the infamous dead salmon study. A dead salmon purchased from a store was placed in a scanner, shown photographs of people, and “asked” to determine what the people were feeling. When the imaging data was analysed, a small part of the salmon’s brain appeared to show activity in response to the photographs. This is the multiple comparisons problem – if you perform enough statistical tests, some will return positive results purely by chance, even when no real effect exists. Because fMRI scans divide the brain into 50,000 or more regions, each analysed separately, they are very susceptible to this issue.

What do fMRI images tell us about the mind and the brain? Statistical flaws aside, they tell us that certain mental activities are usually correlated with increased blood flow to certain parts of the brain. We aren’t even sure what increased blood flow is actually measuring – hopefully neuronal activity, although some researchers doubt this.

Raymond Tallis sums up our knowledge of the brain derived from neuroscience well: “The conclusion from (as we have seen, rather wobbly) correlations of bits of consciousness with bits of brain activity that the former is identical with the latter depends on several elementary errors, notably that correlation is causation and causation is identity and that necessary conditions are identical with sufficient conditions. As regards the latter, while it is obvious that a brain in good working order is a necessary condition of consciousness, it does not follow from this that it is a sufficient condition of consciousness or that its workings are identical with consciousness” 3.

We are a long way from being able to demonstrate that the mind is nothing more than the brain, a necessary requirement for physicalism to be true.

Argument from causal closure

This argument relies on the principle of causal closure of the physical – that physical effects have only physical causes. If mental events cause physical events (e.g. your wanting to raise your arm causes your arm to rise), then mental events must either be physical themselves or supervene on the physical – which is just physicalism.

What is the status of the causal closure principle? It is a thesis that has developed over time: it is not implied by physics, but is rather a metaphysical principle. Causal closure is widely accepted, but there are no knockdown arguments in its favour. After all, we don’t even have a widely accepted theory of causality. And yet causal closure is one of the main arguments for physicalism.

Causal closure also faces a number of problems, such as the problem of mental causation. If causal closure is true, then any physical event apparently caused by a mental event must already have a sufficient physical cause. But does this make the mental event causally irrelevant? This is Kim’s exclusion argument, and a significant problem for non-reductive physicalism.

Another issue for causal closure (and thus physicalism) is Hempel’s dilemma. If we define physicalism by reference to contemporary physics, then it must be false — after all, who thinks that contemporary physics is complete or entirely correct? But if we define physicalism by reference to a future ideal physics, then the definition is meaningless — after all, who can predict what a future physics contains?

Issues for physicalism

As shown above, the arguments for physicalism are not particularly strong. But physicalism also faces a number of significant issues.


Intentionality

Our mental states are about things other than themselves – this is called intentionality. For example, consider ink on paper spelling the word “brain”. The ink shapes are meaningless until we give them meaning – the physical properties of the ink and the letters do not confer it. Our thoughts give them intentionality. But if physical neural processes are like physical ink marks, devoid of intrinsic intentionality, how can thoughts (which have intentionality) be merely neural processes?

The “hard” problem of consciousness

According to David Chalmers (echoing Noam Chomsky), “the intentionality issue is a problem, but the qualia issue is a mystery” 4. What is it like to feel pain? What is it like to see the colour red? These are the properties of experience known as qualia, and they have a number of important characteristics.

Firstly, qualia are private, as they are experienced through the first person. Qualia are also ineffable – the experience of qualia cannot be fully conveyed through words. Finally, qualia are directly accessible through consciousness.

Physical objects don’t share these properties, and so qualia are a problem for physicalism – there is a difficulty in explaining how physical properties give rise to the way things feel when they are experienced. This is known as the explanatory gap.

Can neuroscience correlate qualia and neural states? No, at best it can only correlate verbal reports of qualia with neural states. Qualia are private, so correlations could be consistent with qualia inversions (e.g. colour spectrum inversions).

There are a number of well-known arguments that demonstrate the difficulty qualia poses to physicalism. The best known is probably Jackson’s knowledge argument, which poses the following scenario:

Mary is a brilliant scientist specialising in the neurophysiology of vision. She knows all the physical facts about colour and perception. Mary has spent her life locked in a black and white room with a black and white monitor to view the world. When released from the room, and seeing red (say) for the first time ever, does she learn anything new?

Intuitively, yes. Mary’s experience of seeing red has taught her something new about how people see the world. The argument is outlined below:

1. Mary knows all the physical facts about colour vision.
2. On leaving the room, Mary learns new facts about colour vision.
3. Therefore there are non-physical facts about colour vision, and physicalism is false.

There is clearly an issue for physicalism if the knowledge argument is sound. How do physicalists respond?

Primarily, they argue that Mary hasn’t really learnt any new facts. Rather, Mary has gained a new ability (or know-how), or has obtained knowledge by acquaintance of facts she already knew.

Alternatively, some physicalists adopt a different definition of physicalism, or of what it means to be physical. A more recent approach explains the apparent discrepancy in terms of phenomenal concepts: the apparent gap between consciousness and the physical world is really a gap between our concepts of consciousness and our concepts of the physical world.

Frank Jackson himself no longer believes the knowledge problem is an issue for physicalism. He has adopted representationalism, holding that qualia are nothing more than intentional or representational mental contents. This reduces the hard problem of consciousness to the easier problem of intentionality.

Other responses to the explanatory gap include Dennett’s eliminativism. He holds that the phenomenal features of the perceived world are not actually qualia – they only seem like qualia – and that there is no “hard problem” of consciousness. The Churchlands take eliminativism to the extreme and deny that qualia exist at all.

David Chalmers (who popularised philosophical zombies as another argument against physicalism) comprehensively argues that none of these strategies can sensibly close the explanatory gap 5. Instead, options reduce to either some form of dualism or consciousness somehow being a fundamental part of physical reality. An example of the latter is Galen Strawson’s panpsychism.

Epiphenomenalism is a form of dualism that holds that mental and physical properties are distinct, but mental properties have no causal powers. This denies mental causation, which is unacceptable to most people, and is incompatible with our knowledge of our own minds.

Historically, substance dualism (or Cartesian dualism) has been the most widely accepted solution: the explanatory gap exists because the mental and the physical are different substances. Of course, if substance dualism is true, physicalism is false.

Substance dualism

Substance dualism is usually rejected out of hand as “outmoded”, or “implying magical substances”. How reasonable are the objections to substance dualism?

Given the rising popularity of forms of panpsychism, which implies all matter is conscious in some sense, accusations about magical substances seem rather misplaced. But what are the other objections to substance dualism?

Doesn’t neuroscience refute dualism?

No – as we’ve seen, neuroscience does not demonstrate that the mind is identical to the brain or brain processes. There is also a small but significant number of neuroscientists who are dualists, such as Nobel Prize winner John Eccles, UCLA neuroscientist Jeffrey Schwartz, and Mario Beauregard, author of The Spiritual Brain: A Neuroscientist’s Case for the Existence of the Soul.

Interestingly, it seems we are hard-wired in some way to be dualists, even if we are neuroscientists who are physicalists. According to recent research, “although few neuroscientists openly endorse Cartesian dualism, careful reading reveals dualistic intuitions in prominent neuroscientific texts” 6.

The interaction problem

How does the immaterial mind interact with the material brain? Given dualism, there must be some kind of causal connection between the two entities, and currently we don’t have a viable explanation.

This is an issue for dualism, but it certainly is not a fatal objection. Newton didn’t have an answer to what the causal agent was for his gravitational force, which was inexplicably capable of acting at a distance. Arguably, we haven’t progressed much further – we explain the gravitational force in terms of hypothetical particles called gravitons that we have never observed and are unlikely to ever directly detect.

There are four fundamental forces we currently know of – the strong and weak nuclear forces, electromagnetism, and gravity. It seems possible that there is a fifth basic force associated with some kind of mental field. This isn’t far removed from panpsychist theories about consciousness being a fundamental part of nature.

Contemporary physics is also somewhat encouraging to the possibility of interactionism. Quantum mechanics elevated the observer from passive onlooker to active participant whose measurements affect a system’s behaviour. It is possible that consciousness plays a role – perhaps the defining feature of measurement is that it involves conscious observation. This is the essence of the Von Neumann–Wigner interpretation of quantum mechanics, in which consciousness causes the collapse of the wave function – admittedly a fringe theory.

Causal closure (again)

We’ve mentioned causal closure previously, and it is closely related to the interaction problem. The principle of causal closure states that physical effects have only physical causes, which seems incompatible with substance dualism.

As explained earlier, causal closure is a metaphysical thesis rather than a law of physics, and it is by no means certain. If a fifth fundamental force does exist, or if consciousness is an important effect in quantum mechanics, then causal closure is false.

Dualism is still viable

Substance dualism is still a viable theory of consciousness, despite its relative unpopularity amongst philosophers. Lycan, a physicalist, nonetheless argues that “no convincing case has been made against substance dualism, and that standard objections to it can be credibly answered” 7. Dualism neatly resolves the problems of consciousness, and has as much support as the physicalist alternatives, or more.

Hylemorphic dualism

One dualistic alternative to substance dualism that has been gaining in acceptance amongst some philosophers, especially Thomists, is hylemorphic dualism, a view going back to Aristotle that regards being as a compound of matter and form. Form is the essence of a thing, what you might call its organising principle, and in hylemorphic dualism the human soul is regarded as the form of the body. A good defense of hylemorphic dualism by David Oderberg can be found here.


Conclusion

Although physicalism is the default view in our intellectual culture, the arguments for physicalism are not particularly strong, and there are some substantial arguments against it. It is equally intellectually respectable to hold to a dualistic view of reality.


1 Stoljar, D. (2001). Physicalism. [online] Plato.stanford.edu. Available at: http://plato.stanford.edu/entries/physicalism [Accessed 1 Jun. 2016].

2 Reprinted in Wolfe, T. (2000). Hooking up. New York: Farrar, Straus, and Giroux.

3 Tallis, R. (2009). Neurotrash | New Humanist. [online] Newhumanist.org.uk. Available at: https://newhumanist.org.uk/articles/2172/neurotrash [Accessed 1 Jun. 2016].

4 Chalmers, D., 1996, The Conscious Mind, New York: Oxford University Press, p24.

5 Chalmers, D. (2010). The Character of Consciousness. Oxford: Oxford University Press.

6 Mudrik, L. and Maoz, U. (2015). “Me & My Brain”: Exposing Neuroscience’s Closet Dualism. Journal of Cognitive Neuroscience, 27(2), pp.211-221.

7 Lycan, W. (2009). Giving Dualism its Due. Australasian Journal of Philosophy, 87(4), pp.551-563.