The simulation hypothesis
In a wide-ranging interview at California’s Code Conference 2016, billionaire entrepreneur Elon Musk discussed the idea that we might be living in a computer simulation. Extrapolating from the previous 40 years of evolution in computer games, he claimed that sooner or later “the games will become indistinguishable from reality”. He argued that at that point in the future there will be billions of computers running games that mimic reality, and therefore the “odds that we are in base reality is one in billions”.
Musk’s basic point is that because there will be billions of instances of highly advanced virtual reality games being run in the future, we are almost certainly part of one of those virtual realities.
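Musk’s arithmetic is easy to make explicit. As a minimal sketch (the figure of one billion simulations is an illustrative assumption, not Musk’s exact number): if N indistinguishable simulated realities run alongside one base reality, and we have no reason to favour any particular one, the chance that ours is the base reality is 1/(N+1).

```python
def p_base_reality(n_simulations: int) -> float:
    """Chance of being in base reality, assuming n_simulations
    indistinguishable simulated realities plus one base reality,
    with a uniform prior over which reality we occupy."""
    return 1 / (n_simulations + 1)

# With a billion simulations running, the odds of being in base
# reality are about one in a billion, as Musk suggests.
print(p_base_reality(1_000_000_000))
```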
The simulation hypothesis was also the topic of the 2016 Isaac Asimov Memorial Debate, where possible evidence for the scenario was raised in a fascinating discussion.
Oxford philosopher Nick Bostrom is well known for his simulation hypothesis, set out in his 2003 paper Are you living in a computer simulation? Bostrom provides some detailed calculations and reaches a similar conclusion – there will eventually be ample computing power available to run simulations at the level of detail required. Bostrom calls this stage of technological development the “posthuman” stage of civilization. He uses a probabilistic argument to show that at least one of three options must hold: i) civilizations like ours go extinct before reaching the posthuman stage, ii) posthuman civilizations are uninterested in running “ancestor-simulations”, or iii) we are almost certainly living in a simulation. In the absence of knowledge of the future, Bostrom assigns each option a probability of one third. The force of the argument is that if our descendants do survive and do run ancestor-simulations, simulated minds will vastly outnumber real ones, so we are almost certainly in one now. Of course, if we are in a simulation, they are not really our descendants, but rather descendants of the real human race (or whatever species created the simulation). Bostrom also notes the possibility of nested simulations, where the creators of our simulation may themselves be part of a simulation, and so on.
Putting the argument another way, Bostrom concludes that “unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation”, and estimates the probability of us being in a simulation as one chance in three.
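Bostrom’s reasoning can be stated compactly. Using the notation of his paper, let $f_p$ be the fraction of civilizations that reach the posthuman stage, $\bar{N}$ the average number of ancestor-simulations run by a posthuman civilization, and $\bar{H}$ the average number of individuals who lived in a civilization before it became posthuman. The fraction of all human-like observers who are simulated is then

```latex
f_{sim} = \frac{f_p \, \bar{N} \, \bar{H}}{f_p \, \bar{N} \, \bar{H} + \bar{H}}
        = \frac{f_p \, \bar{N}}{f_p \, \bar{N} + 1}
```

If posthuman civilizations are common and each runs even a modest number of simulations, $f_p \bar{N}$ is enormous and $f_{sim}$ is close to one; if instead $f_{sim}$ is small, then either $f_p$ or $\bar{N}$ must be close to zero – which are just Bostrom’s first two options.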
Philosopher Julian Baggini attributes fascination with the simulation concept to “a common desire for there to be more to life than the interregnum between cradle and grave”, perhaps because of the possibility of virtual immortality.
Simulation and theism
Given the apparent logic of Bostrom and Musk’s case, theists need to take their arguments seriously, particularly because the possibility of being in a simulation seems incompatible with theistic religions (although not necessarily with theism itself).
For example, it implies that human history is simulated, and so there is reason to believe that it bears little or no relationship to real human history (if human history exists at all). Important events in the history of each religion are likely to be false, and the religions themselves invented by the simulation creator. Of course, this also has implications for naturalism, as the development of science and all our scientific knowledge would also be simulated.
The simulation argument also offers little support to theistic intelligent design. Musk and Bostrom are suggesting naturalistic beings are likely to be responsible for the apparent reality we are experiencing. These could be humans themselves (or perhaps our evolved descendants) in the (apparent) future. They could even be aliens. The rapid development of virtual reality (VR) games provides a cogent rationale for this scenario.
It should be noted that the simulation hypothesis does not purport to offer an explanation for our ultimate origins – rather its purpose is to explain the existence of our current apparent reality. In this sense theism can still be accommodated, where God is either the simulation creator himself, or is responsible for creating the simulation creators. There is, however, no equivalent game-playing rationale for why God might create a simulation, although of course this is still a possibility.
Could we know?
Is there any way we could determine if we were part of a simulation? Yes. Obviously, the VR creator could choose to reveal this to us directly, or perhaps they could leave a series of clues that allow us to infer our status. That could even be the point of the game!
Another possibility is detectable flaws in our simulation. It would be unnecessary to model every part of a simulated universe in equal detail. An optimisation would be to make realistic only the “region” containing simulated intelligent life. If we are the primary focus of the game, there would be no need for anything detailed beyond our programmed limits. We might be able to detect subtle breakdowns in virtual reality if we investigate the edges of where we can explore. Of course, as Bostrom notes, our brain states could be retrospectively edited by the simulation creator if they did not wish us to know.
The problem of evil
Baggini thinks there is a virtual problem of evil for the simulation hypothesis. He asks “if people like us created this virtual world, why on earth are the diseases so nasty, the poverty so widespread and the television so awful?” The problem of evil is one of the stronger arguments against theism, so is it effective against the simulation hypothesis?
No. Baggini clearly hasn’t indulged in survival horror games such as the Resident Evil series or The Walking Dead, which make our world look benign in comparison. There is no virtual problem of evil if the creator has no pretensions of goodness and enjoys virtual suffering. Given that the simulation creators are likely to be future humans, it seems entirely possible the evils of this world have been created by them.
Additionally, simulation creators may not realise that we are conscious beings. From their perspective we may just be extremely sophisticated, self-modifying AI programs which cannot experience suffering.
The real issue for the simulation theory
There are two assumptions underlying the simulation hypothesis.
Firstly, it uses an empirical argument based on our technological progress to conclude that our world is quite likely to be a simulation. But if we are in a simulation, doesn’t that undermine the empirical argument, as our technological progress must be simulated? How do we know anything at all about the underlying reality? Bostrom says if this is the case, we do know that the underlying reality permits simulations, and it contains at least one – ours. That means his third option is true. Conversely, if we are in reality, then the empirical argument is valid and one of Bostrom’s three options (extinction, disinterest in simulations, or simulation) holds. So the hypothesis seems reasonable.
Secondly, there is the issue of consciousness.
The crucial difference between the simulation hypothesis and The Matrix is that the latter is a brain-in-a-vat scenario – a simulation of the external world being fed to the brain. In The Matrix, human minds exist – we are conscious and our minds are still responsible for subjective experience. In the simulation hypothesis, human minds are generated by the simulation itself, and so the simulation must generate our consciousness.
Bostrom acknowledges that his argument requires the assumption that “mental states can supervene on any of a broad class of physical substrates”, including a computer, and that “a computer running a suitable program would be conscious”. Basically, the claim is that consciousness can be generated by replicating brain processes in sufficient detail.
This view is known as strong artificial intelligence, or strong AI, and it is a controversial position in philosophy of mind. Bostrom’s conclusions are of little significance unless strong support can be shown for this position.
The Chinese room
The question of whether AI systems are capable of consciousness has been raised since the earliest days of computer systems. Alan Turing, an AI pioneer, dismissed such questions as meaningless, and in 1950 proposed instead a behavioural test which he called the imitation game. This involves a person remotely interrogating another person and a machine, and attempting to determine which is the computer based on their responses. This scenario and variations of it are now known as the Turing Test.
There have been many criticisms of the Turing Test, the best known being John R. Searle’s Chinese Room argument against computers having cognitive states. The Chinese room involves someone who does not read Chinese alone in a room with a set of instructions in English for manipulating Chinese symbols. Questions in Chinese are passed into the room, and the instructions are used to produce Chinese symbols that are the answers to the submitted questions.
The point of this thought experiment is that the person in the room does not understand Chinese, but is capable of passing a Turing Test for understanding Chinese. This is analogous to a computer performing symbol manipulations even to the point of passing a Turing Test, but having no understanding of what it is doing. As Searle puts it, “syntax is not the same as, nor is it by itself sufficient for, semantics”.
Searle’s Chinese room argument has been a powerful argument against strong AI that has provoked a great deal of discussion. There is currently no consensus as to the soundness of the argument, but it has not been conclusively refuted. Consequently, it remains an issue for the simulation argument.
The Chinese room analogy points to certain characteristics of consciousness that it seems doubtful a computer system could emulate.
One of these is intentionality: the property of mental states being about things other than themselves. Because the Chinese room has no understanding of Chinese, it lacks intentionality – its internal states are not about Chinese at all. The Chinese room argument shows that this is an issue for computer systems in general. How can electrical signals produced by computer hardware be about things, other than as directed by the external intentionality of the programmer? Of course, the underlying point is that the Chinese room is not conscious with respect to understanding Chinese.
The “hard” problem of consciousness
Intentionality is an unresolved issue for computational theories, but the “hard” problem of consciousness – accounting for the properties of experience known as qualia – is even more so. What is it like to feel pain? What is it like to see the colour red?
How could computer programs have qualia? Again, it is difficult to imagine how electric circuits can give rise to subjective experience.
Intentionality and qualia are actually problematic issues for the more general thesis of physicalism – the view that everything real is something physical or supervenes on something physical.
Strong AI is based on computational theories of mind, which regard the mind as a computational system. These are by default physicalist theories, relying as they do upon a purely physical substrate of silicon chips and electrical signals. For human minds, physicalism can be rejected in favour of substance dualism, but substance dualism is obviously not an option for computational theories.
There are a number of arguments against physicalism, and as strong AI is physicalist, they are equally applicable against it. However the increasingly popular physicalist solution of panpsychism is not available to strong AI proponents. Panpsychists such as Galen Strawson believe consciousness is somehow a fundamental part of physical reality. Strong AI does not have this option, unless by some astonishing coincidence the computer hardware manages to generate consciousness – and this somehow interacts with executing AI programs. This is the interaction problem for AI!
Similarly, simulation theories can’t rely on panpsychism, as their worlds are entirely virtual. So the remaining options are eliminativism or epiphenomenalism. Eliminativism denies that we have subjective experiences, while epiphenomenalism denies that our minds have any causal powers. Both options seem incompatible with our intuitions about our minds.
Bostrom has estimated that there is a one in three chance we are living in a simulation. Given the theory’s incompatibility with theistic religions, the premises of his argument require careful examination.
Unless strong AI is possible, Bostrom’s case founders. Searle’s Chinese room illustrates the difficulties, and the usual arguments against physicalism apply. If physicalism is true, Bostrom’s probabilities are meaningful, and it seems likely we are part of a simulation. But there are good reasons to think otherwise.
Bostrom, N. (2003). Are you living in a computer simulation? Philosophical Quarterly 53(211): 243–255.
Bostrom, N. (2016). The Simulation Argument FAQ. [online]. Available at: http://www.simulation-argument.com/faq.html [Accessed 6 Jun. 2016].
Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences, 3(3): 417–457.
2 thoughts on “Countering the simulation argument”
I think there is a serious problem for the simulation hypothesis: an obvious epistemological objection.
1. I believe we are in a simulation.
2. Everything in the simulation is simulated.
3. Our beliefs are based on the rules (including any randomness) and inputs of the simulation.
4. Any attempt to determine whether our beliefs are rational is itself determined by the simulation.
5. Therefore we cannot know that our beliefs are rational.
6. Therefore we cannot rationally affirm the belief that we are in a simulation.
This objection is, broadly speaking, effective against any wholly deterministic system in which one cannot choose between possible beliefs, but is determined to hold whichever beliefs one in fact holds. A simulation simply magnifies this concern, as we are merely puppets.
Is there any situation in which we can know that our beliefs are rational?