Consciousness and the mind upload problem: a thorough critique

It is popular to talk about mind uploading as part of humanity’s future. Mind uploading involves the storage and recreation of a mind in a computer. Many futurists believe it will become part of most people’s end-of-life strategy: by uploading your brain, you can avoid the certain death of your physical body. Some even think it will become popular to upload our minds earlier in our lives, to gain advantages such as being able to exist in a simulated world, or to avoid unexpected accidental death. An uploaded brain might be able to store ‘save points’ that an individual can return to if their artificial body is destroyed.

Despite the amazing possibilities of copying and recreating a conscious being, I think that many adherents of mind uploading are selling a false promise: that mind uploading will give you, or what we think is you, eternal life. To break this notion down, I will address several key misconceptions and physical problems that could make this project impossible. I will start with the easier misconceptions and move through to the more difficult or complicated issues.


What is consciousness?
For the purposes of this discussion, I will define consciousness as having qualia: that is, having an experience of the world. This is distinct from acting in the world. When you see a sunrise, smell a rose, or taste a steak, these are all qualia. Our senses aren’t simply sensors; we don’t just take in information, process it, and then produce an output, like a computer might (although, depending on what we’re doing, it may feel that way, e.g. driving). There is something in between, which is how we feel as we do these things. We could conceivably make a robot that performs functions very similar to a human’s, appears to be intelligent, and gives us reason to believe it is conscious. But since we do not experience what the robot experiences, we cannot know whether the robot actually has conscious experiences of the world.


The question of consciousness is a difficult one not just for robots, but for other humans as well. It is possible that everyone around us is simply a meat-bot, acting intelligently but not actually experiencing the world. This is the classic problem of other minds; taken to its extreme, it leads to solipsism, the position that only one’s own mind is known to exist. For this argument, we will assume that the people around us are real and really do have conscious experience of the world. We have no good evidence that the people around us were created in a contrived system to trick us into thinking that everyone around us has consciousness. We do have some evidence that the people around us are somewhat like us, and hence, by extrapolation, that they could have consciousness as we do. On the other hand, we do know that people are trying to create computers and robots that have the appearance of intelligence but possibly not consciousness.
The metaphysical implications for consciousness during mind uploading are important because we do not simply want a robot representation of ourselves to replace us. The purpose of mind uploading is to actually experience the world long after we would otherwise have died, or to experience a different world through an electronic version of ourselves.


Storing your memories
The fallacy I see in most people’s idea of mind uploading is that they have taken the computers-as-brains analogy too far. That is, they think there are parts of the brain dedicated to memory storage (like RAM or hard drives in a computer) and parts dedicated to processing (like the CPU). This could not be further from the truth. The brain is made up of roughly 100 billion neurons, and each of those neurons provides both memory and processing simultaneously. Remarkably, each neuron also generates, within the cell itself, all the energy it needs to perform its function.
Simply storing your memories on a hard drive will not ensure eternal longevity any more than recording your voice will. In a computer, you can load the RAM with data, run it on a similar processor, and get the same results; between brains, however, the difference is not simply the memories stored within them, but also the way each brain processes information.


To recreate the same conscious being (or at least a being resembling the old one), you would need to capture the complex interplay between the memories you have and the way in which they are stored, recreated, and brought to bear on the production of new ideas and outputs.


Complex behaviour is not evidence of consciousness
Complex behaviour is often cited as evidence of ‘consciousness’ via the Turing test. The Turing test, devised by Alan Turing, is designed to determine whether a computer can replicate human behaviour well enough to resemble an intelligent human being. In the original test, a human interrogator has a text conversation with a computer designed to appear human. The computer passes if the interrogator cannot distinguish whether a human or a computer is on the other side of the conversation. Despite the common misconception, the Turing test does not prove intelligence, let alone consciousness. In fact, Alan Turing did not name the test after himself; he called it “the imitation game”. Many people have equated intelligent-looking behaviour with intelligence, which is not quite the same thing. To ascertain intelligence, many more tests would be required than a simple Turing test, since the Turing test probes only a tiny subset of human intelligence and behaviour.


Even more worrying than equating a passed Turing test with intelligent behaviour, some people have equated intelligent-looking behaviour with consciousness. Complex behaviour is not evidence of consciousness; it is simply evidence of being able to perform complex tasks. While a computer may be able to fool a human into thinking that a human is responsible for the words typed onto a screen, that complex behaviour shows nothing about what it is like for the computer to have a conversation with a human.


The functionalist theory of mind assumes that it is the sensory inputs and behavioural outputs of the brain that matter to the establishment of consciousness, not the method by which those sensory inputs are converted into behavioural outputs. Functionalism asserts that two minds are equivalent if they return the same outputs given the same inputs. Functionalism has already been heavily criticised with relatively strong counterexamples, such as the China brain, the Chinese room, and the inverted spectrum; the Wikipedia articles on each are worth exploring.


Below I will try to explain, using a pseudo-mathematical representation, one particular criticism I have of functionalist brain-simulation equivalence. If the notation is unclear, skip ahead to the next subheading, where you should be able to pick up again.
A functionalist’s view of the brain over time
Imagine a real brain, B, that takes a sensory input vector u_B and returns a behavioural output v_B, i.e. v_B = B(u_B). When the brain B operates on u_B to produce v_B, a qualia q_B is generated. Functionalists claim that two minds, B and F (a functional representation of the brain B), are equivalent when u_F = u_B and v_F = v_B. The implication, from the functionalist point of view, is that if u_F = u_B and v_F = v_B, then necessarily q_F = q_B. That is, when inputs and outputs are identical, the brain and the functional representation of the brain experience equivalent qualia.
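The claim can be restated compactly in the notation just introduced (this is my own summary formula, not a standard result from the literature):

```latex
% Brain B and functional representation F
v_B = B(u_B), \qquad v_F = F(u_F)
% The functionalist equivalence claim:
\left( u_F = u_B \;\wedge\; v_F = v_B \right) \;\Longrightarrow\; q_F = q_B
```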


Let us consider a real brain. A real brain in the actual world lives for a limited time, over which it receives inputs and gives outputs. The set of input vectors a brain experiences over a lifetime, U_L, is a subset of all possible input vectors U (U_L ⊂ U). Imagine a machine that records all of the inputs U_L and outputs V_L through the lifetime of an individual brain B. A brain-simulation function G can then be constructed such that for any u_B ∈ U_L, G produces a v_G with v_G = v_B whenever u_G = u_B.


The functionalist position must hold that this function G perfectly replicates the inputs and outputs of the person’s brain B for their entire life, despite being little more than a recording of inputs and outputs, and hence that it counts as a perfect representation of the qualia of that person’s life. Although no life-experience other than the recorded one can occur, the qualia must be equivalent to those recorded by the machine. The function G can be significantly less complicated than the brain B: the brain B must be able to produce a result v_B, intelligent or not, for every u_B ∈ U, where U is the set of all possible inputs, whereas the function G has a much more restricted domain, accepting only u_G ∈ U_L. For u_G ∈ U \ U_L (inputs outside the recorded lifetime), G can either be undefined or return garbage in v_G. This means that G need not be truly functionally equivalent to B for all u.
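As a concrete sketch (toy Python of my own invention, with made-up numeric inputs standing in for sensory vectors), the recording function G is nothing more than a lookup table over the lifetime inputs U_L: it matches B exactly on every input B ever received, yet has no answer at all outside that set.

```python
# Toy stand-ins: a "brain" B and its lifetime of inputs U_L.
# (B, U_L, and the doubling rule are invented purely for illustration.)

def B(u):
    """The real brain: produces an output for ANY possible input."""
    return u * 2  # stands in for arbitrarily complex processing

U_L = [1, 2, 3]  # the finite set of inputs B actually receives in its lifetime

# The recording function G: a pure lookup table of observed (input, output) pairs.
G_table = {u: B(u) for u in U_L}

def G(u):
    """Functionally identical to B on U_L, undefined everywhere else."""
    return G_table[u]

# G matches B on every input the brain ever actually saw...
assert all(G(u) == B(u) for u in U_L)

# ...but has no answer for a novel input outside U_L.
try:
    G(4)
except KeyError:
    print("G is undefined outside U_L")
```

The point is that nothing in G resembles the processing inside B; it merely replays recorded input-output pairs, which is why treating it as qualia-equivalent to B seems absurd.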


Despite B and G not being truly functionally equivalent, they are functionally equivalent over a specific person’s life experience. So, while an individual brain B may be able to generate a qualia set Q_L′ different from Q_L when given inputs U_L′ different from U_L, the brain simulation G could not generate Q_L′. Yet the real brain B only ever had U_L as inputs during its entire lifetime, so the only qualia that B experienced are Q_L. Therefore, by the functionalist’s own criterion, the function G, which is simply a recording of inputs and outputs, has qualia equivalent to those of the brain B.


If a functionalist rejects this conclusion and insists that a simulated brain must be capable of producing a v_G for all u_G ∈ U such that v_G = v_B whenever u_G = u_B, then that functionalist believes it is not simply the outputs returned for given inputs that matter, but also the nature of the calculation by which they are produced. Such a position means the brain cannot be treated as a black box that delivers qualia irrespective of the kind of calculation taking place.


Therefore, only two positions remain for people who reject the notion that the recording function G has equivalent qualia: (a) that only identical physical brains with identical physical inputs can have the same qualia (I’ll call this the Real Brain Qualia Hypothesis), or, the weaker position, (b) that in order to generate qualia equivalent to those of a real brain B, a functional representation S of the brain must simulate all physical processes that occur in the brain perfectly (I’ll call this the Whole Brain Simulation Qualia Hypothesis).


Metaphysical implications of the Real Brain Qualia Hypothesis (RBQH) and the Whole Brain Simulation Qualia Hypothesis (WBSQH)
The RBQH assumes that there is something special about the interaction of the matter in the brain that produces the qualia specific to us and our experiences. The WBSQH implies that the only thing important to the qualia arising in the brain is the calculation of the states of its matter. Note that the RBQH also treats the calculation of the states of the matter as important (the best representation of reality is reality itself, after all, and hence the universe is ‘calculating’ its own next step perfectly at all times). Adherents of the WBSQH assert that the physical nature of the interaction of matter during the operation of the brain is not important to the nature of consciousness. While this position is possible, the lack of evidence supporting it means that the WBSQH cannot be said to be any more or less likely to be true than the RBQH.


To temper this position against the WBSQH, it should be noted that a silicon (or other) system used to simulate a complex brain may have some form of consciousness. Considering that electrical currents flow in both, and that many similar atomic and subatomic particles are present, one can consistently maintain the RBQH for the type of experience a real brain has, while allowing that complex computers may have a type of consciousness qualitatively different from a typical brain’s, owing to the physical and computational differences between them.


To summarise, there are roughly three positions:
a) Only Neuron Qualia Hypothesis: only real neurons (and possibly only human neurons) can establish any type of consciousness.
b) RBQH, with possible qualia in complicated computers that are qualitatively different from our own.
c) WBSQH: any qualia can be generated through simulation of the brain’s components.


I am obviously an adherent of position b.


What needs to be specified in the WBSQH is the level at which the simulation needs to occur. Do we need to simulate the firing of neurons? The molecules? The atoms? The subatomic particles? The quarks? For the purposes of this argument, I will assume that WBSQH adherents believe a full physical simulation (down to the smallest important particle) is required for a brain simulation to create qualia.


Whole brain simulation: why reading in a brain perfectly is likely impossible
I hope by now I have convinced you that, at the very minimum, storing the memories or recreating the inputs and outputs of a brain is not enough to create a brain with qualia equivalent to yours or mine. Let’s assume, for the moment, that the WBSQH is correct and whole-brain simulation is possible.


When we want to read a brain into the simulation, we are confronted with a problem in the physics of matter itself. The Heisenberg uncertainty principle makes it impossible to determine a particle’s position and momentum (or energy state) precisely at the same time: the more precisely one is measured, the more uncertain the other becomes. This, unfortunately, means that measuring a brain in its exact state is impossible. When we determine the states of neurons and the positions of electrons and atoms, we do so with uncertainty about either position or momentum (or energy state). So after reading the brain in, despite having exactly the same inputs and outputs to begin with, the brain and the simulated brain would start to diverge, even if their inputs are kept identical. At first this is a small problem: quantum effects may alter only a few neural firings. But because the brain relies on its own output as part of its input, those misfirings will cascade into a totally different brain state at some point.
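The cascading divergence can be illustrated with a toy chaotic system (a logistic map, chosen purely as an illustration of feedback sensitivity, not as a model of neural dynamics): two copies that start within a tiny ‘measurement error’ of each other become completely uncorrelated after a few dozen feedback iterations.

```python
# Two copies of a toy recurrent system whose state feeds back on itself.
# The logistic map (r = 4) is used purely as an illustration of feedback
# sensitivity, not as a model of neurons.

def step(x):
    return 4.0 * x * (1.0 - x)  # one feedback iteration, chaotic regime

original = 0.4           # the "real" state
scanned = 0.4 + 1e-9     # the read-in copy: off by one part in a billion

max_gap = 0.0
for _ in range(60):
    original = step(original)
    scanned = step(scanned)
    max_gap = max(max_gap, abs(original - scanned))

# The gap grows from 1e-9 to order 1 within a few dozen iterations.
print(f"largest divergence over 60 steps: {max_gap:.3f}")
```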


This argument does not preclude the idea that you can make a consciousness very similar to that of a real brain using whole brain simulation. It does, however, mean that generating what most people would consider the ‘same’ brain is highly unlikely.


Dual experience problem
Let’s say you could overcome the seemingly insoluble problem the Heisenberg uncertainty principle poses when reading in the brain. If we could scan and recreate the brain perfectly, we could theoretically create two brains in exactly the same state as at scanning, or create one new brain and leave the original in its current state. At that point, you have created two brains whose only connection is their shared past memories. There is no way for those two brains to communicate beyond the traditional means of talking and so on.


We know that our experiences are linked to the inputs and outputs of our brains. Therefore, with completely separate inputs and outputs for each brain, the brains would have independent conscious states: the experiences of one would be separate from the experiences of the other.


Now, one might argue that our consciousness is never the same, that it is constantly changing. The conscious you of today may be entirely different from the conscious you that went to sleep last night, with only the memories of the past, the same body, and the same rough location to maintain a continuous self-identity. What we do know for certain, however, is that there is no continuity of conscious experience between human A and a simulation of human A’s brain.


The dual experience problem says nothing about whether the real and simulated brains are computationally equivalent, and nothing about whether the simulated brain is conscious: a computer simulation of a brain may well be complicated enough to have some form of consciousness. The dual experience problem simply says that the simulated brain cannot be the same consciousness as the original brain on which it is based.


“I think, therefore I am” and the fraud of mind uploading
Probably the most famous quote from any philosopher is Descartes’ “I think, therefore I am”. Part of the reason it is so famous is that it cuts to the basics of epistemology: what is there that we truly know? We don’t know whether what we hear or see is real. We don’t know whether the people around us are real, let alone that they experience the world the way we do (for all we know, they are soulless but complicated meat-bots that appear to have feelings similar to ours). “I think, therefore I am” doesn’t even allow us to know whether we control our experience (the apparent ability to control our experiences may simply be a trick to make us think we are controlling them). All we know is that we have experiences. Probably a better way of writing “I think, therefore I am” is “I experience, therefore I experience”.


Despite being the most fundamental knowledge we have about the world we experience, what physically makes up conscious experience (or qualia) has not been explained by science in the slightest. Nowhere in the field of artificial intelligence is it explained what it is like to smell a rose, to see green, to be in pain, or to feel dizzy. Claims that we can recreate consciousness, and therefore continue our existence through our qualia, by simulating our brains are based on absolutely zero evidence. It is possible that the qualia a person experiences are simply a product of the computational complexity occurring, but we have no evidence that qualia are merely the sum of computational complexity, as opposed to the interaction of matter in the particular configuration that makes qualia.


Considering that estimates of increasing computational power put the possibility of simulating a brain within our lifetimes, promoting such methods of consciousness extension without evidence that they will indeed extend consciousness is highly unethical. If we are told that we can extend our conscious lives indefinitely, we may be tempted to destroy our old bodies or let them fall into serious disrepair. If we do so, we may be destroying an existing conscious being while not actually creating a new one in the simulated mind. Furthermore, those around the individual who witness the procedure will likely find the differences between the original and simulated person imperceptible, thereby encouraging more people to undergo such a procedure.


  1. OkinSama · January 30, 2013

    Most of these problems can be solved with advanced enough nanobots, in great enough numbers, working together with each brain-cell. But of course that’s a rather futuristic technology.

    • jamesjansson · January 30, 2013

      I think that you have missed the point of the article, I encourage you to read it through.

      “Despite being the most fundamental knowledge we have about the world we experience, what physically makes up conscious experience (or qualia) has not been explained by science in the slightest. Nowhere in the field of artificial intelligence is it explained what it is like to smell a rose, to see green, to be in pain, or to feel dizzy. Claims that we can recreate consciousness, and therefore continue our existence through our qualia, by simulating our brains are based on absolutely zero evidence. It is possible that the qualia a person experiences are simply a product of the computational complexity occurring, but we have no evidence that qualia are merely the sum of computational complexity, as opposed to the interaction of matter in the particular configuration that makes qualia.”

  2. Christopher Rachiele · January 30, 2013

    The “what is consciousness” argument seems like basically nonsense to me. If a philosopher starts describing his experience of mind, of having thoughts and being conscious of them, then we can assume he has the same consciousness that we do. The idea that someone or something without consciousness would just randomly start talking as if it did is incredibly unlikely; it reminds me of the whole philosophical concept of “zombies”, which has been pretty thoroughly debunked. It’s especially silly since we would be able to actually watch the digital brain think; if it was “lying” and “pretending” to have experiences that it wasn’t just to fool us (for some reason), then we should be able to tell that just by watching its thoughts.

    Thoughts are actual, physical things going on in your brain (including the “qualia” “I am seeing green right now”). They either exist or they don’t, and the difference can, at least in theory, be observed and measured.

  3. jamesjansson · January 31, 2013

    I think that your claim “The idea that someone or something without consciousness would just randomly start talking as if it did is incredibly unlikely” is extremely simplistic. Here’s an experiment I just did:

    Me: “Do you know any philosophy?”
    Cleverbot: “Yes.”
    Me: “Do you feel things?”
    Cleverbot: “Very much so.”

    I’m pretty sure most people would say that cleverbot is not conscious, or at least if it is, its consciousness is not that similar to ours. Yet your claim is that the necessary condition to assert consciousness is that the thing you are interacting with has to proclaim their consciousness!

    Therefore, we cannot tell if anything else is conscious based on the claims it makes about consciousness. We need something more to determine this consciousness.

    • lofh · January 31, 2013

      Actually he never said he is conscious.
      Me: Are you conscious?
      Cleverbot: Dun ask me.

  4. Christopher Rachiele · January 31, 2013

    The only time you can have something that acts like it’s conscious without actually being conscious is if there is a conscious mind in the loop somewhere. For example, the “recording” you gave above, or a program designed to imitate known human responses like cleverbot. There is still a consciousness involved, or you wouldn’t get answers like that.

    That’s like saying “I am talking to this telephone, and the telephone is saying it has thoughts, is it conscious”? Of course not, but the person on the other end of the telephone is.

    Anyway, again, a computer program that did nothing but play back some prerecorded or prelearned answer to philosophical questions would be fundamentally different from a real AI, in ways that would be fairly easy to figure out, especially if you can actually look at the source code.

    • lofh · January 31, 2013

      Interesting point. Some people are only willing to draw sharp borders between what is consciousness and what is not. Just wondering, if we as conscious individuals are part of a society, is that society conscious as well? If so, in a way superior or inferior to us as individuals? Does complexity arise from simplicity, or can this happen the other way around too? As we are just discovering the key concepts of our own consciousness, are we prepared to recognize other forms of consciousness, given that such a phenomenon could be far simpler or much more complex?

  5. spock · March 10, 2013

    I think it is possible to create artificial awareness someday, because whatever mechanism goes on inside our brain that evokes/creates consciousness for us is “spatially limited” to being inside our heads. We could just hem in on that particular area by carefully analyzing, layer by layer, what contributes to it and what does not, and eventually dissect and segregate that system.

    Once we know the mechanism of consciousness definitively, we could recreate it.

    some projects are currently underway

  6. jacob · March 23, 2013

    Why you gotta try to crush everyone’s dreams like that?

    • jamesjansson · March 23, 2013

      It’s not crushing dreams. There will be ways to extend our conscious lives without mind upload.

  7. Aleks · April 14, 2013

    The mind IS like a computer. You can actually “delete” responses, for example responses to fear stimuli if you trigger those responses WHILE learning a new thing.
    You can also delete certain memories, which is beyond beneficial for people with PTSD.

    Your complete lack of knowledge regarding neuroscience as well as artificial intelligence + your philosophical tripe makes this whole poorly written “article” as nothing more than a very limited, old and erroneous view of the future.

    And btw, the “you” changes everyday. You’re certainly not the person you were 10 years ago (both physically and mentally since cells regenerate and get replaced; that’s also true for neurons), and you won’t be the same person 10 years from now.

    • mikefossel · April 17, 2013

      The mind is like a computer in the way you say, and I don’t think anyone is really arguing the other point. However, your examples do not in any way explain consciousness. The fact that you can “delete” memories does not in any way support the assertion that the conscious being who is experiencing these memories can be reduced to a computational theory.
      The question as to what consciousness is remains unanswered. The only way we can currently explain this is by saying that the self is an illusion, which is certainly possible but goes against our everyday subjective experience. It’s hard to dismiss this common-sense observation, as it is our most basic experience (I think therefore I am), and if all science is at its most basic level based on empirical evidence, then how can we just deny the self that is experiencing?

      But I don’t have any better answers.

      And as for your statement about the ever changing “you”.
      This is one of the most amazing and interesting facts about consciousness, the fact that we simply cannot attribute it to the physical. Which leads me to either accept the materialistic self-is-illusion theory or believe that conscious is something higher and akin to life emerging out of non-living atoms.

      • Des Malone · July 16, 2014

        Think back to your birthday 10 years ago. What happened? What were you doing? Notice how I’m acknowledging there is a you a decade earlier. Is the you from 10 years ago the “real” you or is the you today reading this the real you? How do you know? Which you is knowing?
        You is a figment of brain activity. There is no “consciousness” as a “thing” to be transferred that contains the essence of you. If you take a baseball bat to your brain, there is no “you” anymore. The mind is an end result of what the brain is doing, like the light is what the light box is doing, or the wiring. You can’t take the “light” itself and move it, it’s not tangible. What you can do is take the elements causing the light and move those to another substrate, which would be what they’d be trying to do. Getting caught up in some spiritual nonsense about the nature of you is ridiculous and only happens among those who don’t yet grasp the reality there is no you. You is an illusion. Consciousness = activation.

        So you know how you write a book and copy the contents or doc and put it somewhere safe? You do that, like a back up, so if one is destroyed or damaged, you have a back up copy. Is that copy any less your book? If you have a picture of a goldfish and make a copy, put one copy on a usb stick and another copy on the laptop, and another one on a website, any one of them is the same content. If you lose one, you have a copy.

        If the day comes we’re digitizing memories – which we all seem to presume is the essence of our lives at all – then we’ll be backing up memories to other substrates so we never have to lose them again.

        Once you get off the whole “you = something cosmic and special” track and understand you = brain activity then it’s easier to see the potential ways mind uploading and our own immortality can be achieved.

  8. mikefossel · April 17, 2013

    I agree with this post completely. It is stupid to waste our time on ideas like Mind Uploading when we really have no idea what consciousness is.
    If consciousness can really be reduced to the simple materialistic theory then I see no reason for us to desire to extend consciousness as our true selves are actually illusions and we are constantly changing and therefore dying. All of this makes sense of course but still there is something inside of me that seems to exist and does not fit within this theory.

  9. Chris McMorran · April 30, 2013

    I get you, Aleks. These conversations are starting to sound like they are going through a loop. Consciousness is just being awake; it’s perception. All living organisms have perception. The kind of responses you will get from them depends on the technicality of their complex system. Humans have a massive number of connections in the brain (and beyond, as all organisms have a symbiotic relationship with each other, matter, and the environment) that allow for an almost infinitely growing intelligence. We are just starting to learn how these different systems work through science: REM, delta-wave sleep, consciousness, subconsciousness, not to mention the current understanding of certain “mental” disorders, like those people with sleeping disorders that cause them to sleep for weeks, waking only to eat and drink while not being fully conscious, acting like an emotionally unintelligent child; for reasons I can’t exactly say, their consciousness forces them to get up, walk to the kitchen, and feed themselves, but in a state seemingly uncharacteristic of the person. With our current level of understanding, mind uploading is a very far-fetched idea. Yes, one day it might be possible. But today’s computers are far from being as complex as ourselves. Perhaps if we had an organic-like version of ourselves that exists in a transient state, we might be able to link the two, similar to quantum entanglement. The mind is nothing but a thought without the body.
