
Will AI ever achieve self-awareness?

  1. Jan 4, 2015 #1
    Will advanced artificial intelligence ever achieve consciousness and self-awareness? Perhaps in the not-too-distant future?

    And is it possible for AI to match or surpass the intelligence of human beings?
  3. Jan 4, 2015 #2
    Consciousness? Yes - but not with current architectures. Current computers have no components that could support consciousness.

    Self-awareness? Also yes. This is much simpler. When you are aware of something, you are only aware of symbols for it - and certain associated information. For example, if you look at a tree, you are collecting photons that reflect off the tree, and the image is encoded by the retina and neurons. Additional information processing allows you to recognize it as something that is in some ways familiar. So the result is neuronal activity that represents (symbolizes) the tree you are gazing at. Similarly, when you are aware of yourself or your thoughts, what you are aware of is a symbolic representation of yourself or your thoughts. Most people can readily recognize that they only have limited information about the tree - but "self-awareness" is more convincing. Still, the only thing you can be aware of is a neuronal representation of something.

    So for a computer, the question becomes "Does awareness imply consciousness?". If it doesn't, then you can already describe computers as self-aware - they can report their own temperature, memory usage, processor load, and so on. If it does, then there would need to be a reason for the computer to process information about itself in the conscious realm - for example, self-locomotion, self-preservation, or socializing with other computers or people.
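In that weak sense, a "self-aware" machine takes only a few lines of code. A minimal sketch in Python (using the Unix-only `resource` module; the reported facts are about the program itself, with no consciousness implied):

```python
import os
import platform
import resource  # Unix-only; process resource accounting

# A program that is "self-aware" only in the weak sense described above:
# it reports facts about its own state.
def self_report():
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "pid": os.getpid(),                # which process "I" am
        "platform": platform.platform(),   # what OS/hardware "I" run on
        "peak_memory_kb": usage.ru_maxrss, # memory "I" have used (KB on Linux)
        "cpu_time_s": usage.ru_utime,      # CPU time "I" have consumed
    }

print(self_report())
```

No one would call this consciousness; it is exactly the temperature-and-memory style of self-reporting described above.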

    Matching humans? It's hard to see anything that would get in the way of this eventually happening.
  4. Jan 4, 2015 #3


    Gold Member

    What sort of components would support consciousness? What's the least complex thing that could have consciousness?
  5. Jan 4, 2015 #4
    As I have said before, when we are conscious, we are conscious of more than a small number of bits at once. So we would need a register that can store or process substantial information in a single state. To my knowledge, that is an unambiguous specification of QM superpositioning.

    Regarding the least complex thing: If superpositioning is the foundation of consciousness, then primitive forms of consciousness are ubiquitous.
  6. Jan 4, 2015 #5


    Gold Member

    Wouldn't writing to a hard drive be the same as 'storing substantial information in a single state'?
  7. Jan 4, 2015 #6
    If I write "Tree" onto a hard drive, then there are 32 bits of information in 32 different locations comprising 32 completely independent states. There is no one place where "tree" exists so there is no place for consciousness of "tree" to exist. If you are conscious of "tree", then there has to be a you (perhaps some protein molecule) that has all of "tree". You can't do it with 32 distinct yous.
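The 32-bit count is easy to check (assuming 8-bit ASCII encoding, one bit per storage location):

```python
# "Tree" as classical bits: 4 ASCII characters x 8 bits = 32 bits,
# each stored in its own independent location on the drive.
word = "Tree"
bits = ''.join(f'{byte:08b}' for byte in word.encode('ascii'))

print(len(bits))  # 32 independent 0/1 states
print(bits)       # no single bit "contains" the word; only the whole pattern does
```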
  8. Jan 4, 2015 #7


    Gold Member

    I doubt there is a single place in my brain where the entire concept of "tree" exists either.
  9. Jan 4, 2015 #8
    It is possible that machine consciousness cannot be supported by silicon-based microprocessors, classical computing methods, programming languages, or algorithms, and that only an artificial neural network (ANN) can support consciousness and sentience. To my knowledge, it is not possible to create such an ANN out of a 2D transistorized silicon die.

    There are many fundamental challenges in creating a strong AI.

    What causes consciousness and how the brain really operates is still not fully understood by neuroscience, and until the brain is fully reverse engineered and understood in its entirety, it will not be possible to synthesize it (in real time) inside a computerized substitute. The human brain is the single most complex object known to science, and fully mastering it would require decades of combined effort from the most cutting-edge physics and neuroscience research in the world. A good analogy would be deciphering the mysteries surrounding black holes and dark matter -- that's how staggeringly complex and enigmatic the human mind actually is.

    Successfully reverse-engineering the human brain and deciphering all of its workings will be a momentous milestone in scientific and human history!

    Furthermore, the computational power required for whole brain emulation does not yet exist. It took one of the world's fastest supercomputers 40 minutes to simulate just one second of human brain activity. This is why quantum computing might be imperative for creating an artificial intelligence/neural network that is an exact one-to-one match with the human brain and is literally capable of performing any intellectual task that a human being can, including writing this post. Quantum computers (particularly topological quantum computers) could theoretically be billions or trillions of times faster than classical Turing computers like the one you're using right now, which would provide more than enough computing power for real-time whole brain emulation. Unfortunately, quantum computing is still in its infancy and won't be perfected for quite some time.
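Taking the "40 minutes per simulated second" figure at face value, the shortfall from real time is easy to quantify:

```python
# Back-of-envelope: how far short of real time was the reported simulation?
sim_wall_clock_s = 40 * 60  # 40 minutes of supercomputer time...
brain_time_s = 1            # ...to emulate 1 second of brain activity

slowdown = sim_wall_clock_s / brain_time_s
print(f"Slowdown factor: {slowdown:.0f}x")  # 2400x short of real time
```

So real-time whole brain emulation would need hardware at least ~2400 times faster than that machine, before accounting for any shortcomings in the simulation's fidelity.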

    It's very likely that all of this will one day be possible, but in my opinion, probably not until sometime during the second half of this century and late within my lifetime (I am currently 27).

    Here's some food for thought regarding the (theorized) intrinsic relationship between quantum mechanics and the brain. This is why (in my above argument) radically advanced quantum computing could be required to (somehow) actively support the quantum effects that are believed to be inherent properties of brain dynamics like decision making, memory, conceptual reasoning, judgment, and perception.


    Last edited: Jan 4, 2015
  10. Jan 5, 2015 #9


    Staff Emeritus
    Science Advisor

    One question I have is how would you emulate the non-linear and non-digital nature of the brain using digital electronics?
  11. Jan 5, 2015 #10
    Conventional non-linear analog systems can be modelled with more precision than the physical systems. As long as you know what the system is doing.

    QM systems are a different story. It is possible to create a QM system that is practically uncomputable with conventional digital processing - even with all the time and energy available in the universe.
  12. Jan 5, 2015 #11
    Artificial Neural Networks are a topic of AI - not of biology. We know what is happening in relatively few actual (biological) neural circuits - and many don't work at all as the AI ones do - for example, the ones that do edge detection in a mammal's visual cortex. ANNs can "learn" in an adaptive way, and their components are topologically similar to real neurons.

    Here is a link to a 1989 paper where a few neurons were monitored in a monkey:

    There was also some work done with a cat's visual cortex that was a lot more detailed. The point is that the neurons in this part of the brain are preprogrammed or pre-wired for particular data processing functions. They are not performing the ANN-style adaptation.
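The "pre-wired" kind of processing mentioned above can be illustrated without any learning at all. A toy sketch with a fixed Sobel-style kernel (my own illustration of hard-coded edge detection, not a model of the monkey or cat data):

```python
# Fixed ("pre-wired") edge detection: a hard-coded convolution kernel,
# in contrast to ANN-style weights that adapt during training.
def convolve2d(image, kernel):
    """Valid-mode 2D convolution using plain Python lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Vertical-edge (Sobel) kernel. Its weights never change.
sobel_x = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

# Toy image: dark on the left, bright on the right.
image = [[0, 0, 10, 10]] * 4

print(convolve2d(image, sobel_x))  # strong response at the dark/bright boundary
```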
  13. Jan 5, 2015 #12
    You don't need the entire concept in one state - only as much as you are conscious of at one instant.
  14. Jan 12, 2015 #13


    Staff: Mentor

    Depends on how you define the word "state". Suppose I look at a single register in the 64-bit processor in my machine. It has 64 bits. Does it have one "state" or 64 different "states"?

    Consider a single qubit: it has an infinite number of possible "states" (as in, its quantum state vector can assume an infinite number of possible values), but that doesn't mean a single qubit can store an infinite amount of information.
  15. Jan 12, 2015 #14
    A 64-bit register holds 64 independent states. So, if it held an integer and was used for your consciousness, the "you" that was conscious of the units position would be entirely different from the "you" that was conscious of the twos position. If, in a single moment, you are conscious of a towering green tree, there has to be a place in your brain where the full concept of "towering green tree" exists in a single state.
    A qubit with an infinite number of "possible states" is in a single quantum state that can be described by common QM notation. A group of entangled qubits also has such a single QM state. For example, when looking for prime factors using the Shor algorithm, operations applied to one qubit affect the entire system of entangled qubits. This makes 64 entangled qubits very different from a common 64-bit register.
  16. Jan 12, 2015 #15


    Staff: Mentor

    I think you are assuming a lot about the properties that consciousness must have. Do you have an actual theory of consciousness to back this up (or a reference to one), or is it just your opinion?

    So is the entire universe. If you're going to take this approach, there are no separate objects at all; there is just one universal quantum state. Again, do you have an actual theory of consciousness (or a reference to one) that explains how it works if there are no separate objects but just one universal quantum state? Or is it just your opinion that all this makes sense?
  17. Jan 12, 2015 #16
    This one is based on two direct observations of your own conscious state. First, when you are conscious, are you conscious of information - a memory, an image, etc.? Second, when you are conscious, are you conscious of more than one bit of information? The experiment can be tricky because you may be conscious of a recently created memory that symbolizes more than you are really conscious of in one instant - but even then, it is more than a few bits.
    I am not talking about a "universal quantum state". I am talking about information processing devices that process information in a purposeful way as viewed by our species. In a broad sense, the entire universe is technically an information processing device - but in this thread we are limiting ourselves to brains and artificial intelligence machines that do something humanly purposeful (in the sense that we readily ascribe a purpose to it) with the information - such as an ECDIS system steering a ship, or a brain assisting an animal to survive.

    The specific example I gave was Shor's Algorithm (http://arxiv.org/abs/quant-ph/9508027v2 ). In that case, scores of qubits are manipulated so that when their states are finally measured, they provide indications of the prime factors of a large composite number. The purpose of that reference was to illustrate that there is something more sophisticated than simple binary states but short of the entire universe - and that it involves the processing of a single elaborate QM state to a purposeful end. There is a stage in that processing when the entire problem is in a single QM state - where any measurement will affect the entire system - not just the qubit being measured. This is very different from a common 64-bit register, where the state of each bit remains entirely local to its hardware device.
    Another example is Grover's Algorithm (http://arxiv.org/abs/quant-ph/9605043 ). A device designed for Grover's Algorithm can search through many possibilities looking for a match. This is much more likely the kind of algorithm that would be biologically useful.
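Grover's iteration is simple enough to simulate classically on a small statevector. A toy sketch for 3 qubits (the target index is my own arbitrary choice, not from the paper):

```python
import math

# Classical simulation of Grover's search on n = 3 qubits (N = 8 items).
n = 3
N = 2 ** n
target = 5  # the "marked" item we are searching for (arbitrary choice)

# Start in the uniform superposition (Hadamard on every qubit).
amp = [1 / math.sqrt(N)] * N

# Optimal number of iterations is about (pi/4) * sqrt(N).
iterations = int(round(math.pi / 4 * math.sqrt(N)))  # = 2 for N = 8

for _ in range(iterations):
    # Oracle: flip the sign of the marked item's amplitude.
    amp[target] = -amp[target]
    # Diffusion: reflect every amplitude about the mean.
    mean = sum(amp) / N
    amp = [2 * mean - a for a in amp]

prob_target = amp[target] ** 2
print(f"P(measure target) after {iterations} iterations: {prob_target:.3f}")
```

After two iterations the marked item is measured with probability about 0.945, versus 1/8 for a blind guess.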

    The OP asked the question about self-awareness and consciousness in an AI device. A "self-awareness" independent of consciousness can be designed into an AI machine with little effort as I described above. "Consciousness" can also be quickly addressed - but not quickly designed in. With direct observations of our own experience of consciousness, we can discover some of the physical requirements for our consciousness such as:
    * It must be supported by a single state as described above;
    * It can affect our behavior - otherwise we would never discuss "consciousness"; and
    * The type of information we are conscious of is heavily processed - we're not conscious of the individual contributions of each rod and cone in our retina or each tone measurement in our ears.

    Laboratory results can also contribute to our list of requirements - though, so far, less directly.

    As far as a "theory of consciousness" is concerned, there is a difference between the philosophical treatment of consciousness and the physical treatment of it. Even if we had a "theory of apples", it would not by itself help us understand apples. Instead, we can examine the apple and discover all sorts of physical characteristics of apples, and perhaps at a certain point determine whether an artificially designed apple had all the characteristics we think are needed to qualify as an apple.

    I can taste an apple and tell you whether it has an apple flavor. You can then repeat the experiment and decide if you agree. Those three observations about consciousness are my own observations - but you or anyone else can repeat the observation for yourself.
  18. Jan 12, 2015 #17


    Staff: Mentor

    You don't directly observe your conscious state, if by "state" you mean the physical state of your brain (or whatever medium your consciousness is instantiated in). So you can't draw conclusions from your conscious experience about what kind of physical state it's instantiated in.

    I know you're not intending to. But my point is that your logic about what a "state" is implies that the entire universe is a single state. There's no in between; you can't pick out a particular piece of the universe, such as your brain or a single neuron or a single particle, and say that that has a "single state", except arbitrarily; there's nothing in the physics that makes a single particle any more of a "single state" than the universe, if you're looking at QM the way you're looking at it.

    Which is an arbitrary limitation that is not picked out by the physics; it's picked out by our human choices. But if you're asking about an AI, it isn't human, so you can't assume that it will adhere to this limitation. An AI could be self-aware without having any purposes that we humans would understand.

    I agree there is a physical difference here; I just don't see how our experience of consciousness makes the first kind of physical system any more likely than the second kind as a substrate for consciousness to be instantiated in. It's an open question.

    I have no problem with your second and third items here. But I don't think the first one is valid, for the reasons I've given.
  19. Jan 13, 2015 #18


    Science Advisor

    I think that someone needs to really define what awareness and consciousness actually mean and what they actually are in a precise manner before even thinking about answering the initial question - because answering a question with terms that are ill-defined will never result in anything useful anyway.

    Anybody who has done any serious mathematics, or worked in fields that make use of it, will tell you just how difficult it is to be precise about even simple things - and I bet that the definition of awareness and consciousness is extremely far from even being close to something that simple.

    Personally I don't think psychologists have really defined the term let alone the computational neurosciences and as such I don't think the question has even got a remote chance of being satisfactorily answered given the current state of the sciences.

    This is not an attack at scientists in this field just an observation that the key requirement (i.e. a solid, objective, unambiguous and clearly interpretable definition) doesn't exist.
  20. Jan 13, 2015 #19



    This thread contains some of the "better" discussions I've read in a long time... about a very difficult topic.

    I'm going to post a totally unscientific link to a YouTube video... however, I don't want to decrease the value of this thread, or cause it to be locked by doing something inappropriate...

    If it can stay, it's well worth watching, IMO... full screen is best, it seems a bit dark.

    If it can't stay, delete... that's completely fine by me...

    I didn't want it to embed, so the link is... here.
  21. Jan 13, 2015 #20
    Okay, then observe the characteristics of your consciousness. My key point here is that when consciousness exists, it has information content. Do you agree?

    I gave the Shor and Grover algorithms as examples - but I should have been more explicit with the term "state". What I meant was a "minimum independent state". When two particles become entangled, they share a single "minimum independent state" and they remain that way until one is measured. I think my response to your fourth point will make this clear.

    As I posted earlier, by a normal technical definition of "self-aware" as distinct from "conscious", some machines are already "self-aware".
    When I said we were addressing ourselves to machines that did something "humanly purposeful", it was intended only as a common sense limit. Most QM events happen without human notice - and even if noticed, would not be considered purposeful. So if I throw a log in the fire, there is all sorts of information processing that is incidental to the combustion of the log - but the final result of all that "computation" is simply ash, heat, light, and smoke. By common human standards, the details are of no consequence. It is an arbitrary limitation, but it is also a pragmatic one - and one that is implicitly used by all scientists all the time.

    Let's say we want to make our AI capable of consciously experiencing eight things, coded with binary symbols 000 to 111. For example: 000 codes for apple, 001 for banana, 010 for carrot, 011 for date, 100 for eggplant, 101 for fig, 110 for grape, and 111 for hay. In a normal binary register, hay would not be seen by any of the three bits - because no single bit has all the information it takes to see hay.

    Now let's say that I use qubits. I will start by zeroing each qubit and then applying the Hadamard gate to each, producing an equal superposition of all eight codes. Then I will use further quantum gates to shift the amplitude of the code 111 onto 000, eliminating the 111 code from the superposition. At this point, the hay code is no longer local. The state of each qubit is, in part, determined by the full code. If measured, each qubit will report one (1) 37.5% of the time and zero (0) 62.5% of the time. But the code "111" will never be seen, so each one "knows" that if the other two report as 1, it must report as 0. This superpositioning will only last for as long as none of those qubits is measured. But during that time, the entire 3-bit code is held in a single state reflected by all three qubits.
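The per-qubit probabilities can be checked with a small statevector calculation. A sketch of the final state only (assuming the 111 weight has been shifted onto 000, so 000 carries amplitude 2/sqrt(8) and the six middle codes keep 1/sqrt(8); not a gate-level construction):

```python
import math

# Final 3-qubit state: 111 removed from the superposition, its weight
# shifted onto 000; the six remaining codes keep amplitude 1/sqrt(8).
amp = [math.sqrt(2) / math.sqrt(8)] + [1 / math.sqrt(8)] * 6 + [0.0]
assert abs(sum(a * a for a in amp) - 1.0) < 1e-12  # properly normalized

# Probability that each qubit reads 1 on measurement.
for qubit in range(3):
    p1 = sum(a * a for idx, a in enumerate(amp) if (idx >> qubit) & 1)
    print(f"P(qubit {qubit} = 1) = {p1:.4f}")  # 0.3750 for every qubit
print(f"P(measuring 111) = {amp[7] ** 2}")     # 0.0 - the code 111 is never seen
```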
    Last edited: Jan 13, 2015
  22. Jan 13, 2015 #21
    There is an aspect of "consciousness" that cannot be addressed at all. There is also a temptation to use that fact as an excuse for avoiding all critical thinking on the matter.

    In this thread, we are using the term "consciousness" as it was introduced in the OP. Something distinct from self-awareness or biology - very akin to "qualia".

    I know that none of the computers I have programmed are conscious - even though I only have an approximate definition for consciousness.
  23. Jan 13, 2015 #22


    Science Advisor

    I get the idea that you need to decide where to draw the line and make a decision when it comes to using a definition that is "good enough" for whatever purpose.

    Having said that, though, the definitions are still really vague and thin, and one has to remember that many people have different definitions, along with different expectations of what something is and what it means.

    The above is what really causes the problem, and it's also why many questions and debates end up nowhere, with any resolution basically impossible, since the terms are ill-defined and the question is not well defined as a result.

    It's like when you get into an argument about what energy is or what god is - there is so much ambiguity, subjectivity, and complete lack of clarity when it comes to defining these terms, and yet people still argue, completely missing this important fact. Eventually the arguments end up in a state where people are just flinging verbal diarrhea at each other - especially when one person thinks that their definition is an absolute one.

    Unfortunately what facilitates the above - including the inability to critically think and evaluate something is the language we use.

    The languages we use, written and spoken, are in my opinion horrible for reaching resolutions because words are more like labels than actual descriptions. You see this in the fact that many words have many meanings, which helps cause some of the problems mentioned above. Perhaps if we had a language where the representation actually described what was being said in some invariant manner, then well-defined questions would get as well-defined an answer as possible.

    I still think many definitions, including those of consciousness, awareness, energy, god, and others, share common characteristics that prevent one from really getting any sort of useful resolution, and you can see this in action when people talk about these concepts - it's not an excuse for anything: it just is what it is.
  24. Jan 14, 2015 #23


    Science Advisor
    Gold Member

    You need to define 'consciousness' before it can be duplicated. I've seen no tenable definition. The presumption that sentience can be reproduced by a digital series of yes-no decisions does not work to my satisfaction.
    Last edited: Jan 14, 2015
  25. Jan 14, 2015 #24


    2017 Award

    Staff: Mentor

    How do you know that?

    Where is the fundamental difference between a register having 32 bits of 1/0 and a set of 32 neurons in your brain, some of them active and some not? Both can represent "tree" in some way. I don't see why you mention quantum mechanics so often, but there are no superpositions on the level of firing neurons, their decoherence time is way too short.

    A single register bit is not sufficient for consciousness in the usual sense, but that was never the question - a single neuron does not have consciousness either.
  26. Jan 14, 2015 #25


    Staff Emeritus
    Science Advisor

    I agree that to be able to answer the question "Can a computer be conscious?", you have to have a definition of "conscious". In my opinion, the definition should be something observable at the macroscopic level, rather than something at the microscopic level having to do with quantum effects. Why do I say that? Because we grant each other the status of being conscious based on outward behavior, without having any detailed microscopic theory of consciousness.

    That was sort of the idea behind the Turing Test, to make machine intelligence something that is observable, rather than how something is implemented. I think that the Turing Test is not really right: It's possible to pass the Turing Test by tricks that give the illusion of intelligence without a real implementation of understanding. In the other direction, I can imagine nonhuman intelligence (the intelligence of animals, or of aliens, or of robots) that would be different enough from human intelligence that it would fail to pass the Turing Test, but which we would probably consider intelligence of a different kind.

    So what is the definition of consciousness in terms of outwardly observable behavior? I don't know! It's one of those things where I think I would know it when I see it. I think that there is a whole package of fuzzy concepts that are tied together in our notion of what it means to be a "person": emotions, plans, goals, the ability to remember and learn from the past, evidence of an inner model of how the world works, evidence of updating that model based on experience, that sort of thing.