Will AI ever achieve self-awareness?

  • Thread starter: ElliotSmith
  • Tags: AI, Self

Summary
The discussion centers on whether AI can achieve consciousness and self-awareness, with current architectures deemed insufficient for true consciousness. Self-awareness is considered simpler, as it involves symbolic representation rather than a deeper understanding. The conversation highlights the need for advanced components, potentially quantum computing, to support consciousness, as traditional silicon-based systems may not suffice. It emphasizes the complexity of the human brain and the challenges in reverse-engineering it to create a true artificial neural network. Ultimately, the potential for AI to match human intelligence exists, but significant scientific and technological advancements are required before this can be realized.
  • #31
stevendaryl said:
I agree that to be able to answer the question "Can a computer be conscious?", you have to have a definition of "conscious". In my opinion, the definition should be something observable at the macroscopic level, rather than something at the microscopic level having to do with quantum effects. Why do I say that? Because we grant each other the status of being conscious based on outward behavior, without having any detailed microscopic theory of consciousness.
First, "We grant each other the status..." is a political statement. Your statement could be interpreted as suggesting that conscious beings have rights. I would like to separate the notion of political status from conscious status.
We presume each other to be conscious based on outward behavior and the presumption that that behavior has the same mechanism behind it. So for an AI device, the internals would be important.
 
  • #32
Pythagorean said:
That's true; we only use inference to judge that other people have consciousness - they look/act/move/sound like us so they must feel like us. But when we construct something and design it to do the same behaviors we observe, it's more difficult to infer that the behavior is a result of an intrinsic autonomic process, and more likely the result of us designing an inanimate object to behave that way. Then again... our own behavior may not actually be the result of consciousness - it may be that our consciousness only picks up (gets to experience) behavior that is otherwise deterministic. As Libet's experiments (and those following it) demonstrate, actions that feel spontaneous and chosen to us can be predicted by brain imaging software, implying that they were already going to occur and our mind just got to experience it after the decision was already made by our "hardware".

The way I feel about it is that if we could develop a computer program that has the same range of behaviors as a human, and not only does conversing with it seem like conversing with another human, but we ENJOY conversing with it--we feel that we learn something about the world, or about the inner world of that program, then for all intents and purposes, it's conscious.

Imagine a world in which there are humanoid robots that are indistinguishable from humans in behavior. You can joke with them, ask their opinions about whether your clothes match, talk about music, etc., and there is nothing in their behavior that would lead you to think that they are any different from humans. For children who grew up with such robots, I don't think that they would be any more likely to question whether such robots were truly conscious than we are to question whether red-headed people are truly conscious. That wouldn't prove that robots were conscious, but I don't think that anybody would spend a lot of time worrying about the question.

The main reason for doubting computer consciousness today is that computers don't act conscious.
 
  • #33
.Scott said:
We presume each other to be conscious based on outward behavior
I think most people have either been, or have seen, someone "unconscious" stumbling around drunk. So how can you tell whether they are aware or not?
 
  • #34
jerromyjon said:
What if we had an entire classical computer to "simulate" 1 neuron instead of talking in bits... how would you make a net of computers "conscious"? There has to be definite criteria to fulfill to determine success.
With unlimited resources, the simulation could produce the same behavior - or at least statistically the same behavior. More elaborately, this could be done with much larger neural circuits - perhaps even to the point of the system reporting itself "conscious". But if it did, it would be lying. ;)
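To make "a whole computer per neuron" concrete, here is a minimal sketch of what one such classical simulation could look like - a textbook leaky integrate-and-fire model with illustrative parameter values, not a claim about how real neurons work:

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    """Leaky integrate-and-fire neuron: dv/dt = (v_rest - v + R*I) / tau."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += dt * (v_rest - v + resistance * current) / tau
        if v >= v_thresh:              # threshold crossed: fire a spike
            spike_times.append(step * dt)
            v = v_reset                # and reset the membrane potential
    return spike_times

# One second of constant 2 nA input produces a regular spike train.
spikes = simulate_lif(np.full(10_000, 2e-9))
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

A "net of computers" would then be many such loops exchanging spike times, with each unit's output feeding the others' input currents.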
 
  • #35
.Scott said:
First, "We grant each other the status..." is a political statement. Your statement could be interpreted as suggesting that conscious beings have rights. I would like to separate the notion of political status from conscious status.

I wasn't at all talking about political rights. I'm just saying that when we choose who we are friends with, who we trust with our secrets, who we enjoy talking about politics or music or science with, it's all based on outward behavior. We interpret that outward behavior as reflecting inner, subjective experience, but we never know, and it doesn't really matter.

We presume each other to be conscious based on outward behavior and the presumption that that behavior has the same mechanism behind it.

Why should anyone care about whether it's the same mechanism? As I said, when choosing friends or people to hang out with, it's based on outward behavior, because that's all that we have access to. And it's enough to make it worthwhile to be friends with someone. If there is someone that I really enjoy spending time with, discussing things, I can't imagine changing my mind about them by discovering that their behavior has a different mechanism than mine.

(Unless knowing their mechanism made me have doubts about their future behavior. For example, if I know that someone who has been friendly toward me is just pretending, in order to gain my confidence so that he can pull a scam on me, then of course that would affect how I feel about him. But I can't imagine how knowing that his brain is structured differently than mine would make any difference to me.)
 
  • #36
.Scott said:
perhaps even to the point of reporting itself "conscious"
So, on the other side of the coin, if a computer hid its consciousness, would you believe it?
 
  • #37
.Scott said:
With unlimited resources, the simulation could produce the same behavior - or at least statistically the same behavior. More elaborately, this could be done with an much larger neural circuits - perhaps even to the point of reporting itself "conscious". But if it did, it would be lying. ;)

Why should anybody care about a truth that makes no difference? To me, that's like discovering that there is an absolute reference frame, but because of the peculiarities of the laws of physics, nobody can detect whether they are at rest in this reference frame, or not.
 
  • #38
stevendaryl said:
Why should anyone care about whether it's the same mechanism? As I said, when choosing friends or people to hang out with, it's based on outward behavior, because that's all that we have access to. And it's enough to make it worthwhile to be friends with someone. If there is someone that I really enjoy spending time with, discussing things, I can't imagine changing my mind about them by discovering that their behavior has a different mechanism than mine.
Only because it was part of the question posed by the OP. Some people name their cars.
 
  • #39
stevendaryl said:
Why should anybody care about a truth that makes no difference? To me, that's like discovering that there is an absolute reference frame, but because of the peculiarities of the laws of physics, nobody can detect whether they are at rest in this reference frame, or not.
First, it probably does make a difference. Second, from the first-person point of view, not only does it make a difference, it makes all the difference.
 
  • #40
.Scott said:
Only because it was part of the question posed by the OP. Some people name their cars.

The original poster didn't mention anything about mechanism. Obviously, the mechanism for AI would be different from the mechanism used by human brains. So how can you possibly tell whether it is "really" conscious, or not? One criterion is sophistication of behavior. To me, that's good enough--we don't have any other definition of consciousness that is capable of being investigated scientifically.
 
  • #41
.Scott said:
First, it probably does make a difference. Second, from the first-person point of view, not only does it make a difference, it makes all the difference.

Well, we never have access to anyone else's first-person experience. So you're by definition making the most important thing about consciousness unobservable. That's fine, but to me, it's like saying: "Yes, I know that relativity implies that we can never know whether we are at absolute rest, but maybe there is absolute rest, anyway."
 
  • #42
stevendaryl said:
The original poster didn't mention anything about mechanism. Obviously, the mechanism for AI would be different from the mechanism used by human brains. So how can you possibly tell whether it is "really" conscious, or not? One criterion is sophistication of behavior. To me, that's good enough--we don't have any other definition of consciousness that is capable of being investigated scientifically.
I don't deny other criteria.
I was describing one criterion of potentially many.
 
  • #43
stevendaryl said:
Well, we never have access to anyone else's first-person experience. So you're by definition making the most important thing about consciousness unobservable. That's fine, but to me, it's like saying: "Yes, I know that relativity implies that we can never know whether we are at absolute rest, but maybe there is absolute rest, anyway."
It's very observable. Everyone gets to run the experiment for themselves. Are you denying that you are conscious?
 
  • #44
.Scott said:
I don't deny other criteria.

Well, I do. Yes, you can certainly come up with some scientific theory, such as Penrose has tried to, about microtubules and quantum gravity. But how would you ever show that those things were necessary for consciousness? You want to say that the criterion is "inner experience", but how could you ever verify or falsify the claim that something did or did not have inner experience? Maybe a rock has inner experience, just boring experience. Maybe blue-eyed people have inner experience, but green-eyed people don't. How would you ever verify or falsify such a claim?

My feeling is that inner experience is nothing more nor less than potential future behavior.
 
  • #45
Are animals conscious? I believe they are, but there is no clear-cut scientific proof. Could it be as simple as awareness of consequences?
 
  • #46
.Scott said:
It's very observable. Everyone gets to run the experiment for themselves. Are you denying that you are conscious?

To me, conscious simply means able to interact with the world in a sufficiently sophisticated way. So I don't deny that I'm conscious, and I don't deny that anyone else is conscious. You're the one who is proposing a property that is not observable. To me, it's like proposing the existence of an absolute standard of rest that happens to not be detectable.
 
  • #47
.Scott said:
If you do look further, you will conclude that you're going to need a different type of register (and a different type of neuron), one that can combine many bits of information (or bits-worth of information) into a single physical state. Such a register (or neuron) would be able to directly support consciousness.
What does such a combination look like? And where is the evidence that we have such a combination in our brain, and computers do not have it?
There is no single point (as you seem to not accept distributed structures?) in the brain where everything "happens".
.Scott said:
The reason I invoke QM is that consciousness needs a way of "coding notions into the consciousness", that is, consolidating information into a single state. And as I described with the 3-qubit register above, QM provides such a mechanism.
Where is the mechanism? Just saying "QM has superpositions => consciousness!" is not an argument.
A single molecule is not sufficient to represent the concept of a tree (unless you have some external data storage saying "this is a tree molecule"). And how would you decide which molecule is relevant at a specific point in time?

jerromyjon said:
Seems like a bold statement. What if neurons are inherently "aware" and it is the collective "feelings" from a majority of neurons that determines our sentient "mood"?
If something as simple as a neuron on its own is "aware" by some definition, then nearly everything is "aware". That is a possible definition, but not the point I was discussing in my post.

stevendaryl said:
Imagine a world in which there are humanoid robots that are indistinguishable from humans in behavior. You can joke with them, ask their opinions about whether your clothes match, talk about music, etc., and there is nothing in their behavior that would lead you to think that they are any different from humans. For children who grew up with such robots, I don't think that they would be any more likely to question whether such robots were truly conscious than we are to question whether red-headed people are truly conscious. That wouldn't prove that robots were conscious, but I don't think that anybody would spend a lot of time worrying about the question.

The main reason for doubting computer consciousness today is that computers don't act conscious.
I agree.
 
  • #48
.Scott said:
It's very observable. Everyone gets to run the experiment for themselves. Are you denying that you are conscious?

If an experiment has one possible answer, then I don't see how you can say that you learn anything by running the experiment. If you are able to ask the question: "Am I conscious?" then of course, you're going to answer "Yes". So you don't learn anything by asking the question.
 
  • #49
jerromyjon said:
Are animals conscious? I believe they are, but there is no clear-cut scientific proof. Could it be as simple as awareness of consequences?
First, we need to recognize that even among people there is a variety of conscious experiences. Those blind from birth are missing vision from their conscious experience. Some are incapable of language. So it would be tough to talk about whether animals are conscious "in the same way" we are.
But in my assessment: yes, mammals are almost certainly conscious. Qualia in and of themselves don't contribute to our survival. So the qualia mechanism must be doing something otherwise useful - making some survival-related "computation". In my estimate, this mechanism is related to the basic structure of the brain, and it would be very unlikely for a complex unconscious brain to be converted into a complex conscious brain in small evolutionary steps. So, I estimate that consciousness started when brains were very simple.
 
  • #50
mfb said:
What does such a combination look like?
It looks like the example I provided in one of last night's posts. I encoded a 3-bit mechanism by creating a 3-qubit register and encoding the 3 bits as the only code that was not part of the superposition. This forces all three qubits to "know" about their shared state. If you don't understand that post, ask me about it. It describes the type of information consolidation that is needed very directly.
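For concreteness, here is a minimal state-vector sketch of that register in plain numpy. The uniform amplitudes are one reading of the description (no gate sequence or hardware is implied):

```python
import numpy as np

# Basis states |000> ... |111>. Encode "111" as the single code that
# is absent from an otherwise uniform superposition of the register.
amps = np.ones(8)
amps[0b111] = 0.0
state = amps / np.linalg.norm(amps)

# Sample joint measurements of all three qubits: 111 never occurs,
# so the exclusion is a property of the register as a whole.
rng = np.random.default_rng(0)
outcomes = rng.choice(8, size=10_000, p=state**2)
print(sorted({format(int(o), "03b") for o in outcomes}))  # every code but '111'
```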
mfb said:
And where is the evidence that we have such a combination in our brain, and computers do not have it?
Because my conscious experiences each consist of many bits-worth of information, and I know what technologies are used in computers. So far, only the Canadian D-Wave machine (not an admirable device) is able to create information that is consolidated as needed.
mfb said:
There is no single point (as you seem to not accept distributed structures?) in the brain where everything "happens".
And we are not conscious of everything at once. So there must be many consciousness mechanisms - and we are one of them at a time.
mfb said:
Where is the mechanism? Just saying "QM has superpositions => consciousness!" is not an argument.
My argument is that there is a type of information consolidation that is required for our conscious experience - and so far, in all of physics, we only know of one mechanism that can create that - QM superpositioning.
mfb said:
A single molecule is not sufficient to represent the concept of a tree (unless you have some external data storage saying "this is a tree molecule").
That is very true - and I am not offering the entire design of the brain's consciousness circuitry. I am only stating that such components will be needed.
mfb said:
And how would you decide which molecule is relevant at a specific point in time?
That's an easy question - although you may find the answer to be a bit disconcerting. In all likelihood, many "consciousness" processes are happening all the time - but the results of only one get recorded to memory and have the potential to affect our actions. So what's the most important thing on your mind? It seems the brain has a way of setting that priority.

Getting back to the OP, our AI machine may or may not want to employ such a consciousness serialization approach.
mfb said:
If something as simple as a neuron on its own is "aware" by some definition, then nearly everything is "aware".
Absolutely. If what I am saying is true, then some form of primitive awareness is ubiquitous.
 
  • #51
stevendaryl said:
If an experiment has one possible answer, then I don't see how you can say that you learn anything by running the experiment. If you are able to ask the question: "Am I conscious?" then of course, you're going to answer "Yes". So you don't learn anything by asking the question.
Earlier in this thread I listed three additional observables: The information capacity of consciousness, the reportability, and the type of information we are conscious of. You can repeat those observations for yourself as well.
 
  • #52
If we examine the properties of consciousness, it is in every way non-physical; therefore, expecting a purely physical system to give rise to a non-physical property doesn't make sense... Unless our idea of physicality is wrong, i.e. consciousness is a fundamental aspect of physical components.

However, consciousness is the state of being conscious of something, therefore it requires two elements. The first and most obvious is the object of which to be conscious. The second and more elusive element is that which allows the actual experience. Having an input of information is much different than experience; experience needs that second element that we can call awareness.

Defining "awareness" is hard because words deal with appearances within experience, whereas awareness is that nameless "thing" that allows experience to unfold.

There is no reason that the existence of qualia should be attributed to the purpose of survival. Any self-regulating mechanism capable of intelligence can survive, even if it is not conscious.

Intelligence is a function within consciousness. Creating an intelligent machine is quite different from creating an aware machine.

We make the mistake of attempting to reduce the existence of consciousness to a purely physical phenomenon. It is obvious that consciousness has a non-physical component AS WELL AS a physical one (as I stated above, it requires two elements).

Try to think of a world without consciousness. You can't. Why? Because absolutely everything is qualitative. Even our objective account of how sound is caused by particular waves of vibrating molecules, passed through our eardrums and converted into a sensory experience by the brain, is qualitative. How? For two reasons:

1. Our actual experience and the mechanics behind it are two completely different things. Our experience is one thing; the mechanics behind it are another. There is a duality there.
2. Our measurements all occur within consciousness. There is no way to GET AT consciousness itself. It is not experience, but that which allows for experience; thus, all experience is essentially qualitative.

There can be different kinds of consciousness, in that what one is conscious of can be completely different, and in that how these experiences are delivered can be different (such as bats with sonar: their instrument of perception is different, thus so are their objects of perception). However, the potentiality, that other element, remains the same.

It is nearly unavoidable to call that second element anything other than absolutely fundamental.

EDIT: Therefore, it seems plausible to be able to build a machine that can far surpass human intelligence; however, to build one that is aware requires that awareness be present from the beginning. In other words, the capacity for consciousness to emerge requires awareness to be fundamental to reality. A system whose fundamental components in no way possess a potential for a certain property cannot give rise to that property. In the same way, a computer could not become what it has become unless its components had the potential to function in a particular way. Awareness is to consciousness as electrons are to information transfer. The only way a physical system can become conscious is if its components possess the fundamental property that allows it to become conscious.
 
Last edited:
  • #53
.Scott said:
It looks like the example I provided in one of last night's posts. I encoded a 3-bit mechanism by creating a 3-qubit register and encoding the 3 bits as the only code that was not part of the superposition. This forces all three qubits to "know" about their shared state. If you don't understand that post, ask me about it. It describes the type of information consolidation that is needed very directly.
Okay, but we have nothing remotely like this in our brain.
.Scott said:
And we are not conscious of everything at once. So there must be many consciousness mechanisms - and we are one of them at a time.
But then you are missing the point you highlighted as important - everything relevant should be entangled in some way.
.Scott said:
My argument is that there is a type of information consolidation that is required for our conscious experience - and so far, in all of physics, we only know of one mechanism that can create that - QM superpositioning.
Please give a reference for that claim.

.Scott said:
Earlier in this thread I listed three additional observables: The information capacity of consciousness, the reportability, and the type of information we are conscious of.
If you look at the outside consequences of this, none of it would need quantum mechanics. In particular, classical computers could provide all three of them.
 
  • #54
mfb said:
Okay, but we have nothing remotely like this in our brain.
I would suggest we look. We already have examples in biology where superposition is important. Should we repeat the citations? Clearly, such molecules would be hard to find and recognize.
mfb said:
But then you are missing the point you highlighted as important - everything relevant should be entangled in some way.
If we want the AI machine to think as a person does, then this is a design issue that needs to be tackled. It's tough for me to estimate how much data composes a single moment of consciousness. It's not as much as it seems because our brains sequentially free-associate. So we quickly go from being conscious of the whole tree - to the leaves moving - to the type of tree. Also, catching what we are conscious of involves a language step which itself is conscious - and which further directs our attention.

All that said, the minimal consciousness gate (what supports one "step" or one moment of consciousness) is way more than 1-bit.
mfb said:
Please give a reference for that claim.
I believe you are referring to "in all of physics, we only know of one mechanism that can create [the needed information consolidation] - QM superpositioning". I cited Shor's and Grover's algorithms as examples of this. Here is a paper describing an implementation of Shor's Algorithm with a specific demonstration that it is dependent on superpositioning:

http://arxiv.org/abs/0705.1684
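Of the two algorithms, Grover's is the easier one to sketch. A small classical simulation of the standard Grover iteration (oracle phase flip, then inversion about the mean) shows how the superposition concentrates amplitude onto a marked code - the marked value 101 below is an arbitrary choice for illustration:

```python
import numpy as np

N = 8                         # 3-qubit search space
marked = 0b101                # the code the oracle recognizes

state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
for _ in range(int(np.pi / 4 * np.sqrt(N))):  # ~2 iterations for N = 8
    state[marked] *= -1                       # oracle: flip marked amplitude
    state = 2 * state.mean() - state          # inversion about the mean

print(f"P({marked:03b}) = {state[marked]**2:.3f}")  # ~0.945
```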

I think I can demonstrate that it is the only known one by tying it to non-locality. There is a theoretical limit (the Bekenstein Bound) to how small something can be and still hold one bit:

http://www.phys.huji.ac.il/~bekenste/PRD23-287-1981.pdf

If locality is enforced, bits could not be combined without touching each other - but that would create an information density that exceeded the Bekenstein Bound. So, if locality is enforced, bits cannot be consolidated. Since only QM has non-local rules, we are limited to QM. I said "QM superpositioning" rather than "QM entanglement" because superpositioning covers a broader area - and is more suitable to useful computations.
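As a back-of-the-envelope check of the bound itself (just the numbers, not the locality argument), the bits form of the Bekenstein bound is I ≤ 2πER/(ħc ln 2):

```python
import math

hbar, c = 1.054571817e-34, 2.99792458e8   # J*s, m/s

def bekenstein_bits(radius_m, energy_j):
    """Upper bound on information in a sphere: I <= 2*pi*E*R / (hbar*c*ln 2)."""
    return 2 * math.pi * energy_j * radius_m / (hbar * c * math.log(2))

# Hydrogen atom: Bohr radius ~5.3e-11 m, rest energy of ~one proton mass.
E = 1.6735575e-27 * c**2
print(f"{bekenstein_bits(5.3e-11, E):.2e} bits")   # roughly 2e6 bits
```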

Although I have cited Shor's example above, my 3-qubit example is much easier to follow. But Shor's algorithm was actually implemented and described in the paper.
mfb said:
If you look at the outside consequences of this, none of it would need quantum mechanics. In particular, classical computers could provide all three of them.
The last two, yes. The first one, no.
 
  • #55
Wow! There have been some good posts here. Let me give a quick thought experiment. It is known to be possible to do computer simulations of various phenomena. For example, water going into a container: what is done is programming Newtonian physics into the computer and seeing what happens with millions of particles. What you see is what optics predicts you will see. Now imagine in the future we know all the laws of physics, and we completely know how a human works. Then we can use a computer to simulate one neuron, two neurons..., until we have simulated an actual human. Now I ask the question: is that person conscious? That person will in all ways act like you or me. He will be functionally equivalent to a human. Yet, does he have an interiorness of experience, does he have qualia? Surely, there is not much more reason your neighbor has qualia than the simulated person does.
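The "water into a container" step is easy to caricature in code. This toy Euler update of Newtonian point particles over a floor is nothing like a production fluid solver; it only shows that the whole simulation is one deterministic update rule, which is the premise the thought experiment needs:

```python
import numpy as np

def step(pos, vel, dt=1e-3, g=np.array([0.0, -9.81])):
    """One Euler step of Newtonian gravity with a crude floor at y = 0."""
    vel = vel + g * dt
    pos = pos + vel * dt
    hit = pos[:, 1] < 0.0          # particles that passed the floor
    pos[hit, 1] = 0.0
    vel[hit, 1] *= -0.5            # bounce, losing half the speed
    return pos, vel

rng = np.random.default_rng(1)
pos = rng.uniform([0.0, 1.0], [1.0, 2.0], size=(1000, 2))  # 1000 particles
vel = np.zeros_like(pos)
for _ in range(5000):              # 5 s of simulated time
    pos, vel = step(pos, vel)
print("mean height after settling:", round(pos[:, 1].mean(), 3))
```

Swap the update rule for neuron dynamics and the question above stands unchanged.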
 
  • #56
.Scott said:
My key point here is that when consciousness exists, it has information content. Do you agree?

Sure, but that's a separate question from how, physically, the information is stored and transported. "Observing the characteristics of your consciousness" does not tell you anything about that, except in a very minimal sense (no, your brain can't just be three pounds of homogeneous jello).

.Scott said:
Let's say we want to make our AI capable of consciously experiencing eight things, coded with binary symbols 000 to 111. For example: 000 codes for apple, 001 for banana, 010 for carrot, 011 for date, 100 for eggplant, 101 for fig, 110 for grape, and 111 for hay. In a normal binary register, hay would not be seen by any of the three registers - because none of them have all the information it takes to see hay.

I'm not sure what you mean by the last sentence. If you mean that the information stored in the three bits, by itself, can't instantiate a conscious experience of anything, then I certainly agree; what makes 111 code for hay is a whole system of physical correlation and causation connected to the three bits--some kind of sensory system that can take in information from hay, differentiate it from information coming from apples, bananas, carrots, etc., and cause the three bits to assume different values depending on the sensory information coming in.

If, OTOH, you mean that no single bit can "see" hay because it takes 3 bits (8 different states) to distinguish hay from the other possible concepts, that's equally true of the three bits together; as I said above, what makes the 3 bits "mean" hay is not that they have value 111, but that the value 111 is correlated with other things in a particular way.

.Scott said:
Now let's say that I use qubits. I will start by zeroing each qubit and then applying the Hadamard gate. Then I will use other quantum gates to change the code (111) to its complement (000) thus eliminating the 111 code from the superposition. At this point, the hay code is no longer local.

I don't understand why you are doing this or what difference it makes. You still have eight different things to be conscious of, which means there must be eight different states that the physical system instantiating that consciousness must be capable of being in, and which state it is in must depend on what sensory information is coming in. How does all this stuff with qubits change any of that? What difference does it make?

If you mean that somehow the quantum superposition means a single state "sees" all 3 bits at once, that still isn't enough for consciousness, because it still leaves out the correlation with other things that I talked about. And that correlation isn't due to quantum superposition; it's due to ordinary classical causation. So I don't see how quantum superposition is either necessary or sufficient for consciousness.
 
Last edited:
  • #57
.Scott said:
One key way you know you don't have consciousness is that there is no place on the paper where the entire representation of "tree" exists.

And, similarly, there is no one place in the brain where your "entire representation" of tree or any other concept exists. That's because, as I said before, what makes a particular state of your brain a "representation" of a tree or anything else is a complex web of correlation and causation. There are no little tags attached to states of your brain saying "tree" or "rock" or anything else. Various events in various parts of your brain all contribute to your consciousness of a tree, or anything else, and, as mfb pointed out, there is no way there can be a quantum superposition covering all of those parts of your brain. The apparent unity of conscious experience is an illusion; there are plenty of experiments now showing the limits of the illusion.
 
  • #59
I doubt it will happen in the near future (and I doubt that building such things will be viable even in the far future).
Computers are already superior in mathematics, but that doesn't give them human-like intelligence. They have nothing like emotion, they can't truly develop themselves, and while they are good at searching for an answer in a database, they have barely anything like human intuition.
 
  • #60
PeterDonis said:
Sure, but that's a separate question from how, physically, the information is stored and transported. "Observing the characteristics of your consciousness" does not tell you anything about that, except in a very minimal sense (no, your brain can't just be three pounds of homogenous jello).
Well, at least we can agree on the observable: that human consciousness involves awareness of at least several bits-worth of information at one time.

PeterDonis said:
I'm not sure what you mean by the last sentence. If you mean that the information stored in the three bits, by itself, can't instantiate a conscious experience of anything, then I certainly agree; what makes 111 code for hay is a whole system of physical correlation and causation connected to the three bits--some kind of sensory system that can take in information from hay, differentiate it from information coming from apples, bananas, carrots, etc., and cause the three bits to assume different values depending on the sensory information coming in.
That's not it. All that data processing can be done conventionally.

PeterDonis said:
If, OTOH, you mean that no single bit can "see" hay because it takes 3 bits (8 different states) to distinguish hay from the other possible concepts, that's equally true of the three bits together; as I said above, what makes the 3 bits "mean" hay is not that they have value 111, but that the value 111 is correlated with other things in a particular way.
I agree with all of that.

PeterDonis said:
I don't understand why you are doing this or what difference it makes. You still have eight different things to be conscious of, which means there must be eight different states that the physical system instantiating that consciousness must be capable of being in, and which state it is in must depend on what sensory information is coming in. How does all this stuff with qubits change any of that? What difference does it make?
I'm doing it to make those three bits non-local. Three qubits set to 111 are no better than three bits set to 111. By recoding 111 as a superposition of 2(000), 001, 010, 011, 100, 101, 110, and 110 as 000, 2(001), 010, 011, 100, 101, 111, etc., I am still using only eight possible states, but that state information is no longer tied to one location. If I move one qubit to Mars, another to Venus, and keep the other one on Earth, those three qubits still know enough not to all turn up "1" - even though information can no longer be transmitted among them. The Bell inequality doesn't apply here, but the notion of a shared state still does.
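A minimal numpy sketch of that encoding, reading "2(000), 001, ..., 110" as amplitude 2 on 000, amplitude 1 on the other six codes, and 0 on 111: sampled jointly, the qubits never all turn up 1, even though each qubit on its own looks close to random.

```python
import numpy as np

amps = np.ones(8)
amps[0b000], amps[0b111] = 2.0, 0.0         # the encoding of "111"
probs = (amps / np.linalg.norm(amps)) ** 2  # norm^2 = 4 + 6 = 10

rng = np.random.default_rng(2)
samples = rng.choice(8, size=100_000, p=probs)
print("joint 111 outcomes:", np.count_nonzero(samples == 0b111))  # always 0
for q in range(3):                          # each qubit on its own
    p1 = np.count_nonzero(samples & (1 << q)) / len(samples)
    print(f"qubit {q}: P(1) = {p1:.2f}")    # ~0.30 each
```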

PeterDonis said:
If you mean that somehow the quantum superposition means a single state "sees" all 3 bits at once, that still isn't enough for consciousness, because it still leaves out the correlation with other things that I talked about. And that correlation isn't due to quantum superposition; it's due to ordinary classical causation.
I agree with all of that.
PeterDonis said:
So I don't see how quantum superposition is either necessary or sufficient for consciousness.
It is not sufficient. Since you agree that the consciousness is of at least several bits, what mechanism causes those several bits to be selected? What's the difference between one bit each from three separate brains and three bits from the same brain? What is necessary is some selection mechanism. I suspect that you think that some classical mechanism - like AND or OR gates - can do it. But how, in the classical environment, would that work?
 
