Will AI ever achieve self-awareness?

  • Thread starter: ElliotSmith
  • Tags: AI, Self
AI Thread Summary
The discussion centers on whether AI can achieve consciousness and self-awareness, with current architectures deemed insufficient for true consciousness. Self-awareness is considered simpler, as it involves symbolic representation rather than a deeper understanding. The conversation highlights the need for advanced components, potentially quantum computing, to support consciousness, as traditional silicon-based systems may not suffice. It emphasizes the complexity of the human brain and the challenges in reverse-engineering it to create a true artificial neural network. Ultimately, the potential for AI to match human intelligence exists, but significant scientific and technological advancements are required before this can be realized.
  • #51
stevendaryl said:
If an experiment has one possible answer, then I don't see how you can say that you learn anything by running the experiment. If you are able to ask the question: "Am I conscious?" then of course, you're going to answer "Yes". So you don't learn anything by asking the question.
Earlier in this thread I listed three additional observables: The information capacity of consciousness, the reportability, and the type of information we are conscious of. You can repeat those observations for yourself as well.
 
  • #52
If we examine the properties of consciousness, it is in every way non-physical, so depending on a purely physical system to give rise to a non-physical property doesn't make sense... unless our idea of physicality is wrong, i.e. consciousness is a fundamental aspect of physical components.

However, consciousness is the state of being conscious of something, therefore it requires two elements. The first and most obvious is the object of which to be conscious. The second and more elusive element is that which allows the actual experience. Having an input of information is much different than experience; experience needs that second element that we can call awareness.

Defining "awareness" is hard because words deal with appearances within experience, whereas awareness is that nameless "thing" that allows experience to unfold.

There is no reason that the existence of qualia should be designated to the purpose of survival. Any self-regulating mechanism capable of intelligence can survive, even if it is not conscious.

Intelligence is a function within consciousness. Creating an intelligent machine is quite different than creating an aware machine.

We make the mistake of attempting to reduce the existence of consciousness to a purely physical phenomenon. It is obvious that consciousness has a non-physical component AS WELL AS a physical one (as I stated above, it requires two elements).

Try to think of a world without consciousness. You can't. Why? Because absolutely everything is qualitative. Even our objective account of how sound is caused by particular waves of vibrating molecules, passed through the eardrum and converted into a sensory experience by the brain, is qualitative. How? For two reasons:

1. Our actual experience and the mechanics behind it are two completely different things. Our experience is one thing; the mechanics behind it are another. There is a duality there.
2. Our measurements all occur within consciousness. There is no way to GET AT consciousness itself. It is not experience, but that which allows for experience, thus, all experience is essentially qualitative.

There can be different kinds of consciousness, in the sense that what one is conscious of can be completely different, and in the sense that how these experiences are delivered can be different (such as bats with sonar: their instrument of perception is different, thus so are their objects of perception). However, the potentiality - that other element - remains the same.

It is nearly unavoidable to call that second element anything other than absolutely fundamental.

EDIT: Therefore, it seems plausible to be able to build a machine that can far surpass human intelligence; however, to build one that is aware requires that awareness be present from the beginning. In other words, the capacity for consciousness to emerge requires awareness to be fundamental to reality. A system whose fundamental components in no way possess a potential for a certain property cannot give rise to that property, in the same way a computer could not become what it has become unless its components had the potential to function in a particular way. Awareness is to consciousness as electrons are to information transfer. The only way a physical system can become conscious is if its components possess the fundamental property that allows it to become conscious.
 
Last edited:
  • #53
.Scott said:
It looks like the example I provided in one of last night's posts. I encoded a 3-bit mechanism by creating a 3-qubit register and encoding the 3 bits as the only code that was not part of the superposition. This forces all three qubits to "know" about their shared state. If you don't understand that post, ask me about it. It describes the type of information consolidation that is needed very directly.
Okay, but we have nothing remotely like this in our brain.
.Scott said:
And we are not conscious of everything at once. So there must be many consciousness mechanisms - and we are one of them at a time.
But then you are missing the point you highlighted as important - everything relevant should be entangled in some way.
.Scott said:
My argument is that there is a type of information consolidation that is required for our conscious experience - and so far, in all of physics, we only know of one mechanism that can create that - QM superpositioning.
Please give a reference for that claim.

.Scott said:
Earlier in this thread I listed three additional observables: The information capacity of consciousness, the reportability, and the type of information we are conscious of.
If you look at the outside consequences of this, none of it would need quantum mechanics. In particular, classical computers could provide all three of them.
 
  • #54
mfb said:
Okay, but we have nothing remotely like this in our brain.
I would suggest we look. We already have examples in biology where superposition is important. Should we repeat the citations? Clearly, such molecules would be hard to find and recognize.
mfb said:
But then you are missing the point you highlighted as important - everything relevant should be entangled in some way.
If we want the AI machine to think as a person does, then this is a design issue that needs to be tackled. It's tough for me to estimate how much data composes a single moment of consciousness. It's not as much as it seems because our brains sequentially free-associate. So we quickly go from being conscious of the whole tree - to the leaves moving - to the type of tree. Also, catching what we are conscious of involves a language step which itself is conscious - and which further directs our attention.

All that said, the minimal consciousness gate (what supports one "step" or one moment of consciousness) is way more than one bit.
mfb said:
Please give a reference for that claim.
I believe you are referring to "in all of physics, we only know of one mechanism that can create [the needed information consolidation] - QM superpositioning". I cited Shor's and Grover's algorithms as examples of this. Here is a paper describing an implementation of Shor's Algorithm with a specific demonstration that it is dependent on superpositioning:

http://arxiv.org/abs/0705.1684
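
For readers who haven't seen it, here is the shape of Shor's algorithm in a rough plain-Python sketch of my own. The period-finding loop is brute-forced here; that loop is exactly the step a quantum computer performs on a superposition of all inputs at once:

```python
from math import gcd

# Shor's algorithm in miniature, factoring N = 15 with base a = 7.
# The quantum speedup lives entirely in the period-finding loop below;
# here it is brute-forced, where a quantum computer would evaluate
# a^x mod N on a superposition of all x values at once.
N, a = 15, 7

r = 1
while pow(a, r, N) != 1:   # classically find the period r of a^x mod N
    r += 1

assert r % 2 == 0          # the lucky case; otherwise pick another base a
factor = gcd(pow(a, r // 2) - 1, N)
print(f"period r = {r}; {N} = {factor} x {N // factor}")
```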

I think I can demonstrate that it is the only known one by tying it to non-locality. There is a theoretical limit (the Bekenstein Bound) to how small something can be and still hold one bit:

http://www.phys.huji.ac.il/~bekenste/PRD23-287-1981.pdf

If locality is enforced, bits could not be combined without touching each other - but that would create an information density that exceeded the Bekenstein Bound. So, if locality is enforced, bits cannot be consolidated. Since only QM has non-local rules, we are limited to QM. I said "QM superpositioning" rather than "QM entanglement" because superpositioning covers a broader area - and is more suitable to useful computations.
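
As a back-of-the-envelope illustration of the bound (my own sketch, using SI constants), a hydrogen atom can hold at most a few million bits:

```python
import math

# Bekenstein bound: a sphere of radius R enclosing total energy E can
# hold at most I = 2*pi*R*E / (hbar * c * ln 2) bits.
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s

def bekenstein_bits(radius_m, energy_j):
    return 2 * math.pi * radius_m * energy_j / (hbar * c * math.log(2))

# Illustration: a hydrogen atom (Bohr radius, rest-mass energy).
radius = 5.29e-11                # m
energy = 1.6735575e-27 * c**2    # J: hydrogen mass times c^2
print(f"{bekenstein_bits(radius, energy):.1e} bits")   # ~ 2e6 bits
```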

Although I have cited Shor's example above, my 3-qubit example is much easier to follow. But Shor's algorithm was actually implemented and described in the paper.
mfb said:
If you look at the outside consequences of this, none of it would need quantum mechanics. In particular, classical computers could provide all three of them.
The last two, yes. The first one, no.
 
  • #55
Wow! There have been some good posts here. Let me give a quick thought experiment. It is known to be possible to do computer simulations of various phenomena, for example water flowing into a container. What is done is programming Newtonian physics into the computer and seeing what happens with millions of particles. What you see is what optics predicts you will see. Now imagine that in the future we know all the laws of physics, and we completely know how a human works. Then we can use a computer to simulate one neuron, two neurons..., until we have simulated an entire human. Now I ask the question: is that person conscious? That person will in all ways act like you or me. He will be functionally equivalent to a human. Yet does he have an interiorness of experience, does he have qualia? Surely, there is not much more reason to think your neighbor has qualia than the simulated person does.
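
For what it's worth, the first rung of that ladder is routine today. Here is a minimal sketch of simulating one neuron - a standard leaky integrate-and-fire idealization, nowhere near the full physical detail the thought experiment imagines:

```python
# Leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau.
# A crude idealization; the thought experiment imagines simulating real
# neurons in full physical detail, which is far beyond a model like this.
dt, tau = 0.1, 10.0                                # ms
v_rest, v_thresh, v_reset = -65.0, -50.0, -70.0    # mV
r_m, current = 10.0, 2.0                           # MOhm, nA

v, spike_times = v_rest, []
for step in range(1000):                           # 100 ms of simulated time
    v += dt * (-(v - v_rest) + r_m * current) / tau
    if v >= v_thresh:                              # threshold crossed: spike, reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 100 ms")
```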
 
  • #56
.Scott said:
My key point here is that when consciousness exists, it has information content. Do you agree?

Sure, but that's a separate question from how, physically, the information is stored and transported. "Observing the characteristics of your consciousness" does not tell you anything about that, except in a very minimal sense (no, your brain can't just be three pounds of homogenous jello).

.Scott said:
Let's say we want to make our AI capable of consciously experiencing eight things, coded with binary symbols 000 to 111. For example: 000 codes for apple, 001 for banana, 010 for carrot, 011 for date, 100 for eggplant, 101 for fig, 110 for grape, and 111 for hay. In a normal binary register, hay would not be seen by any of the three bits - because none of them has all the information it takes to see hay.

I'm not sure what you mean by the last sentence. If you mean that the information stored in the three bits, by itself, can't instantiate a conscious experience of anything, then I certainly agree; what makes 111 code for hay is a whole system of physical correlation and causation connected to the three bits--some kind of sensory system that can take in information from hay, differentiate it from information coming from apples, bananas, carrots, etc., and cause the three bits to assume different values depending on the sensory information coming in.

If, OTOH, you mean that no single bit can "see" hay because it takes 3 bits (8 different states) to distinguish hay from the other possible concepts, that's equally true of the three bits together; as I said above, what makes the 3 bits "mean" hay is not that they have value 111, but that the value 111 is correlated with other things in a particular way.

.Scott said:
Now let's say that I use qubits. I will start by zeroing each qubit and then applying the Hadamard gate. Then I will use other quantum gates to change the code (111) to its complement (000) thus eliminating the 111 code from the superposition. At this point, the hay code is no longer local.

I don't understand why you are doing this or what difference it makes. You still have eight different things to be conscious of, which means there must be eight different states that the physical system instantiating that consciousness must be capable of being in, and which state it is in must depend on what sensory information is coming in. How does all this stuff with qubits change any of that? What difference does it make?

If you mean that somehow the quantum superposition means a single state "sees" all 3 bits at once, that still isn't enough for consciousness, because it still leaves out the correlation with other things that I talked about. And that correlation isn't due to quantum superposition; it's due to ordinary classical causation. So I don't see how quantum superposition is either necessary or sufficient for consciousness.
 
Last edited:
  • #57
.Scott said:
One key way you know you don't have consciousness is that there is no place on the paper where the entire representation of "tree" exists.

And, similarly, there is no one place in the brain where your "entire representation" of tree or any other concept exists. That's because, as I said before, what makes a particular state of your brain a "representation" of a tree or anything else is a complex web of correlation and causation. There are no little tags attached to states of your brain saying "tree" or "rock" or anything else. Various events in various parts of your brain all contribute to your consciousness of a tree, or anything else, and, as mfb pointed out, there is no way there can be a quantum superposition covering all of those parts of your brain. The apparent unity of conscious experience is an illusion; there are plenty of experiments now showing the limits of the illusion.
 
  • #59
I doubt it will happen in the near future (and I doubt that building such things will be viable in the far future).
They are already superior in mathematics, but that doesn't give them human-like intelligence. They have nothing like emotion, they can't truly develop themselves, and while they are good at searching for an answer in a database, they have barely anything like human intuition.
 
  • #60
PeterDonis said:
Sure, but that's a separate question from how, physically, the information is stored and transported. "Observing the characteristics of your consciousness" does not tell you anything about that, except in a very minimal sense (no, your brain can't just be three pounds of homogenous jello).
Well, at least we can agree on the observable: that human consciousness involves awareness of at least several bits' worth of information at one time.

PeterDonis said:
I'm not sure what you mean by the last sentence. If you mean that the information stored in the three bits, by itself, can't instantiate a conscious experience of anything, then I certainly agree; what makes 111 code for hay is a whole system of physical correlation and causation connected to the three bits--some kind of sensory system that can take in information from hay, differentiate it from information coming from apples, bananas, carrots, etc., and cause the three bits to assume different values depending on the sensory information coming in.
That's not it. All that data processing can be done conventionally.

PeterDonis said:
If, OTOH, you mean that no single bit can "see" hay because it takes 3 bits (8 different states) to distinguish hay from the other possible concepts, that's equally true of the three bits together; as I said above, what makes the 3 bits "mean" hay is not that they have value 111, but that the value 111 is correlated with other things in a particular way.
I agree with all of that.

PeterDonis said:
I don't understand why you are doing this or what difference it makes. You still have eight different things to be conscious of, which means there must be eight different states that the physical system instantiating that consciousness must be capable of being in, and which state it is in must depend on what sensory information is coming in. How does all this stuff with qubits change any of that? What difference does it make?
I'm doing it to make those three bits non-local. Three qubits set to 111 are no better than three bits set to 111. By recoding 111 as a superposition of 2(000),001,010,011,100,101,110, and 110 as 000,2(001),010,011,100,101,111, etc., I am still using only eight possible states, but that state information is no longer tied to one location. If I move one qubit to Mars, another to Venus, and keep the third on Earth, those three qubits still know enough not to all turn up "1" - even though information can no longer be transmitted among them. The Bell inequality doesn't apply here, but the notion of a shared state still does.
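
A quick statevector sketch (my own construction, not the exact gate sequence described above) shows the shared-state property directly: each qubit alone looks random, yet the three are never all 1 together:

```python
import numpy as np

rng = np.random.default_rng(0)

# Equal superposition over the 7 three-qubit basis states other than |111>.
# No single qubit carries the constraint, yet the three together can
# never be measured as 111 - the "shared state" described above.
amps = np.ones(8) / np.sqrt(7)
amps[0b111] = 0.0

probs = np.abs(amps) ** 2
samples = rng.choice(8, size=10_000, p=probs)

print("P(lowest qubit = 1):", (samples & 1).mean())            # ~ 3/7: random-looking alone
print("ever measured 111?  ", bool((samples == 0b111).any()))  # False
```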

PeterDonis said:
If you mean that somehow the quantum superposition means a single state "sees" all 3 bits at once, that still isn't enough for consciousness, because it still leaves out the correlation with other things that I talked about. And that correlation isn't due to quantum superposition; it's due to ordinary classical causation.
I agree with all of that.
PeterDonis said:
So I don't see how quantum superposition is either necessary or sufficient for consciousness.
It is not sufficient. Since you agree that the consciousness is of at least several bits, what mechanism causes those several bits to be selected? What's the difference between one bit each from three separate brains and three bits from the same brain? What is necessary is some selection mechanism. I suspect you think that some classical mechanism - like AND or OR gates - can do it. But how, in the classical environment, would that work?
 
  • #61
PeterDonis said:
The apparent unity of conscious experience is an illusion; there are plenty of experiments now showing the limits of the illusion.
I am certainly not advocating a unity of consciousness - just a consolidation of the information we are conscious of, illusion or not.
 
  • #62
.Scott said:
I am certainly not advocating a unity of consciousness - just a consolidation of the information we are conscious of, illusion or not.

Right, but you still have yet to make the case that consciousness requires a single physical state, whether you want to call it unity or not. And if all you can do is make logical arguments about it (i.e. you can't provide evidence) then anybody else can come up with logical arguments challenging it and everybody is just having logical arguments with no evidence, which isn't very productive.
 
  • #63
Pythagorean said:
Right, but you still have yet to make the case that consciousness requires a single physical state, whether you want to call it unity or not. And if all you can do is make logical arguments about it (i.e. you can't provide evidence) then anybody else can come up with logical arguments challenging it and everybody is just having logical arguments with no evidence, which isn't very productive.
There seems to be very little argument over the evidence - it's a direct observable. And the result: we all experience lots of data in a moment. I've cited sources describing the physical limitations of what it takes to create that situation. If I can make my logic clearer, let me know and I will respond.
 
  • #64
There is a problem in the definition of self-awareness, I think.
 
  • #65
.Scott said:
There seems to be very little argument over the evidence - it's a direct observable.

It's still not directly observable to me that consciousness requires one physical state. I know you've presented a lot of evidence about other things; things which I don't really dispute anyway, but which are irrelevant if this point can't be demonstrated.
 
  • #66
Pythagorean said:
It's still not directly observable to me that consciousness requires one physical state. I know you've presented a lot of evidence about other things; things which I don't really dispute anyway, but which are irrelevant if this point can't be demonstrated.
I agree that the requirement for one physical state is not a direct observable. And I obviously shouldn't treat it as self-evident.

Let's see if I can describe the alternative model.
That would be that we are conscious of a set of information that is dispersed throughout the brain. That our consciousness is not a single device (or a single device at a time), but something that automatically arises through the processing of the data.

I think I need some help with that "automatically arises" part. If you are thinking that conventional data processing creates qualia, you're saying that there is something intrinsically different about shuffling bits, shuffling neuron signals, and shuffling a deck of cards - unless shuffling a deck of cards also creates qualia. In broad conceptual terms, what physical condition that might be in our brains might cause qualia?

In the brain, what is the difference between the circuitry that processes information from the retina into a 3D model and the part that can become conscious of the result? If it is because the retina data isn't wired directly into our conscious and the data from the model is, then what is it that it is wired into?

On a computer, what type of operation would create qualia? A database look-up? A multiply? Image processing? Navigating as an autopilot? Synthesizing speech? Simulating a Turing tape machine? If I lined up a bunch of computers and each one was doing a different type of data processing would that build up the qualia?

I see a fundamental problem with the alternatives that I am having trouble expressing. The alternatives involve "new physics" - something that happens when information is shuffled or handled in some special way or at some level of complexity - but it's not QM.
 
  • #67
I sense a false dilemma: you propose that consciousness must be either your idea or the alternative you outline - and I'm not sure what alternative(s) you outline besides the computational one, since they're not laid out carefully. But there's not much to suggest that these are the extent of our choices.

And second, it wouldn't require new physics if there were no top-down causation (i.e. free will), and free-will experiments so far tend to suggest that people feel like they've made a spontaneous decision after the predictable brain activity (in other words, the researchers were able to predict people's "spontaneous" decisions before the people even felt like they had made them). Not to mention, the idea of free will violates physics in the first place (an entity acting independently of cause and effect, yet still somehow causing and affecting).
 
  • #68
Pythagorean said:
I sense a false dilemma: you propose that consciousness must be either your idea or the alternative you outline - and I'm not sure what alternative(s) you outline besides the computational one, since they're not laid out carefully. But there's not much to suggest that these are the extent of our choices.
What are the alternatives? I was trying to come up with some that might make some sense. Since some, but not all, of the information gets into consciousness, there has to be some involvement with information - don't you agree?

Pythagorean said:
And second, it wouldn't require new physics if there were no top-down causation (i.e. free will), and free-will experiments so far tend to suggest that people feel like they've made a spontaneous decision after the predictable brain activity (in other words, the researchers were able to predict people's "spontaneous" decisions before the people even felt like they had made them). Not to mention, the idea of free will violates physics in the first place (an entity acting independently of cause and effect, yet still somehow causing and affecting).
If you want to can free will, that is fine with me. My personal estimate is that it is simply a purposeful, wired-in illusion. The "new physics" I was talking about was selecting the information that would contribute to consciousness. If the bits aren't selected by merging them into a single state, how else do they get associated? By proximity? If by proximity, how does that work? By mashing them together in NAND gates? If so, how does that work? That's what I mean by "new physics".
 
  • #69
I have to go back to Fred Hoyle's thought experiment:
Set me down at a workbench with an assortment of fundamental particles, a magnifier strong enough to see them, and tweezers small enough to handle them, and task me to duplicate myself atom by atom, right down to the spin of the very last electron.
When I'm done, there on the table lies my exact physical double.

Will it wake, sit up, and thank me for all that work? Will it know right from wrong? Will it think Mary Steenburgen is the prettiest creature since Helen of Troy?

I don't think it will.

Watson imitated awareness, but I doubt he felt jubilant at winning Jeopardy.

So it's back to defining self awareness, imho.

Are you software engineers working on introspective programs?
 
  • #70
jim hardy said:
Will it wake, sit up, and thank me for all that work? Will it know right from wrong? Will it think Mary Steenburgen is the prettiest creature since Helen of Troy?

I don't think it will.

Why do you think this? Given that scenario I would be incredibly shocked if it didn't.
 
  • #71
DavidSnider said:
Why do you think this? Given that scenario I would be incredibly shocked if it didn't.

Because I think there's a metaphysics that we're not very much aware of yet.
Alive vs. dead is in that realm.
We don't yet know what "the spark of life" is.

If I knew how to strike that "spark" in my double, it would be a sentient, feeling being, of course, because its neurons are wired for that.

But -- I don't want to go off topic; metaphysics and philosophy are troublemakers.

We perceive the universe via our electrochemical computer, the brain.
I suppose that, as you fellows suggest, similar perception can be emulated electronically,
but original thought and awareness of self, I believe, require "that spark".

Probably it's out there in that absolute reference frame...

old jim
 
  • #72
I don't see why "alive" versus "dead" needs to be any more special than "functioning" versus "not functioning". People don't die from their spirits just deciding to leave. Nobody leaves a working body lying around.

If we were able to perform the Hoyle experiment above and all we got was a corpse then the idea that there must be "some spark" might occur to me, but until then I don't know why we would need that concept yet.
 
Last edited:
  • #73
ElliotSmith said:
Will advanced artificial intelligence ever achieve consciousness and self-awareness?
We know next to nothing about how neural tissue/brain/matter "spits" out the experiential/consciousness/qualia. So I don't see how we can achieve or model something about which we have zero understanding.
 
  • #74
.Scott said:
at least we can agree on the observable: that human consciousness involves awareness of at least several bits' worth of information at one time.

I'm not sure we even agree on that, because that "at one time" is vague. Do you mean literally at the same instant? Or just within the same short period of time, where "short" means "short enough that we can't consciously perceive it as an interval of time"? From experiments on how long an interval there must be between two events for us to consciously perceive them as separate events, that window of time is on the order of 10 to 100 milliseconds. But it's perfectly possible for a classical mechanism to be "aware" of multiple bits of information in 10 to 100 milliseconds.
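
To illustrate with a toy sketch (made-up channels, nothing more): a classical process that simply buckets asynchronously arriving bits into 100-millisecond windows is already "aware" of many bits at one time in this weaker sense:

```python
from collections import defaultdict

WINDOW_MS = 100   # roughly the shortest interval we consciously resolve

# Asynchronous "sensory" events: (timestamp_ms, channel, bit).
events = [(3, "color", 1), (41, "shape", 0), (97, "sound", 1),
          (130, "color", 0), (180, "shape", 1)]

windows = defaultdict(dict)
for t, channel, bit in events:
    windows[t // WINDOW_MS][channel] = bit   # plain classical consolidation

for w in sorted(windows):
    print(f"{w * WINDOW_MS}-{(w + 1) * WINDOW_MS} ms: {windows[w]}")
# Everything inside one window is reported "at once" - no quantum
# mechanics involved, just classical bookkeeping.
```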

.Scott said:
Since you agree that the consciousness is of at least several bits, what mechanism causes those several bits to be selected?

I already described it: the mechanism that links those particular bits to incoming sensory information.

.Scott said:
What's the difference between one bit each from three separate brains and three bits from the same brain?

Um, the fact that they're in the same brain as opposed to separate brains? Meaning they're all connected to the same stream of incoming sensory information, instead of three different streams?
 
  • #75
.Scott said:
I am certainly not advocating a unity of consciousness - just a consolidation of the information we are conscious of, illusion or not.

But a classical mechanism can "consolidate" information. You seem to be shifting your ground.

.Scott said:
we all experience lots of data in a moment.

No, we all experience lots of data in some finite window of time. See my previous post. You are assuming that we somehow experience all of that data in an instant, instead of spread over a finite time interval. Since we can't consciously discriminate time intervals shorter than a certain threshold (10 to 100 milliseconds, per my previous post), we can't consciously tell the difference between experiencing all the data in an instant vs. experiencing it in a finite time interval that's shorter than the threshold. So the data simply does not require the interpretation you are putting on it. Which is why I said you are assuming a "unity" of consciousness (the "experience it all in an instant") which is, I believe, an illusion--we think we are perceiving all the data in an instant, but that's because we can't discriminate short enough time intervals.
 
  • #76
jim hardy said:
original thought and awareness of self, I believe, require "that spark".

While this belief cannot be refuted, it is not really amenable to argument or testing (certainly nobody is going to run the Hoyle experiment any time soon), so it is not a suitable topic for discussion here.
 
  • #77
.Scott said:
In the brain, what is the difference between the circuitry that processes information from the retina into a 3D model and the part that can become conscious of the result?

We don't know, because we don't know enough about the circuitry. There is so little data in this area that the field for speculation is very wide. It could be that some sort of QM effect is required for consciousness (for example, Penrose and Hameroff's speculations about quantum coherence in microtubules), or it could be that some fundamentally new physics is required (Penrose's speculations about objective state-vector reduction as a quantum gravity effect come to mind), or it could be that it's just sufficiently complex data processing and there isn't anything fundamentally new, physically, going on (this is basically Dennett's position in Consciousness Explained, for example). We simply don't know enough to tell at this point.
 
  • #78
I feel that it boils down to the amount of memories you process, perhaps weighing them for relevance and comparing them to current situations. There is a point in everyone's life where they begin to process enough memories to become conscious - recalling memories and your conscious thoughts during those memories. It's more than what you know; it's how you remember learning it that leads to consciousness. It seems to me the quantum advantage is going to be essential, as .Scott keeps saying: the referencing of multiple bits of data in various locations, leading to memories, which have to be intricate webs of correlations. Seeing a tree and recognizing it isn't particularly difficult; it is memories being recalled from a vast sea of memories pertaining to trees, and your conscious thoughts would obviously differ greatly depending on whether you are a logger or a tree hugger.

On a different note, what purpose would a conscious "machine" fulfill? Other than asking it "intelligent" questions and feeding your craving for curiosities, would it have any practical applications? I can think of many things it could be good at, but humans would object, making it impractical.
 
  • #79
If we think about what Turing was doing, it was basically an idea about information itself, and I think this has some connections with the concept of self-awareness.

Turing basically advocated that intelligence has a structure to it. The language aspects formed a lot of that, where the responses showed some sort of pattern that suggested an intellect or an ability to make sense of random phrases.

Nowadays, with the research in psycholinguistics, linguistic grammars and syntax structures, as well as the mathematical treatment of language, this idea of finding patterns and exploiting them to make a computer look intelligent is not as much of a leap as it was when Turing proposed his famous test.

In terms of being self-aware, we don't just have this idea of a pattern; we also have the idea of a reaction. Statistically, the simplest kind of connection we can conceive of is a correlation, and in order to be aware of something at the simplest level there has to be some kind of correlation. It may not be a simple linear one and could exist in a complicated reference frame (think differential geometry) where transformations are required to get a linear relationship, but the point is that self-awareness at any level requires this criterion in some form.

The other thing with consciousness (and something that has been pointed out by a few posters in this thread) is the idea of information.

In statistics we have this idea of an information matrix. Essentially, the amount of information constrains our ability to estimate parameters, and if we don't have enough information then we will also have uncertainty in some form - it is a fundamental theorem of statistical inference.
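
For the curious, that theorem is the Cramér-Rao bound. A quick simulation (standard statistics, nothing specific to consciousness) shows the variance floor set by the Fisher information:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fisher information for n Bernoulli(p) samples: I(p) = n / (p * (1 - p)).
# Cramer-Rao: any unbiased estimator of p has variance >= 1 / I(p).
p, n, trials = 0.3, 100, 20_000
estimates = rng.binomial(n, p, size=trials) / n   # one sample mean per trial

print("empirical variance:  ", estimates.var())
print("Cramer-Rao bound 1/I:", p * (1 - p) / n)   # the sample mean attains it
```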

It doesn't matter if the information is there and we have yet to find it or whether we can't physically access it - the mathematics doesn't change and this is necessary if one wants to evaluate the idea of self-awareness and consciousness on this level - especially if they are arguing about consciousness in the form of artificial intelligence.

If information can't be accessed - whether because it is "partitioned" by the laws of physics (or the ones we know) or because we don't know where to look - and that information is required for some attribute of consciousness and self-awareness (again, through the laws of physics and our techniques to probe the relevant forces and extract said information through interactions of some sort), then the idea, given what physics tells us at the present moment, is not feasible.

This applies not just to artificial intelligence but to intelligence itself; even in psychology you get theories like that of Carl Jung, who hypothesized a kind of "global consciousness" that we can all access in specific ways - and there are many experiments around this idea, as well as things like savant syndrome, that have no real explanation using conventional thinking.

It is one thing to measure something and quantify it with mathematics and objectivity, but it is another thing entirely to know whether it can be measured. Even if consciousness can be defined clearly using mathematics, there remains the problem of accessing the information itself - and this is really the thing that will cause a lot of headaches.
 
  • #80
If AI machines one day become sentient and as intelligent as (or more intelligent than) their biological counterparts, would that mean that they would gain the same legal rights as humans have?
 
  • #81
zoki85 said:
There is a problem in the definition of self-awareness, I think.

Lol... :oldwink:
 
  • #82
ElliotSmith said:
gain the same legal rights as humans have
Will all humans ever have the same legal rights some do now? Here in the USA everyone is supposed to have equal rights, yet many have remarkable privileges and some are abused and forsaken. Will robots' rights make them the new middle class, doing all the work and saving all their earnings for the benefit of the "country"?
 
  • #83
The legal rights issue, however interesting it is, is off topic here. (It might be appropriate in General Discussion if someone wants to start a separate thread there.)
 
  • #84
Closed pending moderation.
 
