Will AI ever achieve self awareness?

  • Thread starter ElliotSmith
  • Tags: AI, self
In summary: This is a difficult question. It is possible that machine consciousness may not be supported by silicon-based microprocessors, classical computing methods, programming languages, or algorithms, and that only an artificial neural network (ANN) can support consciousness and sentience. To my knowledge, it is not possible to create such an ANN out of a 2D transistorized silicon die. Successfully reverse-engineering the human brain and deciphering all of its workings would be a momentous milestone in scientific and human history!
  • #71
DavidSnider said:
Why do you think this? Given that scenario I would be incredibly shocked if it didn't.

Because I think there's a metaphysics that we're not very much aware of yet.
Alive vs. dead is in that realm.
We don't yet know what "the spark of life" is.

If I knew how to strike that "spark" in my double, it would of course be a sentient, feeling being, because its neurons are wired for that.

But -- I don't want to go off topic; metaphysics and philosophy are troublemakers.

We perceive the universe via our electrochemical computer, the brain.
I suppose that, as you fellows suggest, similar perception can be emulated electronically,
but original thought and awareness of self, I believe, require "that spark".

Probably it's out there in that absolute reference frame...

old jim
 
  • #72
I don't see why "alive" versus "dead" needs to be any more special than "functioning" and "not functioning". People don't die from their spirits just deciding to leave. Nobody leaves a working body lying around.

If we were able to perform the Hoyle experiment above and all we got was a corpse then the idea that there must be "some spark" might occur to me, but until then I don't know why we would need that concept yet.
 
  • #73
ElliotSmith said:
Will advanced artificial intelligence ever achieve consciousness and self-awareness?
We know next to nothing about how neural tissue/brain/matter "spits" out the experiential/consciousness/qualia. So I don't see how we can achieve or model something about which we have zero understanding.
 
  • #74
.Scott said:
At least we can agree on the observable: that human consciousness involves awareness of at least several bits' worth of information at one time.

I'm not sure we even agree on that, because that "at one time" is vague. Do you mean literally at the same instant? Or just within the same short period of time, where "short" means "short enough that we can't consciously perceive it as an interval of time"? From experiments on how long an interval there must be between two events for us to consciously perceive them as separate events, that window of time is on the order of 10 to 100 milliseconds. But it's perfectly possible for a classical mechanism to be "aware" of multiple bits of information in 10 to 100 milliseconds.
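To make the point concrete, here is a minimal sketch of how a purely classical process can register several bits arriving at different instants and still report them as one "simultaneous" percept. The 50 ms fusion window and the event timings are illustrative assumptions, not measured values or a model of the brain:

```python
# A classical "fusion window": events closer together than the window
# are merged into one percept; events farther apart stay separate.
FUSION_WINDOW_MS = 50  # within the 10-100 ms range discussed above

def consolidate(events, window_ms=FUSION_WINDOW_MS):
    """Group (timestamp_ms, bit) events: consecutive events closer
    together than window_ms are merged into one group ("percept")."""
    groups = []
    for t, bit in sorted(events):
        if groups and t - groups[-1][-1][0] < window_ms:
            groups[-1].append((t, bit))
        else:
            groups.append([(t, bit)])
    return groups

# Three bits arriving 20 ms apart are fused into a single percept:
fused = consolidate([(0, 1), (20, 0), (40, 1)])
# Two bits 200 ms apart are registered as separate events:
separate = consolidate([(0, 1), (200, 0)])
print(len(fused), len(separate))  # 1 2
```

Nothing quantum is involved: the mechanism simply cannot report a finer time discrimination than its window, so everything inside the window looks like "one time".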

.Scott said:
Since you agree that the consciousness is of at least several bits, what mechanism causes those several bits to be selected?

I already described it: the mechanism that links those particular bits to incoming sensory information.

.Scott said:
What's the difference between one bit each from three separate brains and three bits from the same brain?

Um, the fact that they're in the same brain as opposed to separate brains? Meaning they're all connected to the same stream of incoming sensory information, instead of three different streams?
 
  • #75
.Scott said:
I am certainly not advocating a unity of consciousness - just a consolidation of the information we are conscious of, illusion or not.

But a classical mechanism can "consolidate" information. You seem to be shifting your ground.

.Scott said:
we all experience lots of data in a moment.

No, we all experience lots of data in some finite window of time. See my previous post. You are assuming that we somehow experience all of that data in an instant, instead of spread over a finite time interval. Since we can't consciously discriminate time intervals shorter than a certain threshold (10 to 100 milliseconds, per my previous post), we can't consciously tell the difference between experiencing all the data in an instant vs. experiencing it in a finite time interval that's shorter than the threshold. So the data simply does not require the interpretation you are putting on it. Which is why I said you are assuming a "unity" of consciousness (the "experience it all in an instant") which is, I believe, an illusion--we think we are perceiving all the data in an instant, but that's because we can't discriminate short enough time intervals.
 
  • #76
jim hardy said:
original thought and awareness of self i believe require "that spark".

While this belief cannot be refuted, it is not really amenable to argument or testing (certainly nobody is going to run the Hoyle experiment any time soon), so it is not a suitable topic for discussion here.
 
  • #77
.Scott said:
In the brain, what is the difference between the circuitry that processes information from the retina into a 3D model and the part that can become conscious of the result?

We don't know, because we don't know enough about the circuitry. There is so little data in this area that the field for speculation is very wide. It could be that some sort of QM effect is required for consciousness (for example, Penrose and Hameroff's speculations about quantum coherence in microtubules), or it could be that some fundamentally new physics is required (Penrose's speculations about objective state-vector reduction as a quantum gravity effect come to mind), or it could be that it's just sufficiently complex data processing and there isn't anything fundamentally new, physically, going on (this is basically Dennett's position in Consciousness Explained, for example). We simply don't know enough to tell at this point.
 
  • #78
I feel that it boils down to the amount of memories you process, perhaps weighing them for relevance and comparing them to current situations. There is a point in everyone's life where they begin to process enough memories to become conscious: recalling memories, and your conscious thoughts during those memories. It's more than what you know; it's how you remember learning it that leads to consciousness. It seems to me the quantum-function advantage is going to be essential, as .Scott keeps saying: it is the referencing of multiple bits of data in various locations, leading to memories, which must be intricate webs of correlations. Seeing a tree and recognizing it isn't particularly difficult; it involves memories being recalled from a vast sea of memories pertaining to trees, and your conscious thoughts, whether you are a logger or a tree hugger, would obviously differ greatly.

On a different note, what purpose would a conscious "machine" fulfill? Other than asking it "intelligent" questions and feeding your craving for curiosities, would it have any practical applications? I can think of many things it could be good at, but humans would object, making it impractical.
 
  • #79
If we think about what Turing was doing, it was basically an idea about information itself, and I think this has some connection to the concept of self-awareness.

Turing basically advocated that intelligence has a structure to it. The language aspects formed a lot of that, where the responses showed some sort of pattern that suggested an intellect, or an ability to make sense of arbitrary phrases.

Nowadays, with research in psycholinguistics, linguistic grammars and syntax structures, as well as the mathematical treatment of language, this idea of finding patterns and exploiting them to make a computer look intelligent is not nearly the leap it was when Turing proposed his famous test.

In terms of being self-aware, we don't just have this idea of a pattern; we also have the idea of a reaction. Statistically, the simplest kind of connection we can conceive of is a correlation, and to be aware of something at even the simplest level there has to be some kind of correlation. It may not be a simple linear one, and it could exist in a complicated reference frame (think differential geometry) where transformations are required to get a linear relationship, but the point is that self-awareness at any level requires this criterion in some form.
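To illustrate that point, here is a small sketch showing how a plain linear correlation can completely miss a perfectly deterministic relationship until the data are transformed into the right "frame". The quadratic relationship is an illustrative assumption:

```python
# Pearson correlation computed from first principles (stdlib only).
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [-3, -2, -1, 0, 1, 2, 3]
ys = [x * x for x in xs]  # deterministic relationship, but not linear

r_raw = pearson(xs, ys)                           # exactly 0 by symmetry
r_transformed = pearson([x * x for x in xs], ys)  # 1.0 after x -> x^2
print(r_raw, r_transformed)
```

The raw correlation is zero even though y is completely determined by x; after the transformation x → x², the correlation is exactly 1. The relationship was there all along; detecting it required the right transformation.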

The other thing with consciousness (and something that has been pointed out by a few posters in this thread) is the idea of information.

In statistics we have the idea of an information matrix. Essentially, the amount of information constrains our ability to estimate parameters, and if we don't have enough information, then we will have uncertainty in some form; this is a fundamental theorem of statistical inference (the Cramér–Rao bound).
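As a rough numerical illustration of how information bounds estimation (the Gaussian model, sample size, and trial count here are illustrative assumptions): for n independent Gaussian samples with known sigma, the Fisher information about the mean is n/sigma², so no unbiased estimator can have variance below sigma²/n, and the sample mean attains this bound:

```python
# Monte Carlo check that the sample mean's variance matches the
# Cramér–Rao lower bound 1 / (Fisher information) = sigma^2 / n.
import random

random.seed(0)
mu, sigma, n, trials = 2.0, 1.0, 25, 20000

estimates = []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    estimates.append(sum(sample) / n)  # the sample-mean estimator

mean_est = sum(estimates) / trials
var_est = sum((e - mean_est) ** 2 for e in estimates) / trials
crlb = sigma ** 2 / n  # Cramér–Rao bound: here 0.04

print(round(var_est, 4), crlb)
```

The empirical variance comes out very close to the bound: with a fixed amount of data, no amount of cleverness in the estimator can shrink the uncertainty below it, which is the sense in which information "constrains" inference.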

It doesn't matter whether the information is there and we have yet to find it, or whether we can't physically access it: the mathematics doesn't change. This is necessary if one wants to evaluate the idea of self-awareness and consciousness on this level, especially when arguing about consciousness in the form of artificial intelligence.

If information can't be accessed, whether because it is "partitioned" off by the laws of physics (or the ones we know) or because we don't know where to look, and if that information is required for some attribute of consciousness and self-awareness (again, given the laws of physics and our techniques to probe the relevant forces and extract that information through interactions of some sort), then the idea is not feasible given what physics tells us at the present moment.

This applies not just to artificial intelligence but to intelligence itself. Even in psychology you get theories like that of Carl Jung, who hypothesized a kind of "global consciousness" that we can all access in specific ways; there are experiments claimed to show this idea, as well as things like savant syndrome that have no real explanation using conventional thinking.

It is one thing to measure something and quantify it with mathematics and objectivity; it is another thing entirely to know whether it can be measured at all. Even if consciousness can be defined clearly using mathematics, there remains the matter of accessing the information itself, and this is really what will cause a lot of headaches.
 
  • #80
If AI machines one day become sentient and as intelligent as (or more intelligent than) their biological counterparts, would that mean that they would gain the same legal rights as humans?
 
  • #81
zoki85 said:
There is a problem in the definition of self-awareness, I think.

Lol... :oldwink:
 
  • #82
ElliotSmith said:
gain the same legal rights as humans have
Will all humans ever have the same legal rights that some do now? Here in the USA everyone is supposed to have equal rights, yet many have remarkable privileges and some are abused and forsaken. Will robots' rights make them the new middle class, doing all the work and saving all their earnings for the benefit of the "country"?
 
  • #83
The legal rights issue, however interesting it is, is off topic here. (It might be appropriate in General Discussion if someone wants to start a separate thread there.)
 
  • #84
Closed pending moderation.
 
