AI Consciousness?

  • Thread starter: victoriaperine
  • Tags: AI, Writing
AI Thread Summary
The discussion centers on the controversial topic of whether AI could eventually achieve consciousness, defined as possessing self-created morals, thoughts, and a personality. Participants explore the technological advancements needed to create an artificial brain indistinguishable from a natural one, emphasizing the necessity of a scientific definition of consciousness, which currently does not exist. The conversation references Isaac Asimov's works, highlighting the portrayal of conscious robots facing human dilemmas. There is skepticism regarding current AI capabilities, particularly concerning the replication of qualia and the limitations of classical computing architectures. The need for an integrated system that can self-modify is emphasized as crucial for developing consciousness in AI. The discussion invites insights from those with computational backgrounds to further explore the relationship between computational and biological consciousness.
victoriaperine
I know this topic is extremely controversial and debated, but I'm writing a book where an AI attempts to become as human as possible. Would it, eventually, especially in the far future, be possible for an AI to gain a conscious? To be clear, my definition of a consciousness being the ability to possess self-created morals, thoughts, and views, AKA a whole personality.

And if this is possible (and let's just say it is for this question), roughly how long might it take, and under what conditions might it happen?
 
Given that we are conscious, the only questions are a) how sophisticated an AI needs to be; and, b) what biological or social parameters are needed in order for it to gain human-like consciousness?

It must be possible, unless you believe consciousness is a religious or supernatural gift.
 
I have nothing to add to PeroK's concise assessment.

Take it to its logical limit: at some point, we will have sufficiently advanced technology to artificially construct a brain that's made of synapses and neurons, all built from proteins and organics. Now we have an artificial brain that's indistinguishable from a natural brain.
 
victoriaperine said:
I know this topic is extremely controversial and debated, but I'm writing a book where an AI attempts to become as human as possible. Would it, eventually, especially in the far future, be possible for an AI to gain a conscious? To be clear, my definition of a consciousness being the ability to possess self-created morals, thoughts, and views, AKA a whole personality.
Conscious or conscience? Because you said the former but seemed to imply the latter (and the sentence was grammatically incorrect).
 
More than forty years ago, Isaac Asimov wrote several novels where robots were conscious. Asimov relied on the so-called "positronic brain". These robots faced very human dilemmas. I feel these novels give a good starting point for exploring the subject further.
 
Gordianus said:
More than forty years ago, Isaac Asimov wrote several novels where robots were conscious.
You have a gift for understatement.

True, 85 years is "more than 40 years".
True, 37 is "several novels".

:wink:
 
I am making an independent effort to approach this question!

My stance at this point is that our current technology cannot replicate qualia, as the majority of artificial intelligence I have encountered is feed-forward only. I don't believe classical von Neumann architecture could give rise to consciousness, as one would need a whole integrated system and the ability to self-modify.
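
To make that contrast concrete, here is a minimal, hypothetical sketch (in Python with NumPy, not taken from any real system) of the distinction I have in mind: a purely feed-forward pass through fixed weights versus a recurrent loop that keeps internal state and rewrites its own weights with a toy Hebbian update as it runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely feed-forward: the input flows one way through fixed weights,
# and nothing about one call survives to influence the next one.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(2, 4))

def feed_forward(x):
    h = np.tanh(W1 @ x)      # hidden layer
    return np.tanh(W2 @ h)   # output

# Recurrent and self-modifying (toy Hebbian rule): the system keeps an
# internal state and rewrites its own recurrent weights as it runs, so
# its future dynamics depend on its own past activity.
class RecurrentSelfModifying:
    def __init__(self, n_in=8, n_state=6, lr=0.01):
        self.W_in = rng.normal(size=(n_state, n_in))
        self.W_rec = rng.normal(size=(n_state, n_state)) * 0.1
        self.state = np.zeros(n_state)
        self.lr = lr

    def step(self, x):
        prev = self.state.copy()
        self.state = np.tanh(self.W_in @ x + self.W_rec @ prev)
        # Hebbian-style self-modification: co-active units strengthen
        # their own recurrent connections.
        self.W_rec += self.lr * np.outer(self.state, prev)
        return self.state

x = rng.normal(size=8)
print(feed_forward(x))    # identical output on every call
net = RecurrentSelfModifying()
print(net.step(x))        # output drifts from step to step
print(net.step(x))        # ...as the weights change
```

The point is only that the second system's future behaviour depends on its own history, which is the sort of integration and self-modification I mean; it is not a claim about how consciousness would actually arise.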

I have a tough time reconciling the algorithmic nature of current programs with the axioms of Integrated Information Theory. I also recently read a news article (I'm still getting around to the paper) suggesting that consciousness may arise more in the posterior, sensory regions of the brain. A question I struggle with is: if computational consciousness is possible, does it have to be modeled on biological consciousness?

My background is in neuroscience, so if there are any more computationally oriented individuals able to comment, I am open to learning more!

If anyone is interested in seeing some of the written work so far, here's the GitHub link: [Link redacted by the Mentors]

(I have been out of academia for a couple years, any constructive comments welcome!)
 
Last edited by a moderator:
reflectiveatlas said:
I am making an independent effort to approach this question!

My stance at this point is that our current technology cannot replicate qualia, as the majority of artificial intelligence I have encountered is feed-forward only. I don't believe classical von Neumann architecture could give rise to consciousness, as one would need a whole integrated system and the ability to self-modify.

I have a tough time reconciling the algorithmic nature of current programs with the axioms of Integrated Information Theory. I also recently read a news article (I'm still getting around to the paper) suggesting that consciousness may arise more in the posterior, sensory regions of the brain. A question I struggle with is: if computational consciousness is possible, does it have to be modeled on biological consciousness?

My background is in neuroscience, so if there are any more computationally oriented individuals able to comment, I am open to learning more!

If anyone is interested in seeing some of the written work so far, here's the GitHub link: [Link redacted by the Mentors]

(I have been out of academia for a couple years, any constructive comments welcome!)
 
  • #10
Though we cannot "make" qualia in an artificial system, neither can we predict, from neurological data alone, that qualia exist. So, for the purposes of writing fiction, you could posit that any machine able to perform all of the tasks the human brain can would, however mysterious the reasons, be conscious.
 
  • #11
reflectiveatlas said:
I have a tough time reconciling the algorithmic nature of current programs with axioms present in Integrated Information Theory.
In what sense are current Large Language Models (LLMs), which are based on Artificial Neural Networks (ANNs), algorithmic? Do you think your brain is algorithmic? If not, why would you think ANNs are algorithmic?
 
  • #12
PeroK said:
Given that we are conscious, the only questions are a) how sophisticated an AI needs to be; and, b) what biological or social parameters are needed in order for it to gain human-like consciousness?

It must be possible, unless you believe consciousness is a religious or supernatural gift.
One need not invoke religion or any supernatural belief in order to believe that the nature of subjective sensations (qualia) is currently unexplained.

I have very little doubt that qualia have a physical basis, because physical events are known to cause them and change them in predictable ways. At worst, qualia are epiphenomena of physical events. More parsimoniously, the current mysteriousness of qualia could be an artifact of aspects of observation and perception that people have yet to learn about.

But it makes no sense to call qualia "an illusion." Rigorous chains of scientific inference have revealed differences between our subjective perceptions and the objective world, so of course many of our perceptions are illusions, but the nature of these "illusions" is precisely what, IMO, we haven't explained yet.
Equating subjective sensations with brain states is also problematic, since equating them implies shared properties between the things equated. My visual perception of a horse, or my mental image of a horse, for that matter, looks nothing like active neurons.

For a much better treatment of the hard problem of consciousness, see this article, which contains some insights by cognitive scientist and psycholinguist Steven Pinker. https://www.themontrealreview.com/Articles/Problem_of_consciousness_Steven_Pinker.php
 
  • #13
russ_watters said:
Conscious or conscience? Because you said the former but seemed to imply the latter (and the sentence was grammatically incorrect).
I disagree. A psychopath can be conscious without having a conscience. So can human babies and non-human animals, which lack the capacity to think very much about what they think, feel, and do, and therefore can't be moral agents.
 
  • #14
Lren Zvsm said:
But it makes no sense to call qualia "an illusion."
I must admit that I am unsure what people really mean when they use that phrase. I have done so myself, and it really is sloppy terminology.

I understand the statement "the subjective experience of consciousness is an illusion" to simply mean that such experience emerges when our brain/mind is sufficiently configured for it, similar to how the fluid experience of a movie emerges when individual movie frames are projected in just the right way. As the brain/mind configuration changes, the experience of consciousness may change too in some way (e.g. like slowly getting drunk, suffering a mental illness, etc.) or even stop completely (e.g. in some sleep states, when in severe fight-or-flight, etc.). So I do not take calling consciousness "an illusion" to mean that consciousness is without structure or function, but rather that it is an emergent capability of a sufficiently complex agentic system.

My unresearched guess would be that the normal capability of consciousness in the human brain has arisen due to an evolutionary advantage, e.g. enabling a better survival rate or adaptation. If so, it would not be surprising if an artificial agentic system (e.g. an AGI with full agency), developed over time under selection pressure, also ended up with some form of something we would classify as consciousness, if it gives an overall advantage.
 
