Ken G
Gold Member
reilly said:
But I think it could be: the basic idea is a pyramid of "watchers" (this is the core of Minsky's "Society of Mind" models). For example, an image (or images) can be stored in computer memory. At the crudest level, and somewhat oversimplified, a sequence of neural networks is trained to recognize the image, its size, color, location, duration, ... Then translation networks map language descriptions onto the image, which can respond to queries about the image. That is, one constructs a system able to pass a Turing-like test, so that self-awareness can be demonstrated.

Yes, I would not go on record to say it can't be done; I'm just agnostic about whether a self-aware brain can figure out what self-awareness is. That agnosticism is tied to the issue of whether a Turing test can really cut it: it seems to me a Turing test is the prescription whereby a brain can fool itself into thinking it is in contact with another awareness, and the best it can do might not be good enough. (No harm in trying, of course.)
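Just to make the watcher pyramid you describe concrete, here is a minimal sketch. Everything in it is my own illustrative stand-in, not Minsky's actual architecture: the class names are hypothetical, and the keyword-matching "translation" step is a toy substitute for the trained networks a real system would use at each level.

```python
# A minimal sketch of the "pyramid of watchers" idea from the quote above:
# low-level watchers each report one attribute of a stored image, and a
# translation layer maps natural-language queries onto those reports.
# All names here are hypothetical illustrations; a real version would use
# trained networks at every level rather than hand-written extractors.

from dataclasses import dataclass

@dataclass
class Image:
    pixels: list          # stand-in for raw image data
    size: tuple           # (width, height)
    dominant_color: str
    location: tuple       # (x, y) in some scene frame

class Watcher:
    """Lowest level of the pyramid: reports one attribute of the image."""
    def __init__(self, name, extract):
        self.name = name
        self.extract = extract   # in a real system, a trained network

    def report(self, image):
        return self.extract(image)

class TranslationLayer:
    """Maps words in a query onto the watcher whose attribute they name."""
    def __init__(self, watchers):
        self.watchers = {w.name: w for w in watchers}

    def answer(self, query, image):
        for name, watcher in self.watchers.items():
            if name in query.lower():
                return f"The {name} is {watcher.report(image)}."
        return "I have no watcher for that attribute."

# Usage: store an image, build the pyramid, and query it.
img = Image(pixels=[], size=(64, 64), dominant_color="red", location=(3, 7))
watchers = [
    Watcher("size", lambda im: im.size),
    Watcher("color", lambda im: im.dominant_color),
    Watcher("location", lambda im: im.location),
]
mind = TranslationLayer(watchers)
print(mind.answer("What color is it?", img))       # -> The color is red.
print(mind.answer("Where is its location?", img))  # -> The location is (3, 7).
```

The point of the layering is that each level only watches the level below it: attribute reporters at the bottom, a query-mapping layer above them, and in a fuller pyramid, further watchers that watch the watchers. Whether stacking enough such levels ever adds up to self-awareness is exactly the question at issue.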
Right now, the logic seems to be: "I'm self-aware, so I will posit that anything that responds to stimuli in a way that is indistinguishable from how I would must also be self-aware." That may be the best we can do, but is it ever enough? How do we bridge the gap between an operational definition of how awareness acts and what it is actually like to be self-aware?
It reminds me of your point about Hume and causality: if we see a close connection between two things, such as our own awareness and how we act in various situations, can we reliably reason backward from similar actions by other agents and infer that they have a similar self-awareness? The problem, as with cause and effect, is that a cause may always be followed by a certain effect, yet that effect may not always be preceded by that cause. Even if it always seems to be, we never really know the complete connection, the "divine providence" if you will, until we've "seen under every rock" and noted every possibility in the whole universe. How else can we rule out the possibility that something we know is not self-aware in the way we experience it could still support an architecture that "fools" a Turing test?
It reminds me of when Kasparov was beaten at chess by Deep Blue. Kasparov knew that Deep Blue had been programmed essentially expressly to beat him (it never played a public game against anyone else, presumably because it might have exhibited "bugs" against a player with a different style), so it must have given him a weird feeling of looking in a mirror. Was he seeing a reflection of his own awareness in the actions of the machine? Perhaps Kasparov has a more visceral sense of artificial intelligence than anyone else as a result, but even so, are we forever relegated to seeing only that part of ourselves when we look outside?