Quantum myth 3: nature is fundamentally random

The discussion centers on the claim that the idea of nature being fundamentally random is a myth, as presented in Demystifier's paper "Quantum mechanics: myths and facts." Participants express skepticism about objective randomness, arguing that attributing outcomes to randomness implies a lack of causation, which they find unsatisfactory. The conversation also touches on the implications of randomness for free will, suggesting that if quantum mechanics were not fundamentally random, the resulting determinism could challenge the concept of free will. Additionally, the distinction between myths and models is debated, with an emphasis on the need for scientific testability. Overall, the thread explores the philosophical and scientific implications of randomness and determinism in understanding reality.
  • #61
reilly said:
But I think it could be: the basic idea is a pyramid of "watchers" (this is the core of Minsky's "Society of Mind" models). For example, an image (or images) can be stored in computer memory. At the crudest level, and somewhat oversimplified, a sequence of neural networks is trained to recognize the image, its size, color, location, duration, ... Then translation networks map language descriptions onto the image, so that the system can respond to queries about the image. That is, one constructs a system able to pass a Turing-like test, so that self-awareness can be demonstrated.
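As a rough schematic of the layered "watcher" architecture reilly describes, a sketch along the following lines may help; it is purely illustrative, every class and name in it is hypothetical, and a real system would use trained networks rather than simple attribute lookups:

```python
# Illustrative sketch only: low-level "watchers" each report one property of a
# stored image, and a translation layer maps those reports into language so
# the system can answer queries about what it "sees".
from dataclasses import dataclass

@dataclass
class Image:
    label: str
    size: tuple       # (width, height)
    color: str
    location: tuple   # (x, y) position in the scene

class Watcher:
    """Stand-in for one trained network that reports a single property."""
    def __init__(self, property_name):
        self.property_name = property_name

    def observe(self, image):
        return getattr(image, self.property_name)

class Translator:
    """Maps watcher reports onto language descriptions of the image."""
    def answer(self, query, reports):
        for prop, value in reports.items():
            if prop in query.lower():
                return f"The {prop} of the image is {value}."
        return "I have no watcher for that property."

# Assemble the "pyramid": watchers at the bottom, translation on top.
image = Image(label="ball", size=(64, 64), color="red", location=(10, 20))
watchers = [Watcher(p) for p in ("label", "size", "color", "location")]
reports = {w.property_name: w.observe(image) for w in watchers}

translator = Translator()
print(translator.answer("What color is it?", reports))
print(translator.answer("What is its location?", reports))
```

The point of the sketch is only the structure: each watcher reports one property, and the translation layer turns those reports into answers to language queries, which is the behavior a Turing-like test would probe.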
Yes, I would not go on record to say it can't be done, I'm just agnostic about whether or not a self-aware brain can figure out what self-awareness is. That agnosticism is related to the issue of whether or not a Turing test can really cut it-- it seems to me, a Turing test is the prescription whereby a brain can fool itself into thinking it is in contact with another awareness, but the best it can do might not be good enough. (No harm in trying of course.)

Right now, the logic seems to be, "I'm self aware, so I will posit that anything that responds to stimulus in a way that is indistinguishable from how I would must also be self aware". The best we can do, perhaps, but is it ever enough? How do we bridge the gap between an operational definition of how awareness acts, and what it is actually like to be self aware?

It reminds me of your point about Hume and causality-- if we see a close connection between two things, such as our own awareness and how we act in various situations, can we reliably reason backward from those actions by other agents and infer they have a similar self awareness? The problem, as with cause and effect, is that one cause may always be followed by a certain effect, but that effect may not always be preceded by that cause. Even if it always seems to be, we never really get to know the complete connection, the "divine providence" if you will, until we've "seen under every rock" and noted every possibility in the whole universe. How else can we rule out the possibility that something that we know is not self aware in the way we experience it could still support an architecture that could "fool" a Turing test?

It reminds me of when Kasparov was beaten in chess by Deep Blue. Kasparov knew that Deep Blue was programmed essentially expressly to beat him (it has never played a public game with anyone else, presumably because it might present "bugs" against a different style player), so it must have given him a weird feeling of looking in the mirror. Was he seeing a reflection of his own awareness in the actions of the machine? Perhaps Kasparov has a more visceral sense of artificial intelligence than anyone else, as a result, but even so, are we forever relegated to seeing only that part of ourselves when we look outside?
 
  • #62
I posit that a generic discussion of artificial intelligence or sentience belongs in either the computer or philosophy forum (depending on the content) -- if we think such notions are relevant to the thread, we ought to write down an operational definition, and work solely with that.
 
  • #63
The hope for sentience to be on topic in a thread on randomness is the hope that we can use the concept to understand the apparent randomness of human behavior in terms of internal degrees of freedom, internal sentience. But that just leads to the usual paradox that neither internal randomness nor internal determinism seems to explain where sentience comes into play. If we remove from the equation "sentience is what I have" on the grounds that such a subjective requirement is unscientific, I'm not sure there's anything left science can talk about, on a "randomness" thread or any other for that matter. We can look at the biological process of a brain making a decision, and ask what in it is inherently random and what isn't; that's the "operational definition" approach, and we probably can do no better. Like randomness, sentience then becomes a model of something else.
 
  • #64
I wasn't intending to talk about AI, but it is an example of why we, _inside_ the universe, cannot know everything there is to know about the universe from the inside. And I think I "proved" that theorem in an earlier post, because the "stuff" needed to observe "A" is always greater than "A".

The computer simulation is a great example because you will reach the limits of self-awareness within the confines of the system, outside of which you cannot step. You can experiment within the system to learn its rules, but you cannot discover the mechanisms enforcing those rules. That is why the computer cannot be 100% self-aware - it cannot use its own programming to learn everything about its own programming.
 
  • #65
Determinism is plausible when and if the Law of Large Numbers/Central Limit Theorem converges for a set of experiments -- measure the initial conditions and the outcomes -- a mass sliding down an inclined plane; a month's movement of the earth; starting a car and getting it moving, ... Our intuition suggests these are deterministic situations, and many measurements will confirm determinism within experimental error. (This is pretty much the main idea behind Shannon's work, made rigorous by Feinstein -- see Khinchin's Mathematical Foundations of Information Theory, Dover -- a superb book.)
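To make the Law of Large Numbers / Central Limit Theorem point concrete, here is a minimal numerical sketch (not from the thread; the incline geometry and the 0.05 s timing error are assumed purely for illustration). Repeated noisy timings of a mass sliding down a frictionless incline have a sample mean that settles on the deterministic prediction, with a standard error that shrinks roughly like 1/sqrt(N):

```python
# Minimal sketch: "deterministic" outcome blurred by assumed experimental
# error. The sample mean converges on the underlying value (LLN), and the
# scatter of the mean shrinks like 1/sqrt(N) (CLT).
import numpy as np

rng = np.random.default_rng(0)

g = 9.81                       # m/s^2
theta = np.radians(30.0)       # incline angle
length = 2.0                   # m, length of the incline
true_time = np.sqrt(2 * length / (g * np.sin(theta)))  # frictionless slide time

sigma = 0.05                   # assumed timing error of the apparatus, in seconds

for n in (10, 100, 1000, 10000):
    measurements = true_time + rng.normal(0.0, sigma, size=n)
    mean = measurements.mean()
    sem = measurements.std(ddof=1) / np.sqrt(n)   # standard error of the mean
    print(f"N={n:6d}  mean={mean:.4f} s  true={true_time:.4f} s  SEM={sem:.5f} s")
```

That shrinking spread is what "confirming determinism within experimental error" amounts to operationally: the more trials, the tighter the agreement between the averaged outcome and the deterministic model.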

RE self-awareness -- indeed it's not usually considered an appropriate topic for physics threads. I beg to differ, given Francis Crick's take on the matter -- the neural hypothesis and all that; he really treats the physics of the brain as paramount for the study of mental phenomena. With all due respect, it would appear that few if any here have spent much time with the research literature of brain science; there are no theorems, no grand philosophical pronouncements. Rather, one sees articles like "Attentional Mechanisms in Visual Cortex" (Maunsell and Ferrera) and "Neurophysiological Networks Integrating Human Emotions" (Halgren and Marinkovic), two of 92 papers, mostly experimental, in The Cognitive Neurosciences, edited by M. S. Gazzaniga. I have the first edition of this bible (1997); there's a revised one out. A must read if you want to get a real sense of what's going on, and how far the field has come from the days of intense AI and philosophical arguments, which are more and more becoming historical curiosities. There are tons of data on everything from the specifics of neurotransmitters to consciousness. And that's where the action is.

Back to physics.
Regards,
Reilly

Re Turing? What's better?
 
