Do weak measurements prove randomness is not inherent?

  • #51
SpectraCat said:
So, the question now is, are vacuum fluctuations (or whatever) truly random?
The one thing we can know for sure is that no scientist will ever know the answer to that question.
I don't know enough about QFT or quantum cosmology to even approach answering that question.
Knowing QFT would only tell you if the theory of QFT models the fluctuations as truly random, it wouldn't tell you if they are or not.
 
  • #52
Ken G said:
The one thing we can know for sure is that no scientist will ever know the answer to that question. Knowing QFT would only tell you if the theory of QFT models the fluctuations as truly random, it wouldn't tell you if they are or not.

I would take a different tack. I would say that since we can take the following to be true:

1) QFT requires that these observable phenomena (nuclear decay and spontaneous emission) be triggered by interactions with vacuum fluctuations

2) to the best of our ability to measure them, these phenomena are random (meaning that they are stochastic)

The most logical conclusion based on the available data is that IF vacuum fluctuations are what triggers those events, then they are the source of the randomness. In the absence of a competing, experimentally falsifiable hypothesis, I can't see what other conclusion one could draw.
 
  • #53
SpectraCat said:
I would take a different tack. I would say that since we can take the following to be true:

1) QFT requires that these observable phenomena (nuclear decay and spontaneous emission) be triggered by interactions with vacuum fluctuations

2) to the best of our ability to measure them, these phenomena are random (meaning that they are stochastic)

The most logical conclusion based on the available data is that IF vacuum fluctuations are what triggers those events, then they are the source of the randomness. In the absence of a competing, experimentally falsifiable hypothesis, I can't see what other conclusion one could draw.
My remark was not about what is the best hypothesis; it was about what we know and what we do not know. What we do not know, and what we never will know, is whether the fluctuations are "truly random." It is simply not a goal of science to know this, though many people seem compelled to repeat all the mistakes of scientific history. But I certainly agree with you that our best current hypothesis, and this may always be our best hypothesis, is that the fluctuations are best modeled as random. That's the hypothesis that gets the best agreement with observation, does the best job of motivating new observations, and is the simplest.
 
  • #54
Ken G said:
I'm not sure what context you mean. I would place the "fundamental" aspects of a law in the nature of the derivations used for that law, not in the nature of the systems the law is used to predict. That's mixing two different things.
It is indeed mixing two different things. So what you are saying is that the map is more important in defining the nature of "fundamental", even when we explicitly and intentionally throw away position information to gain a classical ensemble mapping, than is the system the map represents. That is not a question because that is what you said, but you are welcome to take issue with the characterization.

In fact the truth of the system trumps the truth of the map (theory) every time, which is why empirical facts are judge, jury, and executioner of theories. Empirical facts can mean different things in different theories, making the claims associated with empirical facts less than factual, but nonetheless contradiction of those facts is a theory killer.

Hence I ALWAYS put the system at a far more fundamental level than ANY theory and/or derivation thereof.


Ken G said:
We don't actually know that, because our knowledge is always limited.
Well of course. The mere fact of QM as an underpinning to classical physics is alone enough to kill the point from a system point of view. Yet if your idealized system is strictly defined by classical theory, much like you put theory ahead of system above to justify a "fundamental" status, then as a matter of fact stochastic behavior is an illusion induced by limited knowledge. Luckily science puts theory in the back seat to the system it describes. In fact many things thought to be fundamental were found to be derivable from something else.

So when you say "We don't actually know that," that must also apply to whatever thing in QM you want to define as "fundamental" in the present theoretical picture. Only the system, not the theory, contains or does not contain the information that specifies these things. Theory can only give a probable best guess based on the level of empirical consistency attained, and even then mutually exclusive interpretations can possess equal empirical consistency.

Ken G said:
We have no way to test your assertion.
We need no such empirical test to say what classical theory says, which was all the point I was making. We only need such a test to see if nature agrees, which is why I found your claim at the top strange, since you placed theory ahead of nature in defining the quality of what constituted "fundamental".

Only now that nature has spoken and required classical physics to be a limiting case, "no way to test your assertion" becomes applicable to the nature of the stochastic behaviors in QM. So your own rebuttal to the classical illusion of stochastic phenomena is now turned in the same court against the QM models used to claim the opposite.

Ken G said:
Indeed, in classical chaos, we generally find the stochasticity penetrates to all levels-- no matter what the ignorance is initially, it rapidly expands toward ergodicity. This has a flavor of being more than an apparent aspect of the behavior, instead the behavior is a kind of ode to ignorance. The idea that we could ever complete our information of a classical system is untenable-- ironically, classical systems are far more unknowable than quantum systems, because classical systems have vastly many degrees of freedom.
Again, in classical chaos it is indeed unknowable even though the unknown information is by definition there. So the fact of unknowability requires models that treat certainties as though they were random, but the model does not make the system random.

On a personal note, I suspect that quantum systems have countlessly more orders of magnitude of degrees of freedom than classical systems, and we merely wrap them in ensembles, call the randomness "fundamental", and wash our hands of the unknowns as though they were merely imaginary. Quantum computers can easily justify this position. How else can a quantum computer do calculations in seconds that a standard computer with as many registers as there are particles in the Universe could not do in the life of the Universe? I have even seen this argument used as evidence for many worlds. Many worlds or not, something gives.

Ken G said:
It is that vastness that allows us to mistake expectation values for deterministic behavior, we see determinism in the context where the behavior is least knowable. Determinism is thus a kind of "mental defense mechanism," I would say.
Saying "expectation values" were mistaken for "deterministic behavior" is woefully inappropriate when, as a matter of fact and independent of ANY real system, we can define a deterministic TOY model which is by definition entirely and completely deterministic, and we can still PROVE that the model has unpredictable behavior.

It is not model dependent. It is NOT dependent on what is or is not real in ANY real system. Hence claiming that determinism in such systems was merely mistaken "expectation values" is DEAD wrong. It was the "expectation values" that proved stochastic behavior when the toy model was explicitly restricted to deterministic mechanics. So your claim is backwards and "expectation values" NEVER amounted to any evidence of determinism.

Ken G said:
The laws are the theory, so the foundation of the laws is only the structure of the theory, regardless of how successfully they test out.
Again, make up a set of laws, any set of laws. They do not have to represent any real laws of anything; just make sure these laws are strictly deterministic. It can then be proved, based on these fake deterministic laws and these fake laws alone, that reasonably complex systems operating on these laws MUST require stochastic behavior to model. That is the point. Ignorance of the details of a system does not entail that the system is devoid of deterministic causes behind the unpredictable behavior.
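As a concrete illustration of that claim (my own minimal sketch, not something from this thread), take the logistic map with r = 4 as a stand-in for such a "fake" deterministic law: every step is exactly determined, yet any finite ignorance about the initial state grows until only a statistical description of the trajectory remains.

```python
# A toy deterministic "law": the logistic map x -> r * x * (1 - x), r = 4.
# Each step is exactly determined by the previous one, but a tiny error in
# the initial condition roughly doubles every iteration, so any finite
# ignorance eventually swamps the prediction.

def logistic_step(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

# Two initial states differing by one part in a billion...
a = trajectory(0.200000000, 60)
b = trajectory(0.200000001, 60)

# ...become completely decorrelated well before step 60, even though the
# rule applied at every step was purely deterministic.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(divergence)
```

A forecaster with any finite measurement precision can therefore only describe the long-run behavior of this system statistically, which is exactly the point: unpredictability does not imply the underlying rule is non-deterministic.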


Ken G said:
I think you take the perspective that there really are "laws", and our theories are kinds of provisional versions of those laws. My view is that the existence of actual laws is a category error-- the purpose of a law is not to be what nature is actually doing, it is to be a replacement for what nature is actually doing, a replacement that can fit in our heads and meet some limited experimental goals.
Now I am getting a little more about your perspective, so I will articulate my own. It is not the laws that are fundamental. I think there really are symmetries in nature, and laws are written to provide a perspective of those symmetries. For instance, we could just as easily say the gravitational constant G varies relative to the depth of a gravitational field, but Einstein chose to say the apparent mass varied, which is physically equivalent. Yet regardless of what form or what perspective the laws are written from, the symmetries are ALWAYS the same. Then theories are merely provisional specifications of those symmetries and their domains of applicability. So the only category errors that occur are when a symmetry specification is slightly off the mark or the domain of applicability has been misidentified. Symmetries do not require us to specify what nature is "actually" doing, though guessing can help better specify the symmetries.

Now how does this apply to the randomness issue? Well, we know for a fact that the symmetries defined by stochastic behavior are derivable from non-stochastic causal certainties given some unavoidable level of ignorance. This dependence of theory on stochasticity does NOT make it a "fundamental" symmetry, even though it remains possible that it is at some level of the system. That is because it is derivable from ANY system, with or without "fundamental" stochastic behavior. I am perfectly happy with a "replacement" for what nature is actually doing as long as the "replacement" is as complete as we can fundamentally get from within our limitations. I do not think that simply proclaiming that is all we are after, and that therefore we should not search more deeply for what nature might actually be doing, is helpful in getting more accurate and complete "replacements" for predicting what nature is going to do in more detailed circumstances.

Ken G said:
I ask, what difference does it make the "foundational" structure of our laws? We never test their foundational structure, we only test how well they work on the limited empirical data we have at our disposal. The connection at the foundational level will always be a complete mystery, or a subject of personal philosophy, but what we know from the history of science is that the foundational level of any law is highly suspect.
True. But those efforts have produced theories which can predict far higher volumes of empirical behavior. These historical breakthroughs are often the result of taking some foundational stance at odds with the prevailing stance. So it makes no difference that the foundational stance itself is and will always remain in question; such stances still play an important role in development. To deny the historical importance of foundational stances in the development of science, and reject their relevance, is to deny some of the most important elements of theory building in our humanistic tool box.

So no, I will not accept some foundational anti-stance as a substitute for working through the apparent consequences of alternative foundational stances. The foundational stance itself needs no more justification or claims of absolute validity than its contributions to the "replacement" model's predictive value.

Ken G said:
Yes, that is an important difference.
Yes, the one that gives me nightmares :-p

Ken G said:
It is not the laws that are fundamental, because that makes a claim about their relationship to reality. It is only the fundamental of the law that we can talk about-- there's a big difference.
Yes, this is where I make the distinction between a symmetry and a law, with fundamental stances merely playing a role in morphing perspectives within the symmetries. I do not even think, IMO, that fundamental physical constants are fundamental, or that they are not derivable from other models. I do not take foundational stances any more seriously in an absolute sense than a coordinate choice. But coordinate choices can nonetheless be extremely useful.

Ken G said:
I think this is your key point here, the degree of ignorance is worse in QM applications. I concur, but then we are both Copenhagen sympathizers!
Yes we are. However, my sympathies for Copenhagen are a twist in my foundational stance, no different from a change in perspective as a result of a coordinate transform. There are no absolutes to this or that stance that make it fundamentally closer to any actual reality than any other. So, except for the fact that some anti-Copenhagen stances think their stance is somehow a better representation of absolute reality, I can also (sometimes) sympathize with their version. I move out of the Copenhagen picture for operational and conceptual reasons having nothing to do with rejecting the validity of the stance itself. Again, it is more like a coordinate transform than a rejection of the stance.

Ken G said:
Yes, I agree that randomness in our models is inevitable-- chaos theory is another reason.
Perhaps I tried too hard above to get the point made. In the sense I was trying to use classical theory to make a point, it is identical to the chaos theory case. I was not trying to make the point that classical physics had any particular level of validity, only that strictly under the assumption of classical physics without any fundamental randomness you still get stochastic behavior. Just like in chaos theory.

Ken G said:
Yes, I see what you mean, the absence of any concept of a quantum demon is very much a special attribute of quantum theory, although Bohmians might be able to embrace the concept.
Here is the nightmarish part for me. If morphing between foundational stances is no more or less fundamental than a change in coordinate choices, as I perceive it to be, then quantum demons should at least in principle be possible in some sense, irrespective of the absolute validity of their existence. It does not even require them to correspond to any direct measurable, just that they are in principle quantitatively possible under some perspective of what constitutes a measurable. My issues with Bohmian Mechanics run far deeper than any issue I have with CI.
 
  • #55
my_wan said:
It is indeed mixing two different things. So what you are saying is that the map is more important in defining the nature of "fundamental", even when we explicitly and intentionally throw away position information to gain a classical ensemble mapping, than is the system the map represents.
The word "fundamental" doesn't really mean anything, but "fundamentally" does-- we were talking about a phrase like "theory X is fundamentally Y". That statement can be addressed only by looking at the theory, there is no need to know anything about the observations that determine the success of the theory.
In fact many things thought to be fundamental were found to be derivable from something else.
A natural result, I would say the whole idea that anything can be "fundamental" is a persistent myth.
So when you say "We don't actually know that," that must also apply to whatever thing in QM you want to define as "fundamental" in the present theoretical picture.
The things we can know are our own theories, their predictions, and the outcomes of experiments. That's it, that's scientific knowledge. A theory can be "fundamentally something" and it can be built from fundamental pieces (fundamentals of the theory), but the theory itself is never "fundamental". There is never a "fundamental scientific truth", but there are "the fundamentals of doing science." The term is a bit loaded.

Theory can only give a probable best guess based on the level of empirical consistency attained, and even then mutually exclusive interpretations can possess equal empirical consistency.
Certainly. We are empiricists.

We need no such empirical test to say what classical theory says, which was all the point I was making. We only need such a test to see if nature agrees, which is why I found your claim at the top strange, since you placed theory ahead of nature in defining the quality of what constituted "fundamental".
Actually, I never did that, I never even used the word "fundamental" (I used "fundamentally stochastic", which is quite different).
So your own rebuttal to the classical illusion of stochastic phenomena is now turned in the same court against the QM models used to claim the opposite.
Phenomena aren't stochastic, theories that describe phenomena can be stochastic. It's a distinction rarely made, but important.
On a personal note, I suspect that quantum systems have countlessly more orders of magnitude of degrees of freedom than classical systems, and we merely wrap them in ensembles, call the randomness "fundamental", and wash our hands of the unknowns as though they were merely imaginary.
I agree, except for the idea that your statement does not also apply to classical systems. The goal of science is to simplify, which can have a certain "hand washing" element, we just need to strive not to deceive ourselves.

Quantum computers can easily justify this position. How else can a quantum computer do calculations in seconds that a standard computer with as many registers as there are particles in the Universe could not do in the life of the Universe? I have even seen this argument used as evidence for many worlds.
Well, quantum computers can be explained without invoking many worlds, but I see your point-- there is something about quantum degrees of freedom that is more accessible to computation than classical degrees of freedom. But it's more an issue of accessibility than counting degrees of freedom-- a classical computer has a mind-boggling number of degrees of freedom (here I refer to Avogadro's number issues), but only a tiny fraction of them are actually involved in doing the computation.

Saying "expectation values" were mistaken for "deterministic behavior" is woefully inappropriate when, as a matter of fact and independent of ANY real system, we can define a deterministic TOY model which is by definition entirely and completely deterministic, and we can still PROVE that the model has unpredictable behavior.
Well, if you are saying that deterministic models are generally essentially toy models, then I completely agree. Of course, I think stochastic models are toy models too. I don't think there is anything in modern physics that is not a toy model. We should not deceive ourselves-- we are children playing with toys, a few thousand years of civilization has not changed that.
Hence claiming that determinism in such systems was merely mistaken "expectation values" is DEAD wrong. It was the "expectation values" that proved stochastic behavior when the toy model was explicitly restricted to deterministic mechanics. So your claim is backwards and "expectation values" NEVER amounted to any evidence of determinism.
I'm not following, I'm the one saying that tracking expectation values and imagining they are real things never amounted to evidence for determinism. I'm also the one saying that having uncertainties and limitations never amounted to evidence for stochasticity. There is no such thing as evidence for determinism or stochasticity, because those are both attributes of models, so all we could ever do is judge whether or not those models were serving our purposes.
Now I am getting a little more about your perspective, so I will articulate my own. It is not the laws that are fundamental. I think there really are symmetries in nature, and laws are written to provide a perspective of those symmetries. For instance, we could just as easily say the gravitational constant G varies relative to the depth of a gravitational field, but Einstein chose to say the apparent mass varied, which is physically equivalent. Yet regardless of what form or what perspective the laws are written from, the symmetries are ALWAYS the same. Then theories are merely provisional specifications of those symmetries and their domains of applicability. So the only category errors that occur are when a symmetry specification is slightly off the mark or the domain of applicability has been misidentified. Symmetries do not require us to specify what nature is "actually" doing, though guessing can help better specify the symmetries.
I agree with that perspective, I would just back off a little from claiming the symmetries are "in nature." I think they are "in the way we think about nature." I pretty much believe that if there was such a thing as "nature herself", she would be quite bemused by pretty much everything we think is "in" her, sort of how we would be bemused if we knew what our dog thinks we are. It amounts to what we see as important, or what a dog sees as important, compared to nature, who doesn't think anything is more important than anything else, it all just is. Life, death, taxes, symmetries-- what does nature care? We hold these templates over her, and say "this pleases my thought process, it works for me." That's all, it comes from us.

Because it is derivable from ANY system, with or without "fundamental" stochastic behavior.
I think this is the fundamental source of our disconnect-- you are equating "fundamentally stochastic behavior", which I view as an attribute of a theory, with "fundamental stochastic behavior," which I view as an unsupportable claim on reality. So we're not really disagreeing once that distinction is made, except that I also feel that way about any concept of fundamental determinism.
I do not think that simply proclaiming that is all we are after, and that therefore we should not search more deeply for what nature might actually be doing, is helpful in getting more accurate and complete "replacements" for predicting what nature is going to do in more detailed circumstances.
I certainly never said we shouldn't search deeper, I'm saying we should expect it to be "models all the way down."

So no, I will not accept some foundational anti-stance as a substitute for working through the apparent consequences of alternative foundational stances. The foundational stance itself needs no more justification or claims of absolute validity than its contributions to the "replacement" model's predictive value.
Yes, I am not disagreeing there. The only foundational anti-stance I take is the rejection of the idea that we seek what is "fundamental". Fundamentality is a direction, not a destination, so the best we can aspire to is "more fundamental."
 
  • #56
I see that this debate has stemmed from a slight distortion of perspective more than any 'real' disagreement. When I think in terms of nature versus our model I see it as a necessary distinction, because often useful insight does not come from within the framework of the model. What nature "actually" is cannot be summed up so easily even if we could know, and such thinking is normally more like just another perspective of it.

I do not see blending these distinctions between model and nature, where we think in terms of model extensions only, as having the conceptual latitude needed for the kinds of insights it will fairly likely take for QG. That the model-centric perspective is in fact ultimately valid before and after any such achievements does not moot the value of the model/nature distinction, or of thinking in terms of nature outside the models. Some of the terms you used, such as "fundamental", fell into one category for me, the one I perceive as traditional. Hence when it was used in reference to the model itself it entailed consequences that were a bit outrageous. Stochasticity does blur that line, in that it is a fundamental limitation on models, one that all models must possess irrespective of whether the system being modeled has that property or not.
 
  • #57
my_wan said:
I see that this debate has stemmed from a slight distortion of perspective more than any 'real' disagreement.
Yes, the same has occurred to me.
When I think in terms of nature versus our model I see it as a necessary distinction, because often useful insight does not come from within the framework of the model. What nature "actually" is cannot be summed up so easily even if we could know, and such thinking is normally more like just another perspective of it.
Yes, and I think it is actively our goal not to try to understand nature completely, we wouldn't use science if we wanted a complete understanding (for example, we get a very different kind of understanding by "living a life", where we understand by loving, feeling, hurting, striving, reaching, dying). The goal of science is a very particular slice of reality-- relating to objective predictive power.
Stochasticity does blur that line, in that it is a fundamental limitation on models, one that all models must possess irrespective of whether the system being modeled has that property or not.
Yes, that is the crux of the matter, we agree.
 