Do weak measurements prove randomness is not inherent?

In summary: Question: When we try to get which-way info (we cause decoherence, we create a phase difference), do we increase or decrease randomness, or does it remain the same? Answer: when we try to get which-way info, we decrease randomness.
  • #36
Actually, I would point out that the decay of the uranium is not actually evidence of randomness "in the operation of the universe." I agree with your main point, that it requires considerable suspension of disbelief to say that the decay is deterministic in the absence of any evidence that it is, but we don't have an either/or situation. We often see the fallacy that "if it isn't random, it must be deterministic, and if I see no evidence that it is deterministic, it must be random." Randomness and determinism are both elements of models we use to describe the operation of the universe, but they are never elements of the operation of the universe. Scientists can only test the success of our models by comparing to the outcomes of experiment. The tests of the operation of the universe are the experiments themselves, not the success of the models-- that's something different.
 
  • #37
Ken G said:
Actually, here is a different way to present your exact same argument. Throughout history, whenever we thought something was deterministic, we found a deeper level that was random. Did you really just argue that this means the people who think there will always be inherent randomness are ignoring the lessons of history?

Great point. I believe this is accurate in physical science. As we factor in new variables (to get more accurate results), it becomes harder and harder to cite any one thing as the "cause" of the result. And somehow, a new level of indeterminacy firmly creeps in. We used to think of that as relating to "initial conditions," but it doesn't appear that way any longer (at least to me). I don't think the human brain is a deterministic machine either.
 
  • #38
my_wan said:
@Fyzix
I have debated DrChinese and his views are NOT weird. ...

Although there are a few questions about my taste in clothes. :smile:
 
  • #39
My problem with randomness is with TRUE randomness, yes, not the "mathematical randomness" / lack of knowledge on the human part.
That's exactly what I am arguing.

Someone mentioned an example of a uranium atom decaying after 4 billion years; SOMETHING must cause it.
I can't see any other way around it.
Its decaying is itself a mechanism! It's just ignorant of humans to think we already understand enough to say "hey, randomness exists" when we don't know everything yet.

People would have said the same about EVERYTHING 300 years ago.
 
  • #40
So we have encountered two logical fallacies:
1) saying that because we don't know what causes something means it is uncaused
(that's called "argument from ignorance")
2) saying that because we cannot imagine something isn't caused means it must be caused
(that's called "argument from incredulity")
Scientific thinking should always avoid logical fallacies, and that's exactly why we must be clear on the difference between the features of our models and how successful they are when compared with experiment, versus the features of whatever is making the experiments come out the way they do. The only way to avoid fallacies is to be very clear with ourselves about what we are really doing when we enter into scientific thought.
 
  • #41
Randomness exists by Occam's razor.

In Fyzix's world, any event must have a preceding event that caused it; but then how does the causal mechanism work between two events?

In a random world, any event doesn't need a preceding event to cause it, so we don't need to explain anything further.

However, we know there is some order in the world, so we ought to impose some constraints on the randomness (to explain the world), e.g. we could insist that the randomness is guided by an evolution equation, such as Schrödinger's equation.

So we have deterministic evolution of probabilistic states.

There, that's the world.
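
For concreteness, a minimal sketch of "deterministic evolution of probabilistic states" (a toy two-level system with an arbitrary Hamiltonian, chosen purely for illustration): the amplitudes evolve deterministically under the Schrödinger equation, and only the individual measurement outcomes are drawn at random via the Born rule.

# Minimal sketch: deterministic amplitude evolution, random outcomes (toy example).
import numpy as np

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                      # arbitrary toy Hamiltonian, units with hbar = 1
psi0 = np.array([1.0, 0.0], dtype=complex)      # start in state |0>

def evolve(psi, t):
    # Deterministic Schrödinger evolution: psi(t) = exp(-i H t) psi(0).
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T
    return U @ psi

def measure(psi, rng):
    # Born rule: the probabilities |amplitude|^2 are fixed; each outcome is a random draw.
    p = np.abs(psi) ** 2
    return rng.choice(len(psi), p=p / p.sum())

rng = np.random.default_rng()
psi_t = evolve(psi0, t=0.7)                     # probabilities fully determined by t
print(np.abs(psi_t) ** 2)                       # the deterministic part
print([measure(psi_t, rng) for _ in range(10)]) # the random part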
 
  • #42
Fyzix said:
My problem with randomness is with TRUE randomness, yes, not the "mathematical randomness" / lack of knowledge on the human part.
That's exactly what I am arguing.

Someone mentioned an example of a uranium atom decaying after 4 billion years; SOMETHING must cause it.
I can't see any other way around it.
Its decaying is itself a mechanism! It's just ignorant of humans to think we already understand enough to say "hey, randomness exists" when we don't know everything yet.

People would have said the same about EVERYTHING 300 years ago.

This is a PHYSICS forum, so of course we're concerned with mathematical randomness. I'm getting the strong sense that you don't really have any physics background. Is this the case? The fact is, like it or not, there is an enormous amount of evidence against a deterministic universe, and assuming a probabilistic universe has given us the most accurate (in predicting reality) mathematical model ever created. It is from this understanding that we invented the transistor (i.e. the microchip), the laser, modern chemistry, etc. Furthermore, if quantum mechanics were wrong (or just an effective theory of a more general higher-order one) and there were a deeper deterministic theory, we would still have some very strict mathematical limitations on what that deterministic theory must look like, and it would have to break a whole lot of rules that every experiment tells us are correct (for example, a deterministic theory CANNOT be local, but locality seems very much to be an inextricable part of reality).
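
To be concrete about those "strict mathematical limitations": the standard example is a Bell-type bound such as CHSH, which any local deterministic (hidden-variable) model must satisfy,

$|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2,$

while quantum mechanics predicts, and experiment confirms, correlations reaching $2\sqrt{2}$.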

You must realize that physics and quantum mechanics are a SCIENCE; baseless philosophical pondering devoid of actual knowledge of physics is worthless. Physics is applied math, and if you don't understand that math then you can't possibly understand the issues. Not liking an extraordinarily accurate theory doesn't mean a thing unless you've got a more accurate theory to supersede it.

Also, one could of course easily make quantum randomness an aspect of the macroscopic world. Take a cathode ray tube (which we'll say sends out only 1 electron at a time), pass the electron through an Sz Stern-Gerlach machine, take the output and put it through an Sx one, then take that output and pass it through an Sz machine again. If it comes out spin up, cleave a random person's head off with an ax; if it comes out spin down, don't. Wham! Real-world consequences of quantum randomness ;)
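
A quick sketch of that sequence's statistics (textbook spin-1/2 probabilities only, not a full state-vector simulation, and minus the ax): once the electron has passed through the Sx machine, the final Sz measurement is 50/50 no matter what the first Sz result was.

# Sz -> Sx -> Sz sequence, encoded as the standard transition probabilities.
import numpy as np

rng = np.random.default_rng()

def sequence():
    sz1 = rng.choice([+1, -1])   # first Sz on an unpolarized electron: 50/50
    sx  = rng.choice([+1, -1])   # |<Sx=±|Sz=±>|^2 = 1/2, so Sx is 50/50
    sz2 = rng.choice([+1, -1])   # |<Sz=±|Sx=±>|^2 = 1/2, so Sz is 50/50 again
    return sz1, sx, sz2

runs = [sequence() for _ in range(100_000)]
frac_up = sum(1 for _, _, sz2 in runs if sz2 == +1) / len(runs)
print(f"fraction of final spin-up outcomes: {frac_up:.3f}")   # ~0.5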
 
  • #43
Ken G said:
I'm not entirely clear what you are saying here, because I would have said that classical thermodynamics is the deterministic theory, and statistical mechanics is the random one. For example, thermodynamics uses variables like temperature that are supposed to mean something specific, whereas statistical mechanics uses ensemble averages that are really just mean values. So I would interpret the discovery that statistical mechanics can derive the theorems of thermodynamics to be a classic example of how randomness is continually found to underpin theories that we initially thought were deterministic. Quantum mechanical trajectories would be another prime example, as would chaos theory in weather.

First, classical thermodynamics was formulated as a set of laws that were at the time considered fundamental. Statistical mechanics (read: 'the statistics of mechanics') was developed later, and the laws of thermodynamics were found to be derivable from it. Statistical mechanics is essentially the kinetic theory of gases.
http://www.wolframscience.com/reference/notes/1019b said:
The idea that gases consist of molecules in motion had been discussed in some detail by Daniel Bernoulli in 1738, but had fallen out of favor, and was revived by Clausius in 1857. Following this, James Clerk Maxwell in 1860 derived from the mechanics of individual molecular collisions the expected distribution of molecular speeds in a gas.

This kicked off a controversy because:
http://www.wolframscience.com/reference/notes/1019b said:
At first, it seemed that Boltzmann had successfully proved the Second Law. But then it was noticed that since molecular collisions were assumed reversible, his derivation could be run in reverse, and would then imply the opposite of the Second Law.

To continue the above quote, does this look familiar in today's context?
http://www.wolframscience.com/reference/notes/1019b said:
Much later it was realized that Boltzmann’s original equation implicitly assumed that molecules are uncorrelated before each collision, but not afterwards, thereby introducing a fundamental asymmetry in time. Early in the 1870s Maxwell and Kelvin appear to have already understood that the Second Law could not formally be derived from microscopic physics, but must somehow be a consequence of human inability to track large numbers of molecules. In responding to objections concerning reversibility Boltzmann realized around 1876 that in a gas there are many more states that seem random than seem orderly. This realization led him to argue that entropy must be proportional to the logarithm of the number of possible states of a system, and to formulate ideas about ergodicity.
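
For reference, the relation described in that last sentence is Boltzmann's entropy formula, $S = k_B \ln W$, where $W$ counts the microstates compatible with a given macrostate; it is a statement about counting states, not about any randomness in the underlying (reversible) dynamics.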

Gibbs developed the Gibbs ensemble construction around 1900, providing a more general formal foundation for the whole thing. A few years later (1905), Brownian motion put the final seal on statistical mechanics, building on the papers of the previous 25 years.

Yet here is another interesting and funny bit. The formal definition of Gibbs ensembles defines the fundamental bits of the QM formalism on which the many-worlds hypothesis was constructed. The many-worlds hypothesis is basically the result of postulating that every copy in a Gibbs ensemble is existentially real. Hence the many worlds are the Gibbs ensembles.

The only place randomness survives in the theoretically 'pure' form is in subatomic physics.
 
  • #44
my_wan said:
The only place randomness survives in the theoretically 'pure' form is in subatomic physics.

If by SUBatomic you mean atomic then I suppose. Though I'd ultimately disagree. Statistical mechanics is simply IMPLICITLY "random", yet it is still random. For example, Fermi-Dirac statistics are founded on the Pauli Exclusion Principle. However, the exclusion principle is a direct result of the indistinguishability of particles and Born's rule. Both of these EXPLICITLY relate to the blurred-out, probabilistic core of quantum mechanics and the Schrödinger equation. Thus, by taking Pauli Exclusion as axiom, statistical mechanics inherits the underlying assumption of "randomness" even if the behaviour of large ensembles ends up being deterministic. I'd imagine this is particularly obvious in the mesoscopic regime.
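
To spell out how exclusion follows from indistinguishability, the standard textbook route is the antisymmetrized two-fermion state,

$\psi(x_1, x_2) = \tfrac{1}{\sqrt{2}}\left[\phi_a(x_1)\phi_b(x_2) - \phi_b(x_1)\phi_a(x_2)\right],$

which vanishes identically when $a = b$, so two identical fermions cannot occupy the same single-particle state.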

Also, FYI, I believe thermodynamics was always a phenomenological theory (as opposed to a fundamental one). It was developed around the same time as E&M, and I think the notion of an atom was gaining a little bit of traction. The notion that there was ultimately some "under the hood" electromagnetic interaction driving the whole thing was likely in the air. Tragically, Boltzmann committed suicide after his atomistic reduction of thermodynamics continually faced derision.
 
  • #45
unusualname said:
Randomness exists by Occam's razor.
Occam's razor is a technique for deciding on the most parsimonious way to think about reality. It is not a way to establish "what exists." A very wrong way that many people understand Occam's razor is "the simplest explanation is most likely the correct one."
That is wrong for at least two reasons:
1) Occam's razor is a way to choose between theories, not a way to dictate how reality works, and
2) the statement is patently false, contradicted over and over in a wide array of scientific examples.
So the correct way to state Occam's razor is: "since our goal is to understand, and since understanding involves simplification, the simplest theory that meets our needs is the best."
So if we take that correct statement of the razor, and parse your claim, it comes out "randomness exists because it is easier for us to understand randomness." That should expose the problem.

As for your argument that randomness is in fact a simpler description of many of the phenomena we see, including the decay of uranium, I agree.

So we have deterministic evolution of probabilistic states.

There, that's the world.
Correction, that's our simplest description of the world. Big difference. For one thing, you left out the most puzzling part of all-- how a deterministic evolution of probabilistic states gives way to particular outcomes.
 
  • #46
Ken G said:
Occam's razor is a technique for deciding on the most parsimonious way to think about reality. It is not a way to establish "what exists." A very wrong way that many people understand Occam's razor is "the simplest explanation is most likely the correct one."
That is wrong for at least two reasons:
1) Occam's razor is a way to choose between theories, not a way to dictate how reality works, and
2) the statement is patently false, contradicted over and over in a wide array of scientific examples.
So the correct way to state Occam's razor is: "since our goal is to understand, and since understanding involves simplification, the simplest theory that meets our needs is the best."
So if we take that correct statement of the razor, and parse your claim, it comes out "randomness exists because it is easier for us to understand randomness." That should expose the problem.

As for your argument that randomness is in fact a simpler description of many of the phenomena we see, including the decay of uranium, I agree.

Correction, that's our simplest description of the world. Big difference. For one thing, you left out the most puzzling part of all-- how a deterministic evolution of probabilistic states gives way to particular outcomes.

In the Consistent Histories interpretation this is not a problem: once we have a measurement, we can know how the probabilities evolved. There is no way to know this without making a measurement, of course.

Also, constructing the Schrödinger evolution at the microscopic level is of course a huge problem: why all the linear group structures in the Standard Model? How does gravity emerge from such an evolution? And the big one - how does human free will seem to enable us to further guide this evolution beyond (afawk) what exists anywhere else in the universe?
 
  • #47
my_wan said:
First, classical thermodynamics was formulated as a set of laws that were at the time considered fundamental. Statistical mechanics (read: 'the statistics of mechanics') was developed later, and the laws of thermodynamics were found to be derivable from it. Statistical mechanics is essentially the kinetic theory of gases.
All true, but that's why thermodynamics is the deterministic theory (heat flows from hot to cold, etc.) and statistical mechanics is the random (statistical) theory (heat is more likely to flow from hot to cold, etc.). So I would say this is an example of the natural tendency for seemingly deterministic laws to later be reinterpreted as emergent from more fundamentally stochastic laws.
To continue the above quote, does this look familiar in today's context?
Yes, I too have noticed the appearance of physicist-as-participant-in-physics effects even in classical thermodynamics. It's there in relativity too. The idea that "observer effects" are purely quantum in nature is narrow-minded.
Yet here is another interesting and funny bit. The formal definition of Gibbs ensembles defines the fundamental bits of the QM formalism on which the many-worlds hypothesis was constructed. The many-worlds hypothesis is basically the result of postulating that every copy in a Gibbs ensemble is existentially real. Hence the many worlds are the Gibbs ensembles.
On another thread, I am making the point (to little favor, I might add) that many-worlds is a completely classical concept that picks up nothing particularly special in the quantum context. In both cases, it is only the fact that science has to address the sticky problem that a given observer gets a given observed outcome, that is the actual nature of the problem, not quantum vs. classical. I think you would be sympathetic to that view.
The only place randomness survives in the theoretically 'pure' form is in subatomic physics.
This is where we diverge. I don't think the problem is with the impurity of randomness, because I view all mental constructs (like randomness and determinism alike) as "impure." They are all effective theories, all models, and randomness is the model used in statistical processes like statistical mechanics. Including all the Gibbs ensembles really doesn't remove the need for randomness, because we don't get an ensemble when we do the experiment, we get an outcome. That's where the randomness concept connects most closely to reality, but it is still impure and incomplete, because we still have no idea why we get a particular outcome, when all our theories can only give us statistical distributions. This is a fundamental disconnect between physics and reality that cannot be resolved by imagining the universe is fundamentally random or fundamentally deterministic, because either idea can be made to work with sufficient suspension of disbelief, and anyway there's no reason to imagine the universe is "fundamentally" any of those things.
 
  • #48
Ken G said:
All true, but that's why thermodynamics is the deterministic theory (heat flows from hot to cold, etc.) and statistical mechanics is the random (statistical) theory (heat is more likely to flow from hot to cold, etc.). So I would say this is an example of the natural tendency for seemingly deterministic laws to later be reinterpreted as emergent from more fundamentally stochastic laws.
The manner in which you have defined "intrinsic" determinism in the context of thermodynamics is also shared by QM in the underlying wave equations. When thermodynamics was developed, the laws were defined in irreversible form. Only when statistical mechanics was further developed did it create problems for this assumption, written as law, that such processes were irreversible, exactly because the real state of the system is defined not by ensembles but by mechanistic certainties, if only the particular state each member of the ensemble was actually in were known.

In this context stochastic laws are not fundamental to the system, they are only fundamental to our level of knowledge about the system. Thus saying "fundamentally stochastic laws" is a misnomer of what the physics actually entails, at least in this context.

Now obviously, it is quite trivial to decompose Gibbs ensembles of a classical medium into distinct physical units. Yet QM is fundamentally quite different in that respect. Even quantization involves properties rather than parts, and these do not stay put in any part-like picture ever conceived. Perhaps in the quantum regime "fundamentally" really does belong in front of "stochastic laws", but in thermodynamics it most certainly does not, as illustrated by statistical mechanics. In a classical regime stochastic is merely a consistently 'apparent' property resulting from a limitation in the completeness of our knowledge.

Now the big question. If we as observers have fundamental limits on our knowledge that physical law dictates we cannot 'empirically' get around by any means, would that constitute "fundamental" stochastic laws even if the theory entailed a complete lack of stochastic behavior at the foundational level? That is what we have in classical stochastic behavior, but QM lacks a similar underlying mechanism that defines stochastic behavior as purely a product of limited knowledge. That is THE key difference between classical and Quantum mechanics. Saying "fundamentally stochastic laws" requires the presumption that an ignorance of our ignorance is evidence of a lack of ignorance, i.e., "fundamental". Whereas classically we are aware of our ignorance, such that in that context it is not fundamental to the system itself.

Ken G said:
Yes, I too have noticed the appearance of physicist-as-participant-in-physics effects even in classical thermodynamics. It's there in relativity too. The idea that "observer effects" are purely quantum in nature is narrow-minded.
Agreed. It is a whole range of these observations that leads me to assume it quite likely that the conceptual problems in QM are not just ignorance, but an ignorance of our ignorance.

Ken G said:
On another thread, I am making the point (to little favor, I might add) that many-worlds is a completely classical concept that picks up nothing particularly special in the quantum context. In both cases, it is only the fact that science has to address the sticky problem that a given observer gets a given observed outcome, that is the actual nature of the problem, not quantum vs. classical. I think you would be sympathetic to that view.
It is quite likely that I would. Maybe I will check it out shortly.

Ken G said:
This is where we diverge. I don't think the problem is with the impurity of randomness, because I view all mental constructs (like randomness and determinism alike) as "impure." They are all effective theories, all models, and randomness is the model used in statistical processes like statistical mechanics. Including all the Gibbs ensembles really doesn't remove the need for randomness, because we don't get an ensemble when we do the experiment, we get an outcome. That's where the randomness concept connects most closely to reality, but it is still impure and incomplete, because we still have no idea why we get a particular outcome, when all our theories can only give us statistical distributions. This is a fundamental disconnect between physics and reality that cannot be resolved by imagining the universe is fundamentally random or fundamentally deterministic, because either idea can be made to work with sufficient suspension of disbelief, and anyway there's no reason to imagine the universe is "fundamentally" any of those things.
The concept of randomness will in fact ALWAYS be needed in science. We can never have perfect knowledge about any system, period. We could not even write down that many decimal places to acquire such knowledge even if it were possible. The key difference, which statistical mechanics illustrates, is that classically a perfect Maxwellian Demon could, if only in principle, do away with stochastic behavior altogether, but in QM we have no clue how to construct any model that would allow this Maxwellian Demon to do the same in that regime, even in principle.
 
  • #49
I believe that according to QFT, nuclear decay events are attributed to the same thing that "causes" spontaneous emission of radiation from excited quantum states, namely, interaction of the metastable quantum system with a vacuum fluctuation (or virtual photon, or spaghetti monster tears, or whatever name you want to give to the hypothetical phenomenon). Some sort of interaction is required within the framework of quantum theory for excited molecular or atomic eigenstates to decay, because they are *eigenstates*, and thus their probability density is conserved.
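
To make the *eigenstates* point explicit: a stationary state only picks up a phase, $\psi(x,t) = \phi(x)\,e^{-iEt/\hbar}$, so $|\psi(x,t)|^2 = |\phi(x)|^2$ is constant in time, and without some additional coupling (vacuum fluctuations or otherwise) such a state would never decay.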

So, the question now is, are vacuum fluctuations (or whatever) truly random? I don't know enough about QFT or quantum cosmology to even approach answering that question. Personally, I have a strong predilection to believe that they are in fact random, but it's just a gut feeling at this point.
 
  • #50
The determinism of thermodynamics is in the structure of the theory itself. We can predict a deterministic evolution of temperature, for example, in thermodynamics, and beginning students of thermodynamics are generally not taught that this is just a statistical average they are solving for. But quantum predictions are not framed deterministically; instead we speak of testing probability distributions explicitly in QM, via repetition of the same experiment-- a device never used in thermodynamics. In QM, we don't generally test expectation values, whereas in thermo, we are not even taught that the observables are expectation values (even though they are). So thermodynamics is a deterministic theory, and quantum mechanics isn't.
In this context stochastic laws are not fundamental to the system, they are only fundamental to our level of knowledge about the system. Thus saying "fundamentally stochastic laws" is a misnomer of what the physics actually entails, at least in this context.
I'm not sure what context you mean. I would place the "fundamental" aspects of a law in the nature of the derivations used for that law, not in the nature of the systems the law is used to predict. That's mixing two different things.
In a classical regime stochastic is merely a consistently 'apparent' property resulting from a limitation in the completeness of our knowledge.
We don't actually know that, because our knowledge is always limited. We have no way to test your assertion. Indeed, in classical chaos, we generally find the stochasticity penetrates to all levels-- no matter what the ignorance is initially, it rapidly expands toward ergodicity. This has a flavor of being more than an apparent aspect of the behavior; instead the behavior is a kind of ode to ignorance. The idea that we could ever complete our information of a classical system is untenable-- ironically, classical systems are far more unknowable than quantum systems, because classical systems have vastly many degrees of freedom. It is that vastness that allows us to mistake expectation values for deterministic behavior; we see determinism in the context where the behavior is least knowable. Determinism is thus a kind of "mental defense mechanism," I would say.
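
As a minimal, generic illustration of that expansion of ignorance (a standard toy example, nothing specific to this thread): the logistic map at r = 4 is fully deterministic, yet two initial conditions differing in the ninth decimal place separate to order one within a few dozen iterations.

# Deterministic chaos: a tiny initial uncertainty grows to order 1.
def logistic_trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000000)
b = logistic_trajectory(0.300000001)    # differs only in the 9th decimal place
for n in (0, 10, 20, 30, 40):
    print(n, abs(a[n] - b[n]))          # the separation reaches order 1 by n ~ 30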
Now the big question. If we as observers have fundamental limits on our knowledge that physical law dictates we cannot 'empirically' get around by any means, would that constitute "fundamental" stochastic laws even if the theory entailed a complete lack of stochastic behavior at the foundational level?
The laws are the theory, so the foundation of the laws is only the structure of the theory, regardless of how successfully they test out. I think you take the perspective that there really are "laws", and our theories are kinds of provisional versions of those laws. My view is that the existence of actual laws is a category error-- the purpose of a law is not to be what nature is actually doing, it is to be a replacement for what nature is actually doing, a replacement that can fit in our heads and meet some limited experimental goals. I ask, what difference does the "foundational" structure of our laws make? We never test their foundational structure, we only test how well they work on the limited empirical data we have at our disposal. The connection at the foundational level will always be a complete mystery, or a subject of personal philosophy, but what we know from the history of science is that the foundational level of any law is highly suspect.

That is what we have in classical stochastic behavior, but QM lacks a similar underlying mechanism that defines stochastic behavior as purely a product of limited knowledge. That is THE key difference between classical and Quantum mechanics.
Yes, that is an important difference.
Saying "fundamentally stochastic laws" requires the presumption that a an ignorance of our ignorance is evidence of a lack of ignorance, i.e., "fundamental".
It is not the laws that are fundamental, because that makes a claim about their relationship to reality. It is only the fundamentals of the law that we can talk about-- there's a big difference.
It is a whole range of these observations that leads me to assume it quite likely that the conceptual problems in QM are not just ignorance, but an ignorance of our ignorance.
I think this is your key point here, the degree of ignorance is worse in QM applications. I concur, but then we are both Copenhagen sympathizers!
The concept of randomness will in fact ALWAYS be needed in science. We can never have perfect knowledge about any system, period.
Yes, I agree that randomness in our models is inevitable-- chaos theory is another reason.


The key difference, which statistical mechanics illustrates, is that classically a perfect Maxwellian Demon could, if only in principle, do away with stochastic behavior altogether, but in QM we have no clue how to construct any model that would allow this Maxwellian Demon to do the same in that regime, even in principle.
Yes, I see what you mean, the absence of any concept of a quantum demon is very much a special attribute of quantum theory, although Bohmians might be able to embrace the concept.
 
  • #51
SpectraCat said:
So, the question now is, are vacuum fluctuations (or whatever) truly random?
The one thing we can know for sure is that no scientist will ever know the answer to that question.
I don't know enough about QFT or quantum cosmology to even approach answering that question.
Knowing QFT would only tell you if the theory of QFT models the fluctuations as truly random, it wouldn't tell you if they are or not.
 
  • #52
Ken G said:
The one thing we can know for sure is that no scientist will ever know the answer to that question. Knowing QFT would only tell you if the theory of QFT models the fluctuations as truly random, it wouldn't tell you if they are or not.

I would take a different tack. I would say that since we can take the following to be true:

1) QFT requires that these observable phenomena (nuclear decay and spontaneous emission) be triggered by interactions with vacuum fluctuations

2) to the best of our ability to measure them, these phenomena are random (meaning that they are stochastic)

The most logical conclusion based on the available data is that IF vacuum fluctuations are what triggers those events, then they are the source of the randomness. In the absence of a competing, experimentally falsifiable hypothesis, I can't see what other conclusion one could draw.
 
  • #53
SpectraCat said:
I would take a different tack. I would say that since we can take the following to be true:

1) QFT requires that these observable phenomena (nuclear decay and spontaneous emission) be triggered by interactions with vacuum fluctuations

2) to the best of our ability to measure them, these phenomena are random (meaning that they are stochastic)

The most logical conclusion based on the available data is that IF vacuum fluctuations are what triggers those events, then they are the source of the randomness. In the absence of a competing, experimentally falsifiable hypothesis, I can't see what other conclusion one could draw.
My remark was not about what is the best hypothesis, it was about what we know and what we do not know. What we do not know, and what we never will know, is that the fluctuations are "truly random." This is simply not a goal of science to know, though many people seem compelled to repeat all the mistakes of scientific history. But I certainly agree with you that our best current hypothesis, and this may always be our best hypothesis, is that the fluctuations are best modeled as random. That's the hypothesis that gets best agreement with observation, does the best job of motivating new observations, and is the simplest.
 
  • #54
Ken G said:
I'm not sure what context you mean. I would place the "fundamental" aspects of a law in the nature of the derivations used for that law, not in the nature of the systems the law is used to predict. That's mixing two different things.
It is indeed mixing two different things. So what you are saying is that the map is more important in defining the nature of "fundamental", even when we explicitly and intentionally throw away position information to gain a classical ensemble mapping, than is the system the map represents. That is not a question because that is what you said, but you are welcome to take issue with the characterization.

In fact the truth of the system trumps the truth of the map (theory) every time, which is why empirical facts are judge, jury, and executioner of theories. Empirical facts can mean different things in different theories making the claims associated with empirical facts less than factual, but nonetheless contradiction of those facts is a theory killer.

Hence I ALWAYS put the system at a far more fundamental level than ANY theory and/or derivation thereof.


Ken G said:
We don't actually know that, because our knowledge is always limited.
Well of course. The mere fact of QM as an underpinning to classical physics is alone enough to kill the point from a system point of view. Yet if your idealized system is strictly defined by classical theory, much like you put theory ahead of system above to justify a "fundamental" status, then as a matter of fact stochastic behavior is an illusion induced by limited knowledge. Luckily science puts theory in the back seat to the system it describes. In fact many things thought to be fundamental were found to be derivable from something else.

So when you say "We don't actually know that," that must also apply to whatever thing in QM you want to define as "fundamental" in the present theoretical picture. Only the system, not the theory, contains or does not contain the information that specifies these things. Theory can only give a probable best guess based on the level of empirical consistency attained, and even then mutually exclusive interpretations can posses equal empirical consistency.

Ken G said:
We have no way to test your assertion.
We need no such empirical test to say what classical theory says, which was all the point I was making. We only need such tests to see if nature agrees, which is why I found your claim at the top strange, since you placed theory ahead of nature in defining the quality of what constituted "fundamental".

Only now that nature has spoken and required classical physics to be a limiting case, "no way to test your assertion" becomes applicable to the nature of the stochastic behaviors in QM. So your own rebuttal to classical illusion of stochastic phenomena is now turned in the same court to the QM models used to claim the opposite.

Ken G said:
Indeed, in classical chaos, we generally find the stochasticity penetrates to all levels-- no matter what the ignorance is initially, it rapidly expands toward ergodicity. This has a flavor of being more than an apparent aspect of the behavior, instead the behavior is a kind of ode to ignorance. The idea that we could ever complete our information of a classical system is untenable-- ironically, classical systems are far more unknowable than quantum systems, because classical systems have vastly many degrees of freedom.
Again, in the classical chaos it is indeed unknowable even though the unknown information is by definition there. So the fact of unknowability requires models that treat certainties as though they were random, but the model does not make the system random.

On a personal note, I suspect that quantum systems have countlessly many more orders of magnitude of degrees of freedom than classical systems, and we merely wrap them in ensembles, call the randomness "fundamental", and wash our hands of the unknowns as though they were merely imaginary. Quantum computers can easily justify this position. How else can a quantum computer do calculations in seconds that a standard computer with as many registers as there are particles in the Universe could not do in the life of the Universe? I have even seen this argument used as evidence for many worlds. Many worlds or not, something gives.
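
For scale (rough, standard numbers, just to make that comparison concrete): a general $n$-qubit state requires $2^n$ complex amplitudes, and $2^{300} \approx 2\times 10^{90}$ already exceeds the roughly $10^{80}$ particles usually quoted for the observable universe, which is the usual sense of the "more registers than particles" comparison.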

Ken G said:
It is that vastness that allows us to mistake expectation values for deterministic behavior, we see determinism in the context where the behavior is least knowable. Determinism is thus a kind of "mental defense mechanism," I would say.
Saying "expectation values" mistaken for "deterministic behavior" is woefully inappropriate when as a matter of fact, an independent of ANY real system, define a deterministic TOY model which is by definition entirely and completely deterministic and we can still PROVE that the model has unpredictable behavior.

It is not model dependent. It is NOT dependent on what is or is not real in ANY real system. Hence claiming that determinism in such systems was merely mistaken "expectation values" is DEAD wrong. It was the "expectation values" that proved stochastic behavior when the toy model was explicitly restricted to deterministic mechanics. So your claim is backwards and "expectation values" NEVER amounted to any evidence of determinism.

Ken G said:
The laws are the theory, so the foundation of the laws is only the structure of the theory, regardless of how successfully they test out.
Again, make up a set of laws, any set of laws. They do not have to represent any real laws of anything, just make sure these laws are strictly deterministic. It can then be proved, based on these fake deterministic laws and these fake laws alone, that the results of such deterministic laws MUST require stochastic behavior to model reasonably complex systems operating on these laws alone. That is the point. Ignorance of the details of a system does not entail that the system is devoid of deterministic causes behind the unpredictable behavior.


Ken G said:
I think you take the perspective that there really are "laws", and our theories are kinds of provisional versions of those laws. My view is that the existence of actual laws is a category error-- the purpose of a law is not to be what nature is actually doing, it is to be a replacement for what nature is actually doing, a replacement that can fit in our heads and meet some limited experimental goals.
Now I am getting a little more about your perspective so I will articulate my own. It is not the laws that are fundamental. I think there really are symmetries in nature and laws are written to provide a perspective of those symmetries. For instance we could just as easily say the gravitational constant G varies relative to the depth of a gravitational field, but Einstein chose to say the apparent mass varied, which is physically equivalent. Yet regardless of what form or what perspective the laws are written from the symmetries are ALWAYS the same. Then theories are merely provisional specifications of those symmetries and their domains of applicability. So the only category errors that occur are when a symmetry specification is slightly off the mark or the domain of applicability has been misidentified. Symmetries do not require us to specify what nature is "actually" doing, though guessing can help better specify the symmetries.

Now how does this apply to the randomness issue? Well, we know for a fact that the symmetries defined by stochastic behavior are derivable from non-stochastic causal certainties given some unavoidable level of ignorance. This dependence of theory on stochasticity does NOT make it a "fundamental" symmetry even though it remains possible that it is at some level of the system. Because it is derivable from ANY system with "fundamental" stochastic behavior or not. I am perfectly happy with a "replacement" for what nature is actually doing as long as the "replacement" is as complete as we can fundamentally get from within our limitations. I do not think that simply proclaiming that this is all we are after, and therefore that we should not search more deeply for what nature might actually be doing, is helpful in getting more accurate and complete "replacements" for predicting what nature is going to do in more detailed circumstances.

Ken G said:
I ask, what difference does it make the "foundational" structure of our laws? We never test their foundational structure, we only test how well they work on the limited empirical data we have at our disposal. The connection at the foundational level will always be a complete mystery, or a subject of personal philosophy, but what we know from the history of science is that the foundational level of any law is highly suspect.
True. But the efforts have produced theories which can predict far higher volumes of empirical behavior. These historical breakthroughs are often the result of taking some foundational stance at odds with the prevailing stance. So it makes no difference that the foundational stance itself is and will always remain in question; such stances still play an important role in development. Then to deny the historical importance of the foundational stances in the development of science, and reject their relevance, denies some of the most important elements of theory building in our humanistic tool box.

So no, I will not accept some foundational anti-stance as a substitute for working through the apparent consequences of alternative foundational stances. The foundational stance itself needs no more justification or claims of absolute validity than its contributions to the "replacement" model's predictive value.

Ken G said:
Yes, that is an important difference.
Yes, the one that gives me nightmares :tongue:

Ken G said:
It is not the laws that are fundamental, because that makes a claim about their relationship to reality. It is only the fundamental of the law that we can talk about-- there's a big difference.
Yes, this is where I make the distinction between a symmetry and a law, with fundamental stances merely playing a role in morphing perspectives within the symmetries. I do not even think, IMO, that fundamental physical constants are fundamental, or not derivable from other models. I do not take foundational stances any more seriously in an absolute sense than a coordinate choice. But coordinate choices can nonetheless be extremely useful.

Ken G said:
I think this is your key point here, the degree of ignorance is worse in QM applications. I concur, but then we are both Copenhagen sympathizers!
Yes we are. However, my sympathies for Copenhagen are a twist in my foundational stance no different from a change in perspective as a result of a coordinate transform. There are no absolutes to this or that stance that make it fundamentally closer to any actual reality than any other. So, except for the fact of some anti-Copenhagen stances thinking their stance is somehow a better representation of absolute reality, I can also (sometimes) sympathize with their version. I move out of the Copenhagen picture for operational and conceptual reasons having nothing to do with rejecting the validity of the stance itself. Again, it is more like a coordinate transform than a rejection of the stance.

Ken G said:
Yes, I agree that randomness in our models is inevitable-- chaos theory is another reason.
Perhaps I tried too hard above to get the point made. In the sense in which I was trying to use classical theory to make a point, it is identical to the chaos theory case. I was not trying to make the point that classical physics had any particular level of validity, only that strictly under the assumption of classical physics, without any fundamental randomness, you still get stochastic behavior. Just like in chaos theory.

Ken G said:
Yes, I see what you mean, the absence of any concept of a quantum demon is very much a special attribute of quantum theory, although Bohmians might be able to embrace the concept.
Here is the nightmarish part for me. If morphing between foundational stances is no more or less fundamental than a change in coordinate choices, as I perceive it to be, then quantum demons should at least in principle be possible in some sense, irrespective of the absolute validity of their existence. It does not even require them to correspond to any direct measurable, just that they are in principle quantitatively possible under some perspective of what constitutes a measurable. My issues with Bohmian Mechanics run far deeper than any issue I have with CI.
 
  • #55
my_wan said:
It is indeed mixing two different things. So what you are saying is that the map is more important in defining the nature of "fundamental", even when we explicitly and intentionally throw away position information to gain a classical ensemble mapping, than is the system the map represents.
The word "fundamental" doesn't really mean anything, but "fundamentally" does-- we were talking about a phrase like "theory X is fundamentally Y". That statement can be addressed only by looking at the theory, there is no need to know anything about the observations that determine the success of the theory.
In fact many things thought to be fundamental were found to be derivable from something else.
A natural result, I would say the whole idea that anything can be "fundamental" is a persistent myth.
So when you say "We don't actually know that," that must also apply to whatever thing in QM you want to define as "fundamental" in the present theoretical picture.
The things we can know are our own theories, their predictions, and the outcomes of experiments. That's it, that's scientific knowledge. A theory can be "fundamentally something" and it can be built from fundamental pieces (fundamentals of the theory), but the theory itself is never "fundamental". There is never a "fundamental scientific truth", but there are "the fundamentals of doing science." The term is a bit loaded.

Theory can only give a probable best guess based on the level of empirical consistency attained, and even then mutually exclusive interpretations can possess equal empirical consistency.
Certainly. We are empiricists.

We need no such empirical test to say what classical theory says, which was all the point I was making. We only need such tests to see if nature agrees, which is why I found your claim at the top strange, since you placed theory ahead of nature in defining the quality of what constituted "fundamental".
Actually, I never did that, I never even used the word "fundamental" (I used "fundamentally stochastic", which is quite different).
So your own rebuttal to classical illusion of stochastic phenomena is now turned in the same court to the QM models used to claim the opposite.
Phenomena aren't stochastic, theories that describe phenomena can be stochastic. It's a distinction rarely made, but important.
On a personal note, I suspect that quantum systems have countlessly many more orders of magnitude of degrees of freedom than classical systems, and we merely wrap them in ensembles, call the randomness "fundamental", and wash our hands of the unknowns as though they were merely imaginary.
I agree, except for the idea that your statement does not also apply to classical systems. The goal of science is to simplify, which can have a certain "hand washing" element, we just need to strive not to deceive ourselves.

Quantum computers can easily justify this position. How else can a quantum computer do calculations in seconds that a standard computer with as many registers as there are particles in the Universe could not do in the life of the Universe? I have even seen this argument used as evidence for many worlds.
Well, quantum computers can be explained without invoking many worlds, but I see your point-- there is something about the quantum degrees of freedom that is more accessible to computation than classical degrees of freedom. But it's more an issue of accessibility than counting degrees of freedom-- a classical computer has a mind-boggling number of degrees of freedom (here I refer to Avogadro's number issues), but only a tiny fraction of them are actually involved in doing the computation.

Saying "expectation values" mistaken for "deterministic behavior" is woefully inappropriate when as a matter of fact, an independent of ANY real system, define a deterministic TOY model which is by definition entirely and completely deterministic and we can still PROVE that the model has unpredictable behavior.
Well, if you are saying that deterministic models are generally essentially toy models, then I completely agree. Of course, I think stochastic models are toy models too. I don't think there is anything in modern physics that is not a toy model. We should not deceive ourselves-- we are children playing with toys, a few thousand years of civilization has not changed that.
Hence claiming that determinism in such systems was merely mistaken "expectation values" is DEAD wrong. It was the "expectation values" that proved stochastic behavior when the toy model was explicitly restricted to deterministic mechanics. So your claim is backwards and "expectation values" NEVER amounted to any evidence of determinism.
I'm not following, I'm the one saying that tracking expectation values and imagining they are real things never amounted to evidence for determinism. I'm also the one saying that having uncertainties and limitations never amounted to evidence for stochasticity. There is no such thing as evidence for determinism or stochasticity, because those are both attributes of models, so all we could ever do is judge whether or not those models were serving our purposes.
Now I am getting a little more about your perspective so I will articulate my own. It is not the laws that are fundamental. I think there really are symmetries in nature and laws are written to provide a perspective of those symmetries. For instance we could just as easily say the gravitational constant G varies relative to the depth of a gravitational field, but Einstein chose to say the apparent mass varied, which is physically equivalent. Yet regardless of what form or what perspective the laws are written from the symmetries are ALWAYS the same. Then theories are merely provisional specifications of those symmetries and their domains of applicability. So the only category errors that occur are when a symmetry specification is slightly off the mark or the domain of applicability has been misidentified. Symmetries do not require us to specify what nature is "actually" doing, though guessing can help better specify the symmetries.
I agree with that perspective, I would just back off a little from claiming the symmetries are "in nature." I think they are "in the way we think about nature." I pretty much believe that if there was such a thing as "nature herself", she would be quite bemused by pretty much everything we think is "in" her, sort of how we would be bemused if we knew what our dog thinks we are. It amounts to what we see as important, or what a dog sees as important, compared to nature, who doesn't think anything is more important than anything else, it all just is. Life, death, taxes, symmetries-- what does nature care? We hold these templates over her, and say "this pleases my thought process, it works for me." That's all, it comes from us.

Because it is derivable from ANY system with "fundamental" stochastic behavior or not.
I think this is the fundamental source of our disconnect-- you are equating "fundamentally stochastic behavior", which I view as an attribute of a theory, with "fundamental stochastic behavior," which I view as an unsupportable claim on reality. So we're not really disagreeing once that distinction is made, except that I also feel that way about any concept of fundamental determinism.
I do not think that simply proclaiming that this is all we are after, and therefore that we should not search more deeply for what nature might actually be doing, is helpful in getting more accurate and complete "replacements" for predicting what nature is going to do in more detailed circumstances.
I certainly never said we shouldn't search deeper, I'm saying we should expect it to be "models all the way down."

So no, I will not accept some foundational anti-stance as a substitute for working through the apparent consequences of alternative foundational stances. The foundational stance itself needs no more justification or claims of absolute validity than its contributions to the "replacement" model's predictive value.
Yes, I am not disagreeing there. The only foundational anti-stance I take is the rejection of the idea that we seek what is "fundamental". Fundamentality is a direction, not a destination, so the best we can aspire to is "more fundamental."
 
  • #56
I see that this debate has stemmed from a slight distortion of perspective more than any 'real' disagreement. When I think in terms of nature versus our model, I see it as a necessary distinction, because often useful insight does not come from within the framework of the model. What nature "actually" is cannot be summed up so easily even if we could know it, and such thinking is normally more like just another perspective on it.

I do not see blending these distinctions between model and nature, where we think in terms of model extensions only, as having the conceptual latitude needed for the kinds of insights it will fairly likely take for QG. That the model-centric perspective is in fact ultimately valid before and after any such achievements does not moot the value of the model/nature distinction, or of thinking in terms of nature outside the models. Some of the terms you used fell into one category for me and what I perceive as traditional, such as fundamental. Hence when it was used in reference to the model itself it entailed consequences that were a bit outrageous. Stochasticity does blur that line in that it is a fundamental limitation on models that all models must possess, irrespective of whether the system being modeled has that property or not.
 
  • #57
my_wan said:
I see that this debate has stemmed from a slight distortion of perspective more than any 'real' disagreement.
Yes, the same has occurred to me.
When I think in terms of nature versus our model, I see it as a necessary distinction, because often useful insight does not come from within the framework of the model. What nature "actually" is cannot be summed up so easily even if we could know it, and such thinking is normally more like just another perspective on it.
Yes, and I think it is actively our goal not to try to understand nature completely; we wouldn't use science if we wanted a complete understanding (for example, we get a very different kind of understanding by "living a life", where we understand by loving, feeling, hurting, striving, reaching, dying). The goal of science is a very particular slice of reality-- relating to objective predictive power.
Stochasticity does blur that line in that it is a fundamental limitation on models that all models must possess, irrespective of whether the system being modeled has that property or not.
Yes, that is the crux of the matter, we agree.
 

1. What is a weak measurement?

A weak measurement is a type of quantum measurement that involves making a small disturbance to a quantum system, allowing for the measurement of properties that are typically difficult to observe in traditional measurements.
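
In the usual formalism, the quantity such a measurement estimates is the weak value, $A_w = \langle\phi_f|\hat{A}|\psi_i\rangle / \langle\phi_f|\psi_i\rangle$, where $|\psi_i\rangle$ is the pre-selected state and $\langle\phi_f|$ the post-selected one; because each individual weak measurement disturbs the system only slightly, $A_w$ is recovered only by averaging over many trials.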

2. How do weak measurements relate to randomness?

Weak measurements can provide insight into the inherent randomness of quantum systems by allowing for the observation of subtle fluctuations and uncertainties in the system.

3. Can weak measurements prove that randomness is not inherent?

No, weak measurements cannot definitively prove that randomness is not inherent in quantum systems. While they can provide evidence against the existence of inherent randomness, it is ultimately a philosophical and theoretical question that cannot be definitively answered.

4. What are some limitations of weak measurements?

Weak measurements are subject to various limitations, such as the potential for measurement errors and the fact that they can only provide statistical information about a system rather than precise measurements of individual particles.

5. How are weak measurements used in scientific research?

Weak measurements have been used in various scientific studies, particularly in the field of quantum mechanics, to gain a better understanding of the behavior and properties of quantum systems. They have also been used in practical applications, such as in quantum computing and cryptography.
