Why randomness means incomplete understanding

A. Neumaier
TL;DR Summary
We lack a fundamental understanding of the measurement process in quantum mechanics.

vanhees71 said:
why it is considered a problem that nature seems to be "irreducibly probabilistic/random" in the precise sense defined by QT?
julcab12 said:
Randomness is just the absence of knowledge about what will happen when we do something we don't have complete control over.
akvadrako said:
computers can't create randomness – they always need an outside source. We can't write a RAND() function based off other primitive operations.
Actually, every programming language has such a function, and it is used to everyone's satisfaction.
vanhees71 said:
That's precisely what I asked! Why do you think that randomness is "just the absence of knowledge"? Why shouldn't nature behave randomly in a way as described by QT?
Suppose we want to create a device that faithfully simulates some aspect of Nature. To do so, we need to know enough about the working of this aspect so that we know how to build the simulation. Being able to create a detailed blueprint for a perfect simulation means that we understood this aspect. Not being able to do this implies lack of understanding.

Suppose now that the task is to create a faithful miniuniverse, a dynamical system with a state space sufficiently complex that there are internal detectors which make measurements working according to the Rules of Quantum Mechanics. Then we need to know in sufficient detail how those many-particle systems called detectors get their measurement results, based solely on the dynamical law of the miniuniverse, and hence independent of the notion of measurement.

The traditional interpretations are way too vague to allow such a blueprint to be made, even in principle. The reason is that on their basis one cannot describe how the dynamics of the complete system, the miniuniverse, implies that the detectors get their particular measurement values. This is impossible because there is no known conceptual relationship between the assumed irreducible randomness (if any) applied on the level of the complete system and the assumed irreducible randomness applied on the level of the detector results.

The current knowledge only allows a piecemeal partial understanding, as the understanding of measurement processes is only a heuristic mix of quantum mechanics, classical mechanics, and ad hoc simplifying assumptions not justified by theory but only by practice, applied separately to models of particles, materials, and components.

This shows that we lack a fundamental understanding of the measurement process in quantum mechanics.
 
Last edited:
  • Like
Likes Jamister, Wes Tausend, GshowenXj and 2 others
The only way to deterministically predict the future is if you know x and p.

Now you have to answer the following question: is the uncertainty principle about a lack of knowledge, or about a lack of simultaneous existence of these quantities?

Quantum physics says that they don't exist simultaneously, so the unpredictability is not due to incomplete knowledge. If you assume that a particle has to pass through one or the other of the slits of a double-slit experiment, then logically you would have to conclude that the number of particles arriving at a point x is the sum of particles arriving by way 1 and by way 2. This means there is no interference pattern. So when there is an interference pattern, you cannot say that the particle passes through one of the slits. So you cannot say "there is a p, but we don't know it." Assuming that it passes through one of the slits or the other will lead to logically wrong conclusions. So p and x don't exist at the same time, and the foundation for a deterministic description is removed. (not due to lack of knowledge, or incomplete knowledge)

Randomness/statistical behavior is not due to incompleteness in our knowledge.
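A toy numeric sketch of the interference argument above (the plane-wave phases, slit separation, and screen geometry are illustrative assumptions, not a full double-slit calculation): adding the two amplitudes and then squaring is not the same as adding the two single-slit intensities.
```python
# Toy check: |psi1 + psi2|^2 (both slits open, coherent) differs from
# |psi1|^2 + |psi2|^2 (the "goes through one slit or the other" assumption).
import numpy as np

x = np.linspace(-5.0, 5.0, 201)      # screen positions (arbitrary units)
k, d, L = 2.0, 1.0, 10.0             # assumed wavenumber, slit separation, slit-screen distance

psi1 = np.exp(1j * k * np.hypot(x - d / 2, L))   # phase of the wave from slit 1
psi2 = np.exp(1j * k * np.hypot(x + d / 2, L))   # phase of the wave from slit 2

both_slits = np.abs(psi1 + psi2) ** 2                 # oscillates: interference fringes
either_slit = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # flat: no fringes
print(np.allclose(both_slits, either_slit))           # False
```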
 
  • Like
Likes DanielMB, Ygggdrasil and PeroK
You didn't define exactly what is "perfect" about the "perfect simulator".
Even if we fully understood the measurement process, that would not imply that we would be able to create a simulator that would be able to predict the exact result of a specific proposed measurement.

We could (in the future) fully understand how Nature works and thus build that simulator, but the simulator would operate from whatever starting point we gave it. We would still be unable to "download" the universe to it - simply because we can't create a complete-enough copy of it.

What we would be able to do is demonstrate the same kind of apparent "randomness" that we see when QM measurements are made in our own "real", non-simulated world.
 
PrashantGokaraju said:
The only way to deterministically predict the future is if you know x and p.

...

So p and x don't exist at the same time, and the foundation for a deterministic description is removed. (not due to lack of knowledge, or incomplete knowledge)
No, you don't need to know x and p unless you are using the wrong model.
The right model would fully recognize the rules of QM - and also know how a measurement result becomes real.
Since the real world knows how to make a decision without fully determined p and x, so should the perfect model.
 
That seems to be as meaningless as saying we can know x(t + ε) without knowing v(t), or without v existing.
 
Perfect simulator = a simulator that matches the specifications to a sufficient degree. I detailed the specifications required for the miniuniverse. We don't need to have the power to build the simulator. My point is that we don't even have the knowledge to make a blueprint whose implementation would work.
 
.Scott said:
No, you don't need to know x and p unless you are using the wrong model.
The right model would fully recognize the rules of QM - and also know how a measurement result becomes real.
Since the real world knows how to make a decision without fully determined p and x, so should the perfect model.
The real world knows how to make a decision, but the point is that its decision is a probabilistic decision.
 
A. Neumaier said:
The reason is that on their basis one cannot describe how the dynamics of the complete system, the miniuniverse, implies that the detectors get their particular measurement values.

My understanding is that you can't infer probabilities regarding a sequence of internal detection events based solely on the dynamics of the miniuniverse. But if you also had information about the initial state of the miniuniverse (i.e. not just a Hamiltonian/action but also a density matrix), I think you could in principle assign probabilities to possible alternative sequences of internal detection events, provided the alternative sequences decohere with one another. The internal detection/measurement events would become internal correlation events.
 
PrashantGokaraju said:
The real world knows how to make a decision, but the point is that its decision is a probabilistic decision.
In that case, "the point" has not been demonstrated. We describe it with probabilities. But it always comes out to a specific result.
 
  • #10
.Scott said:
In that case, "the point" has not been demonstrated. We describe it with probabilities. But it always comes out to a specific result.
I think the issue is this. These results have to be consistent with everything we know about physics. Even if x and p don't exist simultaneously, they can both be measured in different situations, and these measurements have to be consistent with what we know about the meaning of energy, momentum, etc. This is why quantum mechanics cannot be defined independently of classical mechanics. To make predictions from quantum mechanics, you must use the classical ideas of x and p, Hamilton's equations, etc. So it cannot be the "wrong model". ##H = P^2/2m + V(X)##, which comes from classical mechanics, must be used in Schrödinger's equation. This way everything is consistent.
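As a minimal illustration of how the classical form ##H = P^2/2m + V(X)## is carried into the quantum description, here is a sketch (the harmonic potential, grid size, and units ##\hbar = m = 1## are assumptions of the example): x and p are promoted to operators on a position grid, and the resulting H is diagonalized.
```python
# Build H = p^2/2m + V(x) on a position grid (finite differences) and check
# that its lowest eigenvalues come out near the harmonic-oscillator values.
import numpy as np

n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Kinetic term p^2/2m = -(1/2) d^2/dx^2, approximated by second differences
T = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1))
V = np.diag(0.5 * x**2)           # harmonic potential, V(x) = x^2/2
H = T + V

print(np.linalg.eigvalsh(H)[:3])  # approximately 0.5, 1.5, 2.5
```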
 
  • #11
PrashantGokaraju said:
These results have to be consistent with everything we know about physics. Even if x and p don't exist simultaneously, they can both be measured in different situations, and these measurements have to be consistent with what we know about the meaning of energy, momentum, etc. This is why quantum mechanics cannot be defined independently of classical mechanics. To make predictions from quantum mechanics, you must use the classical ideas of x and p, Hamilton's equations, etc. So it cannot be the "wrong model"
All that means is that both the classical model and QM do not qualify as the "right model". There is something else going on. QM is consistent with this "right model", but there are some other non-local rules we haven't worked out yet.
 
  • #12
Morbert said:
My understanding is that you can't infer probabilities regarding a sequence of internal detection events based solely on the dynamics of the miniuniverse. But if you also had information about the initial state of the miniuniverse (i.e. not just a Hamiltonian/action but also a density matrix), I think you could in principle assign probabilities to possible alternative sequences of internal detection events, provided the alternative sequences decohere with one another. The internal detection/measurement events would become internal correlation events.
Of course every dynamical system has initial conditions. But in quantum mechanics, there is no principle that would tell you what the internal detection events are, hence no in-principle way of saying what values these would produce. For a simulation, you'd need to specify the stochastic dynamics for the whole miniuniverse, and deduce that the individual detection events behave according to Born's rule. This is equivalent to solving the measurement problem.
 
Last edited:
  • Like
Likes Stephen Tashi
  • #13
A. Neumaier said:
Of course every dynamical system has initial conditions. But in quantum mechanics, there is no principle that would tell you what the internal detection events are, hence no in-principle way of saying what values these would produce. For a simulation, you'd need to specify the stochastic dynamics for the whole miniuniverse, and deduce that the individual detection events behave according to Born's rule. This is equivalent to solving the measurement problem.

You might have to bear with me if I'm saying anything obvious, but consider a miniverse in some initial state ##|\Psi\rangle##. If you want to know the probability that a time-ordered sequence of events ##(\epsilon_1,\epsilon_2,\dots,\epsilon_n)## occurs within the miniverse, you would compute
$$||\Pi^{\epsilon_n}_{t_n}\dots\Pi^{\epsilon_2}_{t_2}\Pi^{\epsilon_1}_{t_1}|\Psi\rangle||^2$$
where ##\Pi^{\epsilon_i}_{t_i}## are the relevant projection operators and ##t_i## are the times at which they occur. And provided your alternative possible sequences of events decohere (e.g. if they constitute alternative quasiclassical histories), you don't have to invoke some external observer to talk about these events happening. There would be no a priori special category of "detection events", though you could presumably describe these events as measuring any quantities that end up correlating with them.
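For concreteness, here is a minimal sketch of evaluating such a history probability for a single qubit (the Hamiltonian, the projector, the times, and the initial state are all toy assumptions; an actual miniuniverse model would be vastly larger):
```python
# Probability ||Pi_{t2} Pi_{t1} |Psi>||^2 of a two-event history, with
# Heisenberg-picture projectors Pi(t) = U(t)^dagger Pi U(t).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * sx                                      # assumed toy Hamiltonian
P_up = np.array([[1, 0], [0, 0]], dtype=complex)  # projector onto |0>

def U(t):
    return expm(-1j * H * t)

def heisenberg(P, t):
    return U(t).conj().T @ P @ U(t)

psi = np.array([1, 0], dtype=complex)             # initial state |Psi> = |0>
t1, t2 = 1.0, 2.0
chain = heisenberg(P_up, t2) @ heisenberg(P_up, t1) @ psi
print(np.linalg.norm(chain) ** 2)                 # weight of the history ("up" at t1 and at t2)
```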
 
  • #14
A. Neumaier said:
Actually, every programming language has such a function, and it is used to everyone's satisfaction.

This is not really true. You can create a pseudo-random generator, but it needs to be seeded with an outside source of randomness, otherwise you'll get deterministic behavior. And even that isn't good enough for cryptography, where every bit should be from some outside source, otherwise you'll create patterns that open new avenues for attack. This is why modern processors include an RDRAND instruction which is based on electrical white noise or other quantum sources.
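A small sketch of the distinction described above (illustrative only): a seeded pseudo-random generator is completely deterministic, while the `secrets` module draws from the operating system's entropy pool, which may in turn use hardware sources such as RDRAND.
```python
# Same seed => same stream: a PRNG only transforms the randomness you feed it.
import random
import secrets

rng_a = random.Random(42)
rng_b = random.Random(42)
print([rng_a.random() for _ in range(3)] == [rng_b.random() for _ in range(3)])  # True

# By contrast, this is drawn from the OS entropy pool and differs on every run.
print(secrets.token_hex(8))
```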
 
  • Like
Likes Locrian, Lish Lash, DanielMB and 2 others
  • #15
A. Neumaier said:
Summary: We lack a fundamental understanding of the measurement process in quantum mechanics.

...
This shows that we lack a fundamental understanding of the measurement process in quantum mechanics.
Well, the experimentalists obviously know very well how to describe their detectors, because otherwise they couldn't successfully measure what they measure in particle physics experiments. A big part of the development of detectors consists of doing just what you describe above, namely a computer simulation of an experiment, usually using Monte Carlo methods, i.e., all you need to know are the detection probabilities for the particles/photons you want to measure.
 
  • #16
vanhees71 said:
Well, the experimentalists obviously know very well how to describe their detectors, because otherwise they couldn't successfully measure what they measure in particle physics experiments. A big part of the development of detectors consists of doing just what you describe above, namely a computer simulation of an experiment, usually using Monte Carlo methods, i.e., all you need to know are the detection probabilities for the particles/photons you want to measure.
The problem is not on the experimental side but on the theoretical side.

Theorists predicting standard model properties simulate the behavior of a handful of particles and their detection, based on few-particle quantum physics together with mostly classical physics for the equipment. This is far from simulating, by the rules of quantum theory, a quantum miniuniverse within which some quantum detector measures some quantum particle properties. How to do the latter even in blueprint fashion (i.e., assuming unrestricted computer power, speed, and memory) is terra incognita.
 
  • #17
Well, all of condensed matter physics is also quite well understood theoretically. Of course it's never a first-principles standard-model calculation leading to the correct predictions of material properties, but effective models, partially derivable from first principles, partially guessed as phenomenological models with the aid of empirical guidance. It doesn't even make sense to simulate a detector from the first principles of relativistic QFT, because that description doesn't use the relevant degrees of freedom to begin with.
 
  • #18
A. Neumaier said:
The problem is not on the experimental side but on the theoretical side.

Theorists predicting standard model properties simulate the behavior of a handful of particles and their detection, based on few-particle quantum physics together with mostly classical physics for the equipment. This is far from simulating, by the rules of quantum theory, a quantum miniuniverse within which some quantum detector measures some quantum particle properties. How to do the latter even in blueprint fashion (i.e., assuming unrestricted computer power, speed, and memory) is terra incognita.
Well, in a sense, a pure Everett approach should be amenable to simulation. You should, for example, find a superposition of detectors with definite readings, if it all works as its adherents hope.
 
  • #19
akvadrako said:
This is not really true. You can create a pseudo-random generator, but it needs to be seeded with an outside source of randomness, otherwise you'll get deterministic behavior. And even that isn't good enough for cryptography, where every bit should be from some outside source, otherwise you'll create patterns that open new avenues for attack. This is why modern processors include an RDRAND instruction which is based on electrical white noise or other quantum sources.
This criterion for "random" has problems. Ruling out repeatable series means that any "random" series that is recorded for future replays could not be considered random. That would include running the random generator first, recording the numbers, and then using them. Being able to repeat the series should not rule it out from being "random". A definition of "random" that has advantages is that it is not possible to compress the series at all. With that definition, most computer pseudo-random number generators are not random, regardless of how they are seeded.
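A sketch of that incompressibility point (an illustration, not a formal Kolmogorov-complexity argument): an arbitrarily long pseudo-random stream is fully recoverable from the short description (algorithm, seed), so in that sense it is highly compressible even if byte-level compressors cannot shrink it.
```python
# A long stream regenerated exactly from a few bytes of description.
import random

def stream(seed, n):
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

long_stream = stream(7, 100_000)          # 100 kB of "random-looking" output...
print(long_stream == stream(7, 100_000))  # ...fully determined by (algorithm, seed): True
```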
 
  • #20
akvadrako said:
This is not really true. You can create a pseudo-random generator, but it needs to be seeded with an outside source of randomness, otherwise you'll get deterministic behavior. And even that isn't good enough for cryptography, where every bit should be from some outside source, otherwise you'll create patterns that open new avenues for attack. This is why modern processors include an RDRAND instruction which is based on electrical white noise or other quantum sources.

That reminds me of this implementation of an external random source: www.youtube.com/watch?v=1cUUfMeOijg
where they claim to be using a video camera watching a lot of lava lamps to generate random keys.

Any thoughts about this -- is it serious technology, or is it just their PR people doing a weird flex?
 
  • #21
Morbert said:
You might have to bear with me if I'm saying anything obvious, but consider a miniverse in some initial state ##|\Psi\rangle##. If you want to know the probability that a time-ordered sequence of events ##(\epsilon_1,\epsilon_2,\dots,\epsilon_n)## occurs within the miniverse, you would compute
$$||\Pi^{\epsilon_n}_{t_n}\dots\Pi^{\epsilon_2}_{t_2}\Pi^{\epsilon_1}_{t_1}|\Psi\rangle||^2$$
where ##\Pi^{\epsilon_i}_{t_i}## are the relevant projection operators and ##t_i## are the times at which they occur. And provided your alternative possible sequences of events decohere (e.g. if they constitute alternative quasiclassical histories), you don't have to invoke some external observer to talk about these events happening. There would be no a priori special category of "detection events", though you could presumably describe these events as measuring any quantities that end up correlating with them.
This still does not say what an event is, and how a quantum detector in the miniverse would recognize it. What does it mean for a quantum detector to record an event associated with some ##\Pi^{\epsilon}##? When does it happen?
 
  • #22
PAllen said:
Well, in a sense, a pure Everett approach should be amenable to simulation. You should, for example, find a superposition of detectors with definite readings, if it all works as its adherents hope.
Again, what is missing is a definition of what is a reading of a detector. A real measurement takes time - when is the reading a measurement?
 
  • #23
A. Neumaier said:
But in quantum mechanics, there is no principle that would tell you what the internal detection events are, hence no in-principle way of saying what values these would produce.

A. Neumaier said:
Again, what is missing is a definition of what is a reading of a detector. A real measurement takes time - when is the reading a measurement?

I've never understood how the major interpretations of quantum mechanics view a day in ordinary life. We judge that certain macroscopic events definitely do happen. Is the occurrence of a macroscopic event to be interpreted as an outcome of a set of measurements?

How much of the theoretical problem is unique to quantum mechanics? If we consider a macroscopic event like "I go to the grocery store" then, in any sort of mechanics, is it safe to assume that such an event has an (unambiguous) definition? As time passes both "I" and the grocery store may gain or lose a few atoms and still remain macroscopically the "same", at least in the opinion of the "I".

vanhees71 said:
Well, the experimentalists obviously know very well how to describe their detectors, because otherwise they couldn't successfully measure what they measure in particle physics experiments.

The experimenters don't actually know how to describe their detectors in atom-by-atom detail. The macroscopic experimenter can give a macroscopic description of a detector.

If we use a moderate dose of pure mathematics, we can simply assert that there is a subset S of the set of all "real" trajectories of the universe through its possible states, and this set S consists exactly of those where "I go to the grocery store". But a heavy dose of pure mathematics may question whether this approach does define a set. After all, it assumes the macroscopic judgement "I go to the grocery store" is unambiguous. So there is a circular procedure used in defining the set.
 
  • #24
A. Neumaier said:
Again, what is missing is a definition of what is a reading of a detector. A real measurement takes time - when is the reading a measurement?
Well, if you're an Everett purist, you don't care. You assume that if you have a state sufficient to describe your mini-world, and evolve it per the Schrodinger equation, you are done. Reading out classical results from your simulation may involve problems of definition, but running the simulation, in principle, does not.

(just FYI - I am not an Everett purist, just presenting what I understand of that interpretation).
 
  • Like
Likes akvadrako
  • #25
A. Neumaier said:
This still does not say what an event is, and how a quantum detector in the miniverse would recognize it. What does it mean for a quantum detector to record an event associated with some ##\Pi^{\epsilon}##? When does it happen?

An event which occurs at some time ##t## is a property that obtains at time ##t##, under a consistent quantum description of the system. A detector in the miniverse measures an event if the history of the detector is correlated with the occurrence of the event. Measurement itself isn't the event. It's the correlation between histories of events.
 
  • #26
Morbert said:
An event which occurs at some time ##t## is a property that obtains at time ##t##, under a consistent quantum description of the system. A detector in the miniverse measures an event if the history of the detector is correlated with the occurrence of the event. Measurement itself isn't the event. It's the correlation between histories of events.
So everything measures everything, to some extent, since zero correlations are the exception. And all events happen all the time simultaneously. A very strange miniverse! What then guarantees the coherence of behavior of the objects? How to select one reasonable scenario from the chaos where everything happens?
 
  • #27
akvadrako said:
This is not really true. You can create a pseudo-random generator, but it needs to be seeded with an outside source of randomness, otherwise you'll get deterministic behavior.
Good point, that is correct.
FactChecker said:
A definition of "random" that has advantages is that it is not possible to compress the series at all. With that definition, most computer pseudo-random number generators are not random, regardless of how they are seeded.
Yes. Shannon's information entropy can be used to evaluate the "randomness". A stream of data that has maximum information entropy (which is 8 bits per symbol if each symbol is coded with 8 bits) is random. If the entropy is less than 8, it's not random. I should also add that here I assume that the sample size, i.e. the length of the stream of data, is sufficiently long to make an accurate calculation of the entropy.
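A minimal sketch of that entropy estimate (the sample sources and sizes are illustrative): compute the empirical Shannon entropy in bits per byte and compare a stream from the OS entropy pool with a highly repetitive one.
```python
# Empirical Shannon entropy of a byte stream, in bits per byte (maximum 8).
import math
import os
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(bits_per_byte(os.urandom(1 << 16)))   # OS entropy source: close to 8
print(bits_per_byte(b"abcabc" * 10000))     # repetitive stream: about 1.58
```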
 
Last edited:
  • #28
Stephen Tashi said:
The experimenters don't actually know how to describe their detectors in atom-by-atom detail. The macroscopic experimenter can give a macroscopic description of a detector.

If we use a moderate dose of pure mathematics, we can simply assert that there is a subset S of the set of all "real" trajectories of the universe through its possible states, and this set S consists exactly of those where "I go to the grocery store". But a heavy dose of pure mathematics may question whether this approach does define a set. After all, it assumes the macroscopic judgement "I go to the grocery store" is unambiguous. So there is a circular procedure used in defining the set.
Of course you cannot describe macroscopic systems in all microscopic detail. As the application of the standard many-body treatments shows, the macroscopic "relevant observables" tend to behave in a way well described by classical physics.

On the contrary, it's pretty difficult to observe the quantum behavior of macroscopic observables, though this is possible today thanks to refined techniques; these days they have counted single phonons of a nano-mechanical oscillator:

https://arxiv.org/abs/1902.04681
https://doi.org/10.1038/s41586-019-1386-x
 
  • #29
A. Neumaier said:
And all events happen all the time simultaneously. A very strange miniverse!

Not all events happen, simultaneously or otherwise. Quantum mechanics will assign probabilities to mutually exclusive sequences of events. If one sequence happens, the others won't. Quantum mechanics won't tell you which sequence actually happens, of course. But that's to be expected if the world really is probabilistic.
 
  • #30
Morbert said:
Not all events happen, simultaneously or otherwise. Quantum mechanics will assign probabilities to mutually exclusive sequences of events. If one sequence happens, the others won't. Quantum mechanics won't tell you which sequence actually happens, of course. But that's to be expected if the world really is probabilistic.
But then how does the simulation proceed in such a way that each simulated detector knows which value it has to display at which time (so that an event happens), while only propagating the wave function of the total system?

It now seems that in the simulation according to your recipe, nothing happens at all.
 
  • Like
Likes OCR
  • #31
vanhees71 said:
Of course you cannot describe macroscopic systems in all microscopic detail.
I think that you are speaking in a practical sense - i.e. such a task is too complex to be done in practice. However, what I am suggesting is that, in addition to being impractical, the task may be theoretically impossible. For example, to define theoretically and microscopically the macroscopic event "I go to the grocery store" presupposes there is some algorithm that can look at a subsequence of microscopic events and declare that it represents "I go to the grocery store".
 
  • Like
Likes FactChecker
  • #32
Stephen Tashi said:
I think that you are speaking in a practical sense - i.e. such a task is too complex to be done in practice. However, what I am suggesting is that, in addition to being impractical, the task may be theoretically impossible. For example, to define theoretically and microscopically the macroscopic event "I go to the grocery store" presupposes there is some algorithm that can look at a subsequence of microscopic events and declare that it represents "I go to the grocery store".
Why would you think this is theoretically impossible? The only reason I can think of is that you believe there is no such subsequence - that is, that such a statement has no microscopic underpinning. More specifically, that you believe the microscopic representation of this, by itself, does not represent the macroscopic event; that there is something fundamentally missing, in principle, from any microscopic description. This would imply that if you took all the atoms and states involved, they would not implement the macroscopic event unless you added something else. Do you have any suggestion of what that something else is?
 
  • #33
Hi,

Why randomness means incomplete understanding: it is a point of view that stems, in my opinion, from the fact that quantum mechanics is an essentially predictive theory. It's not an explanatory theory, hence the large number of interpretations. A scientific explanation is traditionally defined as a causal assignment, but quantum mechanics challenges our commonsense picture of causality, for example by implying that some things happen at random, with no apparent cause.

Heisenberg: We have to relearn what understanding really means.

/Patrick
 
  • #34
It is important to be clear about the concepts. Quantum theory is completely causal, even in a strong sense: Knowing the state at time ##t_0## and knowing the Hamiltonian of the system, you know the state at any time ##t>t_0##.

The difference to classical physics is that QT is indeterministic, i.e., having prepared a system in a minimum-entropy state (i.e., a pure state) doesn't imply that all observables take determined values. All the tests of QT indicate that this is not due to some incompleteness of possible knowledge but an inherent feature of nature.
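A toy sketch of the distinction drawn above (the Hamiltonian and units ##\hbar = 1## are assumptions of the example): the state at a later time is fixed uniquely by the initial state and H, yet even this pure state only yields probabilities for, say, a ##\sigma_z## measurement.
```python
# Deterministic (unitary) evolution of the state, probabilistic outcomes.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * sx                               # assumed toy Hamiltonian
psi0 = np.array([1, 0], dtype=complex)     # pure state at t0

psi_t = expm(-1j * H * 2.0) @ psi0         # uniquely determined state at t = 2
p_up = abs(psi_t[0]) ** 2                  # but sigma_z outcomes remain probabilistic
print(p_up, 1 - p_up)
```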
 
  • #35
vanhees71 said:
It is important to be clear about the concepts. Quantum theory is completely causal, even in a strong sense: Knowing the state at time ##t_0## and knowing the Hamiltonian of the system, you know the state at any time ##t>t_0##.

The difference to classical physics is that QT is indeterministic, i.e., having prepared a system in a minimum-entropy state (i.e., a pure state) doesn't imply that all observables take determined values. All the tests of QT indicate that this is not due to some incompleteness of possible knowledge but an inherent feature of nature.
Elsewhere you said that quantum mechanics is solely about predicting experiments. In that sense it predicts nothing in the microscopic domain in a causal fashion. This was the view of Heisenberg at the time he wrote his paper about the uncertainty relation [referred to as equation (1) below], directly opposing what you wrote above:
Werner Heisenberg said:
Since the statistical character of quantum theory is so closely tied to the imprecision of all perception, one might be tempted to conjecture that behind the perceived statistical world there still hides a "real" world in which the law of causality holds. But such speculations seem to us, and we stress this explicitly, fruitless and meaningless. Physics should only describe the connection between perceptions formally. Rather, the true state of affairs can be characterized much better as follows: because all experiments are subject to the laws of quantum mechanics, and hence to equation (1), the invalidity of the law of causality is definitively established by quantum mechanics.
(Translated from Heisenberg's German original.)
 
  • #36
Sure, QT is about predicting the outcome of experiments, as is any theory in physics. I've no clue how you come to the conclusion that it predicts nothing in the microscopic domain in a causal fashion. In fact it does precisely this. If that were not the case, we'd be in need of a new theory, and if there were an observation for which QT fails to predict the outcome accurately, we might already have a hint of how to modify it.

I don't think that Heisenberg is a good source concerning discussions about the interpretation. He's one of the main culprits leading to all this fuss about the topic, and the above quote again shows that he didn't understand Bohr's very important correction of his flawed view on the uncertainty relation in his first paper, which he published without first discussing it with Bohr: the uncertainty is not due to the "measurability" of observables but due to the "preparability" of systems. The last sentence is also very revealing: Heisenberg fails to clearly distinguish between causality and determinism. He is right, more today than in his time, after all the investigations following Bell's important insights, in saying that it's highly speculative to think that there may be a "hidden determinism" (though to refer to "causality" is again wrong).

As long as there is not a clear contradiction between QT and observations, I'd say it's indeed highly speculative to think that there may be a deterministic, necessarily non-local, (hidden-variable?) theory behind the probabilistic nature of the quantum description.
 
  • #37
A. Neumaier said:
It now seems that in the simulation according to your recipe, nothing happens at all.
Oh, interesting!. . . a simulation that simulates nothing ?? . :DD

That couldn't even simulate a no bel prize. . . .


.
 
  • #38
vanhees71 said:
Bohr's very important correction
So let me quote Bohr (Nature 1928) on causality:
Niels Bohr said:
This postulate implies a renunciation as regards the causal space-time co-ordination of atomic processes. [...] there can be no question of causality in the ordinary sense of the word. [...] we learn from the quantum theory that the appropriateness of our usual causal space-time description depends entirely upon the small value of the quantum of action as compared to the actions involved in ordinary sense perceptions. [...] the radical departure from the causal description of Nature met with in radiation phenomena, to which we have referred above in connexion with the excitation of spectra. [...] Thus we see at once that no causal connexion can be obtained between observations leading to the fixation of a stationary state and earlier observations on the behaviour of the separate particles in the atom.
Probably you'll reply that you don't think that Bohr is a good source concerning discussions about the interpretation. Nobody is, since nobody has your views. Your views about interpretation are in many respects a minority view.
vanhees71 said:
I don't think that Heisenberg is a good source concerning discussions about the interpretation.
Since you speak by argument from authority, let me emphasize that I consider Heisenberg to be a more important authority than you. His views are not antiquated at all.
 
  • #39
OCR said:
Oh, interesting!. . . a simulation that simulates nothing ?? . :DD

That couldn't even simulate a nobel prize. . .
It simulates a wave function, but no events. Thus one gets a huge amount of data, but nothing happens.
 
Last edited:
  • #40
A. Neumaier said:
So let me quote Bohr (Nature 1928) on causality:

Probably you'll reply that you don't think that Bohr is a good source concerning discussions about the interpretation. Nobody is, since nobody has your views. Your views about interpretation are in many respects a minority view.

Since you speak by argument from authority, let me emphasize that I consider Heisenberg to be a more important authority than you. His views are not antiquated at all.
That's your decision to follow "authorities". I don't claim to be an authority in any respect, but I don't think that my view on interpretation is a minority view. In my environment I don't know any physicist who doesn't share this view, i.e., does not follow the "orthodox interpretation" of QT. That may be due to the fact that our topic of theoretical-physics research (relativistic heavy-ion collisions) is very close to phenomenology and experiments, i.e., we have contact with real-world physics in the lab rather than overly philosophical speculations.

That said, I indeed consider Bohr a better source for discussions on interpretation than Heisenberg, though I don't share your enthusiasm for his writings about the subject which tend to be more confusing than necessary, but Heisenberg tops him in this respect.

Among the best writings on interpretation is the "Prologue" in the book

J. Schwinger, Quantum Mechanics: Symbolism of Atomic Measurements, Springer.
 
  • #41
vanhees71 said:
In my environment I don't know any physicist who doesn't share this view
Everyone shares the math of quantum physics and the intuition for how to apply it.

But ask anyone about the details of how they interpret things and one finds huge differences. In fact one tends to find more views than people asked, since the same person's views are different when asked in different contexts.
 
Last edited:
  • #42
OCR said:
That couldn't even simulate a no bel prize. . .

The space in "no bel" was not a simulation. . . it was deliberate. . :oldbiggrin:

.

 
  • #43
OCR said:
The space in "no bel" was not a simulation. . . it was deliberate. . :oldbiggrin:
This revealed a limitation in my biological OCR routine !-)
 
  • Like
  • Haha
Likes vortextor and OCR
  • #44
PAllen said:
Why would you think this is theoretically impossible?
Perhaps it's a delicate mathematical point. To define "I go to the grocery store" requires that "I" do it. But "I" am not a well defined subset of events in the universe. (E.g. atoms come and go from "me", but "I" remain, in the judgement of myself, "the same".)

Macroscopic events are judged by macroscopic beings. Attempts to define them in microscopic detail are circular, since both the event and the definer of the event are not microscopically defined. A definition of "I go to the grocery store" in terms of microscopic events that "I" judge to be correct would be one where "I" agreed that the definition worked. But "I" am not microscopically defined - unless "I" succeed in defining "I" microscopically. That attempt at self-definition is utterly circular.

I don't know whether @A. Neumaier is saying something along these lines, but it seems relevant to the objection that known dynamical laws do not model the macroscopic events of measurements being taken.
 
  • #45
Stephen Tashi said:
I don't know whether @A. Neumaier is saying something along these lines
No. You are nitpicking.

To work on the theoretical level, one doesn't need to specify a macroscopic object (such as 'You') to the last detail, knowing precisely which atoms belong to it. One just needs an approximate model that captures the relevant features. (All our models in physics are approximate!) To define "You go to the grocery store" it is enough to have a stick model of you with movable joints, knowledge of the location of all joints and of the door of the grocery store, and a lattice quantum model of the material of which the sticks and joints are made, to be able to work from first principles.
 
  • Like
Likes vanhees71
  • #46
A. Neumaier said:
No. You are nitpicking.

To work on the theoretical level, one doesn't need to specify a macroscopic object (such as 'You') to the last detail, knowing precisely which atoms belong to it. One just needs an approximate model that captures the relevant features.

I agree I'm nitpicking, but notice that "one just needs" implies a judgement by a macroscopic being concerning whether needs are met.

The assertion that a system of dynamical laws doesn't model the macroscopic events of measurements being taken is an example of the familiar saying "It's hard to prove a negative". People can respond along the lines of "Of course it does. We let X be the subset of microscopic events that define 'Bob takes a measurement' and there you have it."

If we grant that macroscopic events can be defined as subsets of microscopic events in a dynamical system, then we cannot object that the system does not represent macroscopic measurements. We can only object that the system does not tell us "naturally" or by some convenient general definition which microscopic events represent macroscopic measurements.

Your assertion that "randomness means incomplete understanding" is more subtle than the above points - and apparently not specific to macroscopic events. However, some responses say, in effect, "What's the problem? We define 'Bob takes a measurement' as a set of microscopic events, run a simulation where we draw simulated random numbers at appropriate times, and there you have it."

I don't understand your reply to this type of argument. You issue challenges like "This still does not say what an event is, and how a quantum detector in the miniverse would recognize it." This seems to require that a proposed model include a general definition of "what an event is". Is that your requirement? Would you object to people defining the events in a model on a case-by-case basis? Perhaps your objection is that there is no general rule telling where to draw the random numbers.
 
  • #47
Stephen Tashi said:
"one just needs" implies a judgement by a macroscopic being concerning whether needs are met.
This is needed for all of science. With your argument you should stop being interested in it.
 
  • Like
Likes vanhees71
  • #48
Stephen Tashi said:
You issue challenges like "This still does not say what an event is, and how a quantum detector in the miniverse would recognize it." This seems to require that a proposed model include a general definition of "what an event is". Is that your requirement? Would you object to people defining the events in a model on a case-by-case basis?
If there is no formal meaning to the notion of events in terms of wave functions, one cannot simulate events by simulating wave functions. Since what happens is constituted of events, nothing happens in such a simulation.

Therefore a formal notion of event is needed to be able to simulate what happens. Of course it cannot depend on a case-by-case basis, since what happens in reality also happens without us having made up cases. Special cases can be distinguished on the basis of the general notion, just as special molecules make sense only with a concept of a molecule.
 
Last edited:
  • Like
Likes vanhees71 and Stephen Tashi
  • #49
vanhees71 said:
Sure, QT is about predicting the outcome of experiments, as any theory in physics.
Absolutely not. The prediction of quantum mechanics, in general, is not based on knowledge of the past, as far as measurement is concerned.

What would be the usefulness of a predictive theory that did not require any measurements?

/Patrick
 
  • #50
A. Neumaier said:
But then how does the simulation proceed in such a way that each simulated detector knows which value it has to display at which time (so that an event happens), while only propagating the wave function of the total system?

It now seems that in the simulation according to your recipe, nothing happens at all.

It simulates a wave function, but no events.

You are free to evolve the wavefunction, but that doesn't commit you to a statement like "nothing happens".

You could also, for example, decompose the wavefunction into a basis of mutually exclusive sequences of events ##|\Psi\rangle = \sum_\alpha C_\alpha|\Psi\rangle## and compute the aforementioned probabilities ##||C_\alpha |\Psi\rangle||^2## or, more generally, ##\mathbf{Tr}[C_\alpha\rho C_\alpha^\dagger]##.

Quantum mechanics lets you use whichever treatment of the system is most suitable for your purposes.
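For what it's worth, a tiny numeric check of the equivalence of those two expressions for a pure state (the qubit, the projectors, and the two-event history are illustrative assumptions):
```python
# For rho = |psi><psi|, Tr[C rho C^dagger] equals ||C |psi>||^2.
import numpy as np

P0 = np.array([[1, 0], [0, 0]], dtype=complex)           # projector onto |0>
Pplus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)  # projector onto |+>
C = P0 @ Pplus                                           # toy class operator for one history

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

p_vec = np.linalg.norm(C @ psi) ** 2
p_tr = np.trace(C @ rho @ C.conj().T).real
print(np.isclose(p_vec, p_tr))                           # True
```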
 