A Bold New Take on Quantum Theory

  • #1
New Scientist recently published an article entitled 'A Bold New Take on Quantum Theory'. I found it interesting.

Unfortunately, it is behind a paywall, but I will give my precis.

How QM, which only predicts probabilities, gives rise to the solid, well-defined world around us is still a mystery.

The author suspects the answer could lie with Boltzmann and thermodynamics. Those quantum possibilities are never lost; instead, they are mixed so thoroughly into the 'cracks of reality' that we can’t see them.

We never (added by me - that should be rarely, e.g. the behaviour of liquid helium) see QM's odd effects in the classical, everyday world. So, what is happening?

When physicists considered this question over the years, they often considered measurements. No matter where an electron was before being detected (added by me - assuming it is anywhere - we have no idea what is going on before measurement), we only see it in one place once it is measured. Somehow, measurement snaps the wave-like cloud of possibilities to a well-defined position. This has been shown time and again in experiments. The process seems random and instantaneous, but physicists like me aren’t satisfied since nothing else acts this way.

Debate around how to interpret this weirdness has been raging for more than 100 years. In the 1920s, great thinkers like John von Neumann settled on the idea that when a measurement is made, the wave function “collapses” into a single outcome, deleting the other possibilities. But this explanation, which came to be known as the Copenhagen interpretation, is far from the only reading of the situation. (Added by me - I think the author may have what the great von Neumann thought a bit off the mark - I will leave the reader to investigate what he thought - Bohr would be a better person to mention, IMHO). The many-worlds interpretation says that every possible outcome of a measurement happens in other worlds we can’t access. Physicist David Bohm’s interpretation says the other possibilities never existed – they were only illusions created by our lack of information.

To make things more complicated, it has been clear since experiments in the 1970s that measurements don’t just happen on lab benches. Even stray air molecules hitting electrons can “measure” them and destroy their 'quantumness'. This process is called decoherence, which explains why we don’t see quantum effects at everyday scales. Once something gets big enough, too many other objects fly around that can “measure” it and upset its delicate quantum properties. But the same question still applies: how precisely does this process happen?

Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently of the state of the others.

In the face of new evidence, physicists are starting to view the cosmos not as made up of disparate layers but as a quantum whole linked by entanglement.

In the 2000s, physicists Robin Blume-Kohout, then at the California Institute of Technology, and Wojciech Żurek at Los Alamos National Laboratory in New Mexico took the idea of decoherence one step further. They argued that all the information in a system, including the quantum kind, spreads into the surrounding environment during this process. This quantum information includes the system’s superposition. But it also accounts for other intrinsically quantum features, like the bizarre, long-range “entanglement” that appears to allow instantaneous interaction between two quantum objects.

The pair claim that only certain types of information are easy to access after this spreading process, namely, the classical variety. The quantum information is there; it is just practically impossible to see. They named this idea quantum Darwinism in analogy to the evolution of living things. In this reckoning, the environment around a quantum object “selects” for the classical information, akin to how an environment – in a very different meaning of the word – selects for long necks in giraffes.
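(Added by me - for readers who like a formula, the kind of state quantum Darwinism has in mind is, schematically,
$$|\Psi\rangle \;\approx\; \sum_i c_i\, |s_i\rangle \otimes |\varepsilon_i^{(1)}\rangle \otimes |\varepsilon_i^{(2)}\rangle \otimes \cdots \otimes |\varepsilon_i^{(N)}\rangle ,$$
where the pointer states ##|s_i\rangle## of the system are imprinted redundantly on many environment fragments, with records that are nearly orthogonal: ##\langle \varepsilon_i^{(k)} | \varepsilon_j^{(k)} \rangle \approx 0## for ##i \neq j##. Any observer who intercepts a few fragments can find out which ##i## occurred - the classical information - while the relative phases of the ##c_i##, the genuinely quantum information, could only be recovered by measuring essentially all the fragments jointly.)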

This framework is a powerful way of describing the interactions between a quantum system and its environment. However, it has only recently had a rigorous description of the physical process that makes the selection happen. This is where our group comes in with the idea of quantum thermodynamics.

Every step in the process can be explained using thermodynamics. And we are interested in what happens to the vanishing quantum states, which tends not to be focused on in quantum Darwinism. In a nutshell, we think this quantum information gets spread out between the object and the detector. And this spreading-out process mirrors the way things mix according to thermodynamics.

Now, historically, thermodynamics and quantum mechanics don’t go well together. The conventional idea of quantum measurements appears to break the laws of thermodynamics. These laws, which are sacrosanct to physicists, say that energy can’t be created or destroyed and that the universe becomes more disordered over time. The textbook description of a measurement seems to violate all of this. What’s more, it involves deleting information: when the particle goes from being in two places at once to only one, details about the second position seem to be destroyed. This violates the conservation of information, a principle upheld by every other law of physics.
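(Added by me - the clash with information conservation can be put in one line. Unitary time evolution preserves inner products,
$$\langle \phi | \psi \rangle \;=\; \langle \phi | U^\dagger U | \psi \rangle \;=\; \langle U\phi \,|\, U\psi \rangle ,$$
so no unitary ##U## can send both ##a|x_1\rangle + b|x_2\rangle## and ##|x_1\rangle## to the same final state ##|x_1\rangle## unless ##b = 0##: the overlap of the two inputs is ##|a| < 1##, while the overlap of the supposed outputs would be ##1##. Textbook collapse therefore cannot be ordinary unitary dynamics, and the information carried by ##b## looks as if it has been destroyed.)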

These problems were easy to sweep under the rug for decades since we couldn’t probe in detail the exact interactions between quantum objects and the thing doing the measuring. It was easy to imagine that the problem was a by-product of inaccurate modelling of the measuring “device”. But as experiments have improved, the discrepancies have become harder to hide.

The strength of our idea is that it can’t help but obey the rules of thermodynamics since these are built in from the start. “Any measurement model should be in keeping with the rest of the laws of physics,” says Sophie Engineer, who is part of our team and works between the University of Bristol and Heriot-Watt University, UK.

Inspired by experiments showing entanglement over time, not just space, physicist Vlatko Vedral is reconsidering how we think of time in quantum mechanics. The new approach treats space and time as part of one entity and could help us unravel black holes and make quantum time travel possible.

At the heart of our idea is a thermodynamic process that Boltzmann studied called equilibration. Our group loves coffee, so we imagine this process by picturing a splash of milk poured into a cup of coffee. At first, the milk is a separate blob, but as the various particles randomly move around, it rapidly spreads and mixes with the coffee. Once the milk and coffee particles are fully mixed, it is extremely unlikely that all the milk particles will spontaneously gather up into a blob again. Eventually, the coffee-milk mixture settles into an equilibrium – we say that it equilibrates.
The statistical mechanics underlying thermodynamics, however, implies that, given long enough, the milk and coffee will eventually fluctuate back into the original, unmixed state. We would never see this happen because it would take far longer than the age of the universe. But we do see it happen in much simpler setups.
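(Added by me - the following toy illustration is mine, not from the article. It shows why such recurrences are only ever seen in very small systems: in the Ehrenfest urn model, ##N## labelled particles hop one at a time between a 'milk' side and a 'coffee' side, and the mean time to return to the fully unmixed state grows like ##2^N## steps - observable for a handful of particles, hopeless for ##10^{23}##.)

Python:
import random

def ehrenfest_return_time(n_particles, rng, max_steps=2_000_000):
    """Steps until all particles are back on the 'milk' side.

    Toy Ehrenfest urn: each step, one randomly chosen particle switches
    sides. Start fully unmixed (all on the milk side) and count steps
    until the walk first returns to that state.
    """
    on_milk_side = n_particles          # start fully unmixed
    for step in range(1, max_steps + 1):
        # a milk-side particle is picked with probability on_milk_side / N
        if rng.random() < on_milk_side / n_particles:
            on_milk_side -= 1           # it hops to the coffee side
        else:
            on_milk_side += 1           # a coffee-side particle hops back
        if on_milk_side == n_particles:
            return step
    return None                         # no recurrence seen within max_steps

rng = random.Random(0)
for n in (4, 8, 12, 16):
    times = [t for t in (ehrenfest_return_time(n, rng) for _ in range(20)) if t is not None]
    mean = sum(times) / len(times)
    print(f"N = {n:2d}: mean recurrence time ~ {mean:8.0f} steps (2**N = {2**n})")

For ##N \sim 10^{23}## particles the same estimate gives a recurrence time of order ##2^{10^{23}}## steps, which is why the coffee never unmixes in practice.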

We recently learned that something similar happens in the quantum world, too. In a 2018 study, Jörg Schmiedmayer and his colleagues at the Vienna University of Technology (TU Wien) in Austria showed that quantum decoherence can also undo itself. They observed a few thousand ultra-cold atoms in a box and saw how the atoms’ positions became less correlated with each other through random collisions. The amount of correlation eventually reached a low “equilibrium” value. But, after a few milliseconds, the correlation returned to almost its initial value.

This was a brilliant result. Decoherence should destroy these correlations, so seeing them spontaneously reappear indicates that it isn’t deleting information, just scrambling or hiding it. Findings like this inspired my colleagues, Maximilian Lock and Marcus Huber, also at TU Wien, to wonder if equilibration could also underpin quantum measurements. Along with Emanuel Schwarzhans, formerly at TU Wien, and Felix Binder at Trinity College Dublin in Ireland, they collected their ideas in a framework they dubbed the measurement-equilibration hypothesis (MEH).

MEH describes measurement as a process where a quantum system interacts with a measuring device. A “device” could be anything that interacts with the quantum object, not just what we would typically think of as a measuring device. This spreads information into the device until an information equilibrium is reached between the system and the device. The bigger the device, the more places there are for the quantum information to hide, making it harder to get that information back – but never impossible.

How would this work in practice? Let’s take the simple example of a particle in a cloud of many locations simultaneously. Before a detector measures that particle’s position, there is information about the potential places it could have been detected. When the detector comes into contact with the particle, these pieces of information mix into the particles of the detector. We think this spreading process somehow “broadcasts” information from the system, making the information about its classical position available to read but its “two-places-at-once” information harder to spot.
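(Added by me - to make the 'bigger device, harder to recover' point concrete, here is a toy numerical sketch of my own, not the MEH model itself: a single qubit prepared in a superposition and dephased by ##N## 'detector' qubits with random coupling strengths ##g_k##. For this standard pure-dephasing model the qubit's coherence, relative to its starting value, is ##\prod_k |\cos(2 g_k t)|##: it decays as the phase information spreads into the detector, and the spontaneous revivals become weaker and rarer as ##N## grows.)

Python:
import numpy as np

def coherence(n_env, times, rng):
    """|off-diagonal element| of the system qubit's reduced state.

    Pure-dephasing toy model: the system qubit starts in (|0> + |1>)/sqrt(2),
    each of the n_env 'detector' qubits starts in |+> and couples to the
    system via g_k * sigma_z (x) sigma_z.  The decoherence factor is then
    prod_k cos(2 g_k t), so no state vectors need to be stored explicitly.
    """
    g = rng.uniform(0.5, 1.5, size=n_env)   # random coupling strengths
    return np.abs(np.prod(np.cos(2.0 * np.outer(times, g)), axis=1))

rng = np.random.default_rng(1)
times = np.linspace(0.0, 50.0, 5000)
for n_env in (1, 4, 16, 64):
    c = coherence(n_env, times, rng)
    revival = c[times > 5.0].max()           # biggest revival after the initial decay
    print(f"N = {n_env:3d} detector qubits: largest coherence revival for t > 5 is {revival:.3f}")

The bigger the 'detector', the smaller that number - the quantum information is still there, it is just shared among too many degrees of freedom to recover in practice.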

What are the implications?
The mathematics behind this process is complicated, so the first two papers on the framework, still in peer review, are heavy on calculations. First, my colleagues showed that equilibration between a quantum system and a detector can make the system look classical while only hiding the quantum behaviour, not destroying it. But a small enough detector would still allow quantum effects to peek through. The next paper, led by Sophie Engineer and the first to which I contributed, takes the first step towards connecting this to experiments by looking at how best to extract information in this setting.

Eventually, we would like to test our ideas in a lab, and thankfully Schmiedmayer is keen to work with our group to make this happen. With a set-up like Schmiedmayer’s 2018 experiment, we could potentially watch the measurement process happen in a small system, then watch it un-happen, Lock reckons. “We could then maybe show that, as the system gets larger, the ‘un-happening’ gets less likely,” he says. If we saw this, it would be evidence that MEH is on the right track. “That would be an extremely happy day,” says Lock.

So far, we have remained agnostic about what this idea means for any philosophical interpretations of quantum mechanics. But our ideas do brush up against these concepts. For instance, MEH explains what happens to all the measurement outcomes you don’t see – the other “worlds” of the many-worlds idea. They are all still here in our world; we just can’t control the quantum system finely enough to observe them. “If we could grab hold of every single electron and control them in whatever manner we wanted, we wouldn’t be asking ourselves why the particle went left or right,” says Lock. “The idea of measurement becomes moot.”

This would remove much of the supposed mystique from wave function collapse since measurement only seems mysterious when we overlook how difficult it is in practice. As Lock puts it, it is about asking: “How do I, an inaccurate, ape-sized lump-thing, try to access something as finely detailed as the spin of an electron?”

It would also rule out the idea that collapse is a physical process that deletes information and that there is some harsh transition between classical and quantum realities. “Nobody forces you to make the classical world different from the quantum one,” says Schmiedmayer. “All you can say is that, in the classical world, the complexity is too big. I just can’t see the quantum part.”

There is another possible implication of our ideas. If you take Boltzmann’s ideas about equilibration to their extremes, you can imagine the whole universe equilibrating. If this is the case, some have speculated that long after the last stars die out in the ludicrously distant future, random fluctuations away from equilibrium will happen, resulting in sentient beings spontaneously and very briefly flickering into existence. This thought experiment, known as the “Boltzmann brain”, suggests that equilibration isn’t the end of the story for a big, dynamic system.

What happens if we push our idea to such an extreme? If we could keep track of every single subatomic particle in a detector and its environment, MEH says that you could, in principle, find all the hidden quantum information. This would almost be akin to seeing a particle in two places at once again, even after it has been measured. It is certainly an outlandish idea, and I sometimes can’t help wondering what Boltzmann would make of it if he were still around.

Well, that's the end of the precis. I found it interesting and hope others do too.

Thanks
Bill
 
  • #2
bhobba said:
it is behind a paywall
Even so, a link would be helpful; at least some people here might have access through an academic institution. Also, there might be a preprint on arxiv.org.
 
  • #3
A link to the article here
https://www.newscientist.com/articl...ntum-theory-could-reveal-how-reality-emerges/

I am not subscribed to New Scientist so I cannot scroll to the link to the paper.

EDIT: A discussion from the author here: https://quantumfrontiers.com/2023/09/10/can-thermodynamics-resolve-the-measurement-problem/

There is a conference in Dublin in July. https://physicsworld.com/event/emer...rspectives-on-measurements-in-quantum-theory/
 
  • #4
pinball1970 said:
I am not subscribed to New Scientist so I cannot scroll to the link to the paper.

Peter is right - I should have posted the link even though it is just part of the article. Don't worry; my precis captures what the rest of the article says.

The links you gave seem interesting. Thanks for posting.

Lots to read - too little time while the cricket is on here in Aus. Like the mathematician Hardy, I am a confessed cricket tragic.

Thanks
Bill
 
  • #5
Nice to read this as a bedtime story, but it would be nice to read some maths, so if there's publication, please post the links here. Thank you!
 
  • #6
dextercioby said:
Nice to read this as a bedtime story, but it would be nice to read some maths, so if there's publication, please post the links here. Thank you!

So would I. I searched and searched .........

But as luck would have it, I will be doing a post about a book a mathematician (Matt Leifer) recommended that is virtually all math on interpretational issues. It is supposedly cited extensively, but even his university (Chapman University) did not have a copy. He eventually got one, but it was not easy. He recommends it almost as mandatory reading. Amazon had it, so I bought it:

https://www.amazon.com.au/gp/product/3662137356?tag=pfamazon01-20

Thanks
Bill
 
  • #7
dextercioby said:
Nice to read this as a bedtime story, but it would be nice to read some maths, so if there's publication, please post the links here. Thank you!
bhobba said:
So would I. I searched and searched .........
What is wrong with https://arxiv.org/abs/2302.11253 ("Quantum measurements and equilibration: the emergence of objective reality via entropy maximisation" by Emanuel Schwarzhans, Felix C. Binder, Marcus Huber, and Maximilian P. E. Lock)?
 
  • #8
gentzen said:
What is wrong with https://arxiv.org/abs/2302.11253 ("Quantum measurements and equilibration: the emergence of objective reality via entropy maximisation" by Emanuel Schwarzhans, Felix C. Binder, Marcus Huber, and Maximilian P. E. Lock)?

Good find.

Thanks
Bill
 
  • #9
Quantum Darwinism dressed in thermodynamics.
 
  • #10
bhobba said:
New Scientist recently published an article entitled 'A Bold New Take on Quantum Theory'. ...

Well, that's the end of the precis. I found it interesting and hope others do too.
I signed up just to leave this comment. Thank you so much for such an astonishingly clear explanation. I'm in awe at your ability to synthesize in "layman" terms. I'm a lifelong cosmology/physics hobbyist and I could follow every word of your explanation. Your hypotheses do not need to resort to any "strange science" as many other theories nowadays do.

The simplicity of the concept is Occam's-razor level. I really hope you get to experiment asap. This could radically alter our understanding of the universe.

I've always thought that the many worlds interpretation didn't sound exactly economical in terms of the amount of generated information at every instant. MEH sounds like we are not introducing (or generating) information exponentially like in a multiverse. Your theory works with the current information of the system!
 
  • #11
Stay around a while and contribute.

Thanks
Bill
 
  • #12
@vrdoc79 welcome to PF!

One request: please don't block quote an entire post when responding; it makes it harder to read what you actually posted since we have to scroll past a long quote to do so. If you are responding to a particular statement, you can just quote that. If you are just commenting on an entire post, you can just refer to it by post number (the one you quoted would be post #1 in this thread, or the OP for "original post"), or you can tag the user who posted it, @bhobba in this case, if it's clear enough which post by that user you mean.
 
  • #13
bhobba said:
New Scientist recently published an article entitled 'A Bold New Take on Quantum Theory'. I found interesting
This is the sort of pop science that never leads to any understanding.
bhobba said:
No matter where an electron was before being detected (added by me - assuming it is anywhere - we have no idea what is going on before measurement), we only see it in one place once it is measured.
This is the same pop-science nonsense that is preached eternally. We know precisely what is going on: the state of the system evolves according to the Hamiltonian. It's only the classical prejudice that position (of a particle) is (and indeed must be) the fundamental quantity. In QM, the state (vector) is the fundamental object.

The equivalent would be not to accept the concept of electronic financial transactions - insisting that physical coinage or paper money must be changing hands somewhere - and then to say that electronic finance is a mystery to which we do not yet have a solution for how it can possibly work!
bhobba said:
But it also accounts for other intrinsically quantum features, like the bizarre, long-range “entanglement” that appears to allow instantaneous interaction between two quantum objects.
I concede that quantum entanglement is extraordinary. Whether it's "bizarre" depends on an a priori philosophy of how nature must be. Newton's law of gravity was bizarre; the curvature of spacetime is bizarre; the Hydrogen atom is bizarre; evolution is bizarre; that the universe exists is the bizarrest of all. Why pick on quantum entanglement?
bhobba said:
Now, historically, thermodynamics and quantum mechanics don’t go well together.
This I don't believe. Quantum statistical mechanics and thermodynamics are well established, are they not?
bhobba said:
The conventional idea of quantum measurements appears to break the laws of thermodynamics. These laws, which are sacrosanct to physicists, say that energy can’t be created or destroyed and that the universe becomes more disordered over time. The textbook description of a measurement seems to violate all of this.
This seems like a wild exaggeration. Initially, statistical thermodynamics was rejected by some classical physicists because the laws were only statistical and not "sacrosanct". In any case, nothing is sacrosanct (except the need for theory to agree with experiment).
bhobba said:
These problems were easy to sweep under the rug for decades since we couldn’t probe in detail the exact interactions between quantum objects and the thing doing the measuring.
This again sounds like a classical prejudice towards an inevitable local realism.
bhobba said:
Inspired by experiments showing entanglement over time, not just space, physicist Vlatko Vedral is reconsidering how we think of time in quantum mechanics. The new approach treats space and time as part of one entity and could help us unravel black holes and make quantum time travel possible.
Just as we seem to be getting closer to the real point of the research - how QM, entanglement and spacetime are related - we recoil at the mundaneness of real science and throw in some science fiction about time travel!
bhobba said:
The laws of thermodynamics, however, say that given long enough, the milk and coffee will spontaneously separate back into the original, unmixed state. We would never see this happen because it would take far longer than the universe's age. But we do see it happen in much simpler setups.
I predict the Boltzmann brain ...
bhobba said:
We recently learned that something similar happens in the quantum world, too. In a 2018 study, Jörg Schmiedmayer and his colleagues at the Vienna University of Technology (TU Wien) in Austria showed that quantum decoherence can also undo itself. They observed a few thousand ultra-cold atoms in a box and saw how the atoms’ positions became less correlated with each other through random collisions. The amount of correlation eventually reached a low “equilibrium” value. But, after a few milliseconds, the correlation returned to almost its initial value.

This was a brilliant result. Decoherence should destroy these correlations, so seeing them spontaneously reappear indicates that it isn’t deleting information, just scrambling or hiding it.
Findings like this inspired my colleagues, Maximilian Lock and Marcus Huber, also at TU Wien, to wonder if equilibration could also underpin quantum measurements. Along with Emanuel Schwarzhans, formerly at TU Wien, and Felix Binder at Trinity College Dublin in Ireland, they collected their ideas in a framework they dubbed the measurement-equilibration hypothesis (MEH).

MEH describes measurement as a process where a quantum system interacts with a measuring device. A “device” could be anything that interacts with the quantum object, not just what we would typically think of as a measuring device. This spreads information into the device until an information equilibrium is reached between the system and the device. The bigger the device, the more places there are for the quantum information to hide, making it harder to get that information back – but never impossible.
There must be more to this than the untrackable degrees of freedom associated with decoherence. This as stated is nothing new.
bhobba said:
How would this work in practice? Let’s take the simple example of a particle in a cloud of many locations simultaneously.
It's not in many locations. The particle is described by its state, not its position. This is again the fundamental inability to move away from classical thinking.
bhobba said:
Before a detector measures that particle’s position, there is information about the potential places it could have been detected. When the detector comes into contact with the particle, these pieces of information mix into the particles of the detector. We think this spreading process somehow “broadcasts” information from the system, making the information about its classical position available to read but its “two-places-at-once” information harder to spot.
Again, there must be more to this than the untrackable degrees of freedom. I'm sure even David Lindley's popular book (Where Does the Weirdness Go?) covers this point: that a measurement is not a simple process, but a complicated interaction and amplification of a microscopic interaction into a macroscopic display of some description.
bhobba said:
“If we could grab hold of every single electron and control them in whatever manner we wanted, we wouldn’t be asking ourselves why the particle went left or right,” says Lock.
The Uncertainty Principle applies here. If nature is described by a state (and not a definite position and trajectory), then this thought experiment cannot be quantum mechanical!
bhobba said:
It would also rule out the idea that collapse is a physical process that deletes information and that there is some harsh transition between classical and quantum realities. “Nobody forces you to make the classical world different from the quantum one,” says Schmiedmayer. “All you can say is that, in the classical world, the complexity is too big. I just can’t see the quantum part.”
This, again, is nothing new.
bhobba said:
There is another possible implication of our ideas. If you take Boltzmann’s ideas about equilibration to their extremes, you can imagine the whole universe equilibrating. If this is the case, some have speculated that long after the last stars die out in the ludicrously distant future, random fluctuations away from equilibrium will happen, resulting in sentient beings spontaneously and very briefly flickering into existence. This thought experiment, known as the “Boltzmann brain”, suggests that equilibration isn’t the end of the story for a big, dynamic system.
I knew they'd get to the Boltzmann brain!
bhobba said:
What happens if we push our idea to such an extreme? If we could keep track of every single subatomic particle in a detector and its environment, MEH says that you could, in principle, find all the hidden quantum information.
Again this is promoting a classical, local reality. Instead, I suspect they have some middle-ground between a collapse to a single reality and the MWI. In other words, a resolution of sorts to the measurement problem. My issue is that the whole article suggests this is classical, local reality riding in on a white horse to save the world from quantum weirdness.
bhobba said:
Well, that's the end of the precis. I found it interesting and hope others do too.
IMO, this sort of article takes the reader away from any understanding of QM by playing on their classical prejudices at every stage. Sorry!
 
  • #14
vrdoc79 said:
I signed up just to leave this comment. Thank you so much for such an astonishingly clear explanation. I'm in awe at your ability to synthesize in "layman" terms. I'm a lifelong cosmology/physics hobbyist and I could follow every word of your explanation. Your hypotheses do not need to resort to any "strange science" as many other theories nowadays do.

The simplicity of the concept is Occam's-razor level. I really hope you get to experiment asap. This could radically alter our understanding of the universe.
Perhaps the reason you can follow every word is that the article leads you away from quantum mechanical thinking towards the security of the classical, macroscopic world? As you may gather from my comments above, if you want to understand QM you ought to forget almost everything you read in this article. Sorry!
 
  • #15
PeroK said:
We know precisely what is going on: the state of the system evolves according to the Hamiltonian. It's only the classical prejudice that position (of a particle) is (and indeed must be) the fundamental quantity. In QM, the state (vector) is the fundamental object.
Your preaching sounds very Everettianish to me. Adrian Kent's response to John Preskill's rendering of that sermon comes to my mind:
Adrian Kent said:
John — I’m not crazy enough to try to persuade anyone of the case against Everett on a blog, but you raise a good sociological question, and maybe I can help a bit with your puzzlement.

First, I think it’s important to stress that there is essentially no agreement on what people mean by “the Everett interpretation” or “the many-worlds interpretation”. Everett was scathingly abusive about attempts by DeWitt and others to flesh out his ideas. Lots of people have tried since (sometimes sympathetically, sometimes pointing out how unsatisfactory the result is): for example Bell, Albert and Loewer, Zurek, Gell-Mann and Hartle, Deutsch (at least twice in very different ways), Wallace, Vaidman, Geroch, Barbour, Papineau all give different and mostly pairwise inconsistent versions. Most of them are presumably wrong about something important, and they’re all trying to solve essentially the same problem, so it maybe shouldn’t be so surprising a priori if all of them are wrong about something important.

To flesh that out a bit, consider a couple of questions. Do we need to add extra assumptions ...

PeroK said:
The equivalent would be not to accept the concept of electronic financial transactions - insisting that physical coinage or paper money must be changing hands somewhere - and then to say that electronic finance is a mystery to which we do not yet have a solution for how it can possibly work!
In the end, there must be goods that can be obtained with the help of that money. This is independent of whether the money is represented by physical coinage, paper money, or electronic bookkeeping. And understanding the trust and related stuff required for "money to achieve its purpose" is not trivial, easy, or "objectively clear cut".

PeroK said:
I concede that quantum entanglement is extraordinary. Whether it's "bizarre" depends on an a priori philosophy of how nature must be. Newton's law of gravity was bizarre; the curvature of spacetime is bizarre; the Hydrogen atom is bizarre; evolution is bizarre; that the universe exists is the bizarrest of all. Why pick on quantum entanglement?
Why do you write that "Hydrogen atom is bizarre; evolution is bizarre", but "gravity was bizarre"? I don't agree that it "depends on an a priori philosophy of how nature must be". Many people are not fully happy with their understanding of it yet, even people who have no problem with the math. Just like many people were not happy about their understanding of Maxwell's equations and "point charges" or "spatially extended rigid body" electrons. They got much happier when Einstein cleared up part of the riddle, and again slightly happier when QFT slowly made progress with the remaining "problems".
 
  • #16
gentzen said:
Why do you write that "Hydrogen atom is bizarre; evolution is bizarre", but "gravity was bizarre"? I don't agree that it "depends on an a priori philosophy of how nature must be". Many people are not fully happy with their understanding of it yet, even people who have no problem with the math. Just like many people were not happy about their understanding of Maxwell's equations and "point charges" or "spatially extended rigid body" electrons. They got much happier when Einstein cleared up part of the riddle, and again slightly happier when QFT slowly made progress with the remaining "problems".
I'm not going to be drawn into an argument on my views on the original article. Instead, you could make the case that the New Scientist piece is an accurate presentation of quantum theory and these new ideas around decoherence.
 
  • #17
PeroK said:
This is the sort of pop science that never leads to any understanding
PeroK said:
I'm not going to be drawn into an argument on my views on the original article. Instead, you could make the case that the New Scientist piece is an accurate presentation of quantum theory and these new ideas around decoherence.
You commented on an article that reported on actual research. Maybe the article was not very good, or maybe New Scientist in general has destroyed its reputation in recent years so badly that the quality of individual articles is no longer important anyway.

But all of this is no excuse for writing a reply where it remains unclear to me whether you were even aware that this article was not pure pop science. And because you intermixed that reply with statements that sounded like Everettian preaching to me, I reacted "skeptical" to your post, and left a reply explaining what I didn't like.
 
  • #18
bhobba said:
“If we could grab hold of every single electron and control them in whatever manner we wanted, we wouldn’t be asking ourselves why the particle went left or right,” says Lock.
There is not the slightest theoretical* or experimental evidence for this statement, which is actually an assertion of realism in QM. There is a large body of work that both directly and indirectly contradicts this. I did not see a corresponding statement like that in the arxiv paper itself, but I am not sure from the back and forth if that was the paper the New Scientist article is based on. Lock is one of the authors of that paper though, so I assume he believes this (and that it is not contradicted in that paper anywhere).

The arxiv paper does not address entanglement** or quantum nonlocality*** in the measurement process. As everyone here probably knows my thoughts: these subjects are absolutely mandatory for discussion in any paper purporting to "...take the first step in formalising the hypothesis that quantum measurements are instead driven by the natural tendency of closed systems to maximize entropy...".


*To date, although there are nonlocal interpretations that posit this.

**Perfect correlations in many entangled-system experiments imply that the measurement interaction/apparatus/decoherence/equilibrium process answers precisely nothing in terms of the outcome (because the outcomes are based solely on a future context, settings only). Otherwise you must have the measurement apparatuses (and their settings) initially entangled too in order to get identical outcomes by distant observers.

***Because if there is to be realism, then nonlocality is a mandatory consequence (post-Bell). But a paper built around thermodynamic ideas (and equilibrium interactions between the observed and observer) doesn't seem like one that falls into the "quantum nonlocality" bucket. If it did, that would be helpful to know.
 
  • #19
bhobba said:
Decoherence should destroy these correlations
This claim (I assume it's originally the article's claim, not yours) is not correct in two ways. First, decoherence is unitary, so it can't "destroy" anything. All it can do by itself is spread entanglement among more degrees of freedom. That spreading is what decreases the correlations between particular degrees of freedom. But it doesn't "destroy" them, it just decreases them.

Second, in order for the correlations to decrease to the point where they are truly negligible, the spreading has to continue to larger and larger numbers of degrees of freedom, with no stopping point. But in the experiment described, that doesn't happen; the "decoherence" spreading stops at the number of ultra-cold atoms in the experiment (a few thousand). That stopping is why the "decoherence" is seen to be "undone".

I put "decoherence" and other terms in scare-quotes just now because, due to the stopping, what is being observed in this experiment is not actually decoherence as that term is defined in the literature. There is no interaction during this experiment between the ultra-cold atoms and anything else, so they are an isolated system--and an isolated system that can be tracked with sufficient accuracy doesn't have decoherence. The whole point is that the system has to be not isolated--it has to be interacting with an environment.

Another way of putting it is that, as the phrase "can be tracked with sufficient accuracy" above indicates, the experimenters were able to keep track of enough of the degrees of freedom in the experiment; but true decoherence requires spreading entanglement among a very large number of degrees of freedom that cannot be tracked (such as photons escaping to infinity, or huge numbers of atoms in a macroscopic object that can't be measured individually). That is what "environment" means in the decoherence literature.

In short, this looks like another example of an overblown claim about what a new kind of experimental technology actually proves.
 
  • #20
PeterDonis said:
This claim (I assume it's originally the article's claim, not yours) is not correct in two ways. First, decoherence is unitary, so it can't "destroy" anything. All it can do by itself is spread entanglement among more degrees of freedom. That spreading is what decreases the correlations between particular degrees of freedom. But it doesn't "destroy" them, it just decreases them. ...
This 2011 reference may not be exactly supporting what you are saying, but it is the general idea. There is entanglement of 2 macroscopic ensembles where there is dissipation of the entanglement. That entanglement is then demonstrated at a later time.

Entanglement generated by dissipation and steady state entanglement of two macroscopic objects
 
  • #21
I'm not qualified to even read threads like this, but...
bhobba said:
MEH describes measurement as a process where a quantum system interacts with a measuring device. A “device” could be anything that interacts with the quantum object, not just what we would typically think of as a measuring device. This spreads information into the device until an information equilibrium is reached between the system and the device.

What does this (I bolded in the quote) really mean? How does information "spread into" anything? This reminds me of the invisible "caloric" flowing from hot to cold bodies.
 
  • #22
gmax137 said:
How does information "spread into" anything?
Through interactions that spread entanglement over more and more degrees of freedom.
 
  • #23
Using "information" to explain things in physics has always been anti-illuminating to me. Generally, information can be formulated a number of different ways. When we're doing physics, we can just say what we mean without the added baggage of information. Fortunately, the arxiv paper shared in post 7 actually looks interesting, and "information" appears to do very little work in the paper.

EDIT: I felt after posting I was a little too critical, especially for an article appearing in a publication for a more general audience. @bhobba's precis is an enjoyable read if I keep my urge to dig in to the details at bay.
 
  • #24
DrChinese said:
There is not the slightest theoretical* or experimental evidence for this statement, which is actually an assertion of realism in QM. There is a large body of work that both directly and indirectly contradicts this. I did not see a corresponding statement like that in the arxiv paper itself, but I am not sure from the back and forth if that was the paper the New Scientist article is based on. Lock is one of the authors of that paper though, so I assume he believes this (and that it is not contradicted in that paper anywhere).
The quote by Lock is a stronger statement than what the paper argues, so I find the quote strange.

The paper argues that a typical account of measurement relies on a time-controlled application of a von-Neumann-type Hamiltonian to obtain a state with a spectrum broadcast structure (SBS), when what is really desired is to obtain a state with an SBS (albeit approximately) as an equilibrium state (which von-Neumann-type Hamiltonians, applied in a time-uncontrolled manner, fail to do).

Even when this state is obtained as an equilibrium state via an improved model Hamiltonian, the state still only asserts objective/intersubjective agreement between observers observing an outcome, not a definite outcome itself.

IMO the paper is novel and interesting, but ultimately not a challenge to established paradigms. Lock's quote does sound like a challenge.
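For reference (this is my paraphrase of the definition from the decoherence literature, not a quote from the paper), a spectrum broadcast structure is a joint state of the system and environment fragments of the form
$$\rho_{S E_1 \cdots E_N} \;=\; \sum_i p_i\, |s_i\rangle\langle s_i| \otimes \rho_i^{E_1} \otimes \cdots \otimes \rho_i^{E_N}, \qquad \rho_i^{E_k}\, \rho_j^{E_k} = 0 \ \ \text{for } i \neq j ,$$
i.e. a classical probability distribution over pointer states ##|s_i\rangle## in which every fragment ##E_k## holds a perfectly distinguishable record of ##i##. Observers reading different fragments then all agree on which ##i## they find - which is the objectivity the paper is after, but, as noted above, not a mechanism that singles out one particular ##i##.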
 
  • #25
I would just like to remind people about two different issues with the quantum measurement problem, and only one of them is addressed by decoherence.

The first problem is that of entanglement. It is the issue that is addressed by decoherence. The second problem is the "and/or" or collapse issue. That one isn't addressed by decoherence, and it is one of the reasons why I've made my peace with an MWI-like view on things.

First, entanglement and how decoherence "resolves" the issue. Entanglement is the quantum-mechanical fact/property that the state describes "the whole" (the whole universe if you want), and that in general one cannot assign a state to a subsystem. The quantum-mechanical description of the state of a universe with 3 particles is a 3-particle quantum state, and the individual particles don't have their own state. Even though individual measurement outcomes of the individual particles are possible of course, the quantum state of the universe entangles the states of these particles. That comes about because you can have superpositions of product states (product states are particular quantum states where individual systems DO have their own state), and even if you start out in a product state, interactions between parts of the system result in superpositions of product states (which is what we call entanglement).

It leads to all kinds of "weird" situations if we have entangled systems, and it causes "weird" correlations between measurements to pop up, things that seem impossible when we try to assign well-defined states to "sub systems".

However, too much entanglement kills entanglement in a way, and these correlations disappear when our entangled system entangles even further with non-observed degrees of freedom (with other systems we don't look at).

The funny weird "quantum correlations" seem to disappear when our entangled subsystems interact with many other degrees of freedom (say, "the environment"). In reality, the entanglement even gets much stronger and becomes even 'irreversible' FAPP, but the correlations between the two subsystems now disappear at first sight, because they become higher-order correlations with environmental degrees of freedom we don't observe, and hence, the observed correlations "average out".
Mathematically, this comes about by what one calls the reduced density matrix, where one doesn't take into account the "environmental degrees of freedom".

This effect of entangling with environmental degrees of freedom over which one takes the average because one doesn't look at them, and the effect this has of making "weird quantum correlations" between the subsystems average out, is what decoherence is about.

In other words, decoherence tells you that you don't have to take into account anymore the funny quantum correlations of entangled subsystems, when both (or one of them) has entangled enough with unobserved degrees of freedom of the environment.

In other words, we can think of a two-sub-system quantum-mechanical evolution as the following:

1) "initial product state"
##|A\rangle_1 \otimes |B\rangle_2##

Both subsystems have "their own state" because we're in a product state. Subsystem 1 is in state A, and subsystem 2 is in state B.

2) "interaction and entanglement"

## |C\rangle_1 \otimes |D\rangle_2 \;+\; |E\rangle_1 \otimes |F\rangle_2 ##
This is the moment where "weird results" can be found due to quantum correlations. We cannot say that subsystem 1 is in state C nor in state E. System 1 is in state C "together with system 2 being in state D" and system 1 is in state E "together with system 2 being in state F". If we look upon system 1 in another basis, the correlations with system 2 seem to change. That's the "weirdness".

3) "decoherence"

## |K\rangle_1 \otimes |L\rangle_2 \otimes |Q\rangle_{\mathrm{env}} \;+\; |M\rangle_1 \otimes |N\rangle_2 \otimes |R\rangle_{\mathrm{env}} ##

If we would repeat the measurements of 2), we wouldn't find weird quantum correlations anymore, it looks now as if our system is in a *classical statistical mixture* of states (K,L) and states (M,N) without any interference. It looks like system 1 is in state K with a certain probability, and is in state M with the complementary probability. And there's a 1-1 correlation between the state of system 1 and the state of system 2.
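To make the "no interference" statement explicit (my own notation: put amplitudes ##a## and ##b## on the two branches, and write ##|KL\rangle## for ##|K\rangle_1 \otimes |L\rangle_2##): tracing out the environment from the state in 3) gives
$$\rho_{12} \;=\; \mathrm{Tr}_{\mathrm{env}}\, |\Psi\rangle\langle\Psi| \;=\; |a|^2\, |KL\rangle\langle KL| \;+\; |b|^2\, |MN\rangle\langle MN| \;+\; \Big( a\, b^{*}\, \langle R | Q \rangle\, |KL\rangle\langle MN| \;+\; \mathrm{h.c.} \Big) ,$$
and because the two environment states are macroscopically distinct, ##\langle R | Q \rangle \approx 0##, so the interference terms are negligible: ##\rho_{12}## is indistinguishable in practice from the classical mixture just described.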

So it SEEMS that decoherence has resolved the "entangled quantum state to statistical mixture" riddle.

Except that it didn't, because the argument is circular. The only thing that decoherence solves is the "weird quantum correlations" issue if we DO apply the Born rule, that is to say, if we APPLY the "quantum state to statistical mixture" transition; that is, if we "collapse the wavefunction and use the Born probabilities to do so". In doing so, thanks to decoherence, the WEIRD CORRELATIONS are gone. But this doesn't explain the quantum-state-to-statistical-mixture transition in itself, because we USED that transition to show that the correlations were gone.

So the aspect that isn't solved by decoherence is this:

The initial quantum superposition of states ends up in one of the terms, with a certain probability, given by the Born rule. THAT isn't solved. It can't be solved this way, because it is a non-unitary process. This remains a fundamental interpretational issue. It is the and/or problem.

The state ends up (with unitary evolution) in ## a |\psi_1\rangle + b |\psi_2\rangle ##, and "after measurement" we end up with probability ##|a|^2## in state ##|\psi_1\rangle## and with probability ##|b|^2## in state ##|\psi_2\rangle##.

Or: the quantum state ## a |\psi_1\rangle + b |\psi_2\rangle ## corresponds to a statistical mixture of ##|a|^2## of ##|\psi_1\rangle## and ##|b|^2## of ##|\psi_2\rangle##.

That transition is not explained by decoherence. The only thing decoherence tells us is that the interference terms are gone. But out of decoherence still comes the superposition ## a |\psi_1\rangle + b |\psi_2\rangle ##. And there's no unitary way to get this to end up in only ##|\psi_1\rangle## with probability ##|a|^2##.

It is only if we *accept* that the wave function gives rise to a statistical mixture, that we can conclude that this statistical mixture doesn't have weird correlations anymore. But the fact itself that we transit from a wave function to a statistical mixture, by itself, is not explained by decoherence, because we USE it.

All MWI-like interpretations maintain, in some way, that all of the terms in the wave function "remain in existence", but that it is an OBSERVER STATE that gets correlated with a system state. As such, you still get several observer states, correlated with different system states. And in the end, what happens is that *your experience* simply has chance ##|a|^2## of "being" the observer state that got correlated with ## a |\psi_1\rangle ##, and it is "your twin" that ended up with ## b |\psi_2\rangle ##.
 
  • #26
bhobba said:
MEH describes measurement as a process where a quantum system interacts with a measuring device. A “device” could be anything that interacts with the quantum object, not just what we would typically think of as a measuring device. This spreads information into the device until an information equilibrium is reached between the system and the device. The bigger the device, the more places there are for the quantum information to hide, making it harder to get that information back – but never impossible.

"One is then led to the question, what exactly can constitute an observer? [30]. The Measurement-Equilibration Hypothesis allows us to constrain this question, giving the answer that an observer is whatever system interacts with the system of interest such that the result equilibrates to a state that approximates objective state structure asymptotically."
-- https://arxiv.org/abs/2302.11253
This notion of observer kind of makes sense and, conceptually, is to me pretty much the idea of the Bohr CI, where "the whole macroscopic environment" is the "observer", provided you include enough of the environment in an asymptotic sense. But this still leaves us with a terrible fine-tuning problem as I see it, as the tomographic processes to find the Hamiltonians just won't work. Building explanations out of these fine-tuned things, which are inaccessible to a finite observer, is not an explanation at all in my view. It's a twist on the same old decoherence ideas; I'm not sure what is new.

/Fredrik
 
