't Hooft: an original train of thought

In summary: Gerard 't Hooft has written a paper proposing a deterministic basis for quantum mechanics, addressing the problem of the positivity of the Hamiltonian. He suggests that information loss may be the key to explaining this constraint and discusses the resulting equivalence classes of states. References to related work by other authors are provided. The thread also discusses a paper by Olaf Dreyer that offers a different perspective on probabilities in quantum mechanics, with several posters noting that the approach is still at an early stage and needs further development.
  • #1
marcus
On page 3 of this new paper by Gerard 't Hooft it is written:

This paper was written while these facts were being discovered, so that it represents an original train of thought, which may actually be useful for the reader.

Here is a paper that contemplates a deterministic basis for Quantum Mechanics and aspires to help "demystify" quantum mechanics.

http://arxiv.org/abs/quant-ph/0604008
The mathematical basis for deterministic quantum mechanics
Gerard 't Hooft
15 pages, 3 figures
ITP-UU-06/14, SPIN-06/12

Here is a quote from the conclusions:
When we attempt to regard quantum mechanics as a deterministic system, we have to face the problem of the positivity of the Hamiltonian, as was concluded earlier in Refs [1][2][3]. There, also, the suspicion was raised that information loss is essential for the resolution of this problem. In this paper, the mathematical procedures have been worked out further, and we note that the deterministic models that we seek must have short limit cycles, obeying Eq. (7.1). Short limit cycles can easily be obtained in cellular automaton models with information loss, but the problem is to establish the addition rule (7.4), which suggests the large equivalence classes defined by Eq. (4.2). We think that the observations made in this paper are an important step towards the demystification of quantum mechanics.
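To get some intuition for the claim that information loss produces short limit cycles, here is a small toy sketch of my own (not a model from the paper): a deterministic but non-invertible update rule on a finite state space. Trajectories merge, every initial state quickly lands on a short cycle, and the sets of states that merge onto the same cycle are a rough stand-in for the equivalence classes of Eq. (4.2).

```python
# Toy illustration (my own, not 't Hooft's model): a deterministic but
# non-invertible map on Z_64.  Because the map is many-to-one
# ("information loss"), trajectories merge and every initial state ends
# up on a short limit cycle.  Grouping states by the cycle they reach
# gives a rough analogue of the paper's equivalence classes.

N = 64

def update(s):
    """Deterministic, deliberately non-invertible rule: s and N - s collide."""
    return (s * s + 1) % N

def limit_cycle(s):
    """Follow the trajectory of s until it repeats; return its cycle."""
    seen = []
    while s not in seen:
        seen.append(s)
        s = update(s)
    cycle = seen[seen.index(s):]
    k = cycle.index(min(cycle))          # canonical starting point
    return tuple(cycle[k:] + cycle[:k])

classes = {}
for s in range(N):
    classes.setdefault(limit_cycle(s), []).append(s)

for cycle, members in classes.items():
    print(f"period {len(cycle)} cycle {cycle}: reached by {len(members)} of {N} states")
```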

Here is the abstract:
"If there exists a classical, i.e. deterministic theory underlying quantum mechanics, an explanation must be found of the fact that the Hamiltonian, which is defined to be the operator that generates evolution in time, is bounded from below. The mechanism that can produce exactly such a constraint is identified in this paper. It is the fact that not all classical data are registered in the quantum description. Large sets of values of these data are assumed to be indistinguishable, forming equivalence classes. It is argued that this should be attributed to information loss, such as what one might suspect to happen during the formation and annihilation of virtual black holes.
The nature of the equivalence classes is further elucidated, as it follows from the positivity of the Hamiltonian. Our world is assumed to consist of a very large number of subsystems that may be regarded as approximately independent, or weakly interacting with one another. As long as two (or more) sectors of our world are treated as being independent, they all must be demanded to be restricted to positive energy states only. What follows from these considerations is a unique definition of energy in the quantum system in terms of the periodicity of the limit cycles of the deterministic model."
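The last sentence of the abstract has a simple prototype, which I sketch here in my own notation (with hbar = 1 and a fixed time step assumed), just to make the "energy from the periodicity of the limit cycle" statement concrete:

```latex
% My own sketch, not taken from the paper: a deterministic limit cycle of N
% states, visited once per time step \delta t, is generated by the cyclic shift
\[
  U\,|k\rangle = |\,k+1 \;\mathrm{mod}\; N\,\rangle , \qquad U^{N} = \mathbb{1}.
\]
% Its eigenstates are discrete Fourier modes,
\[
  |n\rangle = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i n k / N}\,|k\rangle ,
  \qquad
  U\,|n\rangle = e^{-2\pi i n / N}\,|n\rangle .
\]
% Writing U = e^{-iH\,\delta t} and choosing the branch n = 0, 1, \dots, N-1,
\[
  E_n = \frac{2\pi n}{N\,\delta t} = \frac{2\pi n}{T}, \qquad T = N\,\delta t ,
\]
% so the Hamiltonian is bounded from below and the energy scale is fixed
% entirely by the period T of the limit cycle.
```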
 
  • #2
The references, made clickable:

Regards, Hans




[1] G. ’t Hooft, Class. Quant. Grav. 16 (1999) 3263, http://arxiv.org/abs/gr-qc/9903084; id., Quantum Mechanics and determinism, in Proceedings of the Eighth Int. Conf. on "Particles, Strings and Cosmology", Univ. of North Carolina, Chapel Hill, Apr. 10-15, 2001, P. Frampton and J. Ng, Eds., Rinton Press, Princeton, pp. 275-285, http://arxiv.org/abs/hep-th/0105105.

[2] H. Th. Elze, Deterministic models of quantum fields, http://tw.arxiv.org/abs/gr-qc/0512016.

[3] M. Blasone, P. Jizba and H. Kleinert, Annals of Physics 320 (2005) 468, http://arxiv.org/abs/quant-ph/0504200; id., Braz. J. Phys. 35 (2005) 497, http://tw.arxiv.org/abs/quant-ph/0504047.

[4] M. Gardner, Scientific American, 223 (4), 120; (5), 118; (6), 114 (1970).

[5] S. Nobbenhuis, Categorizing Different Approaches to the Cosmological Constant Problem, http://arxiv.org/abs/gr-qc/0411093, Foundations of Physics, to be published.

[6] G. ’t Hooft and S. Nobbenhuis, Invariance under complex transformations, and its relevance to the cosmological constant problem, http://arxiv.org/abs/gr-qc/0602076, Class. Quant. Grav., to be published.

[7] H. Th. Elze, Quantum fields, cosmological constant and symmetry doubling, http://arxiv.org/abs/hep-th/0510267.
 
  • #3
Oh, cool! I was just reading his "Quantum Mechanics and Determinism" again yesterday. I do like what he's trying to do in general, but in his older works I didn't see much beyond the two facts: (1) an oscillator is mathematically related to something moving in a circle, (2) for QFT, toss out the negative energy states. I'll go read this new paper though -- maybe there's some good stuff there.

I also like this topological approach to describing QM:
http://xxx.lanl.gov/abs/gr-qc/9503046
but it also isn't quite there yet.
 
  • #4
I like this paper, thank you.
I think that probability theory may need to be changed.
I don't understand why observers appear in the whole theory. What can they do? Just observe the universe? They have ideas, but they can't change the results of physics. Why?
 
  • #5
Dreyer

Hi there,

for those of you interested in the topic, I read yesterday a paper by Olaf Dreyer, which I personally find one of the best papers I have read in a long time. It's very readable, though I didn't quite get all the details. I can really recommend it.



B.

"[URL [Broken] Probabilities in Quantum Mechanics
Authors: Olaf Dreyer[/URL]
http://xxx.lanl.gov/abs/quant-ph/0603202

The transition from the quantum to the classical is governed by randomizing devices (RD), i.e., dynamical systems that are very sensitive to the environment. We show that, in the presence of RDs, the usual arguments based on the linearity of quantum mechanics that lead to the measurement problem do not apply. RDs are the source of probabilities in quantum mechanics. Hence, the reason for probabilities in quantum mechanics is the same as the reason for probabilities in other parts of physics, namely our ignorance of the state of the environment. This should not be confused with decoherence. The environment here plays several, equally important roles: it is the dump for energy and entropy of the RD, it puts the RD close to its transition point and it is the reason for probabilities in quantum mechanics. We show that, even though the state of the environment is unknown, the probabilities can be calculated and are given by the Born rule. We then discuss what this view of quantum mechanics means for the search of a quantum theory of gravity.
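To get a feel for what a "dynamical system that is very sensitive to the environment" does here, a little toy sketch of my own (generic, not Dreyer's construction): a particle balanced on the unstable maximum of a double-well potential. An unresolvably small environmental kick decides which well it falls into, so the deterministic dynamics amplifies a microscopic unknown into a macroscopic binary outcome, and the outcome statistics reflect only our ignorance of that kick.

```python
import random

# Toy "randomizing device" (my own illustration, not Dreyer's construction):
# overdamped motion dx/dt = -V'(x) = x - x^3 in the double-well potential
# V(x) = -x^2/2 + x^4/4.  Starting on the unstable maximum at x = 0, a tiny
# unknown environmental kick eps is amplified exponentially and fixes the
# macroscopic outcome (left or right well).  Given eps the evolution is fully
# deterministic; the statistics come only from our ignorance of eps.

def outcome(eps, dt=0.01, steps=5000):
    x = eps                              # microscopic environmental kick
    for _ in range(steps):
        x += (x - x**3) * dt             # deterministic relaxation
    return +1 if x > 0 else -1           # macroscopic pointer reading

trials = [outcome(random.gauss(0.0, 1e-9)) for _ in range(10000)]
print("fraction falling to the right:", trials.count(+1) / len(trials))  # ~0.5
```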
 
  • #6
I agree it is unsound to assume linearity in QM treatments. As I recall, Loll et al. posed a similar objection in another fairly recent paper.
 
  • #7
Thanks for calling attention to that paper, Hossi. It's one of the clearest expositions of this idea (Zurek and others call it envariance) I've seen, with quite a few perspectives that were new to me.

In the MWI in particular, envariance seems clearly insufficient to get probabilities. From this perspective it looks much more feasible that it could be done...
 
  • #8
hossi said:
Hi there,

for those of you interested in the topic, I read yesterday a paper by Olaf Dreyer, which I personally find one of the best papers I have read in a long time. It's very readable, though I didn't quite get all the details. I can really recommend it.



B.
Dreyer seems to assert that the measurement problem could be solved by assuming an *extreme* sensitivity of the behaviour of the measurement apparatus to the environment. Concretely, he seems to suggest that if
e_1 (e_2) is such that |a> |N> |e_1> (|b> |N> |e_2>) evolves into |a> |A> |e'_1> (|b> |B> |e'_2>), then, say, it cannot be that |a> |N> |e_2> evolves into |a> |A> |e''_2>, since in that case (x|a> + y|b>) |N> |e_2> would evolve into x|a> |A> |e''_2> + y|b> |B> |e'_2> and we would still have the problem of superposed macroscopic states. Or, in such a case, measurement would not occur (?)
If I have understood this correctly, I do not see why such a claim would be true. Reasonably, one would expect a certain robustness of the measurement process against the environmental conditions (that the process of measurement itself requires an environment is an entirely different issue, and one I agree with). After all, |e_1> and |e_2> would be almost identical from the macroscopic point of view. For example, if I had a source emitting spin-up particles in the z-direction and I put my Stern-Gerlach apparatus along the z-axis, then (i) I would expect almost all particles to be registered (if source and apparatus were sufficiently close together), and (ii) almost all particles would register spin up despite some small random environmental noise. The same would of course be true for spin-down states, and hence for superpositions. If I abandon (i) or (ii), then what can I claim about my original sample?
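Spelled out, the linearity argument I have in mind is the standard one (my own sketch in my notation, with the same environment state serving both branches, not a quote from the paper):

```latex
% If one and the same environment state |e\rangle worked for both branches,
\[
  |a\rangle|N\rangle|e\rangle \;\longrightarrow\; |a\rangle|A\rangle|e'_a\rangle ,
  \qquad
  |b\rangle|N\rangle|e\rangle \;\longrightarrow\; |b\rangle|B\rangle|e'_b\rangle ,
\]
% then linearity of the evolution would force
\[
  \bigl(x|a\rangle + y|b\rangle\bigr)|N\rangle|e\rangle
  \;\longrightarrow\;
  x\,|a\rangle|A\rangle|e'_a\rangle \;+\; y\,|b\rangle|B\rangle|e'_b\rangle ,
\]
% i.e. a superposition of macroscopically distinct pointer states |A\rangle
% and |B\rangle.  As I read Dreyer, his claim is that the premise fails: the
% environment state that makes the a-branch register is never the same one
% that makes the b-branch register.
```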

The trick in the decoherence approach is to trace out the degrees of freedom corresponding to the environment, which is what makes the ``effective´´ evolution of the system + measurement apparatus non-unitary (apart from very large recurrence times), and this is what kills off the off-diagonal density matrix elements (at least in the preferred basis). Where is the non-unitary mechanism in this approach? Formula 26 seems to suggest the approach is unitary. Moreover, is the measurement problem not precisely the fact that Unitary(system tensor rest) is generically not a state to which a definite value can be attributed, i.e. that Unitary(system tensor rest) is not in O (unless you try to solve this by such high sensitivity to environmental conditions and still recover the Born rule)? Concerning the latter remark, symmetry groups are usually not that large, of course.
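Just to fix what I mean by ``tracing out the environment kills off the off-diagonal elements´´, here is a minimal numerical sketch (standard textbook decoherence, my own code, nothing specific to Dreyer's paper):

```python
import numpy as np

# Minimal decoherence sketch (standard textbook material, my own code):
# system + environment in the entangled state
#   |psi> = x |0>|E0> + y |1>|E1>.
# Tracing out the environment leaves the reduced density matrix of the
# system; its off-diagonal ("coherence") terms are proportional to <E0|E1>,
# so they vanish when the environment states are orthogonal.

x, y = 1 / np.sqrt(2), 1 / np.sqrt(2)

def reduced_density(E0, E1):
    """rho_sys = Tr_env |psi><psi| for |psi> = x|0>|E0> + y|1>|E1>."""
    psi = x * np.kron([1, 0], E0) + y * np.kron([0, 1], E1)
    rho = np.outer(psi, psi.conj())
    d = len(E0)
    rho = rho.reshape(2, d, 2, d)
    return np.trace(rho, axis1=1, axis2=3)   # partial trace over environment

E0 = np.array([1.0, 0.0])                    # orthogonal environment states
E1 = np.array([0.0, 1.0])
print(np.round(reduced_density(E0, E1), 3))      # diagonal: coherences gone

E1_same = E0                                 # identical environment states
print(np.round(reduced_density(E0, E1_same), 3)) # coherences survive
```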

Anyway, comments welcome ...

Cheers,

Careful
 
  • #9
Careful, this is different from decoherence. As far as I understand, he does not say that the measurement process is extremely sensitive but that the outcome of the measurement is extremely sensitive.
The process is robust to microscopic variations, except at the degrees of freedom it measures, and in half the cases the microscopic variations in these degrees would tip it to one side, and in the other half to the other.
He does refer to Zurek and co. for the non-50/50 case, which is unfortunate since that is of course precisely where the possible pitfalls of this approach lie.
 
  • #10
** Careful, this is different from decoherence. **

Of course it is; I never claimed so. :grumpy: I was merely recalling the reasoning behind decoherence, which is instructive for understanding why this scheme could or could not ``work´´.

** As far as I understand, he does not say that the measurement process is extremely sensitive but that the outcome of the measurement is extremely sensitive. **

Sensitive to the environmental state, so what do you think I meant by a sensitive process? :eek:

** The process is robust to microscopic variations, except at the degrees of freedom it measures, and in half the cases the microscopic variations in these degrees would tip it to one side, and in the other half to the other. **

So, it is not robust to microscopic variations, and of course everyone understands this example of the pendulum.

Look, f-h, I am perfectly willing to discuss this approach rationally (I only heard of it through this paper), but you have not addressed any of my objections so far. I encourage any attempt to say something about the measurement problem, but I see some pitfalls in this line of thought, and perhaps somebody can clarify these issues for me.

Cheers,

Careful
 
  • #11
I agree that there are pitfalls; I don't really have an opinion yet.

From my understanding it is completely independent of decoherence, though. Maybe not. That's what I meant.

Anyway, I think the point is that if you prepare a superposition you get a different environmental microstate than if you prepare the individual states. From this point of view the whole process, preparation + measurement, makes a macroscopic state depend on a microscopic state.
In this sense the fact that preparation + detection happen is robust, but aspects of what precisely will happen are not, except in special cases (like the all-up or all-down experiments).

From this perspective the superposition argument does not apply, since you would need to start with a macroscopic superposition to finish with one.

So if the old objection does not apply, there is hope that maybe one can get measurement with its probabilities from unitary mechanics. I'm very much unconvinced that they have demonstrated this already. The last time I looked at these arguments it was in the context of the MWI, where I believe they don't work; here maybe they could? Who knows...
 
  • #12
Careful said:
would evolve into x|a> |A> |e''_2> + y|b> |B> |e'_2> and we would still have the problem of superposed macroscopic states. Or, in such a case, measurement would not occur (?)

Hi Careful, thanks for the comment, it comes close to the problem I had with the approach. Let me first tell you what I like about it: a) the topic in general, b) the idea of having very unstable initial conditions (caused by the environment) that eventually only allow pure states to be detected. I also find the description in terms of randomizing devices useful, and the paper is very well written.

What I don't understand is: if one part of the superposition kicks the measuring device into a certain state that is detected, what happens to the rest of the superposition, and why? I think the fact that measurement does occur is kind of an initial assumption (or one only looks at those cases where it occurs), so that does not work. E.g. what happens in a typical EPR situation, with one detector on the left and one on the right? I don't think the approach says anything about that problem; instead it's useful for the description of the measuring device (at least, that's what I take out of it).

I would be thankful for any inspiration, since it's a topic I am (for whatever absurd reasons) currently interested in.



B.
 
  • #13
**
What I don't understand is: if one part of the superposition kicks the measuring device into a certain state that is detected, what happens to the rest of the superposition, and why? **

I think the paper suggests that the superposition couples to a different environment, which will drive the composite system into one of the ``measurement states´´. But indeed, in this philosophy it seems that many states, when coupled to the wrong environments, will go undetected. The suggestion here is that you cannot ``do that´´, i.e. the preparation of the state will create the ``right´´ environment (at least that is how I understand it), but this argument seems very implausible to me (it comes close to non-local conspiracy theories). :smile:

** E.g. what happens in a typical EPR situation, with one detector on the left and one on the right? I don't think the approach says anything about that problem; instead it's useful for the description of the measuring device (at least, that's what I take out of it). **

Right, it does not add anything to the problem of extremely non-local collapses (the detector mechanism as seen here is entirely local).

**
I would be thankful for any inspiration, since it's a topic I am (for whatever absurd reasons) currently interested in. **

You don't have to take my word for it, but my intuition tells me the measurement process must be quite stable (the localized microscopic impacts of particles on measurement apparatuses are quite violent) and pretty independent of the history of preparing the sample which is to be measured (hence no special environmental conditions which are predetermined for detection :smile: ). Of course, this brings us back to EPR experiments, but that discussion would be too long, and my position is to keep an open mind (as long as experiments aren't conclusive).

Cheers,

Careful
 
  • #14
Careful said:
The suggestion here is that you cannot ``do that´´, i.e. the preparation of the state will create the ``right´´ environment (at least that is how I understand it), but this argument seems very implausible to me (it comes close to non-local conspiracy theories). :smile:

:cool: Thanks indeed, that's how I also interpreted it. I agree that it's weird, but it goes well with some thoughts I have had. Maybe it will turn out to be useful at some point.



B.
 
  • #15
This topic should really be in the quantum physics forum.
 
  • #16
**
From my understanding it is completely independent of decoherence, though. Maybe not. That's what I meant. **

Hmm, one would expect some similarities (but I would have to think further about it).

**
Anyway, I think the point is that if you prepare a superposition you get a different environmental microstate than if you prepare the individual states. From this point of view the whole process, preparation + measurement, makes a macroscopic state depend on a microscopic state. **

OK, label the preparation of sample a (b) by A (B), requiring apparatus setups AA (BB). Now, after the samples have been prepared, AA and BB are removed, and one could legitimately consider separating a and b from the laboratory in which they were prepared (clean the boxes in which a and b are sitting :wink:) and bringing them into a totally independent new laboratory 30 miles away from the first one, where the final experiment is done. In order to maintain this picture, one would need to claim that the ``environmental trace´´ is left in the boxes carrying the particles, or that the particles themselves carry tags remembering the detailed history of their environmental embedding (even we humans tend to forget such things after 20 years). Obviously, you can wipe out this environmental trace even further by putting the particles in new boxes in different laboratories again and again (this will also change the value of the tags, taking into account that those can only carry a finite amount of information)... prior to the final experiment. So no, I do not think the specific details of the environment *after* preparation of the sample play any significant part in the outcome of the measurement, or, even more exotically, in deciding whether something shall be measured or not.


**
In this sense the fact that preparation + detection happen is robust, but aspects of what precisely will happen are not, except in special cases (like the all-up or all-down experiments). **

A small comment here: why would the spin measurement be more robust than anything else? Actually, we never measure the value of the spin observable directly; we always measure the position observable (and in good Stern-Gerlach experiments, for sufficiently heavy particles such as silver atoms, these measurements are pretty accurate too).

Cheers,

Careful
 
  • #17
marcus said:
On page 3 of this new paper by Gerard 't Hooft it is written:

This paper was written while these facts were being discovered, so that it represents an original train of thought, which may actually be useful for the reader.

Here is a paper that contemplates a deterministic basis for Quantum Mechanics and aspires to help "demystify" quantum mechanics.
It is nice to see that I am not the only "demystifier" of QM. :biggrin:
 

