Hardy's approach to quantum gravity and QM interpretation

  • #31
martinbn said:
I might be wrong, but I think your analogy is not good. The analogy would be that the boundary is the sphere at all times, not just ##t=0##, i.e. ##S^2\times\mathbb R##, and the bulk is the ball at all times. In other words the boundary is the cylinder and the bulk is the full solid cylinder.
The AdS/CFT correspondence states that two theories have the same number of degrees of freedom. Loosely speaking, the number of degrees of freedom is the same as the amount of Cauchy data. Since Cauchy data is given at a fixed time (say ##t=0##), it should be clear why I think that my analogy is good.
 
  • #32
martinbn said:
given a solution on the boundary there is one in the bulk
That's simply not true in classical electrodynamics. There are many different solutions in the bulk that have the same behavior at the boundary. For instance, let ##\rho(r)## be a static spherically symmetric charge density with the property
$$\rho(r)=0 \;\; {\rm for} \;\; r\geq R$$
and a fixed total charge, say
$$\int_0^R \rho(r) 4\pi r^2dr=100$$
(I put ##dr## at the right for you). Different configurations ##\rho(r)## with those properties correspond to different solutions in the bulk that all have the same behavior at the boundary ##r=R##.
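
To make the point concrete, here is a minimal numerical sketch (my own illustration, not part of the original post): two different ##\rho(r)## with the same total charge give the same field at ##r \geq R##, since by Gauss's law the exterior field depends only on the enclosed charge.

Python:
# Minimal sketch: two different spherically symmetric densities rho(r), both
# vanishing for r >= R and both carrying total charge Q = 100, give the same
# field outside r = R (Gauss's law: only the enclosed charge matters there).
import numpy as np

R, Q = 1.0, 100.0
r = np.linspace(0.0, R, 100_001)

rho_uniform = np.full_like(r, Q / (4.0 / 3.0 * np.pi * R**3))   # uniform ball
rho_surface = r**2 * Q / (4.0 * np.pi * R**5 / 5.0)             # weighted toward the surface

def total_charge(rho):
    # trapezoidal integral of rho(r) * 4 pi r^2 dr from 0 to R
    integrand = rho * 4.0 * np.pi * r**2
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

for name, rho in [("uniform", rho_uniform), ("surface-weighted", rho_surface)]:
    q = total_charge(rho)
    E_outside = q / (2.0 * R) ** 2   # Gaussian units: E(r) = Q_enc / r^2, evaluated at r = 2R
    print(f"{name}: total charge = {q:.2f}, E(2R) = {E_outside:.2f}")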
 
  • #33
Demystifier said:
The AdS/CFT correspondence states that two theories have the same number of degrees of freedom. Loosely speaking, the number of degrees of freedom is the same as the amount of Cauchy data. Since Cauchy data is given at a fixed time (say ##t=0##), it should be clear why I think that my analogy is good.
But in your analogy you take a fixed-time piece of the boundary and the whole bulk. Why not a fixed-time slice of the bulk? That's where the Cauchy data lives anyway. What you describe is not a space and its boundary, but a space and part of the boundary. I suppose my misunderstanding comes from not knowing what the conjecture is, and until I learn more about it, it will not be clear to me, but it is very unlikely for me to be able to find a readable source.
Demystifier said:
That's simply not true in classical electrodynamics...
No, you are conflating the two different analogies. The one in the post to you had nothing to do with EM; it was just an example of a boundary. For that you will need some elliptic equations, otherwise there are plenty of wave examples with a fixed boundary (you gave one in the beginning). The EM analogy, in the post to PAllen, has the initial hypersurface as its boundary.
 
  • #34
I'm guessing no one apart from the OP has yet read the paper, which is why this thread is getting derailed, starting here:
There is already one good suggestion for quantum gravity - AdS/CFT ...
Hardy's paper is not about AdS/CFT, although he does state in section 17.1 that time evolution of states on spacelike hypersurfaces isn't possible given an indefinite causal structure, and that neither overlapping coordinate patches (or their conformal counterparts) nor gauge fixing can solve this problem.

To get this thread back on track, I'll try to briefly illustrate what Hardy is proposing (I've actually been working on a similar idea myself, for different reasons), namely a constructivist framework: essentially a new conceptual research methodology for theorists, for constructing (physical) theories based on the problems with existing theories.

Hardy illustrates this with a historical example, namely how Einstein tackled the problem of relativistic gravity and how he formulated a physical theory thereof (GR). Hardy distills a few key steps in this reasoning process and generalizes them into his constructive methodology. Somewhat confusingly, he calls his constructive methodology an interpretation ("Constructive Interpretation") of QM, but it is no such thing, as he immediately admits, arguing instead that the interpretative issue of QM will possibly resolve itself in some deeper theory.

Summarized, the problem facing Einstein was to find a deeper theory wherein both Newtonian gravity and SR field theories, most prominently Maxwell's electrodynamics, arise as different limiting cases. Einstein, unlike most theoretical physicists today, did this by philosophically reasoning about the conceptually conflicting principles underlying the old theories, identifying which of them are necessary, and then, through reformulation, trying to bring them into harmony under one unified conceptual framework consisting of only necessary ingredients. Only when this step is finished is the mathematics of the theory modified, specifically by replacing the older mathematical formalism with more appropriate mathematics.

This is Hardy's constructive framework:
A. Defining the problem:
Newton Gravity ← Relativistic Gravity → SR Field Theories
B. Philosophical clarification, identification and simplification of the necessary principles and properties:
1. Equivalence principle
2. No global inertial reference frame
3. General coordinates
4. Local physics
5. Laws expressed by field equations
6. Local tensor fields based on tangent space
7. Principle of general covariance
C. Modification of the mathematics of the theory:
I. Prescription: turning SR field equations into GR field equations (see the illustration just after this list)
II. Addendum: The Einstein field equations
III. Interpretation: geometric interpretation follows naturally from diffeomorphism invariance
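To give a flavor of what step C.I amounts to in the GR case (this is the standard minimal-coupling rule, stated here as my own gloss rather than a quotation from Hardy's paper): an SR field equation is carried over to curved spacetime by replacing the Minkowski metric with the dynamical metric and partial derivatives with covariant derivatives, e.g. for Maxwell's equations
$$\eta_{\mu\nu} \to g_{\mu\nu}, \qquad \partial_\mu \to \nabla_\mu, \qquad \partial_\mu F^{\mu\nu} = J^\nu \;\to\; \nabla_\mu F^{\mu\nu} = J^\nu.$$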

This constructive framework is, as Hardy says, completely general, i.e. it is a theory-independent constructive methodology: it doesn't limit itself to any particular theory or formulation thereof. Instead the framework can, in principle, be used to solve any fundamental problem in physics through the process of analogy. Hardy illustrates this by way of example, i.e. by using the framework to tackle the problem of quantum gravity:

A. Defining the problem:
GR ← Quantum Gravity → SR QFTs
B. Philosophical clarification, identification and simplification of the necessary principles and properties:
1. Dynamical causal structure (from GR) and indefiniteness (from QT)
2. Indefinite causal structure
3. Compositional space
4. Formalism locality
5. Laws given by correspondence map
6. Boundary mediated compositional description
7. Principle of general compositionality
C. Modification of the mathematics of the theory:
I. Prescription: turning QFT calculations into QG calculations
II. Addendum: new physicality conditions for Quantum Gravity
III. Interpretation: will also follow naturally (?)

The particular form of QFT that he utilizes in this example is his own Operator Tensor QFT formalism (NB: as far as I can see, largely an application of Penrose diagrammatic notation; a toy sketch of that diagrammatic idea follows after the list below); it goes without saying that this is mathematically equivalent to standard QFT, but the point is:
1) psychologically, it might represent a more natural setting for deriving C.I-III based on the conceptual issues B.1-7
2) in the context of mathematics itself, the correct mathematics needed for an extension to actually carry out C.I and C.II might even already exist.
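
As a toy illustration (my own sketch, not Hardy's operator tensor formalism itself): in Penrose-style diagrammatic notation an object is a box with legs (indices), and composing objects means wiring legs together, i.e. contracting over the shared index. In code this is just a tensor contraction; the operator tensor framework does the analogous bookkeeping with operators rather than plain arrays.

Python:
# Toy sketch (hypothetical shapes): composing two "boxes" A and B by wiring
# one leg of A to one leg of B, i.e. summing over the shared index j.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))    # box A with legs i, j
B = rng.standard_normal((3, 4))    # box B with legs j, k

C = np.einsum("ij,jk->ik", A, B)   # wire the j legs together; open legs i, k remain
print(C.shape)                     # (2, 4)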

In any case, I myself am convinced that adherence to some kind of methodology like this one is necessary to actually make great progress in theoretical physics today, which has been dominated by purely technical reasoning since the days of Feynman. Purely technical reasoning has been successful in creating relativistic QFT and the SM, but it seems hopeless for going beyond them, which is clearly reflected in the now decades-long stagnation of fundamental theoretical physics.

In my opinion, such conceptual frameworks or methodologies, if even partially successful, should be taken a step further: not just a framework for one problem, but an entire research programme approaching all fundamental problems. This also shouldn't be done from the single point of view of one theory given some problem, but opportunistically, from the pluralistic point of view of all available competing theories given that problem; this would then enable a direct hierarchical classification and discovery of the interrelationships between (all) physical theories and their possible extensions, in the same spirit as the 8 possible kinematical groups for a uniform and isotropic universe discovered by Bacry and Lévy-Leblond.
 
  • #35
MathematicalPhysicist said:
The whole edifice, loads of calculations and nothing is truly rigorously mathematically justified.

Perhaps you haven't been following Urs's excellent series, e.g.:
https://www.physicsforums.com/insights/newideaofquantumfieldtheory-interactingquantumfields/

As can be seen it can be done in the style you alluded to - the question is, does it appeal? Sometimes being non-rigorous can be illuminating. The so-called zeta function ##\zeta(s) = 1/1^s + 1/2^s + 1/3^s + \dots## has a non-rigorous evaluation for ##s = -k## where ##k## is an integer (it's used in zeta function regularization and the calculation of the Casimir force, for instance):

$$\sum_k \frac{(-1)^k\,\zeta(-k)\,x^k}{k!} = \sum_k\sum_n \frac{n^k(-x)^k}{k!} = \sum_n\sum_k \frac{(-nx)^k}{k!} = \sum_n e^{-nx}.$$
Let ##S = \sum_{n\geq 1} e^{-nx}##. Then ##e^x S = 1 + S##, so ##S = \frac{1}{e^x-1} = \frac{1}{x}\cdot\frac{x}{e^x-1}##. But one of the definitions of the so-called Bernoulli numbers ##B_k## is
$$\frac{x}{e^x-1} = \sum_k \frac{B_k\,x^k}{k!},$$
so, taking the ##1/x## into the sum and changing the summation index so that you still have powers of ##x^k##, ##S = \sum_k \frac{B_{k+1}\,x^k}{(k+1)!}##. Thus
$$\sum_k \frac{(-1)^k\,\zeta(-k)\,x^k}{k!} = \sum_k \frac{B_{k+1}\,x^k}{(k+1)!},$$
and equating the coefficients of ##x^k## gives
$$\zeta(-k) = \frac{(-1)^k\,B_{k+1}}{k+1}.$$

This result implies the bizarre identities ##\zeta(0) = 1+1+1+1+\dots = -1/2## and ##\zeta(-1) = 1+2+3+4+\dots = -1/12##.
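
As a quick sanity check on the final formula (my own addition, not part of bhobba's post; it assumes SymPy is available), one can compare ##\zeta(-k)## with ##(-1)^k B_{k+1}/(k+1)## exactly for a few integers ##k \geq 1##:

Python:
# Hedged sketch: verify zeta(-k) = (-1)^k * B_{k+1} / (k+1) for k = 1..8
# using SymPy's exact zeta values and Bernoulli numbers.
from sympy import zeta, bernoulli, Integer

for k in range(1, 9):
    lhs = zeta(-Integer(k))                      # exact rational via analytic continuation
    rhs = (-1)**k * bernoulli(k + 1) / (k + 1)   # formula from the derivation above
    print(k, lhs, rhs, lhs == rhs)

(I start at ##k=1## to sidestep the sign convention for ##B_1##.)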

Why such 'silly' results? Well, a more rigorous derivation uses contour integration. The infinity you get in summing such things comes from a pole ##1/(s-1)## that appears when the equation is written in a certain form, i.e. one finds that ##\zeta(s) - 1/(s-1)## is a perfectly well-behaved function for all ##s## (you can use the Euler-Maclaurin formula to show this if you want - but assume ##s>1## and use analytic continuation, otherwise the term is an infinite integral). There is a trick using contour integration along what's called a Hankel contour that avoids that infinity - it rears its ugly head at ##s=1##, but it's not there otherwise and you get the finite answers.
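
For reference (my addition, not part of bhobba's post), the standard contour-integral representation behind this remark is Riemann's formula
$$\zeta(s) = \frac{\Gamma(1-s)}{2\pi i} \oint_{\mathcal H} \frac{(-z)^{s-1}}{e^z - 1}\,dz,$$
where ##\mathcal H## is the Hankel contour wrapping around the positive real axis. The right-hand side defines an analytic function for all ##s \neq 1##; the only singularity is the simple pole at ##s=1## mentioned above.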

Rigorous: more difficult, but sound. Non-rigorous: not as difficult, but it has issues, such as changing the order of summation in my derivation - why can you do that?

Thanks
Bill
 
  • #36
@bhobba I agree that the modern desire for mathematical rigor is not necessarily as warranted, or even as useful, in theoretical physics as it is in mathematics; I could go on at great length about this, but this is neither the time nor the place for that discussion.

btw, seeing the zeta function: have you already heard of Atiyah's purported proof? This is the most excited I have been about pure mathematics (as opposed to physics) in at least a decade.
 
  • #37
Auto-Didact said:
btw, seeing the zeta function: have you already heard of Atiyah's purported proof? This is the most excited I have been about pure mathematics (as opposed to physics) in at least a decade.

Saw that one and am excited as well. We will need to wait and see if it's verified. This is where 'pure math' comes into its own. Terry Tao will likely write about it in his blog at some point - that's when I will look at it more carefully.

What I gave was simply an example of the kind of math physicists use and of what's required to make it rigorous; doing that deepens one's understanding of what's going on in the first place. In physics a hand-wavy demonstration like the one I posted is fine, but a few words about what can only be justified rigorously are often of value - though unless you are interested there's no need for the detail. What interests me pure-math-wise waxes and wanes, as I think is true for many guys who, like me, were trained in math but seduced by physics.

Thanks
Bill
 
  • #38
I agree with the essence of the below and it's right in line with my approach as well.

Auto-Didact said:
Einstein, unlike most theoretical physicists today, did this by philosophically reasoning about the conceptually conflicting principles underlying the old theories, identifying which of them are necessary, and then, through reformulation, trying to bring them into harmony under one unified conceptual framework consisting of only necessary ingredients. Only when this step is finished is the mathematics of the theory modified, specifically by replacing the older mathematical formalism with more appropriate mathematics.
...
This constructive framework is, as Hardy says, completely general, i.e. it is a theory-independent constructive methodology: it doesn't limit itself to any particular theory or formulation thereof. Instead the framework can, in principle, be used to solve any fundamental problem in physics through the process of analogy. Hardy illustrates this by way of example, i.e. by using the framework to tackle the problem of quantum gravity:
...
In my opinion, such conceptual frameworks or methodologies, if even partially successful, should be taken a step further: not just a framework for one problem, but an entire research programme approaching all fundamental problems.

This is how I see this, briefly:

The specific "conflicting principles" and the "philosophical reasoning" we need here are not random philosophy but the kind relevant to the logic of science and the logic of inference. Scientific knowledge, as compared to other random beliefs, is about backing up your beliefs with documented evidence; the next step is to quantify this, and here we immediately get into the foundations of probability theory. I.e. we rationally hold a belief because it is more likely when "counting the evidence" for the various options.

All of this is no news, but what I mean by taking this to the next level is to face both the physical constraints that matter systems impose on these inferences, and the influence that the inferences we make have on stabilising matter - note the striking similarity to the feedback between matter and geometry. Note also how such abstractions exist in financial markets: market expectations, no matter how "wrong" from a certain perspective, can actually stabilize things, and thus explain things.

This logic governs not only expectations about the future based on dynamical laws; it also applies to our knowledge of dynamical law itself. Failing to see this eventually leads to the cosmological fallacy, as Smolin coined it. This is what I label the logic of inference. It is the common root of both the logic of science and the mathematics of probability theory. Thinking about this also brings us back to the roots of mathematics, because we have concepts like "counting evidence". We need a model for this that respects the physical constraints, i.e. the counting is executed within the complexions of an observer, so we cannot make heedless use of fictive infinite and uncountable systems here unless we really tame them - this is so paramount that handwaving here is not an option imo.

As we know, all predictions of QM and QFT take the form of "expectations". But we rarely think of the dynamical laws themselves as expectations, at least not to the extent we should. This is just one of the pathological symptoms I see.

From skimming Hardy's paper I am not sure how far he has come, but his description of physical law looked suspicious to me.

/Fredrik
 
