Constructive QFT - current status

Thread starter: Auto-Didact
  • #1
Auto-Didact
I haven't been up to date on the state of the art in this field for quite some years now; my knowledge ends with Rivasseau's review from 2000. A quick glance at the topic over at the nLab suggests that practically nothing has changed.

Is anyone working in the field here, more up to date on the current state of the art, willing to address whether there has been major progress on the problems listed in Rivasseau's review? More explicitly, has there been a full constructive formulation of QFT in 4 dimensions? And if not, how far away are we projected to be?
 
  • #2
Auto-Didact said:
has there been a full constructive formulation of QFT in 4 dimensions? And if not, how far away are we projected to be?
No. We are as far away as it takes to make an unforeseen breakthrough.
 
  • #3
Thanks... I will try not to cry myself to sleep later :cry:
 
  • #4
There has been some progress (more field theories constructed on curved backgrounds in lower dimensions), but nothing of major note to somebody not deeply immersed in the field.

However, the issue is often presented as something "deep" about the four-dimensional case, whereas the real difficulties come from such theories being "only" renormalizable and requiring coupling constant renormalization.

Many of the techniques in constructive field theory involve slicing the path integral into several subintegrals, each confined to a certain band of four-momenta. One then performs perturbation theory to a certain order at each length scale, via a form of functional integration by parts, with a nonperturbative remainder. Renormalization renders the perturbative parts finite, and the nonperturbative part can be bounded; in this way one can analytically control the approach to the continuum limit and prove that it is finite. Typically one needs to go further into perturbation theory as the length scale goes to zero (energy goes to infinity) in order to bound the nonperturbative part.
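As a crude numerical picture of the slicing (my own toy, not from any constructive paper): in ##d = 1## the free covariance ##C(p) = 1/(p^2 + m^2)## can be decomposed into dyadic momentum bands, and the band contributions recombine into the full covariance at coinciding points, ##C(0) = \int \frac{dp}{2\pi}\, C(p) = 1/(2m)##. In the real constructive setting each band carries its own effective coupling, but the bookkeeping starts from exactly this kind of decomposition.

```python
import numpy as np

# Toy sketch (illustrative only): slice the free covariance C(p) = 1/(p^2 + m^2)
# in d = 1 into dyadic momentum bands M^j <= |p| < M^(j+1) and check that the
# slice contributions sum back to the full covariance at coinciding points,
# C(0) = ∫ dp/(2π) C(p) = 1/(2m).
m = 1.0

def trap(f, p):
    """Simple trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p)))

def band(p_lo, p_hi, n=100_000):
    """Contribution of the band p_lo <= |p| < p_hi to C(0)."""
    p = np.linspace(p_lo, p_hi, n)
    return 2.0 * trap(1.0 / (p**2 + m**2), p) / (2.0 * np.pi)  # factor 2: both signs of p

M = 2.0  # slice ratio
slices = [band(0.0, 1.0)] + [band(M**j, M**(j + 1)) for j in range(40)]

total = sum(slices)
exact = 1.0 / (2.0 * m)
print(total, exact)  # the slice contributions recombine into the full covariance
```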

However there are three problems.

First, if the theory is only renormalizable, the divergences within perturbation theory itself make the whole expansion difficult to control. The nonperturbative remainder will be divergent, and for this reason one essentially always has to obtain optimal estimates on every aspect of the functional integrals: non-optimal bounds will mask the very precise cancellations that permit the existence of a continuum limit. In superrenormalizable theories one can make incredibly non-optimal, crude bounds and still prove convergence.

Secondly, most techniques operate via estimates against Gaussian integrals, i.e. the free theory. This is very easy to do when the coupling constant is simply a number ##\lambda##, with the free case given by ##\lambda = 0##: we can prove estimates bounding quantities by polynomials in ##\lambda##, demonstrate continuity of bounds in ##\lambda##, and so on. However, when ##\lambda## itself has divergences that balance those in the integral, all of this goes out the window.
Coupling constant renormalization also introduces overlapping divergences, and in addition we have renormalons when summing the perturbative series, which affect estimates on the perturbative part of these expansions.
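A zero-dimensional caricature of my own, purely to illustrate why a plain numerical coupling is so convenient: for ##\lambda \ge 0## the interaction factor ##e^{-\lambda x^4} \le 1## pointwise, so every interacting quantity is immediately bounded by its free (##\lambda = 0##) counterpart and varies continuously in ##\lambda##. Nothing like this survives once ##\lambda## itself must diverge with the cutoff.

```python
import numpy as np

# 0D caricature (illustrative only): with a plain coupling lam >= 0, the
# interaction weight e^{-lam x^4} <= 1 pointwise, so interacting expectations
# are trivially bounded by free (Gaussian) ones and vary continuously in lam.
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
d_mu = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi) * dx  # free Gaussian measure

def Z(lam):
    """'Partition function' of the 0D phi^4 toy."""
    return float(np.sum(d_mu * np.exp(-lam * x**4)))

def x2(lam):
    """Interacting second moment <x^2>_lam."""
    w = d_mu * np.exp(-lam * x**4)
    return float(np.sum(x**2 * w) / np.sum(w))

lams = np.linspace(0.0, 1.0, 11)
Zs = [Z(l) for l in lams]
print(Zs[0])             # Z(0) = 1: the free normalisation
print(x2(0.0), x2(0.5))  # the free moment <x^2>_0 = 1 bounds the interacting one
```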

Third, the theory may have "special features" that need to be preserved by the cutoffs. For example, the Gross-Neveu model is in a certain sense "as difficult" as ##\phi^{4}_{4}##, but the latter has a positivity of the interaction term that needs to be preserved, or the estimates will be insufficiently tight.

Even in ##d = 3## we have very poor control and few constructive results for theories that are just renormalizable or that require coupling constant renormalization.

In ##d = 4## all theories are like this, and in addition they have a very special structure that needs to be preserved: gauge symmetry. Note that anything like this in ##d = 3## would already be beyond current methods.
 
  • #5
In my opinion, the whole program of search for a mathematically rigorous continuous field theory is fundamentally misguided. The continuous field theories (such as the Standard Model) that we have are just effective theories that at very small distances must be replaced by completely different theories.
 
  • #6
Demystifier said:
In my opinion, the whole program of search for a mathematically rigorous continuous field theory is fundamentally misguided. The continuous field theories (such as the Standard Model) that we have are just effective theories that at very small distances must be replaced by completely different theories.
Why would their physical inaccuracy at small length scales imply they have no rigorous formulation?

For example non-relativistic QM and General Relativity are both incorrect in certain regimes but have a mathematically rigorous formulation.

Why can field theories in lower dimensions be constructed?
 
  • #7
DarMM said:
Why would their physical inaccuracy at small length scales imply they have no rigorous formulation?
It wouldn't. It just implies that we don't so strongly need such a rigorous formulation, even if it exists.

DarMM said:
For example non-relativistic QM and General Relativity are both incorrect in certain regimes but have a mathematically rigorous formulation.
It's indeed nice when a theory has a rigorous formulation, but if that theory is not fundamental, then it's not such a big problem if it doesn't have one.

DarMM said:
Why can field theories in lower dimensions be constructed?
Well, maybe field theories in 4 dimensions also have a rigorous formulation waiting to be discovered. But if someone desperately searches for it because he thinks it must exist, since otherwise Nature would be inconsistent, I think that's wrong.
 
  • #8
Isn't this basically an argument against looking for mathematical rigour for any physical theory?
 
  • #9
Demystifier said:
It's indeed nice when a theory has a rigorous formulation, but if that theory is not fundamental, then it's not such a big problem if it doesn't have one.

Rigor is conceptually important for quantum mechanics, as a rigorous relativistic quantum theory would prove the wave function is not real :oldbiggrin:
 
  • #10
DarMM said:
Isn't this basically an argument against looking for mathematical rigour for any physical theory?
No. Rigour is desirable, but not a must. I am against rigour for its own sake when it destroys some more important properties of the theory. An example is the rigorous result in statistical mechanics that there are no mathematical phase transitions in finite systems. This conflicts with the initial goal of describing the physical phase transitions (such as the freezing of water) that obviously exist in finite systems. Another example is Haag's theorem.
 
  • #11
Demystifier said:
No. Rigour is desirable, but not a must. I am against rigour for its own sake when it destroys some more important properties of the theory. An example is the rigorous result in statistical mechanics that there are no mathematical phase transitions in finite systems. This conflicts with the initial goal of describing the physical phase transitions (such as the freezing of water) that obviously exist in finite systems. Another example is Haag's theorem.
I have to say I don't understand. What does Haag's theorem "destroy"?

Same for the statistical mechanics example: to me that just shows that a sharp separation between phases is an infinite-volume idealization. I would have found that interesting, rather than considering it to "destroy" a goal of statistical mechanics.
 
  • #12
DarMM said:
I have to say I don't understand. What does Haag's theorem "destroy"?

Same for the statistical mechanics example: to me that just shows that a sharp separation between phases is an infinite-volume idealization. I would have found that interesting, rather than considering it to "destroy" a goal of statistical mechanics.
Well, when a rigorous result is interpreted that way, it's perfectly welcome. But in my experience, some mathematical physicists tend to draw less reasonable conclusions from rigorous theorems. They just don't have a good intuition about which of the assumptions (from which the theorem was derived) should be questioned.
 
  • #13
Well, to my mind, having a rigorous formulation of a theory means you understand the theory better and can begin to apply more mathematical methods to it. You may also find counterexamples to folk wisdom, showing the theory to be richer and more complex than its naive formulations suggest.
For example, the Gross-Neveu model is perturbatively non-renormalizable but is actually completely well-defined and renormalizable non-perturbatively.
 
  • #14
DarMM said:
Well to my mind having a rigorous formulation of the theory means you understand the theory better
That's because your mind has a well developed philosophical component, so you understand what you are doing at a deeper meta-level. Not all mathematical physicists have that. :oldbiggrin:
 
  • #15
Demystifier said:
No. Rigour is desirable, but not a must. I am against rigour for its own sake when it destroys some more important properties of the theory. An example is the rigorous result in statistical mechanics that there are no mathematical phase transitions in finite systems.

But this is also an intuitive result. Are you against results that are both intuitive and rigorous?

Demystifier said:
This conflicts with the initial goal of describing the physical phase transitions (such as the freezing of water) that obviously exist in finite systems.

So how do you describe physical phase transitions? Do you discard classical statistical mechanics and thermodynamics, since they take the unphysical infinite-system limit?
 
  • #16
atyy said:
But this is also an intuitive result. Are you against intuitive and rigorous results?
Well, it's not intuitive to me. Can you explain why it is intuitive to you?

atyy said:
So how do you describe physical phase transitions? Do you discard classical statistical mechanics and thermodynamics, since they take the unphysical infinite-system limit?
One can do it non-rigorously which, in a sense, treats the system as both infinite and non-infinite. (This is somewhat analogous to intuitive calculus before Cauchy, where ##dx## is both zero and non-zero.) Essentially, one first computes intensive quantities (pressure, temperature, concentration, ...) in the thermodynamic limit ##N\rightarrow \infty##, but then computes extensive quantities (energy, number of particles, ...) in a finite volume ##V##. The goal of mathematical physicists is to explain why such an ill-defined procedure gives correct results, but "ordinary" physicists can just use their naive intuition to work that way in practice.
 
  • #17
Demystifier said:
Well, it's not intuitive to me. Can you explain why is that intuitive to you?

A phase boundary is non-analytic behaviour. If one writes down the partition function for a finite number of particles, it is an analytic function, and it is hard to see how the non-analytic behaviour could arise. If one allows the limit to infinity to be taken, then the non-analytic behaviour becomes possible.

Eg. David Tong's notes: http://www.damtp.cam.ac.uk/user/tong/sft.html (lecture 1, p13)
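To make this concrete, here is a small numerical toy of my own (not from Tong's notes): in the Curie-Weiss (mean-field Ising) model the finite-##N## partition function is a finite sum of analytic terms, hence analytic in ##\beta##, so the magnetisation curve is smooth at any finite ##N##; the sharp transition at ##\beta_c = 1## only emerges as ##N \to \infty##.

```python
import numpy as np
from math import lgamma

# Finite-size Curie-Weiss (mean-field Ising) toy: for finite N the partition
# function is a FINITE sum of analytic terms, hence analytic in beta -- the
# magnetisation crossover is smooth and only sharpens into a genuine
# non-analytic transition in the limit N -> infinity.
def mean_abs_m(N, beta, J=1.0):
    """Average |magnetisation per spin| at inverse temperature beta."""
    k = np.arange(N + 1)                 # number of up spins
    M = 2 * k - N                        # total magnetisation
    log_binom = np.array([lgamma(N + 1) - lgamma(ki + 1) - lgamma(N - ki + 1)
                          for ki in k])
    logw = log_binom + beta * J * M**2 / (2.0 * N)  # log Boltzmann weights
    w = np.exp(logw - logw.max())        # numerically stabilised weights
    return float(np.sum(np.abs(M / N) * w) / np.sum(w))

# Below the mean-field critical point (beta_c = 1/J) the magnetisation dies off
# as N grows; above it, it settles near the solution of m = tanh(beta*J*m).
print(mean_abs_m(10, 0.5), mean_abs_m(400, 0.5))   # shrinks with N
print(mean_abs_m(10, 1.5), mean_abs_m(400, 1.5))   # stays finite
```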
 
  • #18
Demystifier said:
Well, it's not intuitive to me. Can you explain why is that intuitive to you?
What is intuitive to someone depends on knowledge of and experience with certain prototypical concepts within a branch of mathematics: this changes with time, focus and exposure.
atyy said:
A phase boundary is non-analytic behaviour.
Actually this is often due to an approximation of some mathematically as-yet-unresolved boundary layer; the approximation then tangentially or asymptotically matches the actual analytic process, with an unknown and often non-obvious limited range of validity.

In such cases, the mathematical method of approximation itself is the cause of any defective mathematical properties of the effectively derived function ascribed to the described phenomenon, and of its general inconsistency with (e.g. non-generalizability to) the actually sought-after function.

No amount of rigour or precision can alleviate such a problem, because the problem is purely mathematical; yet through premature misinterpretation it gets invalidly projected onto the physics, leading to endless misunderstandings and paradoxes. Sounding familiar yet?

This is why constructive mathematical demonstrations are so crucially important: once the mathematical foundations of a physical theory crumble, all the secondary structures built on top of them come crashing down as hopelessly inadequate and deeply misguided; if lucky, the theory can survive as a limiting case.
 
  • #19
atyy said:
A phase boundary is non-analytic behaviour. If one writes the partition function with a finite number of particles, the function is analytic, and it is hard to see how one gets the non-analytic behaviour. If one allows the limit to infinity to be taken, then it seems the non-analytic behaviour could be possible.

Eg. David Tong's notes: http://www.damtp.cam.ac.uk/user/tong/sft.html (lecture 1, p13)
Thanks, now it's intuitive to me too. :smile:
That's yet another demonstration that Tong's lectures are really great.
 
  • #20
Demystifier said:
An example is a rigorous result in statistical mechanics that there are no mathematical phase transitions in finite systems.
Doesn't this show that there is no isomorphism between the model, expressed in the language of mathematics, and the physical phenomenology?

A model can lead to over-specification (e.g. the Gödel metric / closed timelike curves) or to under-specification of the physical phenomenology.

/Patrick
 
  • #21
I think it's just that actual transitions between phases aren't sharp for real systems. Having sharp transitions simplifies many treatments, but it is technically an infinite-volume idealisation. However, since we have analytic control over that limit, we can bound the errors we make when treating real systems in this manner and see that they are virtually irrelevant.

That's another point of rigorous constructions: if we had a rigorous continuum limit for Yang-Mills, we could bound the systematic errors in lattice simulations exactly.
 
  • #22
DarMM said:
I think it's just that actual transitions between phases aren't sharp for real systems. Having sharp transitions simplifies many treatments, but is technically an infinite volume limit idealisation. However since we have analytic control over that limit we can bound the errors we are making when we treat real systems in this manner and see that it is virtually irrelevant.

That's another point of rigorous constructions: if we had a rigorous continuum limit for Yang-Mills, we could bound the systematic errors in lattice simulations exactly.

Unless we have something bizarre like this?

https://arxiv.org/abs/1502.04573

"The standard approach of trying to gain insight into such models by solving numerically for larger and larger lattice sizes is doomed to failure; the system could display all the features of a gapless model, with the gap of the finite system decreasing monotonically with increasing size. Then, at some threshold size, it may suddenly switch to having a large gap."
 
  • #23
Demystifier said:
In my opinion, the whole program of search for a mathematically rigorous continuous field theory is fundamentally misguided. The continuous field theories (such as the Standard Model) that we have are just effective theories that at very small distances must be replaced by completely different theories.
Well, I think the physical validity and comprehensibility of theories has little to nothing to do with the possibility of finding a mathematically rigorous formulation.

E.g., Newtonian classical mechanics is a mathematically well-defined theory with well-understood and mathematically interesting theorems and proofs. Nature doesn't care for it in a sense, though: it's only an approximation, valid under well-understood circumstances. The limits of applicability are set by relativity as well as by (non-relativistic and of course also relativistic) Q(F)T.

The same can be said about classical electrodynamics, as long as only a classical continuum treatment of the (charged) matter is concerned. The interacting theory of point particles and the electromagnetic field is mathematically not well defined, and only approximate descriptions are possible (the Landau-Lifshitz approximation to the Abraham-Lorentz-Dirac equation being the most convincing one, albeit without very strong empirical justification, though it seems to work well enough for accelerator physicists to build sufficiently accurate accelerators).

GR is also a mathematically well-defined theory, but it's physically for sure incomplete. Our ignorance is manifest by the unavoidable singularities in the solutions for non-trivial cases (the universe, black holes).

Non-relativistic quantum mechanics seems to be well-understood and rigorously formulated mathematically, but of course it has its limits of applicability as soon as relativistic situations are reached.

Finally, as discussed in this thread, the Standard Model of elementary particle physics is mathematically not fully understood, but this concerns a rather unphysical case anyway, namely the infinite-volume limit (where, strictly speaking, even the perturbative formulation is inconsistent due to Haag's theorem). Treated in the right way, as an effective field theory, it's the most successful theory ever, including high-precision calculations of some fundamental quantities: g-2 for electrons, Lamb shifts of hydrogen(-like) atoms/ions, quantum-optics experiments concerning the very foundations aka Bell tests, and just now also a demonstration of the EPR paradox in its original form about position and momentum: https://doi.org/10.1103/PhysRevLett.123.060403 (guess why the "violation of the HUP" is no true contradiction of the HUP here ;-)), to be discussed in a separate thread.

Of course, from an academic perspective a mathematically well-defined interacting QFT in (1+3) dimensions would be desirable, maybe also shedding light on the physics. Deep mathematical problems usually seem to carry interesting meaning for the understanding of the physics they describe.

It's also the other way around: sloppy physicists' math can contain interesting mathematical content. E.g., Dirac's ##\delta## distribution (already defined and used by Sommerfeld around 1910) triggered the development of an entire new field of mathematics, the theory of distributions.

Sommerfeld called this, in reference to Leibniz, the "prestabilized harmony between maths and physics" ;-)).
 
  • #24
vanhees71 said:
Finally, as discussed in this thread, the Standard Model of elementary particle physics is mathematically not fully understood, but this is for a rather unphysical case anyway, namely the infinite-volume limit (where strictly speaking even the perturbative formulation is inconsistent due to Haag's theorem)
A slight correction: I would say "where the normal derivation of the perturbative formalism does not hold". Even in the infinite-volume limit the perturbative series is the correct expansion; it just has to be derived differently from how it's done in textbooks.
 
  • #25
Ok, what different derivation do you have in mind? Are there papers/books understandable to the usual mortal QFT practitioner?
 
  • #26
vanhees71 said:
Ok, what different derivation do you have in mind? Are there papers/books understandable to the usual mortal QFT practitioner?
To be frank, no. It would require a good deal of advanced operator theory and measure theory, and the end result for you would just be "Oh, perturbation theory is fine".

Haag's theorem just implies that the usual derivation, using the unitary time evolution operator in the interaction picture, isn't valid.
 
  • #27
DarMM said:
Haag's theorem just implies that the usual derivation, using the unitary time evolution operator in the interaction picture, isn't valid.

Why does the wrong derivation work (I think I've read that it reproduces the right derivation term by term)?

I've also heard that Fell's theorem explains why the wrong derivation works. Is there any substance to that?
 
  • #28
atyy said:
Unless we have something bizarre like this ?

https://arxiv.org/abs/1502.04573

"The standard approach of trying to gain insight into such models by solving numerically for larger and larger lattice sizes is doomed to failure; the system could display all the features of a gapless model, with the gap of the finite system decreasing monotonically with increasing size. Then, at some threshold size, it may suddenly switch to having a large gap."
Undecidability is irrelevant in practice. Many interesting systems of Diophantine equations are known to have solutions, or to be unsolvable, even though the general problem is undecidable.
 
  • #29
atyy said:
Why does the wrong derivation work (I think I've read that it reproduces the right derivation term by term)?

I've also heard that Fell's theorem explains why the wrong derivation works. Is there any substance to that?
Let's just look at the wrong derivation. I'll use the path integral approach where Haag's theorem becomes Nelson's theorem since it is easier to discuss.

We have the path integral:
$$\int{\mathcal{O}\left(\phi\right)d\nu}$$

We then separate the interacting measure into two components, the free measure ##d\mu## and an exponential ##e^{-S_{I}}## to get:
$$\int{\mathcal{O}\left(\phi\right)e^{-S_{I}}d\mu}$$

If we expand the exponential we then get an asymptotic series:
\begin{align*}
\int{\mathcal{O}\left(\phi\right)d\nu} & \approx \int{\mathcal{O}\left(\phi\right)d\mu} \\
& + \int{\mathcal{O}\left(\phi\right)\left(-S_{I}\right)d\mu} \\
& + \dots
\end{align*}

This asymptotic relation is valid in the continuum; it's simply that in the continuum ##d\nu \neq e^{-S_{I}}d\mu##, i.e. the interacting measure is not the free measure times a function. That's Nelson's theorem, the path-integral version of Haag's theorem. So that part of the derivation doesn't work.

However, the derivation does hold at every finite lattice spacing, and thus the asymptotic relation holds at all lattice spacings as well. One can take the continuum limit on both sides of the relation and show that it continues to hold in that limit, and thus the perturbative series is valid in the continuum.

So one can consider the usual derivation a shorthand: introduce a cutoff, expand the measure, obtain the asymptotic relation, and take the continuum limit on both sides. You just can't use that expansion method directly in the continuum. If you want to prove the relation directly in the continuum there are other methods, but they are much more mathematically involved.

Haag's/Nelson's theorem just tells you that the free and interacting theories are disjoint in the continuum. It doesn't change the fact that the terms in the expansion of the interacting theory's moments in the coupling constant can be computed with the free (Gaussian, in the path-integral picture) theory.

So, expanding the moments:
$$\mathcal{W}\left(x_1,\dots,x_n;\lambda\right) \rightarrow \sum_{k}\lambda^{k}\mathcal{G}_{k}\left(x_1,\dots,x_n\right)$$

The ##\mathcal{G}_{k}\left(x_1,\dots,x_n\right)## functions can be computed from Fock space/the Gaussian measure.

There is one side effect of this disjointness that shows up when using the free theory to compute the terms: the need to renormalize them.

The perturbative series ends up being only asymptotic, of course, not convergent, though that happens in NRQM as well. In lower dimensions, for some theories, one can use the Borel transform to sum the series, and thus the existence of the interacting theory can be proved directly from perturbation theory.

In 4D, but also for Yang-Mills in lower dimensions, there are poles in the Borel plane preventing resummation. The poles mean one has to take a contour around them to obtain the interacting theory, but there are infinitely many such contours, introducing an ambiguity of order ##\mathcal{O}\left(e^{-\frac{1}{\lambda}}\right)##. Some of the poles come from instantons and others from renormalons. Renormalons are finite terms, resulting from coupling constant renormalization, that give the perturbative series an extra ##n!## growth, which leads to poles in the summed Borel series.
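A zero-dimensional illustration of my own (a standard toy, not specific to this thread) of the asymptotic nature of such series: for ##Z(\lambda) = \int \frac{dx}{\sqrt{2\pi}}\, e^{-x^2/2 - \lambda x^4}## the perturbative coefficients are ##c_k = (-1)^k (4k-1)!!/k!##, which grow roughly like ##(-16)^k k!##, so partial sums improve only up to an optimal order of about ##1/(16\lambda)## and then diverge.

```python
import numpy as np
from math import factorial

# 0D "phi^4" toy: Z(lam) = ∫ dx/sqrt(2*pi) exp(-x^2/2 - lam*x^4).
# The perturbative coefficients c_k = (-1)^k (4k-1)!!/k! grow like 16^k k!,
# so the series is asymptotic: partial sums improve up to an optimal order
# of roughly 1/(16*lam) and then diverge.
lam = 0.01

x = np.linspace(-12.0, 12.0, 600_001)
dx = x[1] - x[0]
Z_exact = float(np.sum(np.exp(-x**2 / 2 - lam * x**4)) * dx / np.sqrt(2 * np.pi))

def c(k):
    """k-th perturbative coefficient: (-1)^k (4k-1)!! / k!."""
    return (-1)**k * (factorial(4 * k) // (2**(2 * k) * factorial(2 * k))) / factorial(k)

errors, partial = [], 0.0
for k in range(25):
    partial += c(k) * lam**k
    errors.append(abs(partial - Z_exact))

best = int(np.argmin(errors))
print(best, errors[best], errors[-1])  # error dips near k ~ 1/(16*lam), then blows up
```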
 
  • #30
Another simple argument given in

A. Duncan, The conceptual framework of quantum field
theory, Oxford University Press, Oxford (2012).

uses a finite volume with periodic spatial boundary conditions and works in momentum space. Then the infinite-volume limit is taken at the very end for the transition rates.

"Regularizations" like this or "latticizing" the theory etc. physicists intuitively do in a naive way. It's of course good to know, that one can explain, why this finally works, (more) rigorously.
 
  • #31
atyy said:
Why does the wrong derivation work (I think I've read that it reproduces the right derivation term by term)?
It works heuristically, but not necessarily mathematically.
DarMM said:
You just can't use that expansion method directly in the continuum. If you want to prove the relation directly in the continuum there are other methods but they are much more mathematically involved.
Being constructively inclined, I prefer those other methods, i.e. a non-perturbative analysis. I would even go as far as to say that, mathematically speaking, a non-perturbative analysis is necessary in order to prove existence at all, since perturbation theory is known to break down for many classes of problems, including many series which are asymptotic, not convergent.

The failure of those doing the perturbative expansion is then essentially caused by their not realizing that they are expanding the power series under the ad hoc assumption that the independent variable is fixed, purely in order to be able to make an empirical comparison; that is, a mathematically illegitimate assumption which is, in a specific sense, completely independent of experiment!
DarMM said:
There is one side effect of the fact that they are disjoint that shows up when using the free theory to compute the terms. The need to renormalize the terms.

The perturbative series ends up being only asymptotic of course, not convergent. Though that happens in NRQM as well. In lower dimensions for some theories you can use the Borel transform to sum the series and thus existence of the interacting theory can be proved directly from perturbation theory.

In 4D but also for Yang Mills in lower dimensions there are poles in the Borel plane preventing resummation. The poles mean one has to take a contour around them to obtain the interacting theory, but there are infinite such contours introducing an ambiguity of order ##\mathcal{O}\left(e^{-\frac{1}{\lambda}}\right)##.
The 'some theories' for which this can be proved require both linearity of the space of solutions and linearity of the equations; if either of these assumptions fails, then perturbation theory, beyond an initial small semi-accurate range of validity, will quickly break down once the independent variables are no longer assumed ad hoc to be fixed. In this sense, perturbation theory is obviously just a more sophisticated version of a heuristic technique such as the small-angle approximation.
 
  • #32
vanhees71 said:
Another simple argument given in

A. Duncan, The conceptual framework of quantum field
theory, Oxford University Press, Oxford (2012).

uses a finite volume with periodic spatial boundary conditions and works in momentum space. Then the infinite-volume limit is taken at the very end for the transition rates.

"Regularizations" like this or "latticizing" the theory etc. physicists intuitively do in a naive way. It's of course good to know, that one can explain, why this finally works, (more) rigorously.
Using periodic boundary conditions, either before or after a Fourier transform, implicitly imports topological phase-space analysis, which severely complicates the issue because it introduces new existence and uniqueness questions for the periods, whose resolution requires the full arsenal of nondimensionalization, bifurcation theory and index theory.

As Duncan makes explicitly clear in his masterful book, there is good reason to be suspicious of the ultimate validity of perturbative analysis, either as non-optimized perturbation theory, in which case the regularization based on the Borel transform is generally quite fragile, or as optimized perturbation theory, which generally isn't useful in the context of field theory.

From applied mathematics all of these issues are already well known, with physicists often merely introducing novel verbiage that unnecessarily complicates these purely mathematical issues and tends to be ultimately unjustifiable (e.g. wanting to make a comparison with empirical phenomenology); this is exactly why non-perturbative analysis was invented in the first place.

Even in physics this is old news; already during the 60s and 70s, the recognition that perturbative arguments were unjustifiable led to a split of the QFT community into field theorists and S-matricists, as Shankar describes of his time under S-matrix purist Geoff Chew at Berkeley. The immediate recognition of the extremely contingent nature of renormalization by constructive mathematicians (and constructively inclined physicists) directly led to the constructive QFT programme, which as we can see is still nowhere near completion.
 
  • #33
Demystifier said:
Thanks, now it's intuitive to me too. :smile:
That's yet another demonstration that Tong's lectures are really great.

I'm enjoying the lecture in the link, but it's already confusing when he says "little things affect big things. Big things don't affect little things". Well, big things certainly affect big things, don't they? Otherwise classical mechanics would be useless, which is obviously false. But there are no big things not made of little things, granted. So big things affect big things, but only through little things. So wouldn't it have been better to say that the map between all things big and little is little? Or something like that. The real point being that you have to allow for big things to affect little things as the big things interact with each other. Co-evolution of phenotypes via the genome is a pretty important example of how this happens, at least in the evolution of one regime of the physical world. What precludes the possibility that it is more common? To me it seems potentially relevant to the subsequent question of what causes such things as discontinuous phase changes in continuously evolving systems: some back-reaction, some big thing shaking the almost-boiling pot.

Moreover, given how confusing that was, I am distracted when he then goes off into the (somewhat familiar by now) partition-function description of the Ising model and its free energy. I am distracted because it already assumes a notion of entropy maximization, the natural Hamiltonian as it were, classical a priori probability, which, as sensible as it is, seems to me a puzzle that shouldn't be taken as axiomatic if the question is how to understand what makes up microscopic space-time. I get that we started with an entropy gradient in our universe, but there is this weird way it always gets invoked as natural and then taken as an input to models. I get that this is super practical and not wrong, but doesn't it potentially confuse the question of how that entropy gradient is managed microscopically, how it relates to the evolution of real things in "proper time", and therefore to the puzzle of differential microscopic GR (twin age difference)?
 
  • #34
The lecture is definitely more interesting now getting into universality classes.
 
  • #35
Jimster41 said:
“little things affect big things. Big things don’t affect little things”.
Today I killed a mosquito, so the second statement is clearly wrong. :oldbiggrin:
 
