QM is a Feynman path ensemble - is QFT a Feynman field ensemble?

  • #51
But Feynman diagrams represent very concrete particle scenarios, e.g. electron-positron annihilation on the right below:
[attached image: Feynman diagrams, with electron-positron annihilation on the right]

Such charged particles have an ##E \sim 1/r^2## electric field, which translates into a ##\rho \sim 1/r^4## energy density - so why can't we ask about such a field/energy distribution (its ensemble) in the scenario represented by a given Feynman diagram?
Shouldn't it be approximately ##\rho \sim 1/r^4## around a charged particle?
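A minimal numerical check of this scaling (the radii are arbitrary example values; only the classical Coulomb field is assumed):

```python
import numpy as np

# Minimal sketch: energy density of a classical Coulomb field.
# E = q/(4*pi*eps0*r^2)  =>  u = (eps0/2)*E^2 ~ 1/r^4.
# The radii below are arbitrary example values (SI units).

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
q = 1.602176634e-19       # elementary charge, C

r = np.logspace(-12, -9, 4)            # 1 pm .. 1 nm
E = q / (4 * np.pi * eps0 * r**2)      # field, ~ 1/r^2
u = 0.5 * eps0 * E**2                  # energy density, ~ 1/r^4

# Verify the 1/r^4 scaling between successive radii:
assert np.allclose(u[:-1] / u[1:], (r[1:] / r[:-1])**4)
print(u)  # strictly decreasing as r^-4
```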
 
  • #52
Jarek 31 said:
Are you saying that they made combinatorics obsolete?
Combinatorics only works in the simplest situations - independent particles. Once there are interactions one needs more advanced diagrammatic methods.
 
Likes: romsofia
  • #53
Jarek 31 said:
But Feynman diagrams represent very concrete particle scenarios, e.g. electron-positron annihilation
These are virtual scenarios only; see my Insight article https://www.physicsforums.com/insights/physics-virtual-particles/ and the related articles cited there in the leading section.
 
Likes: dextercioby
  • #54
When there are interactions, we need to weight these combinations, e.g. by energy - using the Boltzmann distribution, as in the Ising model.

Virtual particles are for more subtle scenarios, like representing the Coulomb interaction through the exchange of virtual photons.
So what about more concrete scenarios (Feynman diagrams) like the annihilation of an electron and a positron - is there, e.g., an energy density ##\rho \sim 1/r^4## before annihilation, as for the electric field of a charge?
 
  • #55
Jarek 31 said:
When there are interactions, we need to weight these combinations, e.g. by energy - using the Boltzmann distribution, as in the Ising model.
I am not talking about simulations but about producing dynamics for mixed states (representing incomplete information).
Jarek 31 said:
So what about more concrete scenarios (Feynman diagrams) like the annihilation of an electron and a positron

All Feynman diagrams refer to virtual particles only.
 
  • #56
For situations with incomplete knowledge, models need some hidden assumptions about the missing information - and the safest assumptions (combinatorially dominating) are those maximizing entropy.
However, there are additional constraints, e.g. from energy conservation - requiring the use of a weighted (Boltzmann) distribution.
And this concerns equilibrium, like the ground state - dynamics is indeed more complicated. For example, which diffusion is more natural: one based on the standard generic random walk (GRW, no localization), or the entropy-maximizing MERW ( https://en.wikipedia.org/wiki/Maximal_entropy_random_walk )? The latter leads to exactly the quantum ground-state stationary probability distribution from the Boltzmann path ensemble (with transfer matrix ##M_{ij} = \exp(-E_{ij})##):
[attached image: GRW vs. MERW comparison]
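For concreteness, a minimal MERW sketch (lattice size and site potential are made-up example values; the construction follows the linked Wikipedia article):

```python
import numpy as np

# Minimal MERW sketch on a 1D lattice with an assumed site potential V
# (made-up example values). The transfer matrix M_ij = A_ij*exp(-(V_i+V_j)/2)
# plays the role of exp(-E_ij); its dominant (Perron) eigenvector psi gives
# the MERW stationary distribution rho_i = psi_i^2, the discrete analogue of
# the quantum ground-state probability density.

N = 50
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # path graph
V = 0.01 * (np.arange(N) - N / 2) ** 2                         # example potential
M = A * np.exp(-0.5 * (V[:, None] + V[None, :]))

lam, vecs = np.linalg.eigh(M)
psi = np.abs(vecs[:, -1])                  # Perron-Frobenius eigenvector
rho_merw = psi**2 / np.sum(psi**2)         # MERW stationary distribution

# Generic random walk (GRW) for comparison: stationary distribution is
# proportional to node degree, i.e. essentially flat -- no localization.
rho_grw = A.sum(axis=1) / A.sum()

# MERW transition probabilities P_ij = M_ij*psi_j/(lam*psi_i) are stochastic:
P = (M / lam[-1]) * psi[None, :] / psi[:, None]
assert np.allclose(P.sum(axis=1), 1.0)
print("peak density -- GRW:", rho_grw.max(), " MERW:", rho_merw.max())
```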


Regarding Feynman diagrams: if an electron and a positron meet, producing a photon which flies away, what would be "virtual" about such a scenario?
 
  • #57
A. Neumaier said:
Fixed mean energy is very uncommon in experimental situations. Usually the temperature or the pressure is fixed, and entropy is not maximized.
Well, thermal equilibrium is not that uncommon, and that's maximum entropy under the constraints of the situation (given mean energy and conserved particle number or conserved charge(s) lead to grand canonical ensembles with temperature and chemical potential introduced as Lagrange multipliers).
 
  • #58
vanhees71 said:
Well, thermal equilibrium is not that uncommon, and that's maximum entropy under the constraints of the situation (given mean energy and conserved particle number or conserved charge(s) lead to grand canonical ensembles with temperature and chemical potential introduced as Lagrange multipliers).
I don't know of any experiments done at fixed mean energy; thus this seems to me a fictitious situation.

At fixed temperature and volume, the Helmholtz free energy is minimized as equilibrium is approached, not the entropy maximized.
 
Likes: dextercioby
  • #59
Jarek 31 said:
For situations with incomplete knowledge, models need some hidden assumptions about the missing information - and the safest assumptions (combinatorially dominating) are those maximizing entropy.
However, there are additional constraints, e.g. from energy conservation - requiring the use of a weighted (Boltzmann) distribution.
And this concerns equilibrium, like the ground state - dynamics is indeed more complicated. For example, which diffusion is more natural: one based on the standard generic random walk (GRW, no localization), or the entropy-maximizing MERW ( https://en.wikipedia.org/wiki/Maximal_entropy_random_walk )? The latter leads to exactly the quantum ground-state stationary probability distribution from the Boltzmann path ensemble (with transfer matrix ##M_{ij} = \exp(-E_{ij})##):
The hidden information that you may assume without running into trouble depends on the boundary conditions imposed. At constant temperature, a maximal entropy ensemble never gives consistent simulations.
 
  • #60
Jarek 31 said:
Regarding Feynman diagrams: if an electron and a positron meet, producing a photon which flies away, what would be "virtual" about such a scenario?
The internal particle line in the tree diagram. All internal lines represent virtual processes only. In reality, there is no "meeting" of electrons and positrons.
 
  • #61
A. Neumaier said:
I don't know of any experiments done at fixed mean energy; thus this seems to me a fictitious situation.

At fixed temperature and volume, the Helmholtz free energy is minimized as equilibrium is approached, not the entropy maximized.
I think we don't need to discuss standard thermodynamics here. The approach to equilibrium as described by the Boltzmann equation leads to the maximum-entropy principle (the celebrated H-theorem). In the thermodynamic limit all ensembles are equivalent, and I can use the simplest one to deal with, which is (in the many-body QFT context) the grand-canonical one.
 
  • #62
A. Neumaier said:
The internal particle line in the tree diagram. All internal lines represent virtual processes only. In reality, there is no "meeting" of electrons and positrons.
So how does e.g. PET ( https://en.wikipedia.org/wiki/Positron_emission_tomography ) work - being able to localize the positions of such nonexistent "meetings" with millimeter precision, based on the delays and positions of the photons?

Also, let me go back to the diffusion question - divide e.g. a semiconductor into a lattice for simulations and ask about the currents between its nodes, as if there were ammeters:
[attached image: lattice with currents between nodes]

The question about these currents concerns a stochastic model (dividing the currents, we get probabilities) - should we choose such a stochastic model using the GRW or the MERW philosophy? (See the sketch below.)
Only the latter maximizes entropy, agrees with the quantum ground state, and has the localization property seen in experiments (STM densities from http://www.phy.bme.hu/~zarand/LokalizacioWeb/Yazdani.pdf ):
[attached image: STM electron density maps showing localization]
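A minimal sketch of the two choices (lattice size and defect pattern invented for illustration); the GRW stationary density follows the node degree and stays essentially flat, while the MERW density, built from the Perron eigenvector, localizes between defects:

```python
import numpy as np

# Sketch of the two choices of stochastic model on a lattice with defects
# (lattice size and defect pattern are made up for illustration).

rng = np.random.default_rng(0)
N = 100
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # 1D lattice
defects = rng.choice(N, size=10, replace=False)
A[defects, :] *= 0.1       # weakened bonds at hypothetical defect sites
A[:, defects] *= 0.1       # (keeps A symmetric)

# GRW: P_ij ~ A_ij; the stationary distribution follows the local degree,
# so it stays essentially flat -- no localization.
pi_grw = A.sum(axis=1) / A.sum()

# MERW: stationary distribution pi_i = psi_i^2 from the Perron eigenvector.
w, v = np.linalg.eigh(A)
psi = np.abs(v[:, -1])
pi_merw = psi**2 / (psi**2).sum()

# Edge probability flows: for MERW, pi_i*P_ij = psi_i*A_ij*psi_j/lambda is a
# symmetric matrix, so all net equilibrium currents vanish, while the density
# itself is strongly localized between defects.
flow_merw = psi[:, None] * A * psi[None, :] / w[-1]
assert np.allclose(flow_merw, flow_merw.T)
print("max/mean density -- GRW:", pi_grw.max() / pi_grw.mean(),
      " MERW:", pi_merw.max() / pi_merw.mean())
```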
 
  • #63
Jarek 31 said:
So how does e.g. PET ( https://en.wikipedia.org/wiki/Positron_emission_tomography ) work - being able to localize the positions of such nonexistent "meetings" with millimeter precision, based on the delays and positions of the photons?
It records the asymptotic particles represented by the terminal lines, but not the virtual process. Note that mm precision is already very macroscopic; scattering processes are completed at far shorter distances.
Jarek 31 said:
should we choose such a stochastic model using the GRW or the MERW philosophy?
I don't see any connection with the topic of the thread.
 
Likes: romsofia
  • #64
vanhees71 said:
The approach to equilibrium as described by the Boltzmann equation leads to the maximum-entropy principle (the celebrated H-theorem).
But the Boltzmann equation is valid only for very weak interactions.
vanhees71 said:
In the thermodynamic limit all ensembles are equivalent, and I can use the simplest one to deal with, which is (in the many-body QFT context) the grand-canonical one.
... where the approach to equilibrium is governed by a different extremal principle (the grand potential is minimized), not by max entropy. Only the final equilibrium result is the same.

This shows that the max entropy principle cannot be regarded as being fundamental.
 
Likes: dextercioby
  • #65
Sure, PET uses the asymptotic effect - of a very concrete Feynman diagram for annihilation - so what is virtual about it?
Indeed, mm is quite macroscopic, but much smaller scales are also measured - like the positions of electrons leaving an orbital ( https://journals.aps.org/prb/abstract/10.1103/PhysRevB.80.165404 ) or recent attosecond chronoscopy ( https://www.physicsforums.com/threa...appens-during-attosecod-scale-delays.1002867/ ).

The GRW vs. MERW choice of currents reminds us that in the quantum world we still have statistical mechanics, stochastic models, probabilities ... and that choosing them according to the maximal entropy principle leads to the same equilibrium as QM - the quantum ground-state stationary probability distribution.
You can call it "Wick-rotated QM", but it can also be seen as just doing diffusion right - according to the maximal entropy principle, which is the safest choice for incomplete-knowledge models.
 
  • #66
Jarek 31 said:
Sure, PET uses the asymptotic effect - of a very concrete Feynman diagram for annihilation - so what is virtual about it?
Annihilation is a real scattering process conserving energy, hence observable. But its description in terms of Feynman diagrams (or Feynman path integrals) involves virtual (energy-conservation violating) processes, depicted to lowest order by the internal lines in the tree diagram.
Jarek 31 said:
in the quantum world we still have statistical mechanics, stochastic models, probabilities
Of course. But these have nothing to do with Wick rotations to imaginary time or with Feynman paths. Instead they feature real time processes described in terms of density operators rather than wave functions.
 
  • #67
A. Neumaier said:
But the Boltzmann equation is valid only for very weak interactions.

... where the approach to equilibrium is governed by a different extremal principle (the grand potential is minimized), not by max entropy. Only the final equilibrium result is the same.

This shows that the max entropy principle cannot be regarded as being fundamental.
I don't understand what you mean. The grand canonical ensemble is given by the stat. op.
$$\hat{\rho}=\frac{1}{Z} \exp(-\beta \hat{H}+\mu \beta \hat{Q}),$$
for ##\hat{H}## the Hamiltonian and ##\hat{Q}## some conserved charge (you can also have more conserved charges and more chemical potentials, but that's of course not much different). This stat. op. follows from the maximum entropy principle with ##U## and ##Q## as the constraints. The independent variables are ##T=1/\beta## and ##\mu## (and ##V##, which you can introduce as a "quantization volume" with periodic boundary conditions).

The associated potential usually used is the grand potential
$$\Omega(V,T,\mu)=-T \ln Z,$$
fulfilling
$$\mathrm{d} \Omega = -S \mathrm{d} T - p \mathrm{d} V - N \mathrm{d} \mu.$$
Equilibrium is characterized by a minimum of ##\Omega## (corresponding to fixed ##V##, ##T##, and ##\mu##).

You can Legendre transform to other thermodynamic potentials with other "natural" independent thermodynamic quantities, which characterize equilibrium as minima (or maxima) keeping these quantities fixed.

Nevertheless the grand-canonical stat. op. is determined by the maximum-entropy principle under the given constraints, ##\langle H \rangle=U## and ##\langle Q \rangle=N## fixed.
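As a concrete illustration of these formulas, a minimal sketch for an assumed toy system (a single fermionic mode with ##H=\varepsilon\hat{n}##, ##Q=\hat{n}##; parameter values are made up):

```python
import numpy as np

# Minimal sketch for an assumed toy system: one fermionic mode,
# H = eps*n, Q = n (occupation basis |0>, |1>). The grand-canonical
# stat. op. rho = exp(-beta*(H - mu*Q))/Z gives the Fermi-Dirac
# occupation, and Omega = -T*ln(Z) = U - T*S - mu*N.
# eps, T, mu are made-up example values.

eps, T, mu = 1.0, 0.5, 0.3
beta = 1.0 / T

H = np.diag([0.0, eps])
Q = np.diag([0.0, 1.0])
K = np.diag(H) - mu * np.diag(Q)          # diagonal of H - mu*Q

Z = np.exp(-beta * K).sum()               # = 1 + exp(-beta*(eps - mu))
p = np.exp(-beta * K) / Z                 # diagonal of rho

n_mean = p @ np.diag(Q)                   # Fermi-Dirac occupation
assert np.isclose(n_mean, 1.0 / (np.exp(beta * (eps - mu)) + 1.0))

U = p @ np.diag(H)                        # internal energy <H>
S = -(p @ np.log(p))                      # von Neumann entropy (diagonal rho)
Omega = -T * np.log(Z)
assert np.isclose(Omega, U - T * S - mu * n_mean)
print(f"<n> = {n_mean:.4f}, Omega = {Omega:.4f}, S = {S:.4f}")
```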
 
  • #68
A. Neumaier said:
Annihilation is a real scattering process conserving energy, hence observable. But its description in terms of Feynman diagrams (or Feynman path integrals) involves virtual (energy-conservation violating) processes, depicted to lowest order by the internal lines in the tree diagram.
Sure, the perturbative expansion also requires virtual particles, which are more subtle - but there is the first (lowest-order) Feynman diagram without them - can we ask about the mean/ensemble energy density around its electron, e.g. ##\rho \sim 1/r^4##?

Regarding virtual particles, they also appear when describing topological solitons with Feynman diagrams.
Specifically, while we imagine e.g. pair creation as requiring 2x511 keV of energy, it is in fact a continuous field process - with a fraction of this energy we can start such a process, but then it has to go back - shouldn't such a "field perturbation toward e.g. pair creation" be represented as a virtual process?
[attached image: continuous-field picture of pair creation]

A. Neumaier said:
Of course. But these have nothing to do with Wick rotations to imaginary time or with Feynman paths. Instead they feature real time processes described in terms of density operators rather than wave functions.
We get the same formulas for the same (equilibrium) situations from two perspectives - with statistical physics and stochastic models also being used for such incomplete-knowledge situations.
 
  • #69
vanhees71 said:
I don't understand what you mean. The grand canonical ensemble is given by the stat. op.
$$\hat{\rho}=\frac{1}{Z} \exp(-\beta \hat{H}+\mu \beta \hat{Q}),$$
for ##\hat{H}## the Hamiltonian and ##\hat{Q}## some conserved charge (you can also have more conserved charges and more chemical potentials, but that's of course not much different). This stat. op. follows from the maximum entropy principle with ##U## and ##Q## as the constraints.
This is true as a mathematical fact but has no physical relevance.

Indeed, the maximum entropy principle was not even formulated before 1957 - more than half a century after Gibbs had established the grand canonical ensemble and used it with great success. Its derivation is completely independent of any entropy considerations.

If you are slightly out of equilibrium and know the temperature and chemical potential (which is the typical experimental situation), the dynamics does not bring you to the corresponding maximum entropy state but to the corresponding state of least grand potential.

Since real systems are never exactly in equilibrium, the natural extremal principle is the latter and not the former.
 
  • #70
Jarek 31 said:
there is the first (lowest-order) Feynman diagram without them
Feynman diagrams without virtual processes have no internal lines, hence represent only noninteracting particles. Thus neither particle annihilation nor pair creation nor Coulomb scattering.
Jarek 31 said:
We get the same formulas for the same (equilibrium) situations from two perspectives - with statistical physics and stochastic models also being used for such incomplete-knowledge situations.
No. In equilibrium there is no time, and hence no Wick rotation. Outside of equilibrium you get a very different dynamics depending on whether or not you Wick rotate.
 
  • #71
A. Neumaier said:
This is true as a mathematical fact but has no physical relevance.

Indeed, the maximum entropy principle was not even formulated before 1957 - more than half a century after Gibbs had established the grand canonical ensemble and used it with great success. Its derivation is completely independent of any entropy considerations.

If you are slightly out of equilibrium and know the temperature and chemical potential (which is the typical experimental situation), the dynamics does not bring you to the corresponding maximum entropy state but to the corresponding state of least grand potential.

Since real systems are never exactly in equilibrium, the natural extremal principle is the latter and not the former.
It depends on the situation. If you have a system coupled to a heat bath such that everything is kept at a given temperature and chemical potential, you end up at the minimum of the gc potential.

For a closed system the thermodynamic potential is ##S(U,N,V)##, and ##S## becomes maximal at equilibrium.

In the thermodynamic limit the ensembles (microcanonical, canonical, grand-canonical) are equivalent, because the fluctuations of the non-fixed quantities are very small.

This does not invalidate the derivation of the gc stat. op. from the maximum-entropy principle, which is used in any textbook on statistical physics.
 
  • #72
vanhees71 said:
This does not invalidate the derivation of the gc stat. op. from the maximum-entropy principle, which is used in any textbook on statistical physics.
One does not need a derivation - one can assume the form of the operator on the basis of simplicity and tractability. Gibbs didn't have the principle; it is a comparatively modern addition.
vanhees71 said:
For a closed system the thermodynamic potential is S(U,N,V), and S becomes maximal at equilibrium.
Yes, but this is not the typical experimental situation.
 
  • #73
Well, after knowing the result you can always say you could have guessed it somehow, but physics is not about guesses but about derivation from general principles; and that this specific type of statistical operator turns out to be the correct equilibrium limit is not some plausible chance but follows from the dynamics of the system and the symmetries underlying its foundations. I don't think that one can simply guess this; it took some centuries from Bernoulli, Boltzmann, Gibbs et al. to arrive at the modern information-theoretical understanding (Shannon, Szilard, Landauer, etc.).
 
  • #74
vanhees71 said:
that this specific type of statistical operator turns out to be the correct equilibrium limit is not some plausible chance but follows from the dynamics of the system and the symmetries underlying its foundations.
... which is independent of the maximum entropy principle.
vanhees71 said:
Well, after knowing the result you can always say you could have guessed it somehow, but physics is not about guesses but about derivation from general principles;
Gibbs found it from general principles, not from the (at his time unknown) maximum entropy principle.
 
  • #75
I don't know what you don't like about the maximum entropy principle. As a very general modern principle based on information theory, it helped at least me to understand statistical physics (not only thermal equilibrium). How would you derive the grand-canonical statistical operator?
 
  • #76
vanhees71 said:
How would you derive the grand-canonical statistical operator?
See Section 8.2 and Definition 9.1 of my book.
Everything of interest, including an intuitive understanding, follows canonically, as shown in Chapter 9.
vanhees71 said:
I don't know what you don't like about the maximum entropy principle.
A lot. It may give completely nonsensical results, and is reliably applicable only when you can already anticipate the correct result. (Indeed, Jaynes found it after the result had been known for over 60 years.)

For details see Section 10.7 of the above book.
 
  • #77
In Chpt. 9 you just give the state ex cathedra. There's no derivation whatsoever. The MEM is much more convincing. In elementary form you can use it for an undergraduate first encounter with statistical mechanics (see, e.g., the Berkeley Physics Course volume on the subject, the so-called "little Reif").

Concerning your criticism in 10.7, it's of course clear that you have to apply the principle to the full information. E.g., if you treat ##x## as continuous but it is in reality meant to be a discrete random variable, you have to describe it as such, and not as continuous, in the maximum entropy principle. Also, if all moments of a distribution function are given, it's completely determined and you don't need the maximum entropy principle anymore to determine it.
 
  • #78
vanhees71 said:
In Chpt. 9 you just give the state ex cathedra.
Namely by an argument of simplicity. Whereas you give the maximum entropy principle ex cathedra, although the latter is known to give spurious results in many cases. This is much worse!
vanhees71 said:
you have to apply the principle to the full information.
I specified in Section 10.7 several cases of assumed full information where the result is meaningless.

The principle is reliably applicable only when you can already anticipate the correct result. (Indeed, Jaynes found it after the result had been known for over 60 years.)
 
  • #79
I find the MEM very plausible. Of course it must fail when your "given information" is incomplete or contradictory. E.g., you can take a 1D particle in quantum mechanics and give the mean position and momentum and the standard deviations of position and momentum. If the latter contradict the Heisenberg uncertainty relation, you don't get a statistical operator from the MEM - which must be so, because there cannot be any quantum state with these properties.

It's also clear that the MEM depends on the choice of the prior. In statistical mechanics you also have to find the correct measure first, before you can apply the MEM. E.g., for a classical many-body system the a-priori measure is the phase-space volume, and to get a sensible canonical or grand canonical ensemble you need a Hamiltonian bounded from below, because otherwise the canonical or grand canonical distributions following from the MEM don't make sense.

I know that the physicists dealing with statistical physics are divided into two camps: one camp likes the MEM, the other doesn't. There seems to be no way to convince either camp of the advantages or disadvantages of the method.
 
  • #80
vanhees71 said:
I find the MEM very plausible. Of course it must fail when your "given information" is incomplete or contradictory.
Suppose you have measured a sample of 1000 values of the internal energy of a system in a canonical ensemble. This is the complete information you can get from experiment. (No amount of experimentation can give you more than a sample.) You cannot apply the MEM.

But you can compute from the measurement the mean and variance of the sample and pretend that this is the complete information. Now you can apply the MEM and obtain not the canonical ensemble but one with density operator ##\exp(-## quadratic in ##H)##, with a small but nonzero quadratic term, since the sample mean and variance do not agree with the mean and variance of the exact distribution. Failure of the MEM, in spite of consistent information. Of course it is incomplete information, but what would be complete information?

You need to pretend that the mean alone is the complete information (although in fact you know that you have much more detailed information). Then, and only then, do you get the correct distribution from the ex cathedra MEM. Thus you essentially need to assume the result to find the result.

Compare with my ex cathedra Definition 9.1. It always produces the correct distribution, even in complicated situations (see Section 10.1), since extensivity has a well-specified physical meaning.
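A numerical sketch of this point (energy levels and the "measured" sample variance are invented for illustration): constraining both a mean and a variance yields ##p_i \propto \exp(-\lambda_1 E_i - \lambda_2 E_i^2)##, with ##\lambda_2 \neq 0## whenever the sample variance misses the canonical value:

```python
import numpy as np
from scipy.optimize import fsolve

# Sketch of the scenario above, with invented numbers: feeding the MEM a
# sample mean AND variance of the energy yields p_i ~ exp(-l1*E_i - l2*E_i^2)
# rather than the canonical exp(-beta*E_i); l2 vanishes only if the sample
# variance happens to equal the canonical one exactly.

E = np.linspace(0.0, 3.0, 20)               # assumed energy levels

def moments(lams):
    l1, l2 = lams
    w = np.exp(-l1 * E - l2 * E**2)
    p = w / w.sum()
    m = p @ E
    return p, m, p @ (E - m)**2

beta = 1.0
_, m_can, v_can = moments((beta, 0.0))      # canonical mean and variance

# Pretend the measured sample variance came out 10% low:
m_obs, v_obs = m_can, 0.9 * v_can

def eqs(lams):
    _, m, v = moments(lams)
    return [m - m_obs, v - v_obs]

l1, l2 = fsolve(eqs, x0=[beta, 0.0])
print(f"lambda1 = {l1:.4f}, lambda2 = {l2:.4f}")   # l2 != 0: not canonical
```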
 
  • #81
vanhees71 said:
for a classical many-body system the a-priori measure is the phase-space volume
If you don't know the exact number of particles, the MEM gets the right result only if you use the correct weights; otherwise you get the wrong entropy of mixing.

Again you need to know which formula works before you can trust the MEM.
 
  • #82
A. Neumaier said:
Suppose you have measured a sample of 1000 values of the internal energy of a system in a canonical ensemble. This is the complete information you can get from experiment. (No amount of experimentation can give you more than a sample.) You cannot apply the MEM.

But you can compute from the measurement the mean and variance of the sample and pretend that this is the complete information. Now you can apply the MEM and obtain not the canonical ensemble but one with density operator ##\exp(-## quadratic in ##H)##, with a small but nonzero quadratic term, since the sample mean and variance do not agree with the mean and variance of the exact distribution. Failure of the MEM, in spite of consistent information. Of course it is incomplete information, but what would be complete information?

You need to pretend that the mean alone is the complete information (although in fact you know that you have much more detailed information). Then, and only then, do you get the correct distribution from the ex cathedra MEM. Thus you essentially need to assume the result to find the result.

Compare with my ex cathedra Definition 9.1. It always produces the correct distribution, even in complicated situations (see Section 10.1), since extensivity has a well-specified physical meaning.
I don't know what you mean by saying that I cannot apply the MEM. Of course I can apply the MEM: taking the average energy, I determine the temperature by setting this value as the expectation value of the energy, which is the internal energy by definition.

You also answer yourself the question of how to choose the "relevant observation" in this specific standard case of equilibrium thermodynamics: you determine the MEM distribution by giving the expectation values of the (relevant) extensive variables, because what you have in mind to calculate is the thermodynamic limit. As I said, the MEM doesn't tell you what the correct relevant variables are. Those you have to determine otherwise, from the specific case you want to investigate.

I don't understand why your Definition 9.1 should always be the "correct distribution". It's the correct distribution for thermodynamics, but you just give the result without any justification from the underlying physics.
 
  • #83
A. Neumaier said:
If you don't know the exact number of particles, the MEM gets the right result only if you use the correct weights; otherwise you get the wrong entropy of mixing.

Again you need to know which formula works before you can trust the MEM.
Sure, if you don't take into account the indistinguishability of particles from quantum theory, you get the Gibbs paradox of classical thermodynamics. But this is not a problem of the MEM, which doesn't provide you with the correct counting of the "available states" but gives only the least-prejudiced statistical description given the information you feed it. As I said, if the information is the wrong one for the problem under consideration, the MEM doesn't guarantee to correct the mistake automatically.
 
  • #84
vanhees71 said:
I don't know what you mean by saying that I cannot apply the MEM. Of course I can apply the MEM: taking the average energy, I determine the temperature by setting this value as the expectation value of the energy, which is the internal energy by definition.
Then you don't use complete information. You arbitrarily select from the complete information that you have the mean energy, and only the mean energy, as complete information, and discard everything else. You could as well arbitrarily select only the variance, and get a completely different ensemble, one that yields wrong predictions.
vanhees71 said:
As I said, the MEM doesn't tell you what the correct relevant variables are.
Earlier you said that one has to use the complete information.

In fact one never needs complete information, but exactly that information that produces the correct density matrix, and to know what this is you need to know what the density operator is, up to some choice of parameters.
vanhees71 said:
As I said, the MEM doesn't tell you what the correct relevant variables are.
... while my Definition 9.1 tells you that. Thus it is superior to the MEM.
vanhees71 said:
I don't understand why your Definition 9.1 should always be the "correct distribution". It's the correct distribution for thermodynamics
So you understand why it gives the right distribution: Extensivity of my entropy operator requires that the contributions to the entropy are extensive, too, and this determines the relevant operators at a given description level.
 
  • #85
atyy said:
Yes, in QFT the path integral is over field configurations.
It's an "integral", but in the case of fermionic fields, can the Grassmann "integral" be viewed as a kind of sum?
 
  • #86
Demystifier said:
It's an "integral", but in the case of fermionic fields, can the Grassmann "integral" be viewed as a kind of sum?
In that case, it seems not - well, at least not in the usual way. I'm not sure whether the mathematicians have some way of making it into some sort of sum.
 
Likes: Demystifier
  • #87
Demystifier said:
It's an "integral", but in the case of fermionic fields, can the Grassmann "integral" be viewed as a kind of sum?
Yes, it's a sum, but with many minus signs. More like determinants.
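A small sketch of that remark (the matrix is an arbitrary example; sign conventions for Berezin integration vary): the Gaussian Berezin integral ##\int d\bar\theta\,d\theta\, e^{-\bar\theta A \theta} = \det A## expands into exactly the Leibniz sum over permutations - a finite sum with many minus signs:

```python
import numpy as np
from itertools import permutations

# Sketch: the Gaussian Berezin integral over n Grassmann pairs equals det(A)
# (up to sign conventions), and the Berezin rules expand it into exactly the
# Leibniz formula -- a finite signed sum over permutations. A is an arbitrary
# example matrix.

def perm_sign(p):
    """Sign of a permutation via inversion count."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def grassmann_gaussian(A):
    """int dtbar dtheta exp(-tbar.A.theta), expanded as a signed sum."""
    n = A.shape[0]
    return sum(perm_sign(p) * np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

A = np.array([[2.0, 1.0, 0.5],
              [0.0, 1.5, 1.0],
              [0.3, 0.0, 1.0]])
assert np.isclose(grassmann_gaussian(A), np.linalg.det(A))
```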
 
  • #88
Demystifier said:
It's an "integral", but in the case of fermionic fields, can the Grassmann "integral" be viewed as a kind of sum?
In general it is an infinite sum. Only for fermionic fields on a lattice is it a finite sum.
 
Likes: vanhees71
  • #89
WernerQH said:
Yes, it's a sum, but with many minus signs. More like determinants.
A. Neumaier said:
In general it is an infinite sum. Only for fermionic fields on a lattice is it a finite sum.
I meant that it's not a sum like in a Riemann or Lebesgue integral - or can those sums for Grassmann integration be considered generalizations of those more common integrals?

Also, the bosonic Feynman path integral is analogous to those for classical random processes. As far as I know, there are no classical random processes that are like Grassmann integrals. So that's another way in which I think Grassmann integrals don't allow the same "picture" (fluctuations of classical configurations) as bosonic fields.
 
  • #90
A. Neumaier said:
In general it is an infinite sum. Only for fermionic fields on a lattice it is a finite sum.
The point is that, for a fixed spacetime point, a Grassmann integral is not a sum. Now if you meant that taking contributions from all spacetime points is a sum over all spacetime points, well, that's actually not a sum but a product.
 
