Rigorous Quantum Field Theory.

  • #151
DarMM said:
Okay, here is a model which is exactly solvable nonperturbatively and is under complete analytic control. Also virtually every aspect of this model is understood mathematically.

Now in this model, the field does not transform covariantly. The reason I'm using this model is to show that even with this property removed there are still different Hilbert spaces.

Firstly, the model is commonly known as the external field problem. It involves a massive scalar quantum field interacting with an external static field.

The equations of motion are:
\left( \Box + m^{2} \right)\phi\left(x\right) = gj\left(x\right)

Now I'm actually going to start from what meopemuk calls "QFT2". The Hamiltonian of the free theory is given by:
H = \int{dk \omega(k)a^{*}(k)a(k)}
Where a^{*}(k),a(k) are the creation and annihilation operators for the Fock space particles.

In order for this to describe the local interactions with an external source, I would modify the Hamiltonian to be:
H = \int{dk \omega(k)a^{*}(k)a(k)} + \frac{g}{(2\pi)^{3/2}}\int{dk\frac{\left[a^{*}(-k) + a(k)\right]}{\sqrt{2}\omega(k)^{1/2}}\tilde{j}(k)}

So far, so good.

Now the normal mode creation and annihilation operators for this Hamiltonian are:
A(k) = a(k) + \frac{g}{(2\pi)^{3/2}}\frac{\tilde{j}(k)}{\sqrt{2}\omega(k)^{3/2}}
A short calculation will show you that these operators have different commutation relations from the usual ones.

They bring the Hamiltonian into the form:
H = \int{dk E(k)A^{*}(k)A(k)},
where E(k) is a function describing the eigenspectrum of the full Hamiltonian.
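Explicitly, here is a minimal check of this step (my own aside, assuming a real, parity-even source so that \tilde{j}(k) is real; in that case E(k) = \omega(k)). With A(k) as defined above,

\omega(k)A^{*}(k)A(k) = \omega(k)a^{*}(k)a(k) + \frac{g}{(2\pi)^{3/2}}\frac{\left[a^{*}(k) + a(k)\right]\tilde{j}(k)}{\sqrt{2}\,\omega(k)^{1/2}} + \frac{g^{2}}{2(2\pi)^{3}}\frac{|\tilde{j}(k)|^{2}}{\omega(k)^{2}}

so that, integrating over k and using \tilde{j}(-k) = \tilde{j}(k) to match the a^{*}(-k) term above,

H = \int{dk\,\omega(k)A^{*}(k)A(k)} - \frac{g^{2}}{2(2\pi)^{3}}\int{dk\,\frac{|\tilde{j}(k)|^{2}}{\omega(k)^{2}}}

i.e. the interacting Hamiltonian is diagonal in the A's, up to a c-number that only shifts the zero of energy.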

Now, if you use Rayleigh-Schrödinger perturbation theory you obtain the interacting ground state as a superposition of free states:
\Omega = Z^{1/2} \sum^{\infty}_{n = 0} \frac{1}{n!}\left(-\frac{g}{(2\pi)^{3/2}}\int{dk\,\frac{\tilde{j}(k)}{\sqrt{2}\,\omega(k)^{3/2}}\,a^{*}(k)}\right)^{n} \Psi_{0}.
Where \Psi_{0} is the free vacuum.
Also Z = \exp\left[-\frac{g^{2}}{(2\pi)^{3}}\int{dk\,\frac{|\tilde{j}(k)|^{2}}{2\,\omega(k)^{3}}}\right]

Now for a field weak enough that:
\frac{\tilde{j}(k)}{\omega(k)^{3/2}} \in L^{2}(\mathbb{R}^{3})
then everything is fine. I'll call this condition (1).

However, if this condition is violated by a strong external field, then we have some problems.

First of all, A(k), A^{*}(k) are just creation and annihilation operators. They have different commutation relations, but essentially I can still use them to build a Fock basis, since I can prove there exists a Hilbert space with a state annihilated by all the A(k). Now this constructed Fock space always exists, no problem. Let's call this Fock space \mathcal{F}_{I}.

However, if condition (1) is violated, something interesting happens: Z vanishes. Now the expansion for \Omega is a sum of terms expressing the overlap of \Omega with free states. If Z=0, then \Omega has no overlap with, and hence is orthogonal to, all free states. The same can be shown for any interacting state.
So every single state in \mathcal{F}_{I} is completely orthogonal to all states in \mathcal{F}, the free Fock space. Hence the two Hilbert spaces are disjoint.
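As a rough numerical illustration of how the overlap dies (my own sketch, not part of the argument above; the point-like source \tilde{j}(k) = 1 and the momentum cutoff are assumptions chosen to make condition (1) fail):

import numpy as np

# Toy check of Z = exp[ -(g^2/(2 pi)^3) * Int d^3k |j(k)|^2 / (2 w(k)^3) ]
# for an assumed point-like source, j(k) = 1. The angular integral gives 4 pi,
# leaving a radial integral that diverges (logarithmically) as the cutoff grows.

m, g = 1.0, 1.0                             # mass and coupling, arbitrary units

def Z(cutoff, n=200000):
    k = np.linspace(1e-6, cutoff, n)
    w = np.sqrt(k**2 + m**2)
    integrand = 4*np.pi * k**2 / (2*w**3)   # |j(k)|^2 = 1
    I = np.sum(integrand) * (k[1] - k[0])   # simple Riemann sum
    return np.exp(-g**2 / (2*np.pi)**3 * I)

for cutoff in [10, 100, 1000, 10000]:
    print(f"cutoff = {cutoff:>6}:  Z = {Z(cutoff):.4f}")

# Z keeps falling as the cutoff is removed, so in the limit the interacting
# ground state has vanishing overlap with every free Fock state.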

So the Fock space for a(k), a^{*}(k) is not the same Hilbert space as the Fock space for A(k), A^{*}(k). They are both still Fock spaces; however, the A(k), A^{*}(k) have a different algebra, so it is the Fock representation of a new algebra. If one still wanted to use a(k), a^{*}(k) and their algebra, one would need to use a non-Fock representation in order to be in the correct Hilbert space.

This is what I meant by my previous comment:
The interacting Hilbert space is a non-Fock representation of the usual creation and annihilation operators
or
It is a Fock representation of unusual creation and annihilation operators.


I hope this post helps.

I agree that "interacting vacuum" \Omega has zero overlap with each and every free state. However, I don't agree that this fact implies that \Omega is outside the free Fock space.

This may sound like a paradox, but my point is that "even if all components of a vector are zero, the vector itself could be non-zero". The important thing is that the number of components is infinite. So, an infinite number of (virtually) zero components can add up to a non-zero total value.

In fact, here we have an uncertainty of the type "zero"x"infinity". In order to resolve this uncertainty we need to take a proper limiting procedure. I.e., we should slowly move the interaction from "weak" to "strong" regime and look not only at individual components of the \Omega vector, but also at the total sum of squares of these components (which is a measure of the overlap of \Omega with the free Fock space). Then we will see that each particular component indeed tends to zero, but the total sum of squares remains constant (if the transformation is unitary). This means that \Omega does not leave the free Fock space even if the interaction is "strong".
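Here is a single-mode toy version of the bookkeeping I have in mind (my own illustration, not part of DarMM's model): for one oscillator pushed into a coherent state, every individual overlap with a number state shrinks as the displacement grows, yet the squares always sum to one. What is under dispute is only the limit in which infinitely many modes contribute and \sum_k |z_k|^2 diverges.

import numpy as np
from math import lgamma, log

# Coherent state |z> = exp(-|z|^2/2) * sum_n z^n / sqrt(n!) |n>.
# |<n|z>|^2 = exp(-|z|^2) |z|^(2n) / n!  -- a Poisson distribution in n.
for z in [1.0, 3.0, 6.0]:
    n = np.arange(0, 200)
    logp = -z**2 + 2*n*log(z) - np.array([lgamma(k + 1) for k in n])
    probs = np.exp(logp)
    print(f"|z| = {z}: largest component = {probs.max():.3e}, sum = {probs.sum():.6f}")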

Eugene.
 
  • #152
strangerep said:
Patience, like politeness, is a virtue.
I agree. I would love to be a gentleman.
But I would have written something similar to Eugene's post #140, so I won't repeat it.
No need to repeat it indeed. But in this particular problem one can go further than writing a symbolic solution of the Schroedinger equation. Eugene is a great guy, I like and respect him very much; he wrote a book on QED. The only chapter missing is the one on coherent states.

I have "Quantum Electrodynamics" by Akhiezer-Berestetski, the last russian edition (1981). There are though many previous editions with the same treatment of time evolution in QED and in particular the coherent states. I will give some formulae from it.

The S-matrix as an N-ordered operator is [1]:

S = T_{\psi}\left\{\exp\left[-\tfrac{1}{2}\int j^{\mu}(x')D^{c}(x'-x'')j_{\mu}(x'')\,d^{4}x'\,d^{4}x''\right] N_{A}\exp\left(i\int j^{\nu}(x)A_{\nu}(x)\,d^{4}x\right)\right\}

Here T_{\psi} is the chronological ordering of the \psi-operators, N_{A} is the normal ordering of the A-operators, and D^{c} is the causal Green's function of the d'Alembert equation.

If the electron motion can be considered as known (j is a given function of space-time, not an operator), then the chronological ordering of the \psi-operators is inessential and the S-matrix becomes:

S = \exp\left[-\tfrac{1}{2}\int j^{\mu}(x')D^{c}(x'-x'')j_{\mu}(x'')\,d^{4}x'\,d^{4}x''\right] N_{A}\exp\left(i\int j^{\nu}(x)A_{\nu}(x)\,d^{4}x\right)

Here only the fields A_{\nu} are operators. Thus the solution is:

|\Psi\rangle = \exp\left(i\int j^{\nu}(x)A_{\nu}(x)\,d^{4}x\right)|0\rangle

If you represent the operator A_{\nu} as an expansion in "plane waves" over k, then the integral will give you the Fourier images of the current and the photon creation/annihilation operators. The solution is hence a coherent state of the photon field.
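To spell out one mode (my shorthand, with box normalization in a volume V and up to Fourier-sign conventions; not a formula from the book): for a photon mode of momentum k and polarization \lambda, the exponent reduces to a pure displacement,

i\int{j^{\nu}(x)A_{\nu}(x)\,d^{4}x} ~\to~ z\,a^{+}_{k\lambda} - z^{*}a_{k\lambda}, \qquad z = \frac{i\,\epsilon^{*\nu}_{\lambda}(k)\,\tilde{j}_{\nu}(\omega_{k},\mathbf{k})}{\sqrt{2\omega_{k}V}},

so |\Psi\rangle is the coherent state |z\rangle of that mode, with z given by the on-shell Fourier component of the current.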

In the case of our massive scalar boson field \phi, we obtain a similar result for the solution of the problem (given first by DarMM).

Now, we can choose the current according to our physical situation. No problems with integrability arise. We just have to write the normalizing volumes explicitly in the fields and currents.

[1] S. Hori, Progr. Theor. Phys., V. 7, p. 578 (1952).
 
  • #153
strangerep said:
You keep assuming that j(x) is an electric current, or similar. But in this problem j(x) is just a time-independent external background field. No extra assumptions concerning its nature were made in the original problem statement.

It is not only a physical current but also a time-dependent one. DarMM made a mistake. A time-independent current cannot create propagating quanta. It is the four-dimensional Fourier transform of j(x) which is involved.

I also find it quite misleading to call Ω a "physical vacuum" or "interacting ground state". It is not an eigenstate of the total Hamiltonian whatever representation you use. If the energy is uncertain, then no variable change can make it certain, and in particular zero. It is clear even without calculations. No basis vector change can modify the state itself. It is another mistake of DarMM's.

But let us see it closer:

The coherent state is (one-mode state):

|z\rangle = \exp(za^{+} - z^{*}a)\,|0\rangle, \qquad a|z\rangle = z|z\rangle.

The combination (za^{+} - z^{*}a) is invariant under the "shift" variable change a = A + z^{*}, where z is a complex number. The coherent state |z^{*}\rangle is the A-operator vacuum. But what about the Hamiltonian?

A^{+}A = a^{+}a - (a^{+}z^{*} + az) + |z|^{2}

It looks like the original Hamiltonian, but we see that whatever z is, the term |z|^{2} is unavoidable. So DarMM's variable change may not be correct - it does not contain the corresponding shifts in spectra.

a^{+}a - (a^{+}z^{*} + az) = A^{+}A - |z|^{2}

What a rigorous QFT! We have to verify everything.

If we make the variable change a = A + z, then the coherent state |z\rangle is the A-operator vacuum, but the spectrum is shifted anyway:

a^{+}a - (a^{+}z + az^{*}) = A^{+}A - |z|^{2}.

It is correct since acting on the solution with H in any representation should not give zero.

In this respect I would like to make precise that the Hamiltonians H(a) and H_{0}(a) are both good for describing the exact solution. In the first case H gives the solution proceeding from the source as a boundary condition (known antenna current). In the second case we have to measure the free field state in order to define the superposition coefficients in it. In both cases the solution is expressed in terms of the eigenvectors |n\rangle of H_{0}(a):

|z\rangle = \exp(za^{+} - z^{*}a)\,|0\rangle = e^{-|z|^{2}/2}\, e^{za^{+}}|0\rangle = e^{-|z|^{2}/2}\sum_{n=0}^{\infty} \frac{z^{n}}{\sqrt{n!}}\,|n\rangle.

The Hamiltonian H_{0}(a) alone, however, is not sufficient for that, since it allows one to define its eigenvectors |n\rangle but not the coefficients in a particular field state. It is in this sense that I thought of a and A as alike (equally good) operators.

But since the total Hamiltonian is in fact different from DarMM's (it is not H_{0}(A)), the actual meaning of the operators A is also different from what I supposed.

strangerep said:
... Nevertheless, the solution can be expressed in another unitarily-inequivalent Hilbert space, which is the point of all this.
Very interesting to see this solution (expressed solely in terms of physical vacuums? No A-quanta? Are they then physical if the solution never contains them?).
 
  • #154
meopemuk -> My understanding is that your main problem with QFT as it is taught in conventional books is the unphysical presence of bare particles. Is that correct? If that's the case, then things can be settled pretty easily. Even when dealing with QFT non-rigorously, if you have an interacting theory, it does not describe bare particles. Bare particles, with bare masses and charges, are just a mathematical, artificial, non-physical construct that arises in perturbation theory. The real problem is that we do not know how to compute quantities non-perturbatively, even in those cases where the QFT is defined rigorously, like phi^4 in 2D. And so what we do is perturbation theory. Bare particles are thus nothing but a crude approximation to real particles, much like Earth's orbit computed with the Sun's presence exclusively is an approximation to its real orbit. Since we don't know how to compute things exactly, we use calculational tricks, and a bare particle is nothing but the free (unphysical) theory's field quantum - a crude approximation to a real particle.

Keep also in mind that when people say they want to construct QFT rigorously it does not mean they want to extract physical quantities non-perturbatively - in most cases it's probably impossible anyways. The goal is to reach a point where things are defined and understood like, say, classical QED, which is well defined because it's given in terms of well defined mathematical objects, classical fields, and the field equations are well posed. Once you've said that, you're still left with the task of computing various quantities of interest. Not an easy task at all.

Maybe someone could enlighten me a bit, but I seem to remember that even in classical field theory, when self forces are computed, one uses some sort of renormalization? Is that correct?

A question about the dressed particle approach. You said that a good interaction term should contain at least 2 creation and 2 annihilation operators, that are normal ordered. If that's so, how would you describe in this approach the decay of particles? Something like one unstable particle decaying into 2 lighter ones? Ordinarily this is achieved with a product of three operators.

bob_for_short -> At some point you say that a Dirac delta squared is infinity and that that's what it means. I do not agree, and this is not really "up for discussion" as it is in all the books about distributions. It's established mathematical knowledge. The square of a Dirac delta is simply mathematically ill defined, just like division by zero.
 
  • #155
DrFaustus said:
Maybe someone could enlighten me a bit, but I seem to remember that even in classical field theory, when self forces are computed, one uses some sort of renormalization? Is that correct?
Yes, that is correct. The electron mass "renormalization" appeared first in the Classical Electrodynamics of one electron interacting with the EMF. Even after the mass renormalization the new equation turned out to have non-physical exact solutions.
DrFaustus said:
bob_for_short -> At some point you say that a Dirac delta squared is infinity and that that's what it means. I do not agree, and this is not really "up for discussion" as it is in all the books about distributions. It's established mathematical knowledge. The square of a Dirac delta is simply mathematically ill defined, just like division by zero.
In my experience I have met two types of delta-function-squared treatment. One is very well known in QFT and concerns the global energy-momentum conservation law. There \delta^{4}(0) is replaced with the product TV (total time times total volume). Then one calculates things per unit time and per unit volume.

The second type just gives infinity, and I get rid of it by reformulating the problem in better terms. This is exactly the type encountered in loops and vertices. It is just infinity. To get rid of these infinities one subtracts the infinite contributions to the masses (electron and photon mass renormalization) and to the charge (the equation's coupling constant). This subtraction is a discarding of terms from expressions, not a reformulation in better terms. That is why this "prescription" does not always work.
 
  • #156
DrFaustus said:
So bare particles are nothing but a crude approximation to real particles.

I don't think that bare particles can be regarded even as a "crude approximation" to real particles. They are completely different. Let's just say that infinite mass and charge (of bare particles) is not a reasonable approximation to the finite mass and charge of physical particles. Moreover, "bad" interactions between bare particles are completely different from "good" interactions in the physical world. So, the transition between the bare and dressed pictures is a complete change of the point of view, rather than a perturbative refinement.


DrFaustus said:
A question about the dressed particle approach. You said that a good interaction term should contain at least 2 creation and 2 annihilation operators, that are normal ordered. If that's so, how would you describe in this approach the decay of particles? Something like one unstable particle decaying into 2 lighter ones? Ordinarily this is achieved with a product of three operators.

That's a good question. For simplicity I was talking about simple theories (such as QED) in which the basic particles are stable. However, decays can be taken into account as well. Then the classification should be adjusted slightly. In the complete classification, "good" operators also include products of 1 creation and N annihilation operators (or N creation and 1 annihilation operators), where N>1. In contrast to "bad" operators of the same structure, in these "good" operators the (free-particle) energy of the one unstable particle can be higher than the total energy of the N decay products, so that the decay is energetically permitted.

In true "bad" terms a^*bc+b^*c^*a the energy of the particle "a" is always lower than the sum of energies of "b" and "c".

In addition to the mentioned classes of interactions, there is another class, which completes the full classification. If an interaction a^{*}b does not violate conservation rules, then it describes oscillations between two particle types "a" and "b". For example, neutrino oscillations.

Eugene.
 
  • #157
meopemuk said:
I don't think that bare particles can be regarded even as a "crude approximation" to real particles. They are completely different. Let's just say that infinite mass and charge (of bare particles) is not a reasonable approximation to the finite mass and charge of physical particles.
Infinite bare values indeed make everything senseless. But at what stage do we discover that we are in fact dealing with bare particles, and from where does it follow that their masses and charges are infinite? First, it is the perturbative corrections that are infinite, not the initial masses and charges. There are two ways of discarding them:

1) A simple discarding. Then the initial masses and charges remain intact and physical (observable values that in some units can be put equal to unity). But this is obviously wrong mathematically. P. Dirac pointed out precisely this weakness of the mathematics.

2) Declare the initial masses and charges to be infinite in order to add the infinite corrections to them without discarding, and declare the resulting sum to be physical. Then no discarding is involved, but the physics is ruined: our constants that served well in the first Born approximation are declared not to be themselves! Our particles are not observable! What an absurd solution! Technically it is equivalent to discarding, but now it is not the mathematicians who are wrong but the poor constants. It is they who are guilty now. There is further rubbish, like the dependence of the bare parameters on the physical ones and on the cut-off. This dependence is invented just in order to repeat the discarding in higher orders.

In both cases the violation of good sense is obvious. No change of the Hilbert space helps here.
 
  • #158
Bob_for_short said:
It is not only a physical current but also a
time-dependent one. DarMM made a mistake. [...]
No he didn't. He showed one particular example, and you show a
different example. Let us therefore speak of "DarMM's example" and
"Bob's example" to avoid confusion between the two.
I am discussing DarMM's example.

I also find it quite misleading to call Ω a "physical vacuum" or "interacting
ground state". It is not an eigenstate of the total Hamiltonian whatever
representation you use.
Yes it is.

Proof:

Let
U[z] ~:=~ exp(za^* - \bar{z} a)
and
|z\rangle ~:=~ U[z]\, |0\rangle
where
a\,|0\rangle = 0 ~.

Then the transformation
a ~\to~ A = a - z
is equivalent to
a ~\to~ A = U[z]\,a \, U^{-1}[z] ~.
(Proof of the last statement is left as an exercise for the reader.)

Therefore
A \, |z\rangle ~=~ \left( U[z]\,a \, U^{-1}[z] \right) U[z]\, |0\rangle ~=~ U[z]\,a \, |0\rangle ~=~ 0 ~.
I.e., A annihilates |z\rangle. Since the full Hamiltonian H is expressible
in the form A^*A, it also annihilates |z\rangle.
Thus, |z\rangle is an eigenstate of H with eigenvalue 0,
and hence qualifies as a vacuum state for H.

[Edit: Per Bob_for_short's request below...
The vector space constructed by acting on |z\rangle with
(polynomials of) the A^*(k) (and completing as usual to
get a Hilbert space) forms the solution space for the problem.]
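(For anyone who wants to see this with matrices: a quick single-mode numerical check, my own toy verification rather than part of the proof, using a truncated oscillator basis.)

import numpy as np
from scipy.linalg import expm

N = 60                                       # truncation dimension
a = np.diag(np.sqrt(np.arange(1, N)), 1)     # annihilation operator: a|n> = sqrt(n)|n-1>
z = 0.7 + 0.3j

U = expm(z * a.conj().T - np.conj(z) * a)    # U[z] = exp(z a^* - z^* a), unitary
vac = np.zeros(N, dtype=complex); vac[0] = 1.0
coh = U @ vac                                # |z> = U[z]|0>

A = U @ a @ U.conj().T                       # U[z] a U^{-1}[z]
# compare on the low-lying block, where truncation effects are negligible:
print(np.allclose(A[:20, :20], (a - z*np.eye(N))[:20, :20]))   # True: a -> a - z
print(np.linalg.norm((a - z*np.eye(N)) @ coh))                 # ~ 1e-15: (a - z)|z> = 0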

It looks like the original Hamiltonian but we see that whatever z is, the term
|z|^2 is unavoidable. So DarMM's variable change may not be
correct - it does not contain the corresponding shifts in spectra.
The constant shift |z|^2 is just a redefinition of the zero value of energy,
which is physically acceptable because we only measure energy differences.
 
  • #159
This is all that you can supply? Where is the problem solution? Is it a pure physical/interacting vacuum? Is its energy certain and equal to zero? How can you measure the energy difference if it always equals zero?
 
  • #160
meopemuk said:
I agree that "interacting vacuum" \Omega has zero overlap with each and every free state. However, I don't agree that this fact implies that \Omega is outside the free Fock space.

This may sound like a paradox, but my point is that "even if all components of a vector are zero, the vector itself could be non-zero". The important thing is that the number of components is infinite. So, an infinite number of (virtually) zero components can add up to a non-zero total value.
There's a big difference between "zero" and "almost zero". An infinite number of exactly zero
components still adds up to zero, whereas an infinite number of "almost zero" components
could add up to anything.

In fact, here we have an uncertainty of the type "zero"x"infinity". In order to resolve this uncertainty we need to take a proper limiting procedure. I.e., we should slowly move the interaction from "weak" to "strong" regime and look not only at individual components of the \Omega vector, but also at the total sum of squares of these components (which is a measure of the overlap of \Omega with the free Fock space). Then we will see that each particular component indeed tends to zero, but the total sum of squares remains constant (if the transformation is unitary).
That's only true if the transformation doesn't involve an unbounded operator (in which
case it is only formally unitary).

This means that \Omega does not leave the free Fock space even if the interaction is "strong".
That depends on the details of the theory. In some cases, even the theory with an infinitesimal
coupling constant is not in the free Fock space. (DarMM's example doesn't show that, because
distinct Hilbert spaces only arise there if z(p) is not square-integrable.)

I guess this means we need a nastier example (maybe superconductivity with Bogoliubov
transforms) which shows that the Hilbert spaces are distinct even for infinitesimal
couplings. (sigh)
 
  • #161
Frankly, Strangerep, did DarMM's "Rigorous QFT" result in a complete zero?
 
  • #162
Bob_for_short said:
This is all that you can supply? Where is the problem solution?
See my edit in post #158.
 
  • #163
strangerep said:
See my edit in post #158.
Edit: Per Bob_for_short's request below...
The vector space constructed by acting on |z\rangle with
(polynomials of) the A^*(k) (and completing as usual to
get a Hilbert space) forms the solution space for the problem.
The solution space (basis) could be useful if the solution involved at least one excited state (not vacuum). But according to you, the solution is expressed only via vacuums (it is a product of vacuums, posts #150 and #158):

|\Psi\rangle = |0\rangle, \qquad H|\Psi\rangle = 0.
strangerep said:
The constant shift |z|^2 is just a redefinition of the zero value of energy, which is physically acceptable because we only measure energy differences.
And how about the total field momentum? With your discarding of the shifts, it is also equal to zero.

You propose to consider the flux of energy from a laser to be zero, don't you? Do you have any sense of physics?
strangerep said:
No he didn't. He showed one particular example, and you show a different example. Let us therefore speak of "DarMM's example" and "Bob's example" to avoid confusion between the two. I am discussing DarMM's example.
A time-independent current does not have any frequency in its Fourier spectrum. DarMM's example contains a time-dependent current.

I am really disappointed. I did not expect that from you, Strangerep.
 
  • #164
Bob_for_short said:
The solution space could be useful if the solution involved at least one excited state (not vacuum). But according to you, the solution is expressed only via vacuums.
The state
A^*(k) \, |z\rangle
is one excited state. Similarly, higher-order polynomials in the A's acting on |z\rangle
are other excited states.

Of course the solution is expressed "only via vacuums".
A Fock space (aka "representation") is constructed by choosing a distinguished
vacuum vector, and then acting on it with creation ops to generate other states.
 
  • #165
strangerep said:
There's a big difference between "zero" and "almost zero". An infinite number of exactly zero
components still adds up to zero, whereas an infinite number of "almost zero" components
could add up to anything.

This is exactly my point. We need to understand clearly, which quantities are "exactly zero" and which are "almost zero" (or "exactly infinite" and "almost infinite").

The same uncertainty arises in the treatment of plane waves in ordinary QM. In order to get a normalized plane wave one needs to multiply \exp(ipx) by a factor 1/"square root of the volume of space". One can say that the volume of space is "exactly infinite", and that its inverse is "exactly zero". So, normalized plane waves formally do not exist (they are "exactly zero"). At least, they do not belong to the "normal" Hilbert space, and the operator of momentum is not a valid observable in this Hilbert space. This is the same kind of logic that leads you to the conclusion that interacting states in QFT do not belong to the free Fock space, and this logic forces you to introduce a separate (orthogonal) interacting Hilbert space to accommodate the interacting states.

I am not saying that this logic is bad or wrong. I am saying that this logic seems unsatisfactory to me. Intuitively, I would like to have a theory in which both the position operator and the momentum operator (and their eigenfunctions) can coexist peacefully in the same space of states. Similarly, I would like to have a QFT in which both non-interacting and interacting states can coexist. I think that such a theory would need to modify our (presently primitive) notions of "zero" and "infinity". I hope that non-standard analysis is a good candidate framework for such a theory. Currently this is no more than a pure hope without any supporting evidence.

Eugene.
 
  • #166
strangerep said:
There's a big difference between "zero" and "almost zero". An infinite number of exactly zero components still adds up to zero, whereas an infinite number of "almost zero" components could add up to anything.
Yes, and I had such an experience in my practice. It was an extreme sensitivity of a spectral sum to its terms (very slow convergence): infinitesimal changes of each spectral term gave a finite change of the whole spectral sum.
 
  • #167
meopemuk said:
I am not saying that this logic is bad or wrong. I am saying that this logic seems unsatisfactory to me. Intuitively, I would like to have a theory in which both the position operator and the momentum operator (and their eigenfunctions) can coexist peacefully in the same space of states. Similarly, I would like to have a QFT in which both non-interacting and interacting states can coexist. I think that such a theory would need to modify our (presently primitive) notions of "zero" and "infinity". I hope that non-standard analysis is a good candidate framework for such a theory. Currently this is no more than a pure hope without any supporting evidence.
Unfortunately it isn't that easy. Any modification of the Hilbert space framework results in a theory quite different from QM and missing some of its properties. Remember a complex Hilbert space is quite a specific structure mathematically and not that flexible.

To take your plane-wave example, plane-waves aren't an element of the Hilbert space of a non-relativistic particle, so a prediction of standard Hilbert space QM would be that there are no states of definite momentum. If you modified things to allow them, then they would become a physically possible state, however nobody has ever seen a state of definite momentum, so why would we do this?

Even if you object to different Hilbert spaces for free bare particles and physical particles, different Hilbert spaces are needed for QFTs at different temperatures and this is indisputable. However I still think a different Hilbert space for bare particles and interacting particles makes perfect sense, since you cannot prepare a free bare state given a collection of physical states. It's impossible, so just like the plane-wave states I have no idea why you would modify QM to allow this.

Also I have no idea why a theory like QFT would modify our notions of infinity or why the current notions are primitive.
 
  • #168
DarMM said:
Unfortunately it isn't that easy. Any modification of the Hilbert space framework results in a theory quite different from QM and missing some of its properties. Remember a complex Hilbert space is quite a specific structure mathematically and not that flexible.

I agree that Hilbert space postulates (i.e., quantum logic) are untouchable. However, I see a chance to generalize these postulates by using non-standard analysis (see below).

DarMM said:
To take your plane-wave example, plane-waves aren't an element of the Hilbert space of a non-relativistic particle, so a prediction of standard Hilbert space QM would be that there are no states of definite momentum. If you modified things to allow them, then they would become a physically possible state, however nobody has ever seen a state of definite momentum, so why would we do this?

By the same token, nobody has ever seen states with definite position. Shall we then say that position is not a good observable too?

DarMM said:
Even if you object to different Hilbert spaces for free bare particles and physical particles, different Hilbert spaces are needed for QFTs at different temperatures and this is indisputable.

I don't know much about QFT at nonzero temperatures. I am still trying to understand simple QFT with 1-2-3 particles, where temperature does not play any role.

DarMM said:
Also I have no idea why a theory like QFT would modify our notions of infinity or why the current notions are primitive.

Are you familiar with the idea of non-standard analysis?

http://en.wikipedia.org/wiki/Non-standard_analysis

It attempts to enrich the mathematical notions of zero and infinity. In my opinion, QM/QFT can benefit from such a generalized approach. My guess is that this could be a more acceptable alternative to "non-equivalent representations of CCR".

Eugene.
 
  • #169
meopemuk said:
By the same token, nobody has ever seen states with definite position. Shall we then say that position is not a good observable too?
Yes, nobody has ever seen states of definite position. However, I don't understand why this would make position a bad observable. When you model what's going on in an experiment, the observable we are looking at is one of the bounded projections of position, and these are perfectly good observables, whose statistical spread of observations matches results.

meopemuk said:
Are you familiar with the idea of non-standard analysis?

http://en.wikipedia.org/wiki/Non-standard_analysis

It attempts to enrich the mathematical notions of zero and infinity. In my opinion, QM/QFT can benefit from such generalized approach. My guess is that this could be a more acceptable alternative to "non-equivalent representations of CCR".
Well first of all, results in non-standard analysis on Hilbert spaces produce nothing that can't really be done with the standard methods. An example is provided in the article you link to.

Also, even when QM or QFT is done with non-standard analysis, the conclusions reached aren't the same as yours. Take the work of C. E. Francis: he shows that you can include the momentum eigenstates in the Hilbert space; however, they aren't part of the physical Hilbert space. That is, the physical subspace of his non-standard-analysis Hilbert space is the Hilbert space of standard analysis.

Also I should mention that nonstandard analysis is not really that well accepted in mathematics. Not that people disagree with it, it's just not certain if it adds anything new.

From my point of view I don't see why we should get rid of different Hilbert spaces. In the finite temperature and finite density case it is necessary; for example, all Fock space states have zero density, so you would need a new Hilbert space. Also Glimm and Jaffe's work (and the work of others) shows that this holds even in the case of zero temperature and density when you have interactions.

I don't agree that this should be replaced by a framework based on mathematics that hasn't shown any great utility. Especially because when it is applied what you describe doesn't happen, but instead the standard framework still holds and a few calculations are speeded up somewhat.

The rôle of different Hilbert spaces has a use in spontaneous symmetry breaking, finite temperature and density and different phases associated with phase transitions. Is it really that much of a stretch to find that it happens with interactions?
 
  • #170
DarMM,

apparently you know more about non-standard analysis than I do. So I should stop arguing.

Besides, the argument about different Hilbert spaces for bare and interacting particles looks purely academic to me. All particles existing in nature are interacting. Bare particles do not exist, so their Hilbert space should not be that interesting.

Eugene.
 
  • #171
meopemuk said:
DarMM,

apparently you know more about non-standard analysis than I do. So I should stop arguing.

Besides, the argument about different Hilbert spaces for bare and interacting particles looks purely academic to me. All particles existing in nature are interacting. Bare particles do not exist, so their Hilbert space should not be that interesting.

Eugene.
Yes, true. Of course that is not the point. All I'm stating is that they have different Hilbert spaces, you can find this interesting or boring if you wish. There are a few reasons why one may find it physically interesting.
For example, in QM you can prepare an anharmonic eigenstate from a harmonic system, since all QM systems live in the same Hilbert space, but this procedure cannot be carried out in QFT.

However the focus of this thread is mathematical rigour. The reason why it becomes important then is because if the interacting and bare theories live in different Hilbert spaces then perturbation theory is harder to justify and you can't appeal to the same theorems to show that it works or analytically estimate its convergence.
For example in QED when you work to order \mathcal{O}(e^{4}), you ignore the higher terms because they are "small" compared to the ones you are interested in. As a physicist this needs no further comment, however mathematically you might ask "how do you know they are smaller?". To answer this question and to show that the terms you've included are significant you have to have analytic control of perturbation theory. In QM this is easy, a few theorems of Kato and some methods from Reed and Simon and you're done. For QFT it's much harder and to do it you must keep this different Hilbert space issue under control. Again if you've no interest in rigour, no problem. However this is rigorous QFT, putting everything on a sound mathematical footing.

Also if you want to prove a quantum field theory exists, since the only theory we can solve analytically and gain analytic control over is the free bare Hilbert space, you have to start from there to construct nonperturbatively the interacting model and this different Hilbert space stuff is a crucial issue in the construction. However, once again, the mathematical existence of a QFT is a question of rigour. One you might not be interested in, however the thread is about rigorous quantum field theories.

I must say that the interest in the bare Hilbert space is not for itself, but rather it is the first solid step on the way to rigorously demonstrating things about the interacting theory.
 
  • #172
DarMM said:
Also if you want to prove a quantum field theory exists, since the only theory we can solve analytically and gain analytic control over is the free bare Hilbert space, you have to start from there to construct nonperturbatively the interacting model and this different Hilbert space stuff is a crucial issue in the construction. However, once again, the mathematical existence of a QFT is a question of rigour. One you might not be interested in, however the thread is about rigorous quantum field theories.

I must say that the interest in the bare Hilbert space is not for itself, but rather it is the first solid step on the way to rigorously demonstrating things about the interacting theory.

I think that the connection between bare and interacting theories depends very much on what kind of interaction is assumed to exist.

In most quantum field theories (for example, in QED where the interaction has the form jA) it is assumed that "bad" interaction Hamiltonians (those which allow creation of multi-bare-particle states from one-bare-particle states or from the vacuum) are OK. This leads to the situation in which the bare vacuum and 1-particle states are different from the physical vacuum and 1-particle states. This in turn leads to a host of problems associated with different Hilbert spaces needed for the free and interacting theories, with renormalization, etc. In my opinion, this is a wrong path, and the problems we meet on this path are artificial. I think that "bad" interactions are not present in nature and their theoretical investigation is not useful.

Alternatively, if we limit ourselves to "good" interactions only, then there is no difference between bare and interacting vacuum and 1-particle states. The situation is very similar to the one we have in ordinary QM. As you said, many issues (like existence of the perturbation expansion) can be solved easily. We don't need to worry about different Hilbert spaces for the free and interacting theories. We don't need to worry about renormalization as well. By throwing out "bad" interactions we are not diminishing the predictive power of the theory. One can show that a "good" theory can reproduce exactly the S-matrix of a renormalized "bad" theory.

Eugene.
 
  • #173
meopemuk said:
This may sound like a paradox, but my point is that "even if all components of a vector are zero, the vector itself could be non-zero". The important thing is that the number of components is infinite. So, an infinite number of (virtually) zero components can add up to a non-zero total value.

Eugene.

The little I know from axiomatic field theory seems to exclude exactly this. In the definition of the usual Fock space one starts from those vectors for which only a finite number of components are different from zero, the others being exactly zero. Then one actually also includes the limit points of sequences in this space, but for these elements too the sum of the components is well defined. Hence, e.g., states of nonzero temperature, which contain an infinite number of electron-hole excitations from the very beginning, do not lie in the Fock space, but form a separate Hilbert space.
 
  • #174
DrDu said:
In the definition of the usual Fock space one starts from those vectors, for which only a finite number of components are different from zero, the others being exactly zero.

Why would you make such an artificial assumption? Is there any physical reason? Or do the calculations just become easier?
 
  • #175
I would argue like this: The Fock space is a separable Hilbert space and all separable Hilbert spaces are isomorphic. If you choose a representation with infinite dimensional vectors, then all vectors in H have to fulfill this criterion in order that the scalar product is defined.
 
  • #176
DrDu said:
The Fock space is a separable Hilbert space...


Again, why do you think that the Hilbert space of a physical system must be separable? Is there any physical reason for that? Or does it simply make your math easier?

From the point of view of physics, it seems that even the Hilbert space of one particle must be non-separable. There is an uncountable number of points in 3D space. A distinct position eigenvector can be associated with each such point. These eigenvectors would form an uncountable orthonormal basis in the 1-particle Hilbert space.

Eugene.
 
  • #177
meopemuk said:
From the point of view of physics, it seems that even the Hilbert space of one particle must be non-separable. There is an uncountable number of points in 3D space. A distinct position eigenvector can be associated with each such point. These eigenvectors would form an uncountable orthonormal basis in the 1-particle Hilbert space.
Except that you don't have a scalar-valued inner product, and therefore don't have
a Hilbert space in the strict sense of that word. Instead, I guess you're thinking of the
usual delta-distribution valued inner product, which is fine, but it's not a Hilbert space.
It's actually a rigged Hilbert space. (Oops! There's that phrase again that you don't
want to talk about. :-)

But true nonseparable Hilbert spaces are interesting too. In Kibble's work on finding
particular dressing transformations to deal with IR divergences in QED in a more
logically satisfactory way than the standard approaches, he indeed constructs a very large
nonseparable space and solves the dynamical problem therein.
 
  • #178
strangerep said:
Except that you don't have a scalar-valued inner product, and therefore don't have
a Hilbert space in the strict sense of that word. Instead, I guess you're thinking of the
usual delta-distribution valued inner product, which is fine, but it's not a Hilbert space.
It's actually a rigged Hilbert space. (Oops! There's that phrase again that you don't
want to talk about. :-)

I guess you are talking about the usual practice to represent the position-space wave function of a state localized at point a by the delta function

\psi_a(x) = N\delta(x-a)

where N is a normalization factor. If we adopt this rule and represent the inner product by integration, then the product of two localized states is

\langle a | b \rangle = N^2 \int dx \delta(x-a) \delta(x-b) = N^2 \delta(a-b)

Then in order to have a normalized localized state \langle a| a \rangle = 1, we need to set N = 0, which is nonsense.

I think we can avoid this controversy by choosing the "square root of the delta function" as the wave function for localized states. Then the norm of any such localized state is

\langle a | a \rangle = \int dx \sqrt{\delta(x-a)} \sqrt{\delta(x-a)} = \int dx \delta(a-x) = 1

as required in quantum mechanics.


Eugene.
 
  • #179
meopemuk said:
I guess you are talking about the usual practice to represent the position-space wave function of a state localized at point a by the delta function

\psi_a(x) = N\delta(x-a)

where N is a normalization factor. If we adopt this rule and represent the inner product by integration, then the product of two localized states is

\langle a | b \rangle = N^2 \int dx \delta(x-a) \delta(x-b) = N^2 \delta(a-b)

Then in order to have a normalized localized state \langle a| a \rangle = 1, we need to set N = 0, which is nonsense.
Yes, but in RHS we don't need the N at all. The inner product is distribution-valued.

I think we can avoid this controversy by choosing the "square root of the delta function" as the wave function for localized states. Then the norm of any such localized state is

\langle a | a \rangle = \int dx \sqrt{\delta(x-a)} \sqrt{\delta(x-a)} = \int dx \delta(a-x) = 1

as required in quantum mechanics.
Is this "square-root of delta function" thing your own idea?
If not, could you give some references, please?

What happens in the following case?

\langle a | b \rangle = \int dx \sqrt{\delta(x-a)} \sqrt{\delta(x-b)} ~=~ (?) \int dx \sqrt{\delta(x-a) \; \delta(x-b)} ~=~ (?)

If a and b are unequal, then the product under the square-root sign is
exactly zero everywhere. So how does one interpret the original integral
rigorously to end up with something like \delta(a-b) ?

Edit: Oh, I just realized... you probably mean it's 0, right?
But if so, these states don't give a well-defined resolution of unity
of the form
\int\!dx |x\rangle\langle x|
because each \langle a | a \rangle is only defined on a
set of measure zero -- hence it's not a good Lebesgue integral.
 
  • #180
strangerep said:
Is this "square-root of delta function" thing your own idea?
If not, could you give some references, please?

I haven't seen this idea in the literature.


strangerep said:
What happens in the following case?

\langle a | b \rangle = \int dx \sqrt{\delta(x-a)} \sqrt{\delta(x-b)} ~=~ (?) \int dx \sqrt{\delta(x-a) \; \delta(x-b)} ~=~ (?)

If a and b are unequal, then the product under the square-root sign is
exactly zero everywhere. So how does one interpret the original integral
rigorously to end up with something like \delta(a-b) ?

In the "square root" approach

\langle a | b \rangle = 1 \quad \text{if } a = b
\langle a | b \rangle = 0 \quad \text{if } a \neq b

so \langle a | b \rangle can be interpreted as the *probability* amplitude of finding state a in the state b.

In the usual approach, where \langle a | b \rangle = \delta(a-b), the inner product \langle a | b \rangle should be interpreted as the *probability density* amplitude.
 
  • #181
meopemuk -> I think one should be very careful with the type of expression you wrote (I'm referring to your "square root of the delta"). The reason is again the problem with multiplication of distributions. First of all you'd have to define what kind of object your square root of the delta is, and I think you'd need it to be a distribution as well or you wouldn't be able to get the singularity structure correctly for its square, i.e. Dirac's delta. And then you'd have to take care of its product. And to do so, you'd have to first define how it acts so one could study its properties, and then show that (a) you can multiply two such objects and (b) the product is indeed the Dirac delta. I don't know if this can be done or not, but I suspect it cannot. So your expression would need some sort of correction.
 
  • #182
DrFaustus,

I can always define the delta function as a limit of a sequence of "normal" functions whose integral is equal to 1 and whose support is shrinking around one point.

Similarly, I can define the "square root of the delta function" as a limit of a sequence of "normal" functions such that integrals of their squares are equal to 1 and supports are shrinking.

Eugene.
 
  • #183
meopemuk said:
I can always define the delta function as a limit of a sequence of "normal" functions whose integral is equal to 1 and whose support is shrinking around one point.

Similarly, I can define the "square root of the delta function" as a limit of a sequence of "normal" functions such that integrals of their squares are equal to 1 and supports are shrinking.

You can do that, but the problem is that if you take the product of the "square root of the delta function" with any smooth square-integrable function, and integrate, the result is zero. So the "square root of the delta function" is equivalent to the null vector in Hilbert space of square-integrable functions.
 
  • #184
Avodyne said:
You can do that, but the problem is that if you take the product of the "square root of the delta function" with any smooth square-integrable function, and integrate, the result is zero. So the "square root of the delta function" is equivalent to the null vector in Hilbert space of square-integrable functions.

This is correct if you limit yourself to "smooth" functions only. The "square root of delta" is not smooth, but it *is* square-integrable, and the inner product of this function with itself is equal to 1. So, this function is not a null vector in the Hilbert space of "both smooth and non-smooth" square-integrable functions.

I don't think that we should ban non-smooth functions from our Hilbert space. If we do that then eigenfunctions of position and momentum are not permitted, and our theory lacks the important observables of position and momentum. This is not acceptable, in my opinion.

Moreover, the wave function is not a physical field or substance. It is just a mathematical measure of a probability distribution. There is no law forbidding the probability density from changing abruptly. So, nothing forbids the existence of discontinuous or non-smooth wave functions.

Eugene.
 
  • #185
meopemuk said:
This is correct if you limit yourself to "smooth" functions only. The "square root of delta" is not smooth, but it *is* square-integrable, and the inner product of this function with itself is equal to 1. So, this function is not a null vector in the Hilbert space of "both smooth and non-smooth" square-integrable functions.
No, Avodyne is referring to how it behaves as a distribution. As a distribution it vanishes, since its integral against a smooth function vanishes. It's nothing to do with its own smoothness.

I don't think that we should ban non-smooth functions from our Hilbert space. If we do that then eigenfunctions of position and momentum are not permitted, and our theory lacks the important observables of position and momentum. This is not acceptable, in my opinion.
We don't. The Hilbert space of standard QM has several non-smooth functions in it.

Also you should stop repeating the claim that if two operators have no eigenfunctions in the Hilbert space then they don't exist as observables in the theory. To be observables they just have to be self-adjoint on the Hilbert space, which position and momentum are. If their eigenfunctions aren't elements of the Hilbert space, it just means that there are no states with zero uncertainty in the value of that observable, which certainly doesn't mean they aren't observables.
 
  • #186
DarMM said:
Also you should stop repeating the claim that if two operators have no eigenfunctions in the Hilbert space then they don't exist as observables in the theory. To be observables they just have to be self-adjoint on the Hilbert space, which position and momentum are. If their eigenfunctions aren't elements of the Hilbert space, it just means that there are no states with zero uncertainty in the value of that observable, which certainly doesn't mean they aren't observables.

In my definition, an observable exists if for each (eigen)value there are states in which this (eigen)value is measured with no uncertainty. This is, of course, a personal opinion, which can be proven neither by experiment nor by a deeper theory.

Eugene.
 
  • #187
meopemuk said:
I can always define the delta function as a limit of a sequence of "normal" functions whose integral is equal to 1 and whose support is shrinking around one point.

Similarly, I can define the "square root of the delta function" as a limit of a sequence of "normal" functions such that integrals of their squares are equal to 1 and supports are shrinking.
As others have indicated, this doesn't seem possible.
Could you please give a specific example of such a sequence of "normal
functions" that does indeed have the desired properties?

The obvious first attempt fails: consider the usual delta distribution
represented as the limit of a sequence of Gaussians:

\delta(x) ~=~ \lim_{\epsilon\to 0} \, \delta_\epsilon(x), \qquad \delta_\epsilon(x) ~:=~ \frac{1}{\epsilon\sqrt{\pi}} \, \exp(-x^2/\epsilon^2)

Then the naive square-root of this is

\sqrt{\delta_\epsilon(x)} ~=~ \frac{1}{\sqrt{\epsilon}\; \pi^{1/4}} \, \exp(-x^2/2\epsilon^2)

but a short computation shows that

\lim_{\epsilon\to 0} \int\!dx \; \sqrt{\delta_\epsilon(x)} ~=~ 0

and similarly,

\lim_{\epsilon\to 0} \int\!dx \; x^n \sqrt{\delta_\epsilon(x)} ~=~ 0

for any non-negative n.

So defining the "square root of a delta distribution" in the above way
doesn't work usefully. It's equivalent to the trivial zero distribution,
hence a set of such "functions", indexed by a continuous parameter,
cannot serve as a basis for a nontrivial Hilbert space.

(BTW, this also means that "square-root" is a misleading name since we
normally think that if \sqrt{z}=0, then z=0,
which is not the case here.)
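(A quick numerical illustration of these scalings, my own check using the Gaussian regulator above: the L^2 norm of \sqrt{\delta_\epsilon} stays equal to 1 for every \epsilon, while its integral against a fixed smooth function shrinks like \sqrt{\epsilon}.)

import numpy as np

def sqrt_delta(x, eps):
    # (1/(sqrt(eps) pi^(1/4))) exp(-x^2 / (2 eps^2)); its square integrates to 1
    return np.exp(-x**2 / (2*eps**2)) / (np.sqrt(eps) * np.pi**0.25)

x = np.linspace(-50, 50, 2_000_001)
dx = x[1] - x[0]
phi = np.exp(-x**2 / 2) / np.pi**0.25        # a fixed normalized test function

for eps in [1.0, 0.1, 0.01]:
    f = sqrt_delta(x, eps)
    norm2 = np.sum(f**2) * dx                # stays ~ 1
    overlap = np.sum(phi * f) * dx           # falls like sqrt(2 eps)
    print(f"eps = {eps}:  ||f||^2 = {norm2:.4f},  <phi|f> = {overlap:.4f}")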

I suppose one could then say: ok, the square root of a Dirac delta is not a
distribution, but some other object type which I'll call "Rdist", with
properties (presumably) like the following:

Definition: An Rdist space V is a linear space over the complex
field, equipped with a symmetric bilinear product

\star : V\times V \to D

where D is a space of distributions, such that (for v,w\in V),

v\star v = \delta and v\star w = 0 if v \ne w.

But then how does one define a resolution of unity on V? There needs to
be two different kinds of product (inner and outer, presumably).
And we need integration over the elements of V to form continuous
linear combinations. But the trivially-zero integrals above seem to
prohibit this.

So if there's a way to make this idea both rigorous and useful,
you need to show me what it is. I've failed to figure it out for myself.
 
  • #188
strangerep said:
As others have indicated, this doesn't seem possible.
Could you please give a specific example of such a sequence of "normal
functions" that does indeed have the desired properties?

The obvious first attempt fails: consider the usual delta distribution
represented as the limit of a sequence of Gaussians:

\delta(x) ~=~ \lim_{\epsilon\to 0} \, \delta_\epsilon(x), \qquad \delta_\epsilon(x) ~:=~ \frac{1}{\epsilon\sqrt{\pi}} \, \exp(-x^2/\epsilon^2)

Then the naive square-root of this is

\sqrt{\delta_\epsilon(x)} ~=~ \frac{1}{\sqrt{\epsilon}\; \pi^{1/4}} \, \exp(-x^2/2\epsilon^2)

but a short computation shows that

\lim_{\epsilon\to 0} \int\!dx \; \sqrt{\delta_\epsilon(x)} ~=~ 0

and similarly,

\lim_{\epsilon\to 0} \int\!dx \; x^n \sqrt{\delta_\epsilon(x)} ~=~ 0

for any non-negative n.

So defining the "square root of a delta distribution" in the above way
doesn't work usefully. It's equivalent to the trivial zero distribution,
hence a set of such "functions", indexed by a continuous parameter,
cannot serve as a basis for a nontrivial Hilbert space.

Let me rephrase your example a little bit.

Let us take a "normal" square-integrable function \phi(x), which is normalized to unity. For example, this can be a fixed-width gaussian. If "square root of delta" functions form a valid basis in the Hilbert space, then the sum of squares of projections of \phi(x) on all these basis functions must be equal to one.

Next we can show (as you already did) that the inner product (the projection) of \phi(x) with any "square root delta" centered at point a tends to zero as \epsilon\to 0

\lim_{\epsilon\to 0} \int\!dx \; \phi(x) \sqrt{\delta_\epsilon(x-a)} ~=~ 0

This is definitely true. However, this does not mean that the sum of squares of all projections also tends to zero. As \epsilon\to 0 we should increase the number of \sqrt{\delta_\epsilon(x-a)} functions in the basis, so that in the limit there is one such function centered at each value of a. In this limit there are uncountably many basis functions and the total sum (of squares of projections) becomes an indefinite number of the type (zero)x(infinity).


My guess is that if we take this limit properly, then the result should be (zero)x(infinity) = 1, which means that the "square root of delta" functions form a valid basis set in the Hilbert space of square-integrable functions.

Eugene.
 
  • #189
Hey, I don't know if bringing up old threads is bad here (apologies if it is), but I just wanted to bring this thread back up since I feel that, like its predecessor, it got derailed by Bob_for_short and meopemuk discussing their own views of what QFT should be like. I just thought I'd bring this back up in case anybody wants a good discussion or wants to ask questions.
 
  • #190
Although there is no hard-and-fast rule, this is somewhat frowned upon. In this case, however, I think it is a great idea. I neither know enough nor have enough time to participate actively in the discussion, but I will be watching with interest.
 
  • #191
DarMM -> I do have a question for you. It might be a bit off topic, but not much. Do you know of any references where I could find an explicit position space expression for the Feynman propagator on the cylinder (1+1 D spacetime \mathbb{R} \times \mathbb{S} )? The momentum space representation is the same as on Minkowski space, but I basically don't know how to compute the inverse Fourier transform, which now involves a sum over the discrete momenta rather than an integral.
 
  • #192
DrFaustus said:
DarMM -> I do have a question for you. It might be a bit off topic, but not much. Do you know of any references where I could find an explicit position space expression for the Feynman propagator on the cylinder (1+1 D spacetime \mathbb{R} \times \mathbb{S} )? The momentum space representation is the same as on Minkowski space, but I basically don't know how to compute the inverse Fourier transform, which now involves a sum over the discrete momenta rather than an integral.
Hey, yes actually. If the infinite volume Feynman propagator is \Delta(x,t), then the finite volume propagator you are looking for is given by:
\sum_{n \in \mathbb{Z}}\Delta(x + nL,t)
where L is the circumference of the circle (the "length of the world").
A proof can be found in Glimm and Jaffe's book, section 7.3, immediately following Proposition 7.3.1. This is also a routine type of calculation in Lattice Field theory so you could take a look at:
"Quantum Fields on the Lattice" by I. Montvay and G. Münster, Cambridge Monographs on Mathematical Physics, Cambridge University Press, 1994.

If you want a quick idea of the proof, you can change the sum in the Fourier series into an integral against a Dirac comb,
\frac{1}{L} \sum_{n \in \mathbb{Z}} \int{d\omega} ~=~ \frac{1}{L}\int{d^{2}k} \left[\sum_{n \in \mathbb{Z}}\delta\left(k - \frac{2\pi n}{L} \right)\right],
then use Córdoba's formula (essentially the Poisson summation formula for the Dirac comb) to replace the delta functions:
\sum_{n \in \mathbb{Z}}\delta\left(k - \frac{2\pi n}{L} \right) ~=~ \frac{L}{2\pi}\sum_{n \in \mathbb{Z}} e^{i nkL}.
And you'll see the origin of the terms.
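To make the last step explicit (a sketch, suppressing the 2\pi's, which depend on one's Fourier conventions): each term e^{inkL} in the exponential sum just translates the position argument of the propagator,
\int\!dk\,d\omega \; e^{inkL}\,\tilde{\Delta}(k,\omega)\, e^{i(kx - \omega t)} ~\propto~ \Delta(x + nL, t),
so summing over n reproduces the image sum quoted above.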
 
  • #193
I appreciate the proof above that \sqrt{\delta(x)} does not work as a basis element. I have often thought, "Why not just use the square root of delta?", and now I see why not. :)
 
  • #194
DarMM -> Thanks for the answer! It looks like the same answer I was considering, only arrived at in a slightly different way (I didn't do all the computations, but it should be the same). The idea was to first perform the "time integral", which gives me \Delta(t,\vec{p}) in your notation, and then use Poisson's summation formula, which basically gives the answer you wrote down. The only "problem" now is that the infinite volume propagator is written in terms of Bessel functions and I have no idea if that sum could be explicitly evaluated, perhaps in terms of some "exotic" function. Which is what I was actually hoping for and asking for. But I'm guessing that if in the book of Glimm and Jaffe it's left in that form, then that's pretty much the best we have. I will have a look at the references though. Oh, and it's good to know you're around again. :)
 
  • #195
DarMM said:
I just wanted to bring this thread back up since I feel that,
like its predecessor, it got derailed [...]. I just thought I'd bring this back up in case
anybody wants a good discussion or wants to ask questions.

Thanks for coming back. I was quite disappointed about what happened before.
I wanted to get clear first how the very simple example of a time-independent external
field can give rise to unitarily inequivalent Hilbert spaces, before moving on to
the more interesting case of a time-dependent external field which Bob_for_short
was interested in, and then move on to more difficult cases. Unfortunately, he became
impatient and walked out in a huff too soon.

Anyway, I'm still interested in seeing a case where the interaction entails a
loss of the CCRs (which is not the case for the simple external field example).
If you have the time and patience to write a post about that I'd be grateful. :-)
Maybe a new separate thread on that specific subject would be best, in which
we stick to that sub-topic (and this current thread is already very long).
 
  • #196
DrFaustus said:
Only "problem" now is that the infinite volume propagator is written in terms of Bessel functions and I have no idea if that sum could be explicitly evaluated, perhaps in terms of some "exotic" function. Which is what I was actually hoping for and asking for. But I'm guessing that if in the book of Glimm and Jaffe it's left in that form, than that's pretty much the best we have. I will have a look at the references though. Oh, and it's good to know you're around again. :)
Yeah, unfortunately not. Glimm and Jaffe leave it in that form because there's not much more you can do with it. I imagine you now have sums over modified Bessel functions of the second kind (only a guess; maybe you used other Bessel functions). However, I can tell you that the series converges quite rapidly, so only the first few terms in the series are "large".
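For what it's worth, in the Euclidean (imaginary-time) convention the infinite volume 2D propagator is \frac{1}{2\pi}K_{0}(m\sqrt{x^{2}+\tau^{2}}), so the image sum would read (a sketch in that convention only, not the Minkowski Feynman propagator you asked about):
S_{L}(x,\tau) ~=~ \frac{1}{2\pi}\sum_{n\in\mathbb{Z}} K_{0}\!\left(m\sqrt{(x+nL)^{2}+\tau^{2}}\right),
and since K_{0}(z) \sim \sqrt{\pi/2z}\; e^{-z} for large z, the n-th term is suppressed roughly like e^{-m|n|L}, which is why only the first few terms matter unless mL is very small.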
Oh, and thanks for welcoming me back!
 
  • #197
Canonical Commutation Relations

strangerep said:
Anyway, I'm still interested in seeing a case where the interaction entails a
loss of the CCRs (which is not the case for the simple external field example).
If you have the time and patience to write a post about that I'd be grateful. :-)
Maybe a new separate thread on that specific subject would be best, in which
we stick to that sub-topic (and this current thread is already very long).
I'll certainly do a post on it. Essentially we've already covered step one with the external field: the interaction causing a change of representation, but only to another Fock rep. The second step would be the interaction moving things further, to a non-Fock rep (this is associated with mass renormalization). Finally, we could cover the case where the interactions cause a complete failure of the CCRs (associated with wave-function/field-strength renormalization).

I'll try it here first, however if it's a bit cumbersome I could move it to another thread.
Or perhaps, if the moderators could tell me, would it be better to start a fresh thread altogether?
 
  • #198
Scattering Theory

Before I begin I wanted to clear up some possible confusion about the different types of particles "bare", "free" and "physical" and how these relate to the different Hilbert spaces issue.

In scattering theory in QM, if you have the usual properties of asymptotic completeness, etc., then as the physical state vector \psi evolves past the time of interaction its evolution coincides with that of a free particle. Essentially:
U(t)\psi - U_{0}(t)\psi_{out} \rightarrow 0 as t \rightarrow \infty,
with U(t) the full evolution operator and U_{0}(t) the free evolution operator.
Essentially the physical state eventually begins to look like a bunch of freely moving particles.
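In symbols (the standard QM statement, with the usual caveat that sign conventions for the \pm vary): this is the existence of the Møller wave operators
\Omega_{\pm} ~:=~ \lim_{t\to\mp\infty} U(t)^{\dagger}\, U_{0}(t) \quad \text{(strong limit)},
which map free asymptotic states onto interacting scattering states, and asymptotic completeness says that their ranges exhaust the scattering states.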

In QFT this is still the case: the state will eventually evolve into a collection of non-interacting particles, i.e. free particles. However, Haag's theorem basically says that this evolution never tends to the evolution of an actual free theory. In other words, even though the particles do become non-interacting asymptotic scattering states, their evolution is at no point similar to the evolution of particles from the free theory.

An example will make this less surprising. Take QED and set the electric charge to zero. This is a free theory consisting of free fermions and spin-one bosons. The free fermions have charge 0. Now take normal QED and imagine Coulomb scattering. After the scattering the electrons will move away from each other and no longer interact. They become free. However, they never actually look like the fermions from the e = 0 theory, because they always carry an electric charge.

In a more general way this is the basic difference between QM and QFT that Haag's theorem tells us about. In QM I can take an eigenstate of the harmonic oscillator and through a series of physical operations prepare an eigenstate of the anharmonic oscillator.

However, in QFT, I can never take a single electron with charge e and from it prepare a free fermion with no charge. These two theories live in totally different worlds or in the language of QM, two different representations or Hilbert spaces.

This doesn't change anything about scattering theory fundamentally: I still have in/out states, it's just that they have nothing to do with the free Hamiltonian. You can even see this in regular (non-rigorous) QFT and QM. In QM the S-matrix is usually defined with things like Møller operators; in QFT, however, the S-matrix is extracted from the poles of the correlation functions. This is because in QFT Møller operators don't exist, since there is no unitary link between the free and interacting theories. This is the whole reason for the LSZ formalism.
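Schematically, for a real scalar field and up to normalisation conventions, the LSZ prescription reads
\langle p_{1}\ldots p_{r}, \mathrm{out} \,|\, q_{1}\ldots q_{s}, \mathrm{in}\rangle ~\propto~ \left[\prod_{i=1}^{r}\lim_{p_{i}^{2}\to m^{2}}\frac{p_{i}^{2}-m^{2}}{i\sqrt{Z}}\right] \left[\prod_{j=1}^{s}\lim_{q_{j}^{2}\to m^{2}}\frac{q_{j}^{2}-m^{2}}{i\sqrt{Z}}\right] \tilde{G}_{r+s}(p_{1},\ldots,p_{r},-q_{1},\ldots,-q_{s}),
where \tilde{G}_{r+s} is the Fourier transform of the time-ordered (r+s)-point function: the S-matrix element is read off from the one-particle poles, with no reference to a free Hamiltonian anywhere.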
 
  • #199
DarMM -> Why do you say that in QFT Moeller operators don't exist? Staying in 2D and for a scalar field, I certainly can define an object like \exp[-i(t-s)H_0] \exp[i(t-s)H]. And this object is well defined for finite s and t. My understanding is that problems occur when you take the limits s \to -\infty and t \to +\infty. Or am I missing something here?
 
  • #200


DarMM said:
In scattering theory in QM, if you have the usual properties of asymptotic completeness, etc.,
then as the physical state vector \psi evolves past the time of interaction
its evolution coincides with that of a free particle.
[...]
In QFT this is still the case, the state will eventually evolve into a
collection of non-interacting particles, i.e. free particles.

I've begun to be quite puzzled why QFT is still based on this notion,
since it's been known for many decades that even for the
non-relativistic Coulomb interaction the asymptotic dynamics does not
tend to the free dynamics. I wrote a summary post on this recently:

https://www.physicsforums.com/showpost.php?p=2560777&postcount=3

But it seems that, even if one tries to solve the full theory in
terms of the correct asymptotic dynamics (as found in the paper
by Kulish & Faddeev referenced in my post), one overcomes only the
IR divergence but UV problems persist.
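(For reference, the fix in the non-relativistic Coulomb case is Dollard's modified asymptotic dynamics: schematically, for a potential \gamma/r, one replaces the free evolution e^{-iH_{0}t} by something like
U_{D}(t) ~=~ \exp\!\left(-i\,\frac{p^{2}}{2m}\,t ~-~ i\,\mathrm{sgn}(t)\,\frac{m\gamma}{|p|}\,\ln\!\big(p^{2}|t|/m\big)\right),
and it is U(t)^{\dagger}U_{D}(t), rather than U(t)^{\dagger}U_{0}(t), that has a strong limit.)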

But don't let me divert your train of thought... :-)
 