Normalization of vacuum state.

In summary, the vacuum state is defined to be invariant under Poincare transformations and to be the lowest-energy eigenstate of the Hamiltonian. Normalizing it to 1 is reasonable, since a vacuum left undisturbed should still be measured as vacuum with probability 1. However, there is a big qualitative difference between the normalizations of the vacuum and of excited states: excited states form a continuum, while the vacuum is isolated. This is related to the issue of infrared divergences in Quantum Electrodynamics, where the photon vacuum is an idealization and what we really deal with is a continuum of soft-photon states. Unlike UV divergences, which arise from poorly defined theories, IR divergences occur in well-defined models due to the presence of massless particles.
  • #1
kof9595995
It just occurred to me: what if the vacuum state is not normalizable? We usually have the normalization [itex]\langle0|0\rangle=1[/itex], which is acceptable if we are sure the norm of the vacuum state is always finite. However, we know that states with definite momenta are normalized to delta functions, so how can we be sure the vacuum state is not normalized to [itex]\langle0|0\rangle=\delta(0)[/itex]?
 
  • #2
I think it's because the states with definite momenta form a continuum (there are states arbitrarily close in momentum), while the vacuum is isolated from those states. We might have a problem if the mass were zero, but maybe not.

Someone who knows about this should correct me.
 
  • #3
kof9595995 said:
It just occurred to me: what if the vacuum state is not normalizable? We usually have the normalization [itex]\langle0|0\rangle=1[/itex], which is acceptable if we are sure the norm of the vacuum state is always finite. However, we know that states with definite momenta are normalized to delta functions, so how can we be sure the vacuum state is not normalized to [itex]\langle0|0\rangle=\delta(0)[/itex]?
Essentially the vacuum is defined that way to be invariant under Poincare transformations,
and to be the lowest-energy eigenstate of the Hamiltonian. That such a vacuum be normalized to 1 is reasonable since if we start with a vacuum and do nothing to it, then a measurement should indicate that we've still got vacuum -- with probability 1.

BTW, delta expressions like [itex]\delta(0)[/itex] don't really make sense. Remember that a Dirac delta is really only a distribution and therefore only makes precise sense when used as a kernel in an integral, e.g.,
[tex]
\int\!dy\; \delta(x-y) \, f(y) ~:=~ f(x)
[/tex]
It sometimes helps to think of the delta as [itex]\delta(x,y)[/itex], i.e., having 2 arguments rather than 1. That makes it a bit more obvious that it's a generalization of the Kronecker delta from discrete indices to continuous indices and is really nothing more than an identity mapping.

However, since [itex]\delta(x+a,y+a) = \delta(x,y)[/itex], (i.e., it's translation-invariant), one often takes the shortcut of writing it with only a single argument [itex]x-y[/itex], though this obscures some of its precise meaning at times.
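As a side-by-side illustration of that correspondence (my notation, not from the original post), compare the discrete and continuous identity mappings:
[tex]
\sum_j \delta_{ij}\, f_j = f_i
\qquad\longleftrightarrow\qquad
\int\!dy\; \delta(x,y)\, f(y) = f(x)
[/tex]
and correspondingly for orthonormality, [itex]\langle n|m\rangle = \delta_{nm}[/itex] becomes [itex]\langle p|p'\rangle = \delta(p-p')[/itex].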
 
  • #4
strangerep said:
Essentially the vacuum is defined that way to be invariant under Poincare transformations,
and to be the lowest-energy eigenstate of the Hamiltonian. That such a vacuum be normalized to 1 is reasonable since if we start with a vacuum and do nothing to it, then a measurement should indicate that we've still got vacuum -- with probability 1.
Well, the same can be said about definite-momentum excited states; they are also eigenstates of H.

strangerep said:
BTW, delta expressions like [itex]\delta(0)[/itex] don't really make sense. Remember that a Dirac delta is really only a distribution and therefore only makes precise sense when used as a kernel in an integral, e.g.,
[tex]
\int\!dy\; \delta(x-y) \, f(y) ~:=~ f(x)
[/tex]
It sometimes helps to think of the delta as [itex]\delta(x,y)[/itex], i.e., having 2 arguments rather than 1. That makes it a bit more obvious that it's a generalization of the Kronecker delta from discrete indices to continuous indices and is really nothing more than an identity mapping.

However, since [itex]\delta(x+a,y+a) = \delta(x,y)[/itex], (i.e., it's translation-invariant), one often takes the shortcut of writing it with only a single argument [itex]x-y[/itex], though this obscures some of its precise meaning at times.
Yes, but now there is a big qualitative difference between the normalizations of the vacuum and of excited states; if the vacuum isn't that different, its normalization should be "something like [itex]\delta(0)[/itex]".
 
  • #5
alemsalem said:
I think it's because the states with definite momenta form a continuum (there are states arbitrarily close in momentum), while the vacuum is isolated from those states. We might have a problem if the mass were zero, but maybe not.

Someone who knows about this should correct me.

Actually, I keep being told that a continuum of states implies non-normalizability, and I haven't found any counter-example to the statement, but I'd really like to see how exactly they are connected.
And I have the same question as you about the photon, since I don't recall any special treatment of photon vacuum normalization.
 
  • #6
Sure, this is the issue dealt with in Quantum Electrodynamics under the heading of infrared divergences. Any process may be accompanied by the emission of soft photons which remain undetectable. Thus the photon vacuum is an idealization, and what we really deal with is a continuum of soft photon states.
 
  • #7
kof9595995 said:
Actually, I keep being told that a continuum of states implies non-normalizability, and I haven't found any counter-example to the statement, but I'd really like to see how exactly they are connected.
And I have the same question as you about the photon, since I don't recall any special treatment of photon vacuum normalization.

I guess the first thing about a continuum basis is that you can't use an ordinary sum; you have to expand states in an integral (otherwise the inner product of a generic state would be something like a sum over all the numbers in an interval).

If you want something like an orthonormal basis, then <a|a'> = 0 for different vectors, but then you can't have <a|a> equal to an ordinary number, because then ∫da' |<a|a'>|^2 = 0. So you can't have the normalization of the state be just a number (non-normalizability); <a|a'> is not an ordinary function but a distribution, and the integral is not really a limit of a sum of numbers, just a consistent formalism.

I hope I made sense.
 
  • #8
Bill_K said:
Sure, this is the issue dealt with in Quantum Electrodynamics under the heading of infrared divergences. Any process may be accompanied by the emission of soft photons which remain undetectable. Thus the photon vacuum is an idealization, and what we really deal with is a continuum of soft photon states.
You got me interested, and I found this in an older post (https://www.physicsforums.com/showthread.php?t=468890&page=2):

DarMM said:
There is also a major conceptual difference between UV and IR divergences. UV divergences arise from the theory being poorly defined. Removing the UV divergences through renormalization is part of correctly constructing the theory.

IR divergences, on the other hand, occur in well-defined models, even rigorously constructed models (e.g. [tex]QED_{2}[/tex]). This is because IR divergences are caused by massless particles. In this case the asymptotic in/out spaces are not Fock spaces; if you act as though they are, you get IR divergences. So basically in [tex]QED_{2}[/tex], for example, [tex]e^{-} + e^{-} \rightarrow e^{-} + e^{-}[/tex] doesn't exist as a process; the states will always contain soft (not virtual) photon clouds. So the asymptotic states must contain soft-photon clouds. The divergence only occurs from trying to calculate fictitious processes.

One must always calculate real processes and therefore you need to include soft-photon clouds. Any process containing the required soft photon clouds (or one where they genuinely are not produced) is called infrared safe.

So what is meant by "the asymptotic in/out spaces are not Fock spaces"?
 
  • #9
alemsalem said:
If you want something like an orthonormal basis, then <a|a'> = 0 for different vectors, but then you can't have <a|a> equal to an ordinary number, because then ∫da' |<a|a'>|^2 = 0. So you can't have the normalization of the state be just a number (non-normalizability); <a|a'> is not an ordinary function but a distribution, and the integral is not really a limit of a sum of numbers, just a consistent formalism.
So you mean
[tex]\int{da'|\langle a|a'\rangle|^2}=\int{da'\langle a|a'\rangle\langle a'|a\rangle}=\langle a|a\rangle[/tex]
So it's contradictory to have <a|a'> = 0 and <a|a> = number at the same time. Seems to be a good point, thanks.
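The contradiction above can also be seen numerically. As a rough sketch (my own illustration, not from the thread), regulate the continuum by putting everything in a box of length L: a momentum eigenstate then has squared norm equal to L, which diverges as the box grows (the regulated stand-in for [itex]\delta(0)[/itex]), while a smeared wave packet keeps norm 1.

```python
import math

def norm_sq(psi, L, n=200000):
    """Midpoint-rule approximation of the squared norm of psi on [-L/2, L/2]."""
    dx = L / n
    return sum(abs(psi(-L / 2 + (k + 0.5) * dx)) ** 2 for k in range(n)) * dx

# Momentum eigenstate e^{ipx} with p = 2: |psi(x)|^2 = 1 everywhere, so its
# squared norm equals the box length L and diverges as L -> infinity.
plane = lambda x: complex(math.cos(2 * x), math.sin(2 * x))

# Normalized Gaussian wave packet: its squared norm stays 1 for any large L.
packet = lambda x: (1 / math.pi) ** 0.25 * math.exp(-x * x / 2)

for L in (10.0, 100.0, 1000.0):
    print(L, norm_sq(plane, L), norm_sq(packet, L))
```

The plane-wave column grows linearly with L while the packet column stays at 1; the L → ∞ limit is exactly where the [itex]\delta(0)[/itex]-style normalization comes from.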
 
  • #10
I believe what the comment means is that a state in Fock space is specified by a definite number of particles occupying each of a finite number of possible states, but in this case it's necessary to consider states having a nonspecific number of soft photons spread over an infinite number of possible states.
 
  • #11
kof9595995 said:
[...]
but now there is a big qualitative difference between the normalizations of the vacuum and of excited states; if the vacuum isn't that different, its normalization should be "something like [itex]\delta(0)[/itex]".
The difference is that the vacuum is a physically-realizable state whereas momentum eigenstates are not. We construct more realistic wave-packet states by smearing the momentum eigenstates, e.g.,
[tex]
\def\<{\langle}
\def\>{\rangle}
\psi_f ~=~ \int\!dp\, f(p) |p\>
[/tex]
where f(p) is a Schwartz function. Then
[tex]
\<\psi_g|\psi_f\> ~=~ \int\!dp\, g^*(p) f(p)
[/tex]
which is finite because both f,g are Schwartz functions.
http://en.wikipedia.org/wiki/Schwartz_space
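A small numerical check of that finiteness (the Gaussian profiles below are my own hypothetical choice of Schwartz functions, not from the post):

```python
import math

# Two real Schwartz-class smearing profiles; Gaussians are a hypothetical
# choice here -- any functions decaying faster than every power would do.
def f(p):
    return math.exp(-p * p)

def g(p):
    return math.exp(-((p - 1.0) ** 2))

def overlap(g_, f_, pmax=20.0, n=400000):
    """Midpoint-rule approximation of the overlap integral of g*(p) f(p)."""
    dp = 2 * pmax / n
    return sum(g_(-pmax + (k + 0.5) * dp) * f_(-pmax + (k + 0.5) * dp)
               for k in range(n)) * dp

# Finite, and for these Gaussians it matches the closed form
# exp(-1/2) * sqrt(pi/2).
print(overlap(g, f))
```

The integral converges because both profiles decay faster than any power of p, which is exactly the property that makes smeared wave-packet states normalizable while bare momentum eigenstates are not.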
 

