# Second Quantization and Field Operators

## Main Question or Discussion Point

When defining a field operator, textbooks usually say that one can define an operator which destroys (or creates) a particle at position r. What does this really mean? Are they actually referring to destroying (or creating) a state that has specific quantum numbers associated with the position they call r? Any insight would be appreciated. Thanks.


Fredrik
A creation operator is just a linear function that takes an n-particle state to an n+1-particle state, and an annihilation operator takes an n-particle state to an n-1-particle state (and the vacuum state to the 0 vector), so the really short answer to your question is "yes". A longer answer would explain stuff like what one-particle states are, and how the Hilbert space of n-particle states can be constructed from the Hilbert space of one-particle states.
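To make that concrete, here is a minimal numerical sketch (my own illustration, not from the original post) of a single bosonic mode, with the creation and annihilation operators written as matrices in a Fock basis truncated at n_max states:

```python
import numpy as np

n_max = 6  # truncate the Fock space at n_max - 1 particles
# a|n> = sqrt(n)|n-1>: put sqrt(1)..sqrt(n_max-1) on the upper off-diagonal
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
a_dag = a.T  # creation operator: the (real) adjoint

# a_dag takes the 2-particle basis state to sqrt(3) times the 3-particle one
two = np.zeros(n_max); two[2] = 1.0
print(np.allclose(a_dag @ two, np.sqrt(3) * np.eye(n_max)[3]))  # True

# a takes the vacuum state to the zero vector
vac = np.zeros(n_max); vac[0] = 1.0
print(np.allclose(a @ vac, 0))  # True
```

On the truncated space the commutator [a, a_dag] equals the identity everywhere except in the highest-number state, which is an artifact of the cutoff, not of the physics.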

Also, the term "second quantization" is kind of old-fashioned and useless.

Demystifier
Also, the term "second quantization" is kind of old-fashioned and useless.
Maybe in particle physics.
But not in solid state physics or even string theory.

mkrems,

A theory which incorporates the fact that particles are "created" and "destroyed" necessarily has to include an operator which, when applied to a state of N particles, gives a state of N+1 particles.

At the theoretical level, what we really mean when we say particles can be "created" or "destroyed" is that states of different particle number are no longer orthogonal. If particle number is constant, then the inner product between a state of 2 particles and a state of 3 particles would necessarily have to be zero. That is, given a state of 2 particles, the probability of observing 3 particles would be zero. Letting particle number vary means we now have non-zero probability of observing a different particle number than what we started with.

Basically, the only way to do this is for the Hamiltonian to be built from an operator which changes particle number as described above. If the Hamiltonian did not have such a term, then the time evolution of a pure state of N particles can only evolve into states of N particles, forever and ever, amen.
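A small numerical sketch of this point (my own illustration, assuming a single bosonic mode truncated at 8 Fock states): a Hamiltonian that commutes with the number operator leaves a 2-particle state in the N = 2 sector forever, while adding a term linear in a + a† spreads probability over other particle-number sectors.

```python
import numpy as np

n_max = 8
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)  # annihilation operator
N = a.T @ a                                     # number operator

H0 = 1.0 * N               # commutes with N: conserves particle number
H1 = H0 + 0.3 * (a + a.T)  # a + a_dag changes particle number by one

def evolve(H, psi, t):
    # psi(t) = exp(-i H t) psi, via eigendecomposition (H is real symmetric)
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * t) * (V.T @ psi))

psi0 = np.zeros(n_max); psi0[2] = 1.0  # exactly 2 particles at t = 0

p0 = np.abs(evolve(H0, psi0, 2.0)) ** 2
p1 = np.abs(evolve(H1, psi0, 2.0)) ** 2
print(np.round(p0, 3))  # all weight still in the N = 2 sector
print(np.round(p1, 3))  # weight spread over several sectors
```

The coupling strength 0.3 and the evolution time are arbitrary choices for the demonstration; any non-zero number-changing term gives the same qualitative behavior.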

vanesch
At the theoretical level, what we really mean when we say particles can be "created" or "destroyed" is that states of different particle number are no longer orthogonal. If particle number is constant, then the inner product between a state of 2 particles and a state of 3 particles would necessarily have to be zero. That is, given a state of 2 particles, the probability of observing 3 particles would be zero. Letting particle number vary means we now have non-zero probability of observing a different particle number than what we started with.

Uh, states with different particle number ARE orthogonal. What you are referring to is the time evolution operator; in other words, you are confusing final states and initial states. So it is correct that an n-particle FINAL state is not necessarily orthogonal to the TIME EVOLUTION of an m-particle INITIAL state.

However, an m-particle initial state is still orthogonal to an n-particle initial state, and similar for final ones.

In the end, we're talking about zero or non-zero elements of the S-matrix.

You give me pause, vanesch. I have been corrected here before. Here is how I understand it.

If states A and B are orthogonal, then given state A, the probability of observing state B is zero. Period. The only way you could ever observe state B is if it is not orthogonal to A.

Consider a Hydrogen atom with the electron in an excited state. In the absence of the EM field, the excited state is orthogonal to the ground state and no transition can theoretically ever occur. When you perturb the system with the EM field, then you find that the state |electron excited + zero photons> is NOT orthogonal to the state |ground state electron + photon(s)>, i.e., a transition can occur.

What else do non-zero off-diagonal elements of the S-matrix represent? If the probability of putting m particles in and getting n particles out is non-zero, that is the same thing as saying the states are not orthogonal.

If phi is a state of m particles and psi is a state of n particles:
$$P=|\langle \phi(t_1)|\psi(t_2)\rangle|^2$$

vanesch
You give me pause, vanesch. I have been corrected here before. Here is how I understand it.

If states A and B are orthogonal, then given state A, the probability of observing state B is zero. Period. The only way you could ever observe state B is if it is not orthogonal to A.
Indeed: given an initial state of 5 electrons, the probability to have an initial state of 6 electrons is zero!

Consider a Hydrogen atom with the electron in an excited state. In the absence of the EM field, the excited state is orthogonal to the ground state and no transition can theoretically ever occur. When you perturb the system with the EM field, then you find that the state |electron excited + zero photons> is NOT orthogonal to the state |ground state electron + photon(s)>, i.e., a transition can occur.
A *time-dependent* transition! At t = 0, your "probability for transition" is 0. After a time t, you have a finite probability of transition, which, in the approximation of Fermi's Golden Rule, is linear with t, so the coefficient will give you the "probability per unit of time" to decay.

The only thing that it means is that |electron excited + 0 photon> is not an eigenstate of the full hamiltonian.

What else do non-zero off-diagonal elements of the S-matrix represent? If the probability of putting m particles in and getting n particles out is non-zero, that is the same thing as saying the states are not orthogonal.
Do you notice that you use "in" and "out" ?

If phi is a state of m particles and psi is a state of n particles:
$$P=|\langle \phi(t_1)|\psi(t_2)\rangle|^2$$
Do you notice that you have a t1 and a t2 ?

Orthogonal states means: "have distinguishable properties".

Consider (in NR QM) the "position states" |x1> and |x2>. I hope you agree with me that for x1 not equal to x2, these are orthogonal states, right?

Now, consider that we start with an electron in position state |x1>. A bit later, we find a non-zero probability to find it in state |x2>. Does that now mean that they are, after all, not orthogonal ? Of course not. It means that the TIME EVOLUTION operator U(t1,t2) has mapped |x1> onto a state that is not orthogonal to |x2>.

< x2 | U(t1,t2) | x1 > is non-zero. But that doesn't mean that < x2 | x1 > is non-zero, it only means that U(t1,t2) has a component | x2 > < x1 |.
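The distinction can be sketched numerically (my own toy example, not from the thread): take two orthogonal basis states and a Hamiltonian with an off-diagonal coupling; <x2|x1> stays zero while <x2|U(t)|x1> does not.

```python
import numpy as np

x1 = np.array([1.0, 0.0])
x2 = np.array([0.0, 1.0])
print(np.vdot(x2, x1))  # 0.0: the states themselves are orthogonal

# A coupling between the two states, and U(t) = exp(-iHt) at t = 1
H = np.array([[0.0, 0.5],
              [0.5, 0.0]])
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w)) @ V.conj().T

amp = np.vdot(x2, U @ x1)  # this is <x2| U |x1>
print(abs(amp))  # non-zero: U has a |x2><x1| component
```

The 0.5 coupling and t = 1 are arbitrary illustrative values; the point is only that time evolution, not the inner product of the states themselves, carries the transition amplitude.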

Hey, I didn't say anything about time-independence.

Basically, the only way to do this is for the Hamiltonian to be built from an operator which changes particle as described above. If the Hamiltonian did not have such a term, then the time-evolution of a pure state of N particles will only, can only, evolve into states of N particles, forever and ever, amen.

reilly
There's a lot of history here, started by Heisenberg's initial solution of the harmonic oscillator by matrix mechanics. What he found was that linear combinations -- the sum and difference of the position and momentum operators -- made the algebra easier. They transform one state into another: going from x to x' in configuration space is a translation; a, the destruction operator, for example, steps a state down by one, from a state with energy nω to one with energy (n-1)ω. These operators were once referred to as ladder operators or step operators. Step operators are used extensively in angular momentum theory and, more generally, in the study of Lie groups.
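A quick numerical check of that stepping behavior (my own sketch, not reilly's): with H = ω a†a, applying the destruction operator to the state |n> gives an eigenstate whose energy is lowered by ω.

```python
import numpy as np

n_max, omega = 6, 2.0
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)  # annihilation operator
H = omega * (a.T @ a)                           # H = omega * a_dag a

n = 3
ket_n = np.eye(n_max)[n]  # the |n> basis state, with energy n*omega
stepped = a @ ket_n       # proportional to |n-1>

# H (a|n>) = (n-1)*omega * (a|n>): the step operator lowered the energy
print(np.allclose(H @ stepped, (n - 1) * omega * stepped))  # True
```

Here omega = 2.0 and n = 3 are arbitrary; the identity H a|n> = (n-1)ω a|n> follows from the commutator [H, a] = -ω a.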

Prof. Fock developed so-called Fock space, in which the step operators create a representation of a space containing anywhere from zero to an infinite number of oscillators. So quantum fields form the basis of a very efficient formalism for systems and interactions that do not conserve particle number. You could, if you wanted, do all of quantum field theory with configuration wave functions -- the two approaches are connected by a unitary transformation -- and you would quickly find out why the usual formulation of QFT is so much in vogue.

By the way, "second quantization" is a mischaracterization of QFT, which is nothing more than ordinary quantum theory in a Fock basis. A quantum field is simply a highly useful mathematical construct, just like, say, a Bessel function, a vector, a resolvent, or a complex number.

Regards,
Reilly Atkinson

kdv
Hey, I didn't say anything about time-independence.
But you said that "states of different particle number are not orthogonal". Usually one would interpret that statement as meaning "states of different particle number at a given time are not orthogonal", which is what Patrick is saying is wrong.
I guess you meant states at different times, which was not clear from your initial statement (and if we start discussing states at different times, the time evolution of the system must be specified). I think that's the whole point of Patrick's objection.

reilly
1. As vanesch pointed out, states with different numbers of particles are always, repeat always orthogonal, quite independently of any time behavior. This is basic to the notion of Fock space;
<N|N+M> = 0 unless M = 0 -- just put in your step operators: unless the number of a's on the left equals the number of a*'s on the right, you get zero.

You might argue that as time goes on an initial state with fixed numbers of particles will generally evolve into a state with most any number of particles. This evolved state clearly then is no longer a state with a fixed N. Then you typically expand the state on the basis of the orthogonal |N> Fock states with N=0 to infinity.

The standard QFT interactions are built primarily on the 3-point interaction, which allows one particle to transform into two, or two into one, or nothing into three, and three into nothing -- the latter two types represent the vacuum creating, say, an electron-positron pair along with a photon, or the other way around. Your statement about the Hamiltonian is correct.
Regards,
Reilly Atkinson

Maybe in particle physics.
But not in solid state physics or even string theory.
It is an artifact of overcounting. The step where you consider the Schrödinger equation to be a classical field equation should be counted as quantizing minus one times, so overall it is 2 - 1 = 1. Also, I've read that some people have proposed calling quantized gravity "third quantization".

Fra
Philosophical reflection

Also, I've read that some people proposed calling quantizing gravity as "Third Quantization"
See this amusing reflection of John Baez about the n'th quantization.

http://math.ucr.edu/home/baez/nth_quantization.html

I have always made loose philosophical connections to string theory as a constrained case of higher-order quantization.

If we are talking about indistinguishable particles, it seems clear that we cannot distinguish between a multiparticle system and the SAME particle seeming to be all over the place, or being "smeared out" like an extended object.

I.e., is a state smeared out as an extended object, or do we have several indistinguishable objects? And what's the difference? The multiparticle interpretation vs. the "second quantization". IMO these seem to be different "interpretations" of the same thing. I actually always liked the name "second quantization".

It even suggests an inductive scheme here, which is what Baez reflects on. Others have done so as well.

/Fredrik

reilly
See this amusing reflection of John Baez about the n'th quantization.

http://math.ucr.edu/home/baez/nth_quantization.html

Can you tell me where I might find the spectral resolution of Baez's K operator -- for example, does K^N, as N goes to infinity, converge to a finite result? From Baez's description it seems to me that K is a unitary operator, but I'm far from certain about ascribing that characteristic to K.

Thanks,
Reilly Atkinson

Fra
K is the map from the category of Hilbert state vectors and the linear operators on that space to the category of Fock state vectors and the linear operators on Fock space.

K^inf would result in some infinite-dimensional mess, increasing the degrees of freedom -- this is why I don't think the recipe alone makes sense. It would not converge to anything useful, IMO. But I still find the reflection interesting.

I make the following parallel association here...

Consider a distinguishable event x.

1) Either the event happens or it doesn't: {0,1}

2) Consider that we inflate our information capacity in one dimension; we can now consider the continuous probability that this event occurs: {p(x)} ~ [0,1] ~ R

3) Consider again that we inflate our information capacity in another dimension; we can now consider a continuous probability for a certain probability: {p(p(x))} ~ [0,1]x[0,1] ~ R^2

We go from point, to string, to plane; 0-brane, 1-brane, 2-brane.

I know this is fuzzy, but to try to formalize this isn't the interesting part IMO. It's the conceptual thing behind it.

This is connected to how I consider dimensionality to be dynamic, but I do not do it like the above. Instead of considering a continuous string, one can consider "string bits", and that way it's easier to understand how objects of different dimensionality can morph into each other, as part of what I consider to be an optimization problem.

The limiting physical information capacity is my main guide here. A continuous string may be recovered as an approximation, but I've got a feeling that it contains far too many ghost degrees of freedom. I don't think the continuum is physically observable, and therefore I would prefer not to see it in the models either.

/Fredrik

Fra
My original association is that if one, like me, considers that information has subjective reality in the observer's microstructure, then the state of knowledge of a point can in fact look like a string. I.e., the IMAGE of a point can look like a string due to uncertainty. This is the coupling I make. But there seems to be more than one way to interpret this.

/Fredrik

Fra
It seems most stringers consider the string real in another way, and rather think that the string is compactified and looks like a point. But that way of thinking is similar to the bird -> frog projection I don't think makes sense.

I prefer to say that the frog's uncertainty inflates the point into a string. And I think this can be understood without actually postulating the existence of strings.

That said, I don't like the string theory foundations as they stand, but it's still interesting to compare views. So even though I disagree, I can see where the strings come from. It's just that I would choose to see it quite differently.

/Fredrik

Fra
I think the preferred dimensionality might be understood as a generalization of the principle of maximum entropy. Too high a dimensionality will decrease the certainty of information for obvious reasons, since the degrees of freedom we have no control of are inflated. Too low a dimensionality will not be stable, since it keeps changing - i.e., the degrees of freedom are too few to describe the situation.

I'm sure there has been a lot of work on this. But I see this as part of the problems that aren't yet solved to satisfaction. It's in this larger task that I find the reflection on the quantization procedure as an induction step food for thought, but not the solution on its own.

Reilly, this is what I meant before when I said I don't think unitarity in the most general case can be maintained. To maintain it, I think we are forced to increase our degrees of freedom beyond what we can relate to. And I think the result is that we get lost in a landscape too large to relate to.

I think there may be another way: instead of applying the standard QM procedure over and over again, tweak the procedure. I think there will still be a procedure, but then we can find a physical meaning in the procedural progress - time!

That's my vision, but don't ask me to prove it; I can't. But it's the track I'm tuned in on.

/Fredrik

strangerep
Can you tell me where I might find the spectral resolution of Baez's K operator -- as in, for example does K^^N, as N goes to infinity, converge to a finite result? From Baez's description it seems to me that K is a unitary operator; but I'm far from certain about ascribing that characteristic to K.
K is a functor -- neither an operator, nor unitary. It maps from one Hilbert space to a quite different one, e.g., from a 1-particle Hilbert space to a multiparticle Fock space.

That means it's not an operator, because operators act from one space to the same space. Also, it can't be bijective, since (e.g.) the Fock space is larger than the 1-particle space. Therefore, K can't have a well-defined inverse in the usual sense. Preserving a Hermitian inner product also doesn't make sense here, since the spaces are different.

Fra
The interesting part, IMO, is to forget for a second about specifics like "particle" concepts and so on, and instead just consider abstract spaces of distinguishable states.

In that context, how can we interpret the "second quantization"?

We have one state space and inflate it to a larger state space. For what purpose? Seemingly to fit our observations and maintain unitarity: the smaller state space was too small (too few degrees of freedom).

And how can one interpret what is happening if we repeat the same trick? What if the Fock space is also too small?

a) What if we do the third quantization? And what would the physical interpretation be?

b) Is there another way? And how are these ways related?

And then, by induction, what would the n'th quantization mean?

And now the point I find interesting: does the quantization procedure itself have any physical significance, or is it just a human paper dragon?

/Fredrik

vanesch
The way I understand (n+1)-th quantization, which can be wrong, so be careful, is this: if you have a classical theory with a configuration space and a Lagrangian, then you "quantize" this system by assigning an independent dimension of Hilbert space to each point of the configuration space. In other words, what was a configuration in the classical theory (a point in configuration space) now becomes a basis state in the quantized description, and a general quantum state is a superposition of all of these basis states, which can be represented by a "wave function" over configuration space. The value of the wave function at a point of configuration space is of course nothing else but the component of the quantum state along its related basis state (the coefficient in the basis expansion).

Now, this wavefunction obeys a certain dynamics, given by the Schroedinger equation, and with some hocus pocus you can see this again as a classical system with dynamics. But this time, the "configuration space" resembles the original Hilbert space. If you look upon this dynamics as a classical system, you can AGAIN quantize it. So this time, quantum states of the first system label the independent basis states of the second system.

The hocus pocus is related to the fact that the Schroedinger equation is first order in time, and one needs a second-order system in order to be able to consider it as the configuration space of a classical system.

Simple application: one-particle classical system --> simple classical dynamics with 3-dim configuration space. Quantum system: "wavefunctions in space" Hilbert space. Hocus pocus: scalar field in space. Quantizing again: quantum field theory of scalar fields.
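The first step of that recipe can be pictured numerically (a rough sketch of my own, not vanesch's): discretize the configuration space, give each point its own basis direction, and a quantum state becomes a vector of coefficients over those directions, i.e. the sampled wavefunction.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 201)  # discretized 1-d configuration space
dx = x[1] - x[0]

# a classical configuration x0 = 1.0 becomes one basis state |x0>
i0 = int(np.argmin(np.abs(x - 1.0)))
basis_state = np.zeros_like(x); basis_state[i0] = 1.0

# a general quantum state is a superposition: here a Gaussian packet
psi = np.exp(-x**2 / 2.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# the wavefunction's value at x0 is just that basis state's coefficient
print(np.isclose(np.vdot(basis_state, psi), psi[i0]))  # True
```

The grid of 201 points and the Gaussian packet are arbitrary illustrative choices; the point is only the identification "wavefunction value = coefficient along the basis state of that configuration".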

Fra
I personally find one problem in all this is how to make a distinction between distinguishable states and the state of the dynamics describing the evolution of those states. Because why would we make a distinction between states and states? It doesn't make sense.

To me, the dynamical rules, Hamiltonians and whatever we use, are part of the total state and should in some sense be observable. So that all we have is a sort of self-evolution.

The separation of initial conditions and laws comes out as a flaw, because the law itself can be considered part of the initial conditions. And how can we describe distinguishable states in state space and distinguishable laws in law space on an equal footing, so that the self-evolution is part of the state and no "external" laws are needed?

I think a sensible description should be able to define the future pointer without any external construct. Which suggests to me that the predictable part of the differential dynamics should be encoded in the set of initial information, in principle at the same level as the traditional "state".

As soon as someone says Lagrangian I get a headache. I try my best to avoid anything that fools me into classical thinking. I've studied all that, but I find it easier to forget everything you ever "learned" and try to think from scratch. I think there must be a better way to see the meaning of the action principles, and I connect it to the concepts of subjective probability. I even think of classical action as related to plausibility. In the classical mechanics course, the ultimate motivation is that it agrees with Newton's mechanics. The deeper motivation is still lacking.

I try to play stupid, because it's less confusing: I am an observer lost in space; how can I define this Lagrangian or Hamiltonian, and how does it help me survive :) And how does this process interfere with the mentioned description? Is this process even part of the physical interactions?

/Fredrik

Fra
I think I am fuzzy as usual. A clarification.

My main message wasn't that I disagree with what vanesch said, which it may seem like. I rather want to say that I think this problem needs to be understood together with the problem of time and the problem of choosing observables.

In my thinking, the information capacity of any observer is self-regulatory, and there is a selective evolutionary pressure which causes inconsistent observers to lose information capacity. And it's tempting to associate this with losing mass or energy, but I'm not clear on the exact connection. I'm still sleepless over this.

Since I associate "mass or energy" with confidence, it means that losing it means losing confidence - or that the uncertainty increases. Increasing the degrees of freedom, like you do when you replace a number with a probability distribution, is only an option if your information capacity allows for it. And there are probably different ways to use any given capacity. If you can make observations on a set of events, and you see that the events just keep flipping, the question is how to best make use of your capacity. Form the average? But then you still end up with oscillations around the average - how to resolve that? Perhaps if there is a pattern in the oscillations themselves, this could give more bang for the buck. Then this pattern itself could be stable. Or not. If not, similar expansions can go on and build a complex microstructure which has evolved to survive in a given environment. Then the expectations might have a physical reality in this microstructure.

In this strange sense, "a superposition" can be understood to appear due to internal transformations, which in turn are driven by an optimization. There is a selection among transformations, where those transformations that don't make sense simply lose their confidence and go away.

In this, everything does acquire mass/energy (I'm still not clear on this), including the transformations themselves. Which means that there is a constraint on the possible transformations, especially for simple systems. Perhaps the standard model could be understood as the simplest possible selections? So the defining characteristics of the expected structures are their interaction properties; what they "really look like" inside is a question that makes no sense.

I was hoping to continue with a toy model I have, to see if I can define the optimization problem whose solution would be the superposition, or more properly the transform that generates it as the best guess. But then I started reading Rovelli's book and got sidetracked.

IMO, the quantization mystery has two issues.

a) the inflation of the degrees of freedom like vanesch describes - but this would then compete for encoding capacity, and inflating something will shrink something else, unless you inflate the number of distinguishable microstates of the observer.

b) the superposition principle, or the complex-amplitude thing vs. standard probability.

I think the two are related.

/Fredrik

reilly