Quantizing a two-dimensional Fermion Oscillator

In summary, the problem with quantizing the two-dimensional system defined by this Lagrangian is that the equations of motion are non-trivial while the canonical momenta are functions of the coordinates alone: [itex]p_x = y[/itex] and [itex]p_y = -x[/itex]. The attempt with the Schrödinger equation and a Hamiltonian therefore fails. Path integrals and the action cannot be used to define time evolution either, because there are no terms in the action [itex]S[/itex] proportional to [itex]\frac{1}{t_1-t_0}[/itex].
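Explicitly, for the Lagrangian discussed in the thread,

[tex]
L=\dot{x}y - x\dot{y} - x^2 - y^2,
[/tex]

the canonical momenta come out as

[tex]
p_x = \frac{\partial L}{\partial\dot{x}} = y,\qquad p_y = \frac{\partial L}{\partial\dot{y}} = -x,
[/tex]

so they contain no velocities at all, and the usual Legendre transform to a Hamiltonian is not available.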
  • #71
jostpuur said:
I am only trying to understand what P&S are talking about, [...]
Have you tried Zee? I found P&S ch9 quite poor at explaining the essence of path
integrals the first time I read it. Especially the generating functional Z(J) and what it
is used for. Zee explains it more clearly and directly. After that, the more extensive
treatment in P&S started to become more understandable.

[...]I'm sure this is one good definition for the Grassmann integration[...]
Your definition of Grassmann integration seems wrong to me (though again I don't
have time to fully deconstruct the details).
If [itex]f(\theta)[/itex] is a constant, the integral is zero in standard Grassmann
calculus, but yours looks like it would give some other value.
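(For reference, the standard rules are [itex]\int d\theta\,1 = 0[/itex] and [itex]\int d\theta\,\theta = 1[/itex], so [itex]\int d\theta\,(A + B\theta) = B[/itex]; in particular, a constant integrates to zero.)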
 
  • #72
My construction where

[tex]
\theta=(1,0,0),\quad\eta=(0,1,0),\quad\theta\eta=(0,0,1)
[/tex]

was wrong. In P&S, [tex]f[/tex] is said to be a function of a Grassmann variable [tex]\theta[/tex]. It is not possible for [tex]\theta[/tex] to be a fixed basis vector.

Okay, I still don't know what the Grassmann algebra is.

If I denote by [tex]G[/tex] the construction I gave in the linear algebra subforum (basically [tex]G=\mathbb{R}^2[/tex] with some additional structure), perhaps

[tex]
\bigoplus_{g\in G} \mathbb{C}
[/tex]

could be a correct kind of algebra...

Have you tried Zee?

No. At some point I probably will, but it is always a labor to get hold of new books. The library is usually empty of the most popular ones.
 
  • #73
jostpuur said:
Okay, I still don't know what the Grassmann algebra is.

Let A,B,C,... denote ordinary complex numbers.

Then a 1-dimensional Grassmann algebra consists of a single Grassmann
variable [itex]\theta[/itex], its complex multiples [itex]A\theta[/itex],
and a 0 element, (so far it's a boring 1D vector space over [itex]\mathbb{C}[/itex]),
and the multiplication rules [itex]\theta^2 = 0; A\theta = \theta A[/itex].

The most general function [itex]f(\theta)[/itex] of a single
Grassmann variable is [itex]A + B\theta[/itex] (because higher order
terms like [itex]\theta^2[/itex] are all 0).

A 2-dimensional Grassmann algebra consists of two Grassmann
variables [itex]\theta,\eta[/itex], their complex linear combinations
[itex]A\theta + B\eta[/itex], a 0 element, (so far it's a 2D vector space
over [itex]\mathbb{C}[/itex]), with the same multiplication
rules as above for [itex]\theta,\eta[/itex] separately, but also
[itex]\theta\eta + \eta\theta = 0[/itex].

The most general function [itex]f(\theta,\eta)[/itex] of two
Grassmann variables is [itex]A + B\theta + C\eta + D\theta\eta[/itex]
(because any higher order terms are either 0 or reduce to a lower
order term).
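For example, products of such functions close into the same form,

[tex]
(A + B\theta)(C + D\eta) = AC + BC\,\theta + AD\,\eta + BD\,\theta\eta,
[/tex]

and the would-be higher terms vanish, e.g. [itex]\theta(\theta\eta) = \theta^2\eta = 0[/itex] and [itex](\theta\eta)(\theta\eta) = -\theta^2\eta^2 = 0[/itex].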

And so on for higher-dimensional Grassmann algebras.

That's about all there is to it.

Integral calculus over a Grassmann algebra proceeds partly by analogy
with ordinary integration. In particular, [itex]d\theta[/itex] is
required to be the same as [itex]d(\theta+\alpha)[/itex] (where
[itex]\alpha[/itex] is a constant Grassmann number). This leads to
the rules shown in P&S at the top of p300 -- eqs 9.63 and 9.64.
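Explicitly: writing [itex]f(\theta) = A + B\theta[/itex], translation invariance requires

[tex]
\int d\theta\,(A + B\theta) = \int d\theta\,(A + B\alpha + B\theta),
[/tex]

so [itex]B\alpha\int d\theta\,1 = 0[/itex] for every [itex]\alpha[/itex], forcing [itex]\int d\theta\,1 = 0[/itex]. The remaining integral is fixed by the normalization convention [itex]\int d\theta\,\theta = 1[/itex], giving [itex]\int d\theta\,(A+B\theta) = B[/itex].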
 
  • #74
strangerep said:
Let A,B,C,... denote ordinary complex numbers.

Then a 1-dimensional Grassmann algebra consists of a single Grassmann
variable [itex]\theta[/itex], its complex multiples [itex]A\theta[/itex],
and a 0 element, (so far it's a boring 1D vector space over [itex]\mathbb{C}[/itex]),
and the multiplication rules [itex]\theta^2 = 0; A\theta = \theta A[/itex].

The most general function [itex]f(\theta)[/itex] of a single
Grassmann variable is [itex]A + B\theta[/itex] (because higher order
terms like [itex]\theta^2[/itex] are all 0).

Could [tex]\theta[/tex] just as well be called a Grassmann constant? If it is called a variable, it sounds like [tex]\theta[/tex] could have different values.

Also, if A and B are complex numbers and I were given a quantity A+4B, I would not emphasize that A and B are constants and call this expression a function of 4, like [tex]f(4)=A+4B[/tex].
 
  • #75
Or is it like this: [tex]\theta[/tex] can have different values, and there exists a Grassmann algebra for each fixed [tex]\theta[/tex]?
 
  • #76
jostpuur said:
Could [tex]\theta[/tex] as well be called Grassmann constant?
No.

If it is called variable, it sounds like [tex]\theta[/tex] could have different values.
Consider a function f(x) where x is real. You wouldn't call "x" constant, even though any specific
value of x you plug into f(x) _is_ constant. [itex]\theta[/itex] is an element of a 1-dimensional
vector space. Besides [itex]\theta[/itex], this vector space contains 0 and any complex multiple of
[itex]\theta[/itex], e.g. [itex]C\theta[/itex].

if A and B are complex numbers and I were given a quantity A+4B, I would not
emphasize that A and B are constants and call this expression a function of 4, like [tex]f(4)=A+4B[/tex].
All the symbols occurring in "A+4B" are from the same vector space, i.e., [itex]\mathbb C[/itex],
so this is not the same thing as [itex]A+B\theta[/itex].
 
  • #77
strangerep said:
No. Consider a function f(x) where x is real. You wouldn't call "x" constant, even though any specific
value of x you plug into f(x) _is_ constant.

Ok. But then we need a more precise definition of the set of allowed values of [tex]\theta[/tex]. It is not my intention only to complain about a lack of rigor; I honestly don't have a very good intuitive picture of this set either. I think I now have my own definition/construction ready, so that it seems to make sense, and I'm not sure that this claim:

[itex]\theta[/itex] is an element of a 1-dimensional
vector space. Besides [itex]\theta[/itex], this vector space contains 0 and any complex multiple of
[itex]\theta[/itex], e.g. [itex]C\theta[/itex].

is fully right. For each fixed [tex]\theta[/tex] we have a vector space [tex]V=\{C\theta\;|\;C\in\mathbb{C}\}[/tex], but I don't see how this could be the same set from which [tex]\theta[/tex] was originally chosen.

Here's my way to get a Grassmann algebra, where the Grassmann variables would be as similar to the real numbers as possible:

First we define a multiplication on [tex]\mathbb{R}^2[/tex], as was done in my post in the linear algebra subforum. That means [tex]\mathbb{R}^2\times\mathbb{R}^2\to\mathbb{R}^2[/tex]:

For all [tex]x\in\mathbb{R}[/tex], [tex](x,0)(x,0)=(0,0)[/tex].

If [tex]0<x<x'[/tex], then [tex](x,0)(x',0)=(0,xx')[/tex] and [tex](x',0)(x,0)=(0,-xx')[/tex].

If [tex]x<0<x'[/tex] or [tex]x<x'<0[/tex] just put the signs naturally.

Finally for all [tex](x,y),(x',y')\in\mathbb{R}^2[/tex] put [tex](x,y)(x',y')=(x,0)(x',0)[/tex]

Now [tex]\mathbb{R}[/tex] has been naturally (IMO naturally; perhaps somebody has something more natural...) extended to the smallest possible set that carries a nontrivial anti-commuting product.

At this point one should notice that it is not a good idea to define scalar multiplication [tex]\mathbb{R}\times\mathbb{R}^2\to\mathbb{R}^2[/tex] like [tex](\lambda,(x,y))\mapsto (\lambda x,\lambda y)[/tex], because the axiom [tex](\lambda \theta)\eta=\theta(\lambda \eta)[/tex] would not be satisfied.

However a set

[tex]
G=\bigoplus_{(x,y)\in\mathbb{R}^2}\mathbb{C}
[/tex]

becomes a well-defined vector space, whose members are finite sums

[tex]
\lambda_1(x_1,y_1)+\cdots+\lambda_n(x_n,y_n).
[/tex]

It has a natural multiplication rule [tex]G\times G\to G[/tex], which can be defined recursively from

[tex]
(\lambda_1(x_1,y_1) + \lambda_2(x_2,y_2))(\lambda_3(x_3,y_3) + \lambda_4(x_4,y_4)) = \lambda_1\lambda_3 (x_1,y_1)(x_3,y_3) + \lambda_1\lambda_4 (x_1,y_1)(x_4,y_4) + \lambda_2\lambda_3 (x_2,y_2)(x_3,y_3) + \lambda_2\lambda_4 (x_2,y_2)(x_4,y_4),
[/tex]

where we use the previously defined multiplication on [tex]\mathbb{R}^2[/tex].

To my eye it seems that this [tex]G[/tex] is now a well-defined algebra and has the desired properties: If one chooses a member [tex]\theta\in G[/tex], one gets a vector space [tex]\{C\theta\;|\;C\in\mathbb{C}\}\subset G[/tex], and if one chooses two members [tex]\theta,\eta\in G[/tex], then the identity [tex]\theta\eta = -\eta\theta[/tex] is always true.
 
  • #78
Now I have thought about this more, and my construction doesn't yet make sense. The identity [tex]\theta\eta=-\eta\theta[/tex] would be true only if there were a scalar multiplication [tex](-1)(x,y)=(-x,-y)[/tex], which wasn't there originally. I may have made it too complicated because I was still thinking about my earlier construction attempt...


strangerep said:
Then a 1-dimensional Grassmann algebra consists of a single Grassmann
variable [itex]\theta[/itex], its complex multiples [itex]A\theta[/itex],
and a 0 element, (so far it's a boring 1D vector space over [itex]\mathbb{C}[/itex]),
and the multiplication rules [itex]\theta^2 = 0; A\theta = \theta A[/itex].

More ideas!:

I think this one-dimensional Grassmann algebra can be considered as the set [tex]\mathbb{R}\times\{0,1\}[/tex] (with [tex](0,0)[/tex] and [tex](0,1)[/tex] identified as the common origin 0), with the multiplication rules

[tex]
(x,0)(x',0)=(xx',0)
[/tex]
[tex]
(x,0)(x',1)=(xx',1)
[/tex]
[tex]
(x,1)(x',0)=(xx',1)
[/tex]
[tex]
(x,1)(x',1)=0
[/tex]

Here [tex]\mathbb{R}\times\{0\}[/tex] are like ordinary numbers, and [tex]\mathbb{R}\times\{1\}[/tex] are the Grassmann numbers. One could emphasize it with Greek letters [tex]\theta=(\theta,1)[/tex].

A 2-dimensional Grassmann algebra consists of two Grassmann
variables [itex]\theta,\eta[/itex], their complex linear combinations
[itex]A\theta + B\eta[/itex], a 0 element, (so far it's a 2D vector space
over [itex]\mathbb{C}[/itex]), with the same multiplication
rules as above for [itex]\theta,\eta[/itex] separately, but also
[itex]\theta\eta + \eta\theta = 0[/itex].

This would be a set [tex]\mathbb{R}\times\{0,1,2,3\}[/tex] with multiplication rules

[tex]
(x,0)(x',k) = (xx',k),\quad k\in\{0,1,2,3\}
[/tex]
[tex]
(x,1)(x',1) = 0
[/tex]
[tex]
(x,1)(x',2) = (xx', 3)
[/tex]
[tex]
(x,2)(x',1) = (-xx',3)
[/tex]
[tex]
(x,2)(x',2) = 0
[/tex]
[tex]
(x,k)(x',3) = 0 = (x,3)(x',k),\quad k\in\{1,2,3\}
[/tex]

hmhmhmhmh?

Argh! But now I forgot that these are not vector spaces... :mad: Why can't I just read the definition from somewhere...

btw. I think that if you try to define a two-dimensional Grassmann algebra like that, it inevitably becomes three-dimensional, because there are members like

[tex]
x\theta + y\eta + z\theta\eta
[/tex]
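Indeed, counting the constants too, the general element is [tex]A + x\theta + y\eta + z\theta\eta[/tex], so as a vector space the algebra generated by two anticommuting variables is [tex]2^2=4[/tex]-dimensional ([tex]2^n[/tex]-dimensional for [tex]n[/tex] generators); calling it "2-dimensional" counts the generators, not the vector space dimension.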
 
  • #79
strangerep, I'm not saying that there is anything wrong with your explanation, but it must be missing something. When the Grassmann algebra is defined like this:

strangerep said:
Then a 1-dimensional Grassmann algebra consists of a single Grassmann
variable [itex]\theta[/itex], its complex multiples [itex]A\theta[/itex],
and a 0 element, (so far it's a boring 1D vector space over [itex]\mathbb{C}[/itex]),
and the multiplication rules [itex]\theta^2 = 0; A\theta = \theta A[/itex].

The most general function [itex]f(\theta)[/itex] of a single
Grassmann variable is [itex]A + B\theta[/itex] (because higher order
terms like [itex]\theta^2[/itex] are all 0).

It is already assumed that we know from what set [tex]\theta[/tex] is taken: [tex]\theta\in\, ?[/tex]

Consider a function f(x) where x is real. You wouldn't call "x" constant, even though any specific value of x you plug into f(x) _is_ constant. [tex]\theta[/tex] is an element of a 1-dimensional vector space. Besides [tex]\theta[/tex], this vector space contains 0 and any complex multiple of [tex]\theta[/tex], e.g. [tex]C\theta[/tex].

Once [tex]\theta[/tex] exists, we get a vector space [tex]V_{\theta}:=\{c\theta\;|\;c\in\mathbb{C}\}[/tex], and it is true that [tex]\theta\in V_{\theta}[/tex], but you cannot use this vector space to define what [tex]\theta[/tex] is, because the [tex]\theta[/tex] is already needed in the definition of this vector space.

This is important. At the moment I couldn't tell, for example, whether the phrase "Let [tex]\theta=4[/tex]..." would be absurd or not. Are they numbers that anti-commute, like [tex]3\cdot4 = -4\cdot 3[/tex]? Is the multiplication some map [tex]\mathbb{R}\times\mathbb{R}\to \mathbb{R}[/tex], or [tex]\mathbb{R}\times\mathbb{R}\to X[/tex], or [tex]X\times X\to X[/tex], where X is something?
 
  • #80
jostpuur said:
Why cannot I just read the definition from somewhere...
You have, but you also have a persistent mental block against it that is beyond my
skill to dislodge.

Once [tex]\theta[/tex] exists, we get a vector space [tex]V_{\theta}:=\{c\theta\;|\;c\in\mathbb{C}\}[/tex], and it is true that [tex]\theta\in V_{\theta}[/tex], but you cannot use this vector space to define what [tex]\theta[/tex] is,
because the [tex]\theta[/tex] is already needed in the definition of this vector space.

[itex]\theta[/itex] is an abstract mathematical entity such that [itex]\theta^2=0[/itex].
There really is nothing more to it than that.

This is all a bit like asking what [itex]i[/itex] is. For some students initially, the answer
that "[itex]i[/itex] is an abstract mathematical entity such that [itex]i^2 = -1[/itex]" is
unsatisfying, and they try to express [itex]i[/itex] in terms of something else they
already understand, thus missing the essential point that [itex]i[/itex] was originally
invented because that's not possible.
 
  • #81
The biggest difference between [tex]i[/tex] and [tex]\theta[/tex] is that [tex]i[/tex] is just a constant, whereas [tex]\theta[/tex] is a variable which can have different values.

If I substitute [tex]\theta=3[/tex] and on the other hand [tex]\theta=4[/tex], will the product of these two Grassmann numbers be zero, or will it anti-commute non-trivially: Like [tex]3\cdot 4= 0 = 4\cdot 3[/tex], or [tex]3\cdot 4 = - 4\cdot 3 \neq 0[/tex]?

Did I already do something wrong when I substituted 3 and 4? If so, is there something else whose substitution would be more allowed?
 
  • #82
This is all a bit like asking what [itex]i[/itex] is. For some students initially, the answer
that "[itex]i[/itex] is an abstract mathematical entity such that [itex]i^2 = -1[/itex]" is
unsatisfying, and they try to express [itex]i[/itex] in terms of something else they
already understand, thus missing the essential point that [itex]i[/itex] was originally
invented because that's not possible.

IMO you cannot get a satisfying intuitive picture of complex numbers unless you see at least one construction of them. The famous one is of course the one where we set [tex]\mathbb{R}=\mathbb{R}\times\{0\}[/tex], [tex]i=(0,1)\in\mathbb{R}^2[/tex], and let [tex]\mathbb{R}\cup \{i\}[/tex] generate the field [tex]\mathbb{C}[/tex].

Another one is where we identify each real number [tex]x\in\mathbb{R}[/tex] with the diagonal matrix

[tex]
\left[\begin{array}{cc}
x & 0 \\ 0 & x \\
\end{array}\right]
[/tex]

We can then set

[tex]
i = \left[\begin{array}{cc}
0 & 1 \\ -1 & 0 \\
\end{array}\right]
[/tex]

and we get the complex numbers again.
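Indeed, one checks directly that

[tex]
\left[\begin{array}{cc}
0 & 1 \\ -1 & 0 \\
\end{array}\right]
\left[\begin{array}{cc}
0 & 1 \\ -1 & 0 \\
\end{array}\right]
= \left[\begin{array}{cc}
-1 & 0 \\ 0 & -1 \\
\end{array}\right],
[/tex]

which is the matrix identified with the real number [tex]-1[/tex].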

strangerep said:
[itex]\theta[/itex] is an abstract mathematical entity such that [itex]\theta^2=0[/itex].
There really is nothing more to it than that.

There must be more. If that is all, I could set

[tex]
\theta = \left[\begin{array}{cc}
0 & 1 \\ 0 & 0 \\
\end{array}\right]
[/tex]

and be happy. The biggest difference between this matrix and the [tex]\theta[/tex] we want to have is that this matrix is not a variable that could have different values, but [tex]\theta[/tex] is supposed to be a variable.


...


btw would it be fine to set

[tex]
\theta = \left[\begin{array}{cc}
0 & \theta \\ 0 & 0 \\
\end{array}\right]?
[/tex]
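As a sanity check of the matrix idea, here is a minimal numerical sketch (assuming numpy; the graded tensor-product construction below is one standard way of representing two anticommuting nilpotent generators, chosen here for illustration rather than taken from P&S):

[code]
import numpy as np

sm = np.array([[0., 1.], [0., 0.]])   # nilpotent 2x2 block: sm @ sm = 0
sz = np.array([[1., 0.], [0., -1.]])  # grading factor (Pauli sigma_z)
I2 = np.eye(2)

# theta = sm (x) I,  eta = sz (x) sm. The sigma_z grading is what makes the
# two generators anticommute with each other, not just square to zero.
theta = np.kron(sm, I2)
eta = np.kron(sz, sm)

print(np.allclose(theta @ theta, 0))              # True: theta^2 = 0
print(np.allclose(eta @ eta, 0))                  # True: eta^2 = 0
print(np.allclose(theta @ eta + eta @ theta, 0))  # True: theta eta = -eta theta
print(np.allclose(theta @ eta, 0))                # False: theta eta itself is nonzero
[/code]

So a single fixed nilpotent matrix does square to zero, but as soon as a second generator is wanted, the anticommutation between the two has to be built in separately; that is what the [tex]\sigma_z[/tex] factor does.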
 
  • #83
jostpuur said:
IMO you cannot get a satisfying intuitive picture of complex numbers unless you see at least one construction of them. The famous one is of course the one where we set [tex]\mathbb{R}=\mathbb{R}\times\{0\}[/tex], [tex]i=(0,1)\in\mathbb{R}^2[/tex],
and let [tex]\mathbb{R}\cup \{i\}[/tex] generate the field [tex]\mathbb{C}[/tex].

OK, I think I see the source of some of the confusion. Let's do a reboot, and
change the notation a bit to be more explicit...

Begin with a (fixed) nilpotent entity [tex]\Upsilon[/tex] whose only properties
are that it commutes with the complex numbers, and [tex]\Upsilon^2 = 0[/tex].
Also, [tex]0\Upsilon = \Upsilon 0 = 0[/tex]. Then let [tex]\mathbb{A} := \mathbb{C}\cup \{\Upsilon\}[/tex]
generate an algebra. I'll call the set of numbers [tex]\mathbb{U} := \{z \Upsilon : z \in \mathbb{C}\}[/tex]
the nilpotent numbers.

I can now consider a nilpotent variable [tex]\theta \in \mathbb{U}[/tex].
Similarly, I can consider a more general variable [tex]a \in \mathbb{A}[/tex].
I can also consider functions [tex]f(\theta) : \mathbb{U} \to \mathbb{A}[/tex].

More generally, I can consider two separate copies of [tex]\mathbb{U}[/tex], called
[tex]\mathbb{U}_1, \mathbb{U}_2[/tex], say. I can then impose the condition
that elements of each copy anticommute with each other. I.e., if
[tex]\theta \in \mathbb{U}_1, ~\eta \in \mathbb{U}_2[/tex], then
[tex]\theta\eta + \eta\theta = 0[/tex]. In this way, one builds up multidimensional
Grassmann algebras.
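In this notation, "different values" of a nilpotent variable [tex]\theta\in\mathbb{U}[/tex] are simply different elements [tex]z\Upsilon[/tex]. That also answers the earlier question about substituting 3 and 4: two values from the same copy multiply to zero,

[tex]
(3\Upsilon)(4\Upsilon) = 12\,\Upsilon^2 = 0,
[/tex]

while values taken from different copies anticommute nontrivially,

[tex]
(3\Upsilon_1)(4\Upsilon_2) = -(4\Upsilon_2)(3\Upsilon_1) \neq 0.
[/tex]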
 
  • #84
Okay, thanks for your patience :wink: I see this started getting frustrating, but I pressed on because my confusion was genuine.

So my construction in post #67 was otherwise correct, except that it was a mistake to define

[tex]
\theta:=(1,0,0),\quad \eta:=(0,1,0).
[/tex]

Instead, the notation [tex]\theta[/tex] should have been reserved for arbitrary members of [tex]\langle(1,0,0)\rangle[/tex] (the vector space spanned by the unit vector (1,0,0)), and similarly for [tex]\eta[/tex].
 
  • #85
jostpuur said:
I would have been surprised if the same operators that worked for the
Klein-Gordon field had also worked for the Dirac field, since the
Dirac field has such a different Lagrangian. It would not have
been proper quantization. You don't quantize the one-dimensional
infinite square well by stealing operators from the harmonic oscillator
either!

strangerep said:
They're not using "the same operators that worked for the K-G field".
They're (attempting to) use the same prescription based on a
correspondence between Poisson brackets of functions on phase space,
and commutators of operators on Hilbert space. They find
that commutators don't work, and resort to anti-commutators. So in the
step between classical phase space and Hilbert space, they've
implicitly introduced a Grassmann algebra even though they don't use
that name until much later in the book. The crucial point is that
the anti-commutativity is introduced before the correct Hilbert space
is constructed.

jostpuur said:
I must know what would happen to the operators if classical
variables were not anti-commuting.

If we did not let the Dirac field be composed of anti-commuting numbers, wouldn't the canonical way of quantizing it then be quantization as a constrained system? Because that is what the Dirac field is: it has constraints between the canonical momentum field and the [tex]\psi[/tex] configuration. P&S do not mention any constraints in their "first quantization attempt", but only try quantization as a harmonic oscillator.
 
  • #86
samalkhaiat said:
You created this thread and gave it the title "fermion oscillator", yet you don't seem to know the difference between Fermi and Bose dynamics.
You wrote an incorrect Bosonic Lagrangian and asked us to help you quantize that wrong Lagrangian! You also asked us to obtain information from the wrong Bosonic Lagrangian and use that information to explain the Fermion oscillator! These requests of yours are certainly meaningless!

I see my original question wasn't logical, but I have some excuse for this. My first encounter with the Dirac field was in the book by Peskin & Schroeder. They could have honestly said that they were going to postulate anti-commuting operators, but instead they preferred to motivate the quantization somehow. Basically, they introduce a classical Dirac field described by the Lagrangian

[tex]
\mathcal{L}_{\textrm{Dirac}} = \overline{\psi}(i\gamma^{\mu}\partial_{\mu} - m)\psi,
[/tex]

which is an example of a system where the canonical momenta are constrained by the generalized coordinates according to

[tex]
\pi =i\psi^{\dagger},
[/tex]

and then explain that, because this system cannot be quantized the same way harmonic oscillators can be, the quantization of the system described by [tex]\mathcal{L}_{\textrm{Dirac}}[/tex] must involve anti-commuting operators. This is where I got the idea that a constraint between the canonical momenta and the generalized coordinates leads to a fermionic system; I then devised the simplest example of a similar constrained system,

[tex]
L=\dot{x}y - x\dot{y} - x^2 - y^2
[/tex]

which has the constraint

[tex]
(p_x,p_y) = (y,-x),
[/tex]

and came here to ask how this gives a fermionic system, and caused a lot of confusion. Was that my mistake? I'm not sure. It's fine if you think so. My opinion is that the explanation by Peskin & Schroeder sucks incredibly.
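For concreteness, the Dirac constraint follows from a one-line computation (using [tex]\overline{\psi}=\psi^{\dagger}\gamma^0[/tex] and [tex](\gamma^0)^2=1[/tex]):

[tex]
\pi = \frac{\partial\mathcal{L}_{\textrm{Dirac}}}{\partial\dot{\psi}} = i\overline{\psi}\gamma^0 = i\psi^{\dagger},
[/tex]

with no [tex]\dot{\psi}[/tex] left on the right-hand side, which is the same structure as the [tex](p_x,p_y)=(y,-x)[/tex] constraint above.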

samalkhaiat said:
Your equations of motion represent a 2-D oscillator, your Lagrangian does not represent any system!

Here you are making a mistake. The Lagrangian I wrote is an example of a constrained system.
 
  • #87
jostpuur said:
[...] Peskin & Schroeder [...] could have honestly said that they were going to postulate anti-commuting operators, but instead they preferred to motivate the quantization somehow.[...]

I think you are too harsh on P&S. In sect 3.5, bottom of p52, they have a section
"How Not to Quantize the Dirac Field". Then over on pp55-56 they show that
anti-commutation relations resolve various problems. The last two paragraphs
on p56 do indeed talk about postulating anti-commutation relations, but
they do so in the context of a larger discussion about why this is a good thing.

For the purposes of P&S's book, introducing the full machinery of constrained
Dirac-Bergmann quantization would have consumed several chapters by itself,
and does not really belong in an "Introduction" to QFT.
 
  • #88
strangerep said:
For the purposes of P&S's book, introducing the full machinery of constrained
Dirac-Bergmann quantization would have consumed several chapters by itself,
and does not really belong in an "Introduction" to QFT.

I wouldn't have expected them to explain the quantization of constrained systems, but their presentation left me believing that it is the constraint between the momenta and the generalized coordinates that forces us into anti-commuting brackets, and at the same time I was left fully unaware that there even existed a separate theory about quantization with constraints. Assuming I'm now right in thinking that the constraint never had anything to do with the anti-commuting brackets, I suppose it's: all's well that ends well?

I would be curious to know if I'm the only one who's had similarly misled thoughts about the Dirac field.
 
  • #89
Some trivial remarks concerning the quantization of a zero-dimensional system:

If we were given the task of quantizing a system whose coordinate space is a zero-dimensional point, a natural way to approach this using already known concepts would be to consider a one-dimensional infinitely deep well of width L, and study it in the limit [tex]L\to 0[/tex], because in this limit the one-dimensional coordinate space becomes zero-dimensional. All the energy levels diverge in the limit [tex]L\to 0[/tex],

[tex]
E_n = \frac{\hbar^2\pi^2n^2}{2mL^2} \to \infty,
[/tex]

however, the divergence of the ground state is not a problem, because we can always normalize the energy so that the ground state remains the origin of the energy scale. The truly important remark is that the energy differences between the ground state and all the excited states diverge,

[tex]
E_n - E_1 = \frac{\hbar^2\pi^2}{2mL^2}(n^2 - 1)\to\infty,\quad\quad n>1,
[/tex]

thus we can conclude that when the potential well is squeezed to zero width, all the excited states become unattainable with finite energies. My final conclusion from all this is that the zero-dimensional one-point system is quantized so that it has only one energy level, and thus very trivial dynamics.

A more interesting application of a zero-dimensional system:

We start with a one-dimensional system with the following potential

[tex]
V(x)=\left\{\begin{array}{ll}
\infty,\quad& x < 0\\
0,\quad& 0<x<L\\
\infty,\quad& L<x<M\\
0,\quad &M<x<M + \sqrt{1 + \alpha L^2}L\\
\infty,\quad & M + \sqrt{1 + \alpha L^2}L < x\\
\end{array}\right.
[/tex]

where [tex]\alpha>0[/tex] is some constant. So basically the system consists of two disconnected wells: one has width [tex]L[/tex], and the other [tex]\sqrt{1 + \alpha L^2}\,L[/tex]. In the limit [tex]L\to 0[/tex] the excited states of each well again become unattainable, but now it turns out that the difference between the ground states of the two wells remains finite:

[tex]
E_{\textrm{zero left}} - E_{\textrm{zero right}} \;=\; \frac{\hbar^2\pi^2}{2m}\Big(1 \;-\; \frac{1}{1 + \alpha L^2}\Big)\frac{1}{L^2}\; =\; \frac{\hbar^2\pi^2}{2m} \frac{\alpha}{1 + \alpha L^2} \;\to\; \frac{\hbar^2\pi^2\alpha}{2m}
[/tex]

Now the behavior of the quantized system in the limit [tex]L\to 0[/tex] is that it has precisely two energy levels, which can be thought of as the particle occupying either one of the zero-dimensional points [tex]\{0\}[/tex] or [tex]\{M\}[/tex], which together compose the coordinate space.
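Here is a quick numerical check of that limit, as a minimal sketch (assuming numpy, in units [tex]\hbar=m=1[/tex], with the arbitrary choice [tex]\alpha=2[/tex], so the splitting should approach [tex]\pi^2\alpha/2 = \pi^2 \approx 9.87[/tex]):

[code]
import numpy as np

alpha = 2.0  # arbitrary positive constant from the potential above

def ground_energy(width):
    # Infinite-well ground state energy: E_1 = pi^2 / (2 * width^2)
    return np.pi**2 / (2.0 * width**2)

for L in [1.0, 0.1, 0.01, 0.001]:
    E_left = ground_energy(L)                                 # narrow well
    E_right = ground_energy(np.sqrt(1.0 + alpha * L**2) * L)  # wide well
    print(L, E_left - E_right)  # -> pi^2 * alpha / 2 as L -> 0
[/code]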

If, on the other hand, I were given the knowledge that some quantum system behaves so that it has two energy levels, and I were then given the task of coming up with a suitable classical coordinate space and a Lagrangian that produces this two-level behavior, this is what I would give: a system consisting of two points, or alternatively a limit definition starting from a more traditional one-dimensional system. Would this be frowned upon? To me it looks simple and understandable, but would more professional theoreticians prefer devising some Grassmann-algebra explanation for the requested two-level system? How different would it be, in the end, from the naive construction given here?
 
