## Fermion oscillator

 Quote by strangerep Most people seem to get by ok using canonical anti-commutation relations (or abstract Grassmann algebras) directly. I still don't really know where you're trying to go with all this.
I am only trying to understand what P&S are talking about, and I'm still not fully convinced that it is like

$$\theta:=(1,0,0),\quad\eta:=(0,1,0)$$

in my previous post, because it seems extremely strange to use notation

$$\int d\theta\;f(\theta) = \int d(1,0,0)\; f(1,0,0)$$

for anything.

In fact now it would make tons of sense to define integrals like

$$\int\limits_{\gamma} d\gamma\;f(\gamma) = \lim_{N\to\infty}\sum_{k=0}^N (\gamma(t_{k+1}) - \gamma(t_k))f(\gamma(t_k))$$

where $$\gamma:[a,b]\to\mathbb{R}^3$$ is some path, and where we use the Grassmann multiplication

$$\big(\gamma(t_{k+1}) - \gamma(t_k),\; f(\gamma(t_k))\big)\mapsto (\gamma(t_{k+1}) - \gamma(t_k))f(\gamma(t_k)).$$

For example with

$$f(x_1,x_2,x_3) = (0,x_1^2,0)$$

and

$$\gamma(t) = (t,0,0),\quad 0\leq t\leq L$$

the integral would be

$$\int\limits_{\gamma} d\gamma\; f(\gamma) = (0,0,\frac{1}{3}L^3).$$

I'm sure this is one good definition for the Grassmann integration, but I cannot know if this is the kind that we are supposed to have.
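
For what it's worth, this proposed definition is easy to test numerically. Below is a minimal sketch (my own code, not from the thread) using the product implied by $$\theta=(1,0,0)$$, $$\eta=(0,1,0)$$, $$\theta\eta=(0,0,1)$$, namely $$(a,b,c)(a',b',c')=(0,0,ab'-a'b)$$; it reproduces the $$(0,0,\tfrac{1}{3}L^3)$$ result above.

```python
import numpy as np

def gprod(u, v):
    # Anti-commuting toy product on R^3 implied by theta=(1,0,0),
    # eta=(0,1,0), theta*eta=(0,0,1):
    #   (a,b,c)(a',b',c') = (0, 0, a*b' - a'*b)
    return np.array([0.0, 0.0, u[0] * v[1] - v[0] * u[1]])

def path_integral(gamma, f, a, b, N=10_000):
    # Riemann sum from the post:
    #   sum_k (gamma(t_{k+1}) - gamma(t_k)) f(gamma(t_k))
    t = np.linspace(a, b, N + 1)
    total = np.zeros(3)
    for k in range(N):
        total += gprod(gamma(t[k + 1]) - gamma(t[k]), f(gamma(t[k])))
    return total

L = 2.0
gamma = lambda t: np.array([t, 0.0, 0.0])      # the path (t, 0, 0)
f = lambda x: np.array([0.0, x[0] ** 2, 0.0])  # f(x1,x2,x3) = (0, x1^2, 0)
print(path_integral(gamma, f, 0.0, L))         # ~ [0, 0, L**3 / 3]
```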

 Quote by strangerep I think you mean "representation" rather than "construction". (You're devising a concrete representation of an abstract algebra.)
I was careful to use the word "construction", because the thing I defined in the linear algebra subforum was not an algebra. It was something else, but had something anti-commuting.

 Quote by jostpuur I am only trying to understand what P&S are talking about, [...]
Have you tried Zee? I found P&S ch9 quite poor at explaining the essence of path
integrals the first time I read it, especially the generating functional Z(J) and what it
is used for. Zee explains it more clearly and directly. After that, the more extensive
treatment in P&S started to become more understandable.

 [...]I'm sure this is one good definition for the Grassmann integration[...]
Your definition of Grassmann integration seems wrong to me (though again I don't
have time to fully deconstruct the details).
If $f(\theta)$ is a constant, the integral is zero in standard Grassmann
calculus, but yours looks like it would give some other value.

My construction where

$$\theta=(1,0,0),\quad\eta=(0,1,0),\quad\theta\eta=(0,0,1)$$

was wrong. In P&S, $$f$$ is said to be a function of a Grassmann variable $$\theta$$. It is not possible for $$\theta$$ to be a fixed basis vector.

Okay, I still don't know what the Grassmann algebra is.

If I denote by $$G$$ the construction I gave in the linear algebra subforum (basically $$G=\mathbb{R}^2$$ with some additional information), perhaps

$$\bigoplus_{g\in G} \mathbb{C}$$

could be a correct kind of algebra...

 Have you tried Zee?
No. At some point I probably will, but it is always a labor to get new books. The library is usually empty of the most popular ones.

 Quote by jostpuur Okay, I still don't know what the Grassmann algebra is.
Let A,B,C,... denote ordinary complex numbers.

Then a 1-dimensional Grassmann algebra consists of a single Grassmann
variable $\theta$, its complex multiples $A\theta$,
and a 0 element (so far it's a boring 1D vector space over $\mathbb{C}$),
and the multiplication rules $\theta^2 = 0; A\theta = \theta A$.

The most general function $f(\theta)$ of a single
Grassmann variable is $A + B\theta$ (because higher order
terms like $\theta^2$ are all 0).

A 2-dimensional Grassmann algebra consists of two Grassmann
variables $\theta,\eta$, their complex linear combinations
$A\theta + B\eta$, a 0 element (so far it's a 2D vector space
over $\mathbb{C}$), with the same multiplication
rules as above for $\theta,\eta$ separately, but also
$\theta\eta + \eta\theta = 0$.

The most general function $f(\theta,\eta)$ of two
Grassmann variables is $A + B\theta + C\eta + D\theta\eta$
(because any higher order terms are either 0 or reduce to a lower
order term).

And so on for higher-dimensional Grassmann algebras.

That's about all there is to it.
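
For readers who want something executable, here is a minimal sketch of such an algebra (my own illustration; the representation is hypothetical, not from P&S): an element is a map from sorted tuples of generator indices to complex coefficients, so that $$A + B\theta_1 + C\theta_2 + D\theta_1\theta_2$$ is `{(): A, (1,): B, (2,): C, (1, 2): D}`, and the product enforces $$\theta_i^2=0$$ together with the sign of each reordering.

```python
def gmul(x, y):
    """Product of two Grassmann-algebra elements, each stored as a dict
    mapping a sorted tuple of generator indices to a complex coefficient."""
    out = {}
    for ix, cx in x.items():
        for iy, cy in y.items():
            if set(ix) & set(iy):
                continue  # a repeated generator means theta_i^2 = 0
            idx, sign = list(ix + iy), 1
            # Bubble-sort the indices; each neighbour swap flips the sign,
            # implementing theta_i theta_j = -theta_j theta_i.
            for i in range(len(idx)):
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * cx * cy
    return {k: v for k, v in out.items() if v != 0}

theta, eta = {(1,): 1}, {(2,): 1}
print(gmul(theta, eta))    # {(1, 2): 1}   i.e.  theta*eta
print(gmul(eta, theta))    # {(1, 2): -1}  i.e. -theta*eta
print(gmul(theta, theta))  # {}            i.e.  0
```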

Integral calculus over a Grassmann algebra proceeds partly by analogy
with ordinary integration. In particular, $d\theta$ is
required to be the same as $d(\theta+\alpha)$ (where
$\alpha$ is a constant Grassmann number). This leads to
the rules shown in P&S at the top of p300 -- eqs 9.63 and 9.64.
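
For completeness, the translation-invariance requirement pins the integral down in one line when applied to the most general $$f(\theta) = A + B\theta$$:

$$\int d\theta\,(A + B\theta) \;=\; \int d\theta\,(A + B\alpha + B\theta)\quad\text{for all }\alpha\qquad\Longrightarrow\qquad \int d\theta = 0,$$

with the remaining freedom fixed by the normalization convention $$\int d\theta\;\theta = 1$$ (these are the rules P&S quote there). This also makes the earlier remark explicit: for constant $$f(\theta) = A$$, the integral is $$A\int d\theta = 0$$.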

 Quote by strangerep Let A,B,C,... denote ordinary complex numbers. Then a 1-dimensional Grassmann algebra consists of a single Grassmann variable $\theta$, its complex multiples $A\theta$, and a 0 element (so far it's a boring 1D vector space over $\mathbb{C}$), and the multiplication rules $\theta^2 = 0; A\theta = \theta A$. The most general function $f(\theta)$ of a single Grassmann variable is $A + B\theta$ (because higher order terms like $\theta^2$ are all 0).
Could $$\theta$$ just as well be called a Grassmann constant? If it is called a variable, it sounds like $$\theta$$ could have different values.

Also, if A and B are complex numbers and I were given a quantity A+4B, I would not emphasize that A and B are constants and call this expression a function of 4, like $$f(4)=A+4B$$.

 Or is it like this: $$\theta$$ can have different values, and there exists a Grassmann algebra for each fixed $$\theta$$?

 Quote by jostpuur Could $$\theta$$ as well be called Grassmann constant?
No.

 If it is called variable, it sounds like $$\theta$$ could have different values.
Consider a function f(x) where x is real. You wouldn't call "x" constant, even though any specific
value of x you plug into f(x) _is_ constant. $\theta$ is an element of a 1-dimensional
vector space. Besides $\theta$, this vector space contains 0 and any complex multiple of
$\theta$, e.g: $C\theta$.

 if A and B are complex numbers and I were given a quantity A+4B, I would not emphasize that A and B are constants and call this expression a function of 4, like $$f(4)=A+4B$$.
All the symbols occurring in "A+4B" are from the same vector space, i.e., $\mathbb C$,
so this is not the same thing as $A + B\theta$.

 Quote by strangerep No. Consider a function f(x) where x is real. You wouldn't call "x" constant, even though any specific value of x you plug into f(x) _is_ constant.
Ok. But then we need a more precise definition of the set of allowed values of $$\theta$$. It is not my intention to only complain about lack of rigor, but I honestly haven't got a very good intuitive picture of this set either. I think I now have my own definition/construction ready for this, so that it seems to make sense, and I'm not sure that this claim:

 $\theta$ is an element of a 1-dimensional vector space. Besides $\theta$, this vector space contains 0 and any complex multiple of $\theta$, e.g: $C\theta$.
is fully right. For each fixed $$\theta$$ we have a vector space $$V=\{C\theta\;|\;C\in\mathbb{C}\}$$, but I don't see how this could be the same set from which $$\theta$$ was originally chosen.

Here's my way to get a Grassmann algebra, where the Grassmann variables would be as similar to the real numbers as possible:

First we define a multiplication on $$\mathbb{R}^2$$ as was done in my post in the linear algebra subforum. That means a map $$\mathbb{R}^2\times\mathbb{R}^2\to\mathbb{R}^2$$:

For all $$x\in\mathbb{R}$$, $$(x,0)(x,0)=(0,0)$$.

If $$0<x<x'$$, then $$(x,0)(x',0)=(0,xx')$$ and $$(x',0)(x,0)=(0,-xx')$$.

If $$x<0<x'$$ or $$x<x'<0$$, the signs are assigned in the natural way.

Finally for all $$(x,y),(x',y')\in\mathbb{R}^2$$ put $$(x,y)(x',y')=(x,0)(x',0)$$

Now $$\mathbb{R}$$ has been naturally (IMO naturally, perhaps somebody has something more natural...) extended to the smallest possible set that has a nontrivial anti-commuting product.

At this point one should notice that it is not a good idea to define scalar multiplication $$\mathbb{R}\times\mathbb{R}^2\to\mathbb{R}^2$$ like $$(\lambda,(x,y))\mapsto (\lambda x,\lambda y)$$, because the axiom $$(\lambda \theta)\eta=\theta(\lambda \eta)$$ would not be satisfied.

However a set

$$G=\bigoplus_{(x,y)\in\mathbb{R}^2}\mathbb{C}$$

becomes a well-defined vector space, whose members are finite sums

$$\lambda_1(x_1,y_1)+\cdots+\lambda_n(x_n,y_n).$$

It has a natural multiplication rule $$G\times G\to G$$, defined by extending bilinearly, as in

$$(\lambda_1(x_1,y_1) + \lambda_2(x_2,y_2))(\lambda_3(x_3,y_3) + \lambda_4(x_4,y_4)) = \lambda_1\lambda_3 (x_1,y_1)(x_3,y_3) + \lambda_1\lambda_4 (x_1,y_1)(x_4,y_4) + \lambda_2\lambda_3 (x_2,y_2)(x_3,y_3) + \lambda_2\lambda_4 (x_2,y_2)(x_4,y_4),$$

where we use the previously defined multiplication on $$\mathbb{R}^2$$.

To my eye it seems that this $$G$$ is now a well-defined algebra with the desired properties: if one chooses a member $$\theta\in G$$, one gets a vector space $$\{C\theta\;|\;C\in\mathbb{C}\}\subset G$$, and if one chooses two members $$\theta,\eta\in G$$, then the identity $$\theta\eta = -\eta\theta$$ is always true.

Now that I have thought about this more, my construction doesn't yet make sense. The identity $$\theta\eta=-\eta\theta$$ would be true only if there is a scalar multiplication $$(-1)(x,y)=(-x,-y)$$, which wasn't there originally. I made it too complicated because I was still thinking about my earlier construction attempt...

 Quote by strangerep Then a 1-dimensional Grassmann algebra consists of a single Grassmann variable $\theta$, its complex multiples $A\theta$, and a 0 element (so far it's a boring 1D vector space over $\mathbb{C}$), and the multiplication rules $\theta^2 = 0; A\theta = \theta A$.
More ideas!:

I think this one-dimensional Grassmann algebra can be considered as the set $$\mathbb{R}\times\{0,1\}$$ (with $$(0,0)$$ and $$(0,1)$$ identified as the common origin 0), with multiplication rules

$$(x,0)(x',0)=(xx',0)$$
$$(x,0)(x',1)=(xx',1)$$
$$(x,1)(x',0)=(xx',1)$$
$$(x,1)(x',1)=0$$

Here $$\mathbb{R}\times\{0\}$$ are like ordinary numbers, and $$\mathbb{R}\times\{1\}$$ are the Grassmann numbers. One could emphasize this with Greek letters, $$\theta=(\theta,1)$$.

 A 2-dimensional Grassmann algebra consists of two Grassmann variables $\theta,\eta$, their complex linear combinations $A\theta + B\eta$, a 0 element (so far it's a 2D vector space over $\mathbb{C}$), with the same multiplication rules as above for $\theta,\eta$ separately, but also $\theta\eta + \eta\theta = 0$.
This would be a set $$\mathbb{R}\times\{0,1,2,3\}$$ with multiplication rules

$$(x,0)(x',k) = (xx',k),\quad k\in\{0,1,2,3\}$$
$$(x,1)(x',1) = 0$$
$$(x,1)(x',2) = (xx', 3)$$
$$(x,2)(x',1) = (-xx',3)$$
$$(x,2)(x',2) = 0$$
$$(x,k)(x',3) = 0 = (x,3)(x',k),\quad k\in\{1,2,3\}$$

hmhmhmhmh?

Argh! But now I forgot that these are not vector spaces... Why can't I just read the definition from somewhere...

btw. I think that if you try to define a two-dimensional Grassmann algebra like that, it inevitably becomes three-dimensional, because there are members like

$$x\theta + y\eta + z\theta\eta$$
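
(Right: including the constants, the algebra generated by $$n$$ anticommuting variables is $$2^n$$-dimensional as a vector space. For $$n=2$$ a basis is

$$\{1,\;\theta,\;\eta,\;\theta\eta\},$$

which is exactly why the most general $$f(\theta,\eta) = A+B\theta+C\eta+D\theta\eta$$ above has four coefficients.)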

strangerep, I'm not saying that there is anything wrong with your explanation, but it must be missing something. When the Grassmann algebra is defined like this:

 Quote by strangerep Then a 1-dimensional Grassmann algebra consists of a single Grassmann variable $\theta$, its complex multiples $A\theta$, and a 0 element (so far it's a boring 1D vector space over $\mathbb{C}$), and the multiplication rules $\theta^2 = 0; A\theta = \theta A$. The most general function $f(\theta)$ of a single Grassmann variable is $A + B\theta$ (because higher order terms like $\theta^2$ are all 0).
It is already assumed that we know what set $$\theta$$ is taken from: $$\theta\in\;?$$

 Consider a function f(x) where x is real. You wouldn't call "x" constant, even though any specific value of x you plug into f(x) _is_ constant. $$\theta$$ is an element of a 1-dimensional vector space. Besides $$\theta$$, this vector space contains 0 and any complex multiple of $$\theta$$, e.g: $$C\theta$$.
Once $$\theta$$ exists, we get a vector space $$V_{\theta}:=\{c\theta\;|\;c\in\mathbb{C}\}$$, and it is true that $$\theta\in V_{\theta}$$, but you cannot use this vector space to define what $$\theta$$ is, because the $$\theta$$ is already needed in the definition of this vector space.

This is important. At the moment I couldn't tell, for example, whether a phrase like "Let $$\theta=4$$..." would be absurd or not. Are they numbers that anti-commute, like $$3\cdot4 = -4\cdot 3$$? Is the multiplication some map $$\mathbb{R}\times\mathbb{R}\to \mathbb{R}$$, or $$\mathbb{R}\times\mathbb{R}\to X$$, or $$X\times X\to X$$, where X is something?

 Quote by jostpuur Why cannot I just read the definition from somewhere...
You have, but you also have a persistent mental block against it that is beyond my
skill to dislodge.

 Once $$\theta$$ exists, we get a vector space $$V_{\theta}:=\{c\theta\;|\;c\in\mathbb{C}\}$$, and it is true that $$\theta\in V_{\theta}$$, but you cannot use this vector space to define what $$\theta$$ is, because the $$\theta$$ is already needed in the definition of this vector space.
$\theta$ is an abstract mathematical entity such that $\theta^2=0$.
There really is nothing more to it than that.

This is all a bit like asking what $i$ is. For some students initially, the answer
that "$i$ is an abstract mathematical entity such that $i^2 = -1$" is
unsatisfying, and they try to express $i$ in terms of something else they
already understand, thus missing the essential point that $i$ was originally
invented because that's not possible.

 The biggest difference between $$i$$ and $$\theta$$ is that $$i$$ is just a constant, whereas $$\theta$$ is a variable which can have different values. If I substitute $$\theta=3$$ and on the other hand $$\theta=4$$, will the product of these two Grassmann numbers be zero, or will it anti-commute non-trivially: like $$3\cdot 4= 0 = 4\cdot 3$$, or $$3\cdot 4 = - 4\cdot 3 \neq 0$$? Did I already do something wrong when I substituted 3 and 4? If so, is there something else whose substitution would be more allowed?

 This is all a bit like asking what $i$ is. For some students initially, the answer that "$i$ is an abstract mathematical entity such that $i^2 = -1$" is unsatisfying, and they try to express $i$ in terms of something else they already understand, thus missing the essential point that $i$ was originally invented because that's not possible.
IMO you cannot get a satisfying intuitive picture of complex numbers unless you see at least one construction of them. The famous one is of course the one where we set $$\mathbb{R}=\mathbb{R}\times\{0\}$$, $$i=(0,1)\in\mathbb{R}^2$$, and let $$\mathbb{R}\cup \{i\}$$ generate the field $$\mathbb{C}$$.

Another one is where we identify all real numbers $$x\in\mathbb{R}$$ with diagonal matrices

$$\left[\begin{array}{cc} x & 0 \\ 0 & x \\ \end{array}\right]$$

We can then set

$$i = \left[\begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array}\right]$$

and we get the complex numbers again.
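
One can check directly that this matrix squares to minus the identity:

$$\left[\begin{array}{cc} 0 & 1 \\ -1 & 0 \\ \end{array}\right]^2 = \left[\begin{array}{cc} -1 & 0 \\ 0 & -1 \\ \end{array}\right],$$

which is the matrix identified with the real number $$-1$$.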

 Quote by strangerep $\theta$ is an abstract mathematical entity such that $\theta^2=0$. There really is nothing more to it than that.
There must be more. If that is all, I could set

$$\theta = \left[\begin{array}{cc} 0 & 1 \\ 0 & 0 \\ \end{array}\right]$$

and be happy. The biggest difference between this matrix and the $$\theta$$ we want to have is that this matrix is not a variable that could have different values, but $$\theta$$ is supposed to be a variable.
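
(For what it's worth, this candidate does satisfy the defining relation:

$$\left[\begin{array}{cc} 0 & 1 \\ 0 & 0 \\ \end{array}\right]^2 = \left[\begin{array}{cc} 0 & 0 \\ 0 & 0 \\ \end{array}\right],$$

and its complex multiples $$A\theta$$ give a concrete matrix model of the nilpotent part of the one-dimensional Grassmann algebra.)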

....

btw would it be fine to set

$$\theta = \left[\begin{array}{cc} 0 & \theta \\ 0 & 0 \\ \end{array}\right]?$$

 Quote by jostpuur IMO you cannot get a satisfying intuitive picture of complex numbers unless you see at least one construction of them. The famous one is of course the one where we set $$\mathbb{R}=\mathbb{R}\times\{0\}$$, $$i=(0,1)\in\mathbb{R}^2$$, and let $$\mathbb{R}\cup \{i\}$$ generate the field $$\mathbb{C}$$.
OK, I think I see the source of some of the confusion. Let's do a reboot, and
change the notation a bit to be more explicit...

Begin with a (fixed) nilpotent entity $$\Upsilon$$ whose only properties
are that it commutes with the complex numbers, and $$\Upsilon^2 = 0$$.
Also, $$0\Upsilon = \Upsilon 0 = 0$$. Then let $$\mathbb{A} := \mathbb{C}\cup \{\Upsilon\}$$
generate an algebra. I'll call the set of numbers $$\mathbb{U} := \{z \Upsilon : z \in \mathbb{C}\}$$
the nilpotent numbers.

I can now consider a nilpotent variable $$\theta \in \mathbb{U}$$.
Similarly, I can consider a more general variable $$a \in \mathbb{A}$$.
I can also consider functions $$f(\theta) : \mathbb{U} \to \mathbb{A}$$.

More generally, I can consider two separate copies of $$\mathbb{U}$$, called
$$\mathbb{U}_1, \mathbb{U}_2$$, say. I can then impose the condition
that elements of each copy anticommute with each other. I.e., if
$$\theta \in \mathbb{U}_1, ~\eta \in \mathbb{U}_2$$, then
$$\theta\eta + \eta\theta = 0$$. In this way, one builds up multidimensional
Grassmann algebras.
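
A single copy of $$\mathbb{U}$$ is easy to model concretely (a sketch of my own; the class name is made up): represent $$z+w\Upsilon\in\mathbb{A}$$ as the pair $$(z,w)$$, with the product $$(z+w\Upsilon)(z'+w'\Upsilon) = zz' + (zw'+wz')\Upsilon$$ following from $$\Upsilon^2=0$$. This also answers the earlier question about substituting different "values": $$3\Upsilon$$ and $$4\Upsilon$$ live in the same copy, so their product is $$12\Upsilon^2 = 0$$ in either order, while nontrivial anticommutation only appears between different copies $$\mathbb{U}_1,\mathbb{U}_2$$.

```python
class Nilp:
    """z + w*Y with Y^2 = 0: the algebra generated by C and one
    nilpotent element Y (illustration only; names are mine)."""
    def __init__(self, z=0, w=0):
        self.z, self.w = complex(z), complex(w)
    def __add__(self, other):
        return Nilp(self.z + other.z, self.w + other.w)
    def __mul__(self, other):
        # (z + wY)(z' + w'Y) = zz' + (zw' + wz')Y + ww'*Y^2, and Y^2 = 0
        return Nilp(self.z * other.z, self.z * other.w + self.w * other.z)
    def __repr__(self):
        return f"({self.z}) + ({self.w})Y"

theta = Nilp(0, 3)    # the nilpotent number 3*Y
eta   = Nilp(0, 4)    # the nilpotent number 4*Y, from the SAME copy of U
print(theta * eta)    # (0j) + (0j)Y :  3Y * 4Y = 12*Y^2 = 0
print(theta * theta)  # (0j) + (0j)Y :  theta^2 = 0
```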

 Okay, thanks for your patience. I see this started getting frustrating, but I pressed on because the confusion was genuine. So my construction in post #67 was otherwise correct, except that it was a mistake to define $$\theta:=(1,0,0),\quad \eta:=(0,1,0).$$ Instead the notation $$\theta$$ should have been reserved for all members of $$\langle(1,0,0)\rangle$$ (the vector space spanned by the unit vector (1,0,0)), and similarly for $$\eta$$.

 Quote by jostpuur I would have been surprised if the same operators that worked for the Klein-Gordon field had also worked for the Dirac field, since the Dirac field has such a different Lagrangian. That would not have been proper quantizing. You don't quantize the one-dimensional infinite square well by stealing operators from the harmonic oscillator either!
 Quote by strangerep They're not using "the same operators that worked for the K-G field". They're (attempting to) use the same prescription based on a correspondence between Poisson brackets of functions on phase space and commutators of operators on Hilbert space. They find that commutators don't work, and resort to anti-commutators. So in the step between classical phase space and Hilbert space, they've implicitly introduced a Grassmann algebra even though they don't use that name until much later in the book. The crucial point is that the anti-commutativity is introduced before the correct Hilbert space is constructed.
 Quote by jostpuur I must know what would happen to the operators if classical variables were not anti-commuting.
If we did not let the Dirac field be composed of anti-commuting numbers, then wouldn't the canonical way of quantizing it be quantization as a constrained system? Because that is what the Dirac field is: it has constraints between the canonical momentum field and the $$\psi$$ configuration. P&S do not talk about any constraints in their "first quantization attempt", but only try quantization as a harmonic oscillator.