Fermion oscillator 
#73
May 22, 2008, 07:04 PM

Sci Advisor
P: 1,905

Then a 1-dimensional Grassmann algebra consists of a single Grassmann variable [itex]\theta[/itex], its complex multiples [itex]A\theta[/itex], and a 0 element (so far it's a boring 1D vector space over [itex]\mathbb{C}[/itex]), together with the multiplication rules [itex]\theta^2 = 0; A\theta = \theta A[/itex]. The most general function [itex]f(\theta)[/itex] of a single Grassmann variable is [itex]A + B\theta[/itex] (because higher-order terms like [itex]\theta^2[/itex] are all 0).

A 2-dimensional Grassmann algebra consists of two Grassmann variables [itex]\theta,\eta[/itex], their complex linear combinations [itex]A\theta + B\eta[/itex], and a 0 element (so far it's a 2D vector space over [itex]\mathbb{C}[/itex]), with the same multiplication rules as above for [itex]\theta,\eta[/itex] separately, but also [itex]\theta\eta + \eta\theta = 0[/itex]. The most general function [itex]f(\theta,\eta)[/itex] of two Grassmann variables is [itex]A + B\theta + C\eta + D\theta\eta[/itex] (because any higher-order term is either 0 or reduces to a lower-order term). And so on for higher-dimensional Grassmann algebras. That's about all there is to it.

Integral calculus over a Grassmann algebra proceeds partly by analogy with ordinary integration. In particular, [itex]d\theta[/itex] is required to be the same as [itex]d(\theta+\alpha)[/itex] (where [itex]\alpha[/itex] is a constant Grassmann number). This leads to the rules shown in P&S at the top of p. 300, eqs. 9.63 and 9.64.
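The one-variable rules above are easy to mechanize: an element is just its coefficient pair (A, B). A minimal sketch (the class name `G1` and helper `berezin` are mine, not from P&S; the Berezin rules correspond to their eqs. 9.63 and 9.64):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class G1:
    """Element A + B*theta of the one-generator Grassmann algebra.
    Illustrative toy class; names are my own invention."""
    A: complex
    B: complex

    def __add__(self, other):
        return G1(self.A + other.A, self.B + other.B)

    def __mul__(self, other):
        # (A + B th)(C + D th) = AC + (AD + BC) th, using th^2 = 0
        # and the fact that complex coefficients commute with th
        return G1(self.A * other.A, self.A * other.B + self.B * other.A)

def berezin(f):
    """Berezin integral: int dtheta (A + B*theta) = B."""
    return f.B

theta = G1(0, 1)
print(theta * theta)        # theta^2 = 0
print(berezin(G1(2, 3)))    # picks out the theta-coefficient: 3
```

Any product of three or more factors dies for the same reason θ² = 0 does, which is why A + Bθ really is the most general function.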


#74
May 23, 2008, 04:55 AM

P: 2,032

Also, if A and B are complex numbers and I was given a quantity A+4B, I would not emphasize that A and B are constants and call this expression a function of 4, like [tex]f(4)=A+4B[/tex]. 


#75
May 23, 2008, 05:21 AM

P: 2,032

Or is it like this: [tex]\theta[/tex] can have different values, and there exists a Grassmann algebra for each fixed [tex]\theta[/tex]?



#76
May 23, 2008, 07:25 PM

Sci Advisor
P: 1,905

The value of x you plug into f(x) _is_ constant. [itex]\theta[/itex] is an element of a 1-dimensional vector space. Besides [itex]\theta[/itex], this vector space contains 0 and any complex multiple of [itex]\theta[/itex], e.g. [itex]C\theta[/itex]. So this is not the same thing as [itex]A+B\theta[/itex]. 


#77
May 24, 2008, 01:23 AM

P: 2,032

Here's my way to get a Grassmann algebra, where the Grassmann variables would be as similar to the real numbers as possible. First we define a multiplication on [tex]\mathbb{R}^2[/tex] like it was done in my post in the linear algebra subforum. That means [tex]\mathbb{R}^2\times\mathbb{R}^2\to\mathbb{R}^2[/tex], where for all [tex]x\in\mathbb{R}[/tex], [tex](x,0)(x,0)=(0,0)[/tex]; if [tex]0<x<x'[/tex], then [tex](x,0)(x',0)=(0,xx')[/tex] and [tex](x',0)(x,0)=(0,-xx')[/tex]; if [tex]x<0<x'[/tex] or [tex]x<x'<0[/tex], just put the signs naturally. Finally, for all [tex](x,y),(x',y')\in\mathbb{R}^2[/tex] put [tex](x,y)(x',y')=(x,0)(x',0)[/tex].

Now [tex]\mathbb{R}[/tex] has been naturally (IMO naturally; perhaps somebody has something more natural...) extended to the smallest possible set that has a nontrivial anticommuting product. At this point one should notice that it is not a good idea to define scalar multiplication [tex]\mathbb{R}\times\mathbb{R}^2\to\mathbb{R}^2[/tex] like [tex](\lambda,(x,y))\mapsto (\lambda x,\lambda y)[/tex], because the axiom [tex](\lambda \theta)\eta=\theta(\lambda \eta)[/tex] would not be satisfied. However, the set [tex] G=\bigoplus_{(x,y)\in\mathbb{R}^2}\mathbb{C} [/tex] becomes a well-defined vector space, whose members are finite sums [tex] \lambda_1(x_1,y_1)+\cdots+\lambda_n(x_n,y_n). [/tex] It has a natural multiplication rule [tex]G\times G\to G[/tex], which can be defined recursively from [tex] (\lambda_1(x_1,y_1) + \lambda_2(x_2,y_2))(\lambda_3(x_3,y_3) + \lambda_4(x_4,y_4)) = \lambda_1\lambda_3 (x_1,y_1)(x_3,y_3) + \lambda_1\lambda_4 (x_1,y_1)(x_4,y_4) + \lambda_2\lambda_3 (x_2,y_2)(x_3,y_3) + \lambda_2\lambda_4 (x_2,y_2)(x_4,y_4), [/tex] where we use the previously defined multiplication on [tex]\mathbb{R}^2[/tex]. 
To my eye it seems that this [tex]G[/tex] is now a well-defined algebra with the desired properties: if one chooses a member [tex]\theta\in G[/tex], one gets a vector space [tex]\{C\theta : C\in\mathbb{C}\}\subset G[/tex], and if one chooses two members [tex]\theta,\eta\in G[/tex], then the identity [tex]\theta\eta = -\eta\theta[/tex] is always true. 


#78
May 24, 2008, 02:31 AM

P: 2,032

Now I have thought about this more, and my construction doesn't yet make sense. The identity [tex]\theta\eta=-\eta\theta[/tex] would be true only if there is a scalar multiplication with [tex](-1)(x,y)=(-x,-y)[/tex], which wasn't there originally. It could be that I made it too complicated because I was still thinking about my earlier construction attempt...
I think this one-dimensional Grassmann algebra can be considered as the set [tex]\mathbb{R}\times\{0,1\}[/tex] (with [tex](0,0)[/tex] and [tex](0,1)[/tex] identified as the common origin 0), with multiplication rules [tex] (x,0)(x',0)=(xx',0) [/tex] [tex] (x,0)(x',1)=(xx',1) [/tex] [tex] (x,1)(x',0)=(xx',1) [/tex] [tex] (x,1)(x',1)=0 [/tex] Here [tex]\mathbb{R}\times\{0\}[/tex] are like ordinary numbers, and [tex]\mathbb{R}\times\{1\}[/tex] are the Grassmann numbers. One could emphasize it with Greek letters, [tex]\theta=(\theta,1)[/tex].

For a two-dimensional version one could try [tex]\mathbb{R}\times\{0,1,2,3\}[/tex], with rules [tex] (x,0)(x',k) = (xx',k),\quad k\in\{0,1,2,3\} [/tex] [tex] (x,1)(x',1) = 0 [/tex] [tex] (x,1)(x',2) = (xx', 3) [/tex] [tex] (x,2)(x',1) = (-xx',3) [/tex] [tex] (x,2)(x',2) = 0 [/tex] [tex] (x,k)(x',3) = 0 = (x,3)(x',k),\quad k\in\{1,2,3\} [/tex] hmhmhmhmh? Argh! But now I forgot that these are not vector spaces... Why can't I just read the definition from somewhere...

btw. I think that if you try to define a two-dimensional Grassmann algebra like that, it inevitably becomes three-dimensional, because there are members like [tex] x\theta + y\eta + z\theta\eta [/tex] 
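The observation that two generators force a basis {1, θ, η, θη} can be checked mechanically. A sketch of the standard exterior-algebra product (function names and the tuple encoding are mine, assuming generators labelled by integers and sign tracking via adjacent swaps):

```python
def mul_monomials(a, b):
    """Multiply basis monomials given as sorted tuples of generator labels.
    Returns (sign, product-tuple); sign 0 means a repeated generator killed it."""
    if set(a) & set(b):
        return 0, ()                    # theta_i * theta_i = 0
    seq = list(a) + list(b)
    sign = 1
    # bubble-sort into canonical order; each adjacent swap flips the sign
    for i in range(len(seq)):
        for j in range(len(seq) - 1 - i):
            if seq[j] > seq[j + 1]:
                seq[j], seq[j + 1] = seq[j + 1], seq[j]
                sign = -sign
    return sign, tuple(seq)

def mul(x, y):
    """Product of elements given as {monomial tuple: coefficient} dicts."""
    out = {}
    for ma, ca in x.items():
        for mb, cb in y.items():
            s, m = mul_monomials(ma, mb)
            if s:
                out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

theta = {(1,): 1}
eta = {(2,): 1}
print(mul(theta, eta))     # {(1, 2): 1}
print(mul(eta, theta))     # {(1, 2): -1}   -> anticommutation
print(mul(theta, theta))   # {}             -> theta^2 = 0
```

A general element is `{(): x0, (1,): x, (2,): y, (1, 2): z}`, i.e. exactly the x θ + y η + z θη (plus a scalar part) that makes the two-generator algebra four-dimensional as a vector space.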


#79
May 24, 2008, 05:34 AM

P: 2,032

strangerep, I'm not saying that there would be anything wrong with your explanation, but it must be missing something. When the Grassmann algebra is defined like this:
This is important. At the moment I couldn't tell, for example, if a phrase "Let [tex]\theta=4[/tex]..." would be absurd or not. Are they numbers that anticommute, like [tex]3\cdot4 = -4\cdot 3[/tex]? Is the multiplication some map [tex]\mathbb{R}\times\mathbb{R}\to \mathbb{R}[/tex], or [tex]\mathbb{R}\times\mathbb{R}\to X[/tex], or [tex]X\times X\to X[/tex], where X is something? 


#80
May 24, 2008, 07:57 PM

Sci Advisor
P: 1,905

skill to dislodge. There really is nothing more to it than that. This is all a bit like asking what [itex]i[/itex] is. For some students initially, the answer that "[itex]i[/itex] is an abstract mathematical entity such that [itex]i^2 = -1[/itex]" is unsatisfying, and they try to express [itex]i[/itex] in terms of something else they already understand, thus missing the essential point that [itex]i[/itex] was originally invented because that's not possible. 


#81
May 25, 2008, 02:03 AM

P: 2,032

The biggest difference between [tex]i[/tex] and [tex]\theta[/tex] is that [tex]i[/tex] is just a constant, whereas [tex]\theta[/tex] is a variable which can have different values.
If I substitute [tex]\theta=3[/tex] and on the other hand [tex]\theta=4[/tex], will the product of these two Grassmann numbers be zero, or will it anticommute nontrivially: like [tex]3\cdot 4= 0 = 4\cdot 3[/tex], or [tex]3\cdot 4 = -4\cdot 3 \neq 0[/tex]? Did I already do something wrong when I substituted 3 and 4? If so, is there something else whose substitution would be more allowed? 
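One way to see what the later posts resolve: the "values" of θ are the complex multiples of a single fixed nilpotent object, so "θ = 3" can only mean 3θ. A quick check with a matrix stand-in (my choice of representation, not from the thread):

```python
import numpy as np

# a fixed nilpotent "direction"; its complex multiples are the possible values of theta
theta = np.array([[0., 1.],
                  [0., 0.]])

a = 3 * theta   # the value "3" of the Grassmann variable
b = 4 * theta   # the value "4"

print(np.allclose(a @ b, 0))          # product is 0, not a nonzero anticommutator
print(np.allclose(a @ b, -(b @ a)))   # anticommutes only trivially: both sides vanish
```

So substituting "3" and "4" as ordinary reals was the wrong move: 3θ and 4θ live in the same one-dimensional space, and any two such values multiply to zero.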


#82
May 25, 2008, 03:28 AM

P: 2,032

Another one is where we identify all real numbers [tex]x\in\mathbb{R}[/tex] with diagonal matrices [tex] \left[\begin{array}{cc} x & 0 \\ 0 & x \\ \end{array}\right] [/tex] We can then set [tex] i = \left[\begin{array}{cc} 0 & -1 \\ 1 & 0 \\ \end{array}\right] [/tex] and we get the complex numbers again. Similarly we could set [tex] \theta = \left[\begin{array}{cc} 0 & 1 \\ 0 & 0 \\ \end{array}\right] [/tex] and be happy. The biggest difference between this matrix and the [tex]\theta[/tex] we want to have is that this matrix is not a variable that could have different values, but [tex]\theta[/tex] is supposed to be a variable. .... btw would it be fine to set [tex] \theta = \left[\begin{array}{cc} 0 & \theta \\ 0 & 0 \\ \end{array}\right]? [/tex] 
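Both matrix identifications above can be verified in a couple of lines (a direct numerical check of the post's two candidate matrices):

```python
import numpy as np

one = np.eye(2)
i_mat = np.array([[0., -1.],
                  [1.,  0.]])   # candidate for i
theta = np.array([[0., 1.],
                  [0., 0.]])    # nilpotent candidate for theta

print(np.allclose(i_mat @ i_mat, -one))  # i^2 = -1, as required
print(np.allclose(theta @ theta, 0))     # theta^2 = 0, as required
```

The nilpotent matrix passes the θ² = 0 test, which is exactly why a single Grassmann generator has a faithful 2×2 representation even though i and θ otherwise play very different roles.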


#83
May 25, 2008, 08:11 PM

Sci Advisor
P: 1,905

Let me change the notation a bit to be more explicit... Begin with a (fixed) nilpotent entity [tex]\Upsilon[/tex] whose only properties are that it commutes with the complex numbers, and [tex]\Upsilon^2 = 0[/tex]. Also, [tex]0\Upsilon = \Upsilon 0 = 0[/tex]. Then let [tex]\mathbb{A} := \mathbb{C}\cup \{\Upsilon\}[/tex] generate an algebra. I'll call the set of numbers [tex]\mathbb{U} := \{z \Upsilon : z \in \mathbb{C}\}[/tex] the nilpotent numbers.

I can now consider a nilpotent variable [tex]\theta \in \mathbb{U}[/tex]. Similarly, I can consider a more general variable [tex]a \in \mathbb{A}[/tex]. I can also consider functions [tex]f(\theta) : \mathbb{U} \to \mathbb{A}[/tex].

More generally, I can consider two separate copies of [tex]\mathbb{U}[/tex], called [tex]\mathbb{U}_1, \mathbb{U}_2[/tex], say. I can then impose the condition that elements of the two copies anticommute with each other. I.e., if [tex]\theta \in \mathbb{U}_1, ~\eta \in \mathbb{U}_2[/tex], then [tex]\theta\eta + \eta\theta = 0[/tex]. In this way, one builds up multidimensional Grassmann algebras. 
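The two-copy construction can be realized concretely with tensor products of 2×2 matrices, in the style of a Jordan-Wigner construction (my realization, not part of the abstract definition above): each generator is nilpotent, and elements of the two copies anticommute.

```python
import numpy as np

a = np.array([[0., 1.],
              [0., 0.]])        # single nilpotent block, a^2 = 0
z = np.diag([1., -1.])          # sign operator, anticommutes with a
I = np.eye(2)

# two nilpotent generators acting on a 4-dimensional space
theta = np.kron(a, I)           # first copy of U
eta = np.kron(z, a)             # second copy, with the sign string attached

anti = theta @ eta + eta @ theta
print(np.allclose(anti, 0))                 # theta*eta + eta*theta = 0
print(np.allclose(theta @ theta, 0))        # theta^2 = 0
print(np.allclose(eta @ eta, 0))            # eta^2 = 0
```

The diagonal sign factor `z` is what converts the naive commuting tensor product into an anticommuting one; the same trick extends to any number of generators.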


#84
May 26, 2008, 05:30 PM

P: 2,032

Okay, thanks for the patience. I see this started getting frustrating, but I pressed on because the confusion was genuine.
So my construction in post #67 was otherwise correct, except that it was a mistake to define [tex] \theta:=(1,0,0),\quad \eta:=(0,1,0). [/tex] Instead the notation [tex]\theta[/tex] should have been reserved for all members of [tex]\langle(1,0,0)\rangle[/tex] (the vector space spanned by the unit vector (1,0,0)), and similarly with [tex]\eta[/tex]. 


#85
Jun 27, 2008, 04:41 AM

P: 2,032




#86
Jul 16, 2008, 10:22 AM

P: 2,032

[tex] \mathcal{L}_{\textrm{Dirac}} = \overline{\psi}(i\gamma^{\mu}\partial_{\mu} - m)\psi, [/tex] which is an example of a system where the canonical momentum is constrained to the generalized coordinates according to [tex] \pi = i\psi^{\dagger}, [/tex] and then explain that because this system cannot be quantized the same way as harmonic oscillators can be, the quantization of the system described by [tex]\mathcal{L}_{\textrm{Dirac}}[/tex] must involve anticommuting operators.

This is where I got the idea that a constraint between canonical momenta and generalized coordinates leads to a fermionic system. I then devised the simplest example of a similarly constrained system, [tex] L=\dot{x}y - x\dot{y} - x^2 - y^2, [/tex] which has the constraint [tex] (p_x,p_y) = (y,-x), [/tex] and came here to ask how this gives a fermionic system, and caused a lot of confusion. Was that my mistake? I'm not sure. It's fine if you think so. My opinion is that the explanation by Peskin & Schroeder sucks incredibly. 
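For completeness, the momenta of the toy Lagrangian [tex]L=\dot{x}y - x\dot{y} - x^2 - y^2[/tex] follow by direct differentiation (a one-line check, my computation):

```latex
% Canonical momenta of  L = \dot{x} y - x \dot{y} - x^2 - y^2 :
p_x = \frac{\partial L}{\partial \dot{x}} = y,
\qquad
p_y = \frac{\partial L}{\partial \dot{y}} = -x.
% Neither momentum is an independent variable -- the analogue of the
% Dirac-field constraint \pi = i\psi^{\dagger}.
```

So the momenta are fixed functions of the coordinates, which is exactly the kind of (second-class) constraint that motivated the question.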


#87
Jul 17, 2008, 03:34 AM

Sci Advisor
P: 1,905

"How Not to Quantize the Dirac Field". Then over on pp5556 they show that anticommutation relations resolve various problems. The last two paragraphs on p56 do indeed talk about postulating anticommutation relations, but they done so in the context of a larger discussion about why this is a good thing. For the purposes of P&S's book, introducing the full machinery of constrained DiracBergman quantization would have consumed several chapters by itself, and does not really belong in an "Introduction" to QFT. 


#88
Jul 17, 2008, 07:00 AM

P: 2,032

I would be curious to know if I'm the only one who's had similarly misled thoughts about the Dirac field. 


#89
Jul 17, 2008, 07:42 AM

P: 2,032

Some trivial remarks concerning the quantization of a zero-dimensional system:
If we were given the task of quantizing a system whose coordinate space is a zero-dimensional point, a natural way to approach this using already known concepts would be to consider a one-dimensional infinitely deep well of width L, and study it in the limit [tex]L\to 0[/tex], because in this limit the one-dimensional coordinate space becomes zero-dimensional. All the energy levels diverge in the limit [tex]L\to 0[/tex], [tex] E_n = \frac{\hbar^2\pi^2n^2}{2mL^2} \to \infty, [/tex] however, the divergence of the ground state is not a problem, because we can always normalize the energy so that the ground state remains the origin of the energy scale. The truly important remark is that the energy differences between the ground state and all the excited states diverge, [tex] E_n - E_1 = \frac{\hbar^2\pi^2}{2mL^2}(n^2 - 1)\to\infty,\quad\quad n>1, [/tex] thus we can conclude that when the potential well is pushed to zero dimensions, all the excited states become unattainable with finite energies. My final conclusion from all this would be that the zero-dimensional one-point system is quantized so that it has only one energy level, and thus very trivial dynamics.

A more interesting application of a zero-dimensional system: we start with a one-dimensional system with the following potential [tex] V(x)=\left\{\begin{array}{ll} \infty,\quad& x < 0\\ 0,\quad& 0<x<L\\ \infty,\quad& L<x<M\\ 0,\quad &M<x<M + \sqrt{1 + \alpha L^2}L\\ \infty,\quad & M + \sqrt{1 + \alpha L^2}L < x\\ \end{array}\right. [/tex] where [tex]\alpha>0[/tex] is some constant. So basically the system consists of two disconnected wells. One has width [tex]L[/tex], and the other [tex]\sqrt{1 + \alpha L^2}L[/tex]. In the limit [tex]L\to 0[/tex] the excited states of each well vanish again, but now it turns out that the difference between the ground states of the two wells remains finite. 
[tex] E_{\textrm{zero right}} - E_{\textrm{zero left}} \;=\; \frac{\hbar^2\pi^2}{2m}\Big(\frac{1}{1 + \alpha L^2}\; -\; 1\Big)\frac{1}{L^2}\; =\; -\frac{\hbar^2\pi^2}{2m} \frac{\alpha}{1 + \alpha L^2} \;\to\; -\frac{\hbar^2\pi^2\alpha}{2m} [/tex] Now the behavior of the quantized system in the limit [tex]L\to 0[/tex] is that it has precisely two energy levels, which can be thought of as the particle occupying either one of the zero-dimensional points [tex]\{0\}[/tex] or [tex]\{M\}[/tex], which together compose the coordinate space.

If, on the other hand, I was given the knowledge that the behavior of some quantum system is such that it has two energy levels, and I was then given the task of coming up with a suitable classical coordinate space and a Lagrangian that produce this two-level behavior, this is what I would give: a system consisting of two points, or alternatively a limit definition starting with a more traditional one-dimensional system. Would this be frowned upon? To me this looks simple and understandable, but would more professional theoreticians prefer devising some Grassmann-algebra explanation for the asked two-energy-level system? How different would it be from the naive construction given by me here, in the end? 
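The finite ground-state gap claimed above is easy to check numerically (a small script using the standard infinite-well formula, in natural units ħ = m = 1; names are mine):

```python
import math

HBAR = 1.0  # natural units (assumption for the demo)
M = 1.0

def ground_energy(width, m=M, hbar=HBAR):
    """Ground-state energy hbar^2 pi^2 / (2 m width^2) of an infinite well."""
    return hbar**2 * math.pi**2 / (2 * m * width**2)

def gap(L, alpha):
    """E_right - E_left for wells of width sqrt(1 + alpha L^2) L and L."""
    return ground_energy(math.sqrt(1 + alpha * L**2) * L) - ground_energy(L)

alpha = 2.0
limit = -HBAR**2 * math.pi**2 * alpha / (2 * M)   # predicted L -> 0 limit
for L in (1.0, 0.1, 0.01):
    print(L, gap(L, alpha))    # approaches the limit as L shrinks
print(limit)
```

The gap tends to −ħ²π²α/(2m) (the wider well lies lower), while every excitation energy scales like 1/L² and escapes to infinity, leaving exactly the two-level system described in the post.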

