Q&A on String Theory Course: Indices/QFT

In summary, a spinor index labels the vector space on which a spinor representation of the Lorentz group (SL(2,C)) acts. It is carried not only by the Dirac spinors themselves but also by objects such as the gamma matrices.
  • #1
latentcorpse
So I have a few questions about a string theory course I am taking, although I guess the questions are largely on indices/QFT stuff!

(i) Consider the expression
[itex]\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab}[/itex]
we were going to take the transpose of this and it was said that the transpose operation only acts on Dirac spinor indices. I thought Dirac spinor indices were the indices attached to the Dirac spinors, [itex]\psi[/itex]. However, we calculated
[itex] (\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^T= - \eta_{ab} \partial_\mu {\psi^b}^T {\gamma^\mu}^T C^T \psi^a[/itex]

i.e. the index on the [itex]\gamma^\mu[/itex] must also be a spinor index as it has been transposed as well. So my question is, what is the actual definition of a spinor index?

(ii) We then showed that
[itex] (\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^T=-\eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a[/itex]
and that
[itex](\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^\dagger=+\eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a[/itex]

i.e. that [itex] (\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^T = - (\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^\dagger[/itex]

why does this allow us to conclude that [itex]\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab}[/itex] is real?

(iii) Why does taking the transpose of two anticommuting objects introduce a minus sign?
i.e. if A,B anticommute then why does [itex](AB)^T=-BA[/itex]?

(iii) Does the Polyakov action describe a bosonic particle?

(iv) Why does the equation of motion [itex] ( - \partial_\tau + \partial_\sigma ) \psi_-^a=0[/itex] imply that

[itex]\psi_-^a=k^a(\sigma + \tau)[/itex] i.e. a left moving wave?

I tried substituting it in and I get [itex]-\dot{k}^a(\sigma + \tau) + {k^a}'(\sigma + \tau)[/itex] where dot is the tau derivative and ' is the sigma derivative. Why should this thing equal zero though?

(v) Given that [itex]\delta \psi^a = \gamma^\nu \epsilon \partial_\nu X^a[/itex] we showed

[itex]\delta \bar{\psi}^a = \delta ( {\psi^a}^T C ) = \partial_\nu X^a ( \epsilon^T {\gamma^\nu}^T C) = - \partial_\nu X^a ( \epsilon^T C \gamma^\nu C^{-1} C ) = - \partial_\nu X^a \bar{\epsilon} \gamma^\nu[/itex]
I agree with all of this except I don't understand why the [itex]\partial_\nu X^a[/itex] at the front isn't also transposed?

(vi) Consider the following manipulations:

[itex]\bar{\psi}^a \gamma^\mu \partial_\mu ( \gamma^\nu \epsilon \partial_\nu X^b ) \eta_{ab}[/itex]
where [itex]\epsilon[/itex] is a constant anticommuting Majorana spinor that generates the supersymmetry.

We can write this as

[itex]\bar{\psi}^a \gamma^\mu \partial_\nu ( \gamma^\nu \epsilon \partial_\mu X^b ) \eta_{ab}[/itex]
This is using the fact that since gamma and epsilon are constant the only contributing term is the one with the double derivative, and then we can use commutativity of partial derivatives to swap [itex] \mu[/itex] and [itex]\nu[/itex].

Then (since this is found in the supersymmetry action, it is going to be getting integrated over the string worldsheet), we integrate by parts to get:

[itex]-\partial_\nu \bar{\psi}^a \gamma^\mu \gamma^\nu \epsilon \partial_\mu X^b \eta_{ab}[/itex]
I get where this term comes from but what happened to the surface term?
Then this becomes

[itex] - ( \partial_\mu \bar{\psi}^b \gamma^\nu \gamma^\mu \epsilon)^T \partial_\nu X^a \eta_{ab}[/itex]
I do not follow this last step AT ALL!

Thanks very much for any help!
 
  • #2
(iii) Because they're anticommuting: the transposition of the spinor matrices inverts their order, but because the objects carry Grassmann parity 1, a minus sign pops up to account for that.

(ii) I think there's something fishy with what you wrote. Why would the spacetime derivative switch between spinors? Maybe I'm missing something, or perhaps one needs to do the calculation to check whether it's right. Anyway, perhaps it helps to know that on a Grassmann algebra, an element of definite parity is real with respect to an involution iff it's equal to its involute.

(i) The gamma matrices carry spinorial indices, of course. A spinor index is used each time we're dealing with matrix elements of the operators of finite-dimensional representations of a spinor group (SL(2,C) in this case).
 
  • #3
bigubau said:
(iii) Because they're anticommuting: the transposition of the spinor matrices inverts their order, but because the objects carry Grassmann parity 1, a minus sign pops up to account for that.

(ii) I think there's something fishy with what you wrote. Why would the spacetime derivative switch between spinors? Maybe I'm missing something, or perhaps one needs to do the calculation to check whether it's right. Anyway, perhaps it helps to know that on a Grassmann algebra, an element of definite parity is real with respect to an involution iff it's equal to its involute.

(i) The gamma matrices carry spinorial indices, of course. A spinor index is used each time we're dealing with matrix elements of the operators of finite-dimensional representations of a spinor group (SL(2,C) in this case).

Hey bigubau, thanks for your reply! Unfortunately, I don't really understand what you wrote!

(iii) Is there a way to show this by writing it out? I have no idea what Grassmann parity is about!
I think he wrote something like [itex](\bar{A}B)^T = (A^\alpha C_{\alpha \beta} B^\beta)^T= \dots[/itex] where A and B are anticommuting spinors. Can this be used to show why the minus sign appears?

(ii) I'll maybe email the lecturer about this and ask what was happening there then

(i) Can you explain what a spinor index is at a more basic level? My knowledge of groups is fairly limited. Why does the gamma carry a spinor index?

Thanks.
 
  • #4
This is way too many questions for one post. They're not even related to one another.


(i) A spinor index is the index labeling the vector space of a spinor representation of the Lorentz group. The gamma matrices are intertwiners that take a spinor and antispinor representation to the vector representation of the Lorentz group. You should review this material in Weinberg or any decent QFT book.

(ii)
[itex]
(\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^T = - (\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^\dagger
[/itex]

On the RHS, we have the Hermitian conjugate, which is a transpose + complex conjugation. So the LHS and RHS differ by a complex conjugation, which, in the case of Grassmann variables, comes with an extra minus sign by convention:

[tex] (\chi \psi)^* = \psi^* \chi^* = - \chi^* \psi^*[/tex]

So if the c.c. of the transpose equals the transpose, we conclude that the quantity is real.

(iii) Consider

[tex] \begin{pmatrix} \chi_1 & \chi_2 \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \chi_1 \psi_1 + \chi_2\psi_2,[/tex]

while


[tex] \begin{pmatrix} \psi_1 & \psi_2 \end{pmatrix} \begin{pmatrix} \chi_1 \\ \chi_2 \end{pmatrix} =\psi_1\chi_1+ \psi_2\chi_2= -(\chi_1 \psi_1 + \chi_2\psi_2),[/tex]

so transpose must come with a minus sign for every time we commute odd variables.

(iii) The Polyakov action is 2d; it describes a bosonic string.
 
  • #5
latentcorpse said:
(iii) Is there a way to show this by writing it out? I have no idea what Grassmann parity is about!
I think he wrote something like [itex](\bar{A}B)^T = (A^\alpha C_{\alpha \beta} B^\beta)^T= \dots[/itex] where A and B are anticommuting spinors. Can this be used to show why the minus sign appears?

You don't need to use index gymnastics to show that. You know that your spinors are finite matrices, so naively [itex] (AB)^{T} = B^{T}A^{T} [/itex]. You need an extra minus sign, though, because the spinors are really anticommuting: if you didn't have the minus, then by plugging in A=B you would no longer have the A^2 = 0 condition.
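If it helps to see the sign appear concretely, here is a small numerical sketch of my own (not from the course): represent Grassmann generators as fermionic creation operators via a Jordan-Wigner construction, so that theta_i theta_j = -theta_j theta_i and theta_i^2 = 0 hold as honest matrix identities, and then check the two-component example.

[code]
import numpy as np
from functools import reduce

def grassmann_generators(n):
    """n anticommuting generators as 2^n x 2^n matrices
    (Jordan-Wigner: theta_i = Z x ... x Z x S+ x 1 x ... x 1)."""
    I = np.eye(2)
    Z = np.diag([1.0, -1.0])
    Sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: Sp @ Sp = 0
    return [reduce(np.kron, [Z] * i + [Sp] + [I] * (n - i - 1))
            for i in range(n)]

# two 2-component anticommuting spinors chi and psi -> 4 generators
chi1, chi2, psi1, psi2 = grassmann_generators(4)

# sanity checks: anticommutation and nilpotency
assert np.allclose(chi1 @ psi1 + psi1 @ chi1, 0)
assert np.allclose(chi1 @ chi1, 0)

chi_psi   = chi1 @ psi1 + chi2 @ psi2   # chi psi       (row times column)
psiT_chiT = psi1 @ chi1 + psi2 @ chi2   # psi^T chi^T   (the naive transpose)

# chi psi is a scalar in spinor space, so (chi psi)^T = chi psi,
# and the anticommutativity supplies the extra minus sign:
print(np.allclose(chi_psi, -psiT_chiT))   # True
[/code]

The point is only that, once you insist that a 1x1 matrix equals its own transpose, the anticommutativity forces the extra minus sign in [itex](AB)^T = -B^T A^T[/itex].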

latentcorpse said:
(i) Can you explain what a spinor index is at a more basic level? My knowledge of groups is fairly limited. Why does the gamma carry a spinor index?

Here is the explanation I can give you.

The matrix representations of the 2 psi's are different: one is a 1x4 row matrix (the barred one), the other is a 4x1 column matrix (the one w/o a bar on top of it). To link them we need a 4x4 matrix, which automatically carries one spinor index for each of the two spinors. The product of the 3 objects must be a real number, that is a 1x1 matrix. So the gammas carry spinor indices.
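To make this explicit with spinor indices (here I use the [itex]\bar{A}B = A^\alpha C_{\alpha \beta} B^\beta[/itex] notation from post #3; index placement may differ in your notes), the kinetic term reads

[tex]\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \,\eta_{ab} = \psi^{a\,\alpha}\, C_{\alpha\beta}\, (\gamma^\mu)^{\beta}{}_{\gamma}\, \partial_\mu \psi^{b\,\gamma}\, \eta_{ab}[/tex]

so [itex]\alpha, \beta, \gamma[/itex] are the spinor indices, and the gamma matrix necessarily carries two of them; that is why it is affected by the transpose in (i).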

I saw you added several points to the original problem. I can't help you on these, because my knowledge of SUSY and strings is very, very well approximated by the number 0.
 
  • #6
latentcorpse said:
(iv) Why does the equation of motion [itex] ( - \partial_\tau + \partial_\sigma ) \psi_-^a=0[/itex] imply that

[itex]\psi_-^a=k^a(\sigma + \tau)[/itex] i.e. a left moving wave?

I tried substituting it in and I get [itex]-\dot{k}^a(\sigma + \tau) + {k^a}'(\sigma + \tau)[/itex] where dot is the tau derivative and ' is the sigma derivative. Why should this thing equal zero though?

It's horribly misleading to use dot and prime notation for a function whose argument is the sum of two variables. Since we use the notation

[tex]f'(x) = \frac{df(x)}{dx}[/tex]

then

[tex]\psi'(\sigma+\tau) = \frac{d\psi(\sigma+\tau)}{d(\sigma+\tau)}.[/tex]

You can relate these derivatives to [tex]\partial_{\tau,\sigma} \psi[/tex] by using the chain rule.
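As a quick numerical sanity check (my own sketch with an arbitrary smooth profile, not part of the course notes), any function of [itex]\sigma + \tau[/itex] alone is indeed annihilated by [itex]-\partial_\tau + \partial_\sigma[/itex]; the algebraic reason is exactly the chain rule above.

[code]
import numpy as np

k   = lambda u: np.sin(3.0 * u) + u**2      # arbitrary smooth profile k(u)
psi = lambda sigma, tau: k(sigma + tau)     # left-moving ansatz psi = k(sigma + tau)

sigma, tau, h = 0.7, 1.3, 1e-5
d_tau   = (psi(sigma, tau + h) - psi(sigma, tau - h)) / (2 * h)
d_sigma = (psi(sigma + h, tau) - psi(sigma - h, tau)) / (2 * h)

print(-d_tau + d_sigma)    # ~ 0, up to finite-difference error
[/code]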



(v) Given that [itex]\delta \psi^a = \gamma^\nu \epsilon \partial_\nu X^a[/itex] we showed

[itex]\delta \bar{\psi}^a = \delta ( {\psi^a}^T C ) = \partial_\nu X^a ( \epsilon^T {\gamma^\nu}^T C) = - \partial_\nu X^a ( \epsilon^T C \gamma^\nu C^{-1} C ) = - \partial_\nu X^a \bar{\epsilon} \gamma^\nu[/itex]
I agree with all of this except I don't understand why the [itex]\partial_\nu X^a[/itex] at the front isn't also transposed?

[itex]\partial_\nu X^a[/itex] has no spinor indices.

(vi) Consider the following manipulations:

[itex]\bar{\psi}^a \gamma^\mu \partial_\mu ( \gamma^\nu \epsilon \partial_\nu X^b ) \eta_{ab}[/itex]
where [itex]\epsilon[/itex] is a constant anticommuting Majorana spinor that generates the supersymmetry.

We can write this as

[itex]\bar{\psi}^a \gamma^\mu \partial_\nu ( \gamma^\nu \epsilon \partial_\mu X^b ) \eta_{ab}[/itex]
This is using the fact that since gamma and epsilon are constant the only contributing term is the one with the double derivative, and then we can use commutativity of partial derivatives to swap [itex] \mu[/itex] and [itex]\nu[/itex].

Then (since this is found in the supersymmetry action, it is going to be getting integrated over the string worldsheet), we integrate by parts to get:

[itex]-\partial_\nu \bar{\psi}^a \gamma^\mu \gamma^\nu \epsilon \partial_\mu X^b \eta_{ab}[/itex]
I get where this term comes from but what happened to the surface term?

If you're dealing with a closed string, there is no boundary to support the surface term.

Then this becomes

[itex] - ( \partial_\mu \bar{\psi}^b \gamma^\nu \gamma^\mu \epsilon)^T \partial_\nu X^a \eta_{ab}[/itex]
I do not follow this last step AT ALL!

Thanks very much for any help!

I have no idea where the transpose came from, but ignoring that, it's just a relabeling of [tex]\mu,\nu[/tex]. I don't think the transpose should be there.
 
  • #7
The transpose should be there, since you need an [tex] \bar{\epsilon}[/tex] term to make the action invariant under supersymmetry transformations, and you can transpose the bracket freely since it is a scalar.
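Spelled out (one way to see it; overall signs depend on your conventions): every spinor index in the bracket is contracted, so the bracket is a 1x1 matrix and equals its own transpose, while the dummy indices [itex]\mu,\nu[/itex] and [itex]a,b[/itex] can be relabelled freely,

[tex]-\partial_\nu \bar{\psi}^a \gamma^\mu \gamma^\nu \epsilon\, \partial_\mu X^b \eta_{ab} = -\left(\partial_\nu \bar{\psi}^a \gamma^\mu \gamma^\nu \epsilon\right)^T \partial_\mu X^b \eta_{ab} = -\left(\partial_\mu \bar{\psi}^b \gamma^\nu \gamma^\mu \epsilon\right)^T \partial_\nu X^a \eta_{ab}[/tex]

using [itex]\eta_{ab}=\eta_{ba}[/itex] in the last step, together with the fact that [itex]\partial_\mu X^b[/itex] carries no spinor index and so can sit outside the bracket.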
 
  • #8
fzero said:
This is way too many questions for one post. They're not even related to one another.
(ii)
[itex]
(\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^T = - (\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^\dagger
[/itex]

On the RHS, we have the Hermitian conjugate, which is a transpose + complex conjugation. So the LHS and RHS differ by a complex conjugation, which, in the case of Grassmann variables, comes with an extra minus sign by convention:

[tex] (\chi \psi)^* = \psi^* \chi^* = - \chi^* \psi^*[/tex]

So if the c.c. of the transpose equals the transpose, we conclude that the quantity is real.

But even if I accept that the c.c. introduces a minus sign, our equation tells us that dagger = -transpose, i.e. that the c.c. of the transpose = -transpose.
But you wrote that the c.c. of the transpose = transpose?

fzero said:
(iii) Consider

[tex] \begin{pmatrix} \chi_1 & \chi_2 \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \chi_1 \psi_1 + \chi_2\psi_2,[/tex]

while[tex] \begin{pmatrix} \psi_1 & \psi_2 \end{pmatrix} \begin{pmatrix} \chi_1 \\ \chi_2 \end{pmatrix} =\psi_1\chi_1+ \psi_2\chi_2= -(\chi_1 \psi_1 + \chi_2\psi_2),[/tex]

so transpose must come with a minus sign for every time we commute odd variables.

So let's just say [itex] \chi = \begin{pmatrix} \chi_1 & \chi_2 \end{pmatrix} [/itex] and [itex]\psi= \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}[/itex]

Then [itex]\chi \psi = \begin{pmatrix} \chi_1 & \chi_2 \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \chi_1 \psi_1 + \chi_2 \psi_2[/itex]

Now [itex]- \psi^T \chi^T = - \begin{pmatrix} \psi_1 & \psi_2 \end{pmatrix} \begin{pmatrix} \chi_1 \\ \chi_2 \end{pmatrix} = - (\psi_1 \chi_1 + \psi_2 \chi_2 ) = - (- (\chi_1 \psi_1 + \chi_2 \psi_2)) = \chi_1 \psi_1 + \chi_2 \psi_2[/itex] where I used the anticommutativity in the second-to-last equality

but then this means that [itex]\chi \psi = - \psi^T \chi^T[/itex] and we wanted to show [itex]( \chi \psi)^T = - \psi^T \chi^T[/itex]. Where did I go wrong?
 
  • #9
fzero said:
It's horribly misleading to use dot and prime notation for a function whose argument is the sum of two variables. Since we use the notation

[tex]f'(x) = \frac{df(x)}{dx}[/tex]

then

[tex]\psi'(\sigma+\tau) = \frac{d\psi(\sigma+\tau)}{d(\sigma+\tau)}.[/tex]

You can relate these derivatives to [tex]\partial_{\tau,\sigma} \psi[/tex] by using the chain rule.
Ok so we have

[itex](-\partial_\tau + \partial_\sigma ) \psi_-^a = -\partial_\tau k^a(\sigma + \tau) + \partial_\sigma k^a(\sigma + \tau) = - \frac{\partial k^a(\sigma + \tau)}{\partial \tau} \frac{\partial \tau}{\partial \tau} + \frac{\partial k^a(\sigma + \tau)}{\partial \sigma} \frac{\partial \sigma}{\partial \sigma} = - \frac{\partial k^a(\sigma + \tau)}{\partial \tau} + \frac{\partial k^a(\sigma + \tau)}{\partial \sigma}[/itex]

So how do I show this is equal to zero now?

fzero said:
[itex]\partial_\nu X^a[/itex] has no spinor indices.
Well we know that transpose only acts on spinor indices. So it won't be transposed by that logic but it will still be moved to the front of the expression because we have transposed the whole thing, right?

fzero said:
If you're dealing with a closed string, there is no boundary to support the surface term.
This kind of makes sense I guess. What do you mean there is no boundary though? A closed string worldsheet is essentially the surface of a cylinder - doesn't this have a boundary?

Thanks again!
 
  • #10
latentcorpse said:
But even if I accept that the c.c. introduces a minus sign, our equation tells us that dagger = -transpose, i.e. that the c.c. of the transpose = -transpose.
But you wrote that the c.c. of the transpose = transpose?

It's hard to be more precise without knowing your conventions. For example are the gamma matrices Hermitian or anti-Hermitian? It might also be the case that

[tex]i\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab}[/tex]

is the quantity that is real.

So let's just say [itex] \chi = \begin{pmatrix} \chi_1 & \chi_2 \end{pmatrix} [/itex] and [itex]\psi= \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix}[/itex]

Then [itex]\chi \psi = \begin{pmatrix} \chi_1 & \chi_2 \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \chi_1 \psi_1 + \chi_2 \psi_2[/itex]

Now [itex]- \psi^T \chi^T = - \begin{pmatrix} \psi_1 & \psi_2 \end{pmatrix} \begin{pmatrix} \chi_1 \\ \chi_2 \end{pmatrix} = - (\psi_1 \chi_1 + \psi_2 \chi_2 ) = - (- (\chi_1 \psi_1 + \chi_2 \psi_2)) = \chi_1 \psi_1 + \chi_2 \psi_2[/itex] where I used the anticommutativity in the second-to-last equality

but then this means that [itex]\chi \psi = - \psi^T \chi^T[/itex] and we wanted to show [itex]( \chi \psi)^T = - \psi^T \chi^T[/itex]. Where did I go wrong?

[tex]\chi\psi[/tex] is a 1x1 matrix, so it's equal to its transpose. I picked a simple example.
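Putting the two observations together for this two-component example,

[tex](\chi\psi)^T = \chi\psi = \chi_1\psi_1 + \chi_2\psi_2 = -\left(\psi_1\chi_1 + \psi_2\chi_2\right) = -\psi^T\chi^T[/tex]

so there is no contradiction: what you derived, [itex]\chi\psi = -\psi^T\chi^T[/itex], is the same statement as [itex](\chi\psi)^T = -\psi^T\chi^T[/itex] once you use that a 1x1 matrix equals its own transpose.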
 
  • #11
latentcorpse said:
Ok so we have

[itex](-\partial_\tau + \partial_\sigma ) \psi_-^a = -\partial_\tau k^a(\sigma + \tau) + \partial_\sigma k^a(\sigma + \tau) = - \frac{\partial k^a(\sigma + \tau)}{\partial \tau} \frac{\partial \tau}{\partial \tau} + \frac{\partial k^a(\sigma + \tau)}{\partial \sigma} \frac{\partial \sigma}{\partial \sigma} = - \frac{\partial k^a(\sigma + \tau)}{\partial \tau} + \frac{\partial k^a(\sigma + \tau)}{\partial \sigma}[/itex]

So how do I show this is equal to zero now?

You're not applying the chain rule correctly:


[itex](-\partial_\tau + \partial_\sigma ) \psi_-^a = -\partial_\tau k^a(\sigma + \tau) + \partial_\sigma k^a(\sigma + \tau) = - \frac{\partial k^a(\sigma + \tau)}{\partial (\sigma +\tau)} \frac{\partial (\sigma +\tau)}{\partial \tau} + \frac{\partial k^a(\sigma + \tau)}{\partial (\sigma +\tau)} \frac{\partial (\sigma +\tau)}{\partial \sigma} [/itex]


Well we know that transpose only acts on spinor indices. So it won't be transposed by that logic but it will still be moved to the front of the expression because we have transposed the whole thing, right?

[tex]\partial_\nu X^a[/tex] commutes with everything.

This kind of makes sense I guess. What do you mean there is no boundary though? A closed string worldsheet is essentially the surface of a cylinder - doesn't this have a boundary?

Thanks again!

But [tex]-\infty < \tau < \infty[/tex], so there's no boundary at finite times.
 
  • #12
Lol, instead of paying Cambridge you should pay fzero to teach you.
 
  • #13
sgd37 said:
Lol, instead of paying Cambridge you should pay fzero to teach you.

Would probably be more effective.
 
  • #14
fzero said:
You're not applying the chain rule correctly:


[itex](-\partial_\tau + \partial_\sigma ) \psi_-^a = -\partial_\tau k^a(\sigma + \tau) + \partial_\sigma k^a(\sigma + \tau) = - \frac{\partial k^a(\sigma + \tau)}{\partial (\sigma +\tau)} \frac{\partial (\sigma +\tau)}{\partial \tau} + \frac{\partial k^a(\sigma + \tau)}{\partial (\sigma +\tau)} \frac{\partial (\sigma +\tau)}{\partial \sigma} [/itex]
So this obviously breaks up into four terms:

[itex]-\frac{\partial k^a}{\partial \tau} \frac{\partial \sigma}{\partial \tau} - \frac{\partial k^a}{\partial \tau} + \frac{\partial k^a}{\partial \sigma} + \frac{\partial k^a}{\partial \sigma} \frac{\partial \tau}{\partial \sigma}[/itex]

But [itex]\sigma[/itex] can label any point on our string - why should it have any dependence on [itex]\tau[/itex]? i.e. why should [itex]\frac{\partial \sigma}{\partial \tau} \neq 0[/itex]?

fzero said:
[tex]\partial_\nu X^a[/tex] commutes with everything.
So I agree that this term won't have spinor indices and therefore doesn't get transposed but how do we know that it commutes with everything?

Thanks.
 
  • #15
fzero said:
It's hard to be more precise without knowing your conventions. For example are the gamma matrices Hermitian or anti-Hermitian? It might also be the case that

[tex]i\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab}[/tex]

is the quantity that is real.



[tex]\chi\psi[/tex] is a 1x1 matrix, so it's equal to its transpose. I picked a simple example.

Our gamma matrices are [itex]\gamma_0 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} , \gamma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}[/itex]

So I find that when we take the Hermitian conjugate (i.e. c.c. and transpose) that
[itex]\gamma_0^\dagger=-\gamma_0 \quad \gamma_1^\dagger = \gamma_1[/itex]. Does that help?
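For what it's worth, a few lines of numerics with exactly these matrices confirm what you found, the Clifford algebra, and the identity used in step (v). (This is my own sketch: I am assuming worldsheet signature (-,+) and taking the charge-conjugation matrix to be C = gamma_0, which is a common 2d choice; check against your course conventions.)

[code]
import numpy as np

g0  = np.array([[0., 1.], [-1., 0.]])   # gamma_0 from the lectures
g1  = np.array([[0., 1.], [ 1., 0.]])   # gamma_1
eta = np.diag([-1., 1.])                # assumed worldsheet signature (-,+)
C   = g0                                # assumed charge-conjugation matrix

# (anti-)Hermiticity found above
print(np.allclose(g0.conj().T, -g0), np.allclose(g1.conj().T, g1))   # True True

# Clifford algebra {gamma_mu, gamma_nu} = 2 eta_{mu nu}
for mu, gm in enumerate((g0, g1)):
    for nu, gn in enumerate((g0, g1)):
        assert np.allclose(gm @ gn + gn @ gm, 2 * eta[mu, nu] * np.eye(2))

# the relation gamma^T = -C gamma C^{-1} used in step (v)
Cinv = np.linalg.inv(C)
print(all(np.allclose(g.T, -C @ g @ Cinv) for g in (g0, g1)))        # True
[/code]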
 
  • #16
sgd37 said:
The transpose should be there, since you need an [tex] \bar{\epsilon}[/tex] term to make the action invariant under supersymmetry transformations, and you can transpose the bracket freely since it is a scalar.

So can you explain all the steps involved in going from
[itex]\partial_\nu \bar{\psi}^a \gamma^\mu \gamma^\nu \epsilon \partial_\mu X^b \eta_{ab}[/itex]
to
[itex]- ( \partial_\mu \bar{\psi}^b \gamma^\nu \gamma^\mu \epsilon)^T \partial_\nu X^a \eta_{ab}[/itex]

So it looks as if we relabelled [itex]\mu \leftrightarrow \nu , a \leftrightarrow b[/itex]
I understand that we can transpose scalars freely, so are you saying that the term in the brackets is a scalar? I don't see how that can be?

Thanks!
 
  • #17
latentcorpse said:
So this obviously breaks up into four terms:

[itex]-\frac{\partial k^a}{\partial \tau} \frac{\partial \sigma}{\partial \tau} - \frac{\partial k^a}{\partial \tau} + \frac{\partial k^a}{\partial \sigma} + \frac{\partial k^a}{\partial \sigma} \frac{\partial \tau}{\partial \sigma}[/itex]

But [itex]\sigma[/itex] can label any point on our string - why should it have any dependence on [itex]\tau[/itex]? i.e. why should [itex]\frac{\partial \sigma}{\partial \tau} \neq 0[/itex]?

I never said that [itex]\frac{\partial \sigma}{\partial \tau} \neq 0[/itex], only that you had applied the chain rule in a way that was not useful. Go back and finish the calculation and you'll see what I mean.

So I agree that this term won't have spinor indices and therefore doesn't get transposed but how do we know that it commutes with everything?

Thanks.

It's a bosonic field and you're dealing with classical expressions, so ordering doesn't matter. You'd have to be a bit more careful if you were computing quantum correlation functions or something, but for now those details don't matter.

latentcorpse said:
Our gamma matrices are [itex]\gamma_0 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} , \gamma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}[/itex]

So I find that when we take the Hermitian conjugate (i.e. c.c. and transpose) that
[itex]\gamma_0^\dagger=-\gamma_0 \quad \gamma_1^\dagger = \gamma_1[/itex]. Does that help?

The other important piece of information you need for those computations is that the fermions in 2d can be taken to be Majorana-Weyl. In particular this means that the components are real.

You really should go through all the steps of showing that

[itex]
(\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^T=-\eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a
[/itex]

yourself. One thing that's useful is that the MW condition means that [tex]\bar{\psi}^b = (\psi^b)^T \gamma_0[/tex]. The other is that the rule for taking complex conjugates of Grassmann numbers means that

[tex](\eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a)^* = - \eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a[/tex]

This is the proper reality condition for a product of two Grassmann numbers.
 
  • #18
fzero said:
It's a bosonic field and you're dealing with classical expressions, so ordering doesn't matter. You'd have to be a bit more careful if you were computing quantum correlation functions or something, but for now those details don't matter.
So bosonic fields will commute with everything then? We only need to worry about commutativity of [itex]\psi[/itex]'s and [itex]\gamma[/itex]'s i.e. objects with spinor indices then?

fzero said:
The other important piece of information you need for those computations is that the fermions in 2d can be taken to be Majorana-Weyl. In particular this means that the components are real.

You really should go through all the steps of showing that

[itex]
(\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab})^T=-\eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a
[/itex]

yourself.
Yep, I can do this.

fzero said:
One thing that's useful is that the MW condition means that [tex]\bar{\psi}^b = (\psi^b)^T \gamma_0[/tex]. The other is that the rule for taking complex conjugates of Grassmann numbers means that

[tex](\eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a)^* = - \eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a[/tex]

This is the proper reality condition for a product of two Grassmann numbers.

So is there a way (like for the transpose earlier) that you can convince me of this minus sign appearing when the cc is taken?

I tried writing something out along the lines of

[itex]\chi \psi = \begin{pmatrix} \chi_1 & \chi_2 \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \chi_1 \psi_1 + \chi_2 \psi_2[/itex]

and

[itex]( \chi \psi )^* = \begin{pmatrix} \chi_1^* & \chi_2^* \end{pmatrix} \begin{pmatrix} \psi_1^* \\ \psi_2^* \end{pmatrix} = \chi_1^* \psi_1^* + \chi_2^* \psi_2^*[/itex]

but couldn't relate them in any way to get a minus sign?
 
  • #19
latentcorpse said:
So bosonic fields will commute with everything then? We only need to worry about commutativity of [itex]\psi[/itex]'s and [itex]\gamma[/itex]'s i.e. objects with spinor indices then?

When you get to quantization (i.e. Virasoro algebra) there are nontrivial commutation relations for bosonic operators. However here we're purely dealing with classical Lagrangians.


So is there a way (like for the transpose earlier) that you can convince me of this minus sign appearing when the cc is taken?

I tried writing something out along the lines of

[itex]\chi \psi = \begin{pmatrix} \chi_1 & \chi_2 \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \end{pmatrix} = \chi_1 \psi_1 + \chi_2 \psi_2[/itex]

and

[itex]( \chi \psi )^* = \begin{pmatrix} \chi_1^* & \chi_2^* \end{pmatrix} \begin{pmatrix} \psi_1^* \\ \psi_2^* \end{pmatrix} = \chi_1^* \psi_1^* + \chi_2^* \psi_2^*[/itex]

but couldn't relate them in any way to get a minus sign?

As I said in post #4, it's a common convention to choose

[tex]
(\chi \psi)^* = \psi^* \chi^* = - \chi^* \psi^*
[/tex]

If we were to choose

[tex]
(\chi \psi)^* = \chi^* \psi^*
[/tex]

then we would prove that [tex] i \eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a[/tex] was real instead.
 
  • #20
fzero said:
When you get to quantization (i.e. Virasoro algebra) there are nontrivial commutation relations for bosonic operators. However here we're purely dealing with classical Lagrangians.




As I said in post #4, it's a common convention to choose

[tex]
(\chi \psi)^* = \psi^* \chi^* = - \chi^* \psi^*
[/tex]

If we were to choose

[tex]
(\chi \psi)^* = \chi^* \psi^*
[/tex]

then we would prove that [tex] i \eta_{ab} \partial_\mu \bar{\psi}^b \gamma^\mu \psi^a[/tex] was real instead.

Awesome. Thanks a lot. Do you have any ideas for that transpose thing that I posted about at the bottom of the first page in this thread?
 
  • #21
Are you a genius, fzero? Hah! Do you work in String Theory or QFT? Your explanations are awesome and thorough, Feynmanesque.
 
  • #22
latentcorpse said:
Awesome. Thanks a lot. Do you have any ideas for that transpose thing that I posted about at the bottom of the first page in this thread?

I think sgd37 is correct in that

[itex]
-\partial_\nu \bar{\psi}^a \gamma^\mu \gamma^\nu \epsilon = (- \partial_\mu \bar{\psi}^b \gamma^\nu \gamma^\mu \epsilon)^T
[/itex]

up to a sign because it's a 1x1 matrix.

It's also possible that they were doing something different. The SUSY variation of the fermion kinetic term involves two terms:

[tex]
\delta(
\bar{\psi}^a \gamma^\mu \partial_\mu \psi^b \eta_{ab} ) = (\delta\bar{\psi}^a) \gamma^\mu \partial_\mu \psi^b \eta_{ab} - \bar{\psi}^a \gamma^\mu \partial_\mu ( \delta \psi^b) \eta_{ab}. [/tex]

The second term gives

[itex]
\bar{\psi}^a \gamma^\mu \partial_\mu ( \gamma^\nu \epsilon \partial_\nu X^b ) \eta_{ab}
[/itex]

while I believe that the first term can be manipulated into

[itex]
( \partial_\mu \bar{\psi}^b \gamma^\nu \gamma^\mu \epsilon)^T \partial_\nu X^a \eta_{ab},
[/itex]

again up to signs. These terms cancel against the variation of the bosonic action.
 
  • #23
fzero said:
I think sgd37 is correct in that

[itex]
-\partial_\nu \bar{\psi}^a \gamma^\mu \gamma^\nu \epsilon = (- \partial_\mu \bar{\psi}^b \gamma^\nu \gamma^\mu \epsilon)^T
[/itex]

up to a sign because it's a 1x1 matrix.
How do we recognise it as 1x1 though since not all the indices are contracted?

I suppose if [itex]\psi^a[/itex] is 2x1 then [itex]\partial_\nu \bar{\psi}^a[/itex] will be 1x2.
The product of the gammas will still be 2x2.
The epsilon is a Majorana spinor so that will also be 2x1

So this term is really (1x2)x(2x2)x(2x1)=1x1. Is that how you did it? Or is there an easier way to see this?

Why does the fermionic contribution cancel the bosonic contribution? This basically means the action will be invariant under SUSY variations - why do we expect or want this to be the case?

And finally, the SUSY variation in question for all the above work was
[itex]\delta X^a = \bar{\epsilon} \psi^a[/itex]
[itex]\delta \psi^a = \gamma^\nu \epsilon \partial_\nu X^a[/itex]
Where did this come from? Is this just an example that the lecturer must have picked at random? That's what I thought but then it seems a bit of a coincidence that you started talking about exactly the same variations without me telling you what they were? Are they a particularly common choice or something? Presumably, once you pick [itex]\delta X^a[/itex], that fixes [itex]\delta \psi^a[/itex] since otherwise you could pick any kind of transformation for [itex]\psi^a[/itex] that wouldn't necessarily cancel the bosonic contribution?
 
  • #24
latentcorpse said:
How do we recognise it as 1x1 though since not all the indices are contracted?

I suppose if [itex]\psi^a[/itex] is 2x1 then [itex]\partial_\nu \bar{\psi}^a[/itex] will be 1x2.
The product of the gammas will still be 2x2.
The epsilon is a Majorana spinor so that will also be 2x1

So this term is really (1x2)x(2x2)x(2x1)=1x1. Is that how you did it? Or is there an easier way to see this?

Essentially that counting is the way that you see it. You could also see it by explicitly putting spinor indices on everything. There's another fundamental issue at play here, which is one of physics. This term was supposed to be a candidate for a kinetic energy term in a Lagrangian. If we had not contracted all of the spinor indices, this would not be a Lorentz invariant, and could never have been in the Lagrangian in the first place.

Why does the fermionic contribution cancel the bosonic contribution? This basically means the action will be invariant under SUSY variations - why do we expect or want this to be the case?

There are a couple of reasons why we want worldsheet supersymmetry. The first has to do with a pathology of the bosonic string. If you work out the states that propagate in spacetime for the bosonic string, the lightest state actually has [tex]m^2<0[/tex], which means it's a tachyon. These particles travel faster than light and would violate causality. Furthermore, they signal an instability of the vacuum, since they behave like they are at a local maximum in a potential.

It turns out that if you add enough fermions on the worldsheet to make a supersymmetric theory, there is a natural way to project out all tachyonic states. You will no doubt learn about this so called GSO projection soon.

Furthermore, worldsheet SUSY allows us to construct string theories which have SUSY on target space. While SUSY has not yet been found in nature, there are several reasons why it is phenomenologically desirable, such as explaining why the cosmological constant is not absurdly huge and solving the hierarchy problem of the electroweak sector.

And finally, the SUSY variation in question for all the above work was
[itex]\delta X^a = \bar{\epsilon} \psi^a[/itex]
[itex]\delta \psi^a = \gamma^\nu \epsilon \partial_\nu X^a[/itex]
Where did this come from? Is this just an example that the lecturer must have picked at random? That's what I thought but then it seems a bit of a coincidence that you started talking about exactly the same variations without me telling you what they were? Are they a particularly common choice or something? Presumably, once you pick [itex]\delta X^a[/itex], that fixes [itex]\delta \psi^a[/itex] since otherwise you could pick any kind of transformation for [itex]\psi^a[/itex] that wouldn't necessarily cancel the bosonic contribution?

What you try to pick as a transformation doesn't matter. As long as there are equal numbers of worldsheet bosons and fermions (with appropriate Lagrangians), that transformation is a symmetry, whether someone recognizes it or not.
 
  • #25
fzero said:
Essentially that counting is the way that you see it. You could also see it by explicitly putting spinor indices on everything. There's another fundamental issue at play here, which is one of physics. This term was supposed to be a candidate for a kinetic energy term in a Lagrangian. If we had not contracted all of the spinor indices, this would not be a Lorentz invariant, and could never have been in the Lagrangian in the first place.



There are a couple of reasons why we want worldsheet supersymmetry. The first has to do with a pathology of the bosonic string. If you work out the states that propagate in spacetime for the bosonic string, the lightest state actually has [tex]m^2<0[/tex], which means it's a tachyon. These particles travel faster than light and would violate causality. Furthermore, they signal an instability of the vacuum, since they behave like they are at a local maximum in a potential.

It turns out that if you add enough fermions on the worldsheet to make a supersymmetric theory, there is a natural way to project out all tachyonic states. You will no doubt learn about this so called GSO projection soon.

Furthermore, worldsheet SUSY allows us to construct string theories which have SUSY on target space. While SUSY has not yet been found in nature, there are several reasons why it is phenomenologically desirable, such as explaining why the cosmological constant is not absurdly huge and solving the hierarchy problem of the electroweak sector.



What you try to pick as a transformation doesn't matter. As long as there are equal numbers of worldsheet bosons and fermions (with appropriate Lagrangians), that transformation is a symmetry, whether someone recognizes it or not.

Today we were discussing the vierbein. This involved the formula

[itex]g_{ab}e^a_\alpha e^b_\beta = \eta_{\alpha \beta}[/itex]

Now, I don't really understand what is going on with the indices here. We appear to have mixed and matched abstract indices and basis indices. All we said about them was that [itex]\alpha, \beta[/itex] are tangent space indices. Even that confuses me though because they are "lowered" indices and so surely they should be cotangent space indices. Can you explain what is going on here?

So what is the point of the vierbein? Is it purely a function that relates the curved metric to the flat metric by the above equation?

And then we said that under local Lorentz transformations the metric is invariant, but that [itex]e^a_\alpha \rightarrow e^a_\alpha \Lambda^\alpha_\beta[/itex]. I tried to prove this invariance by substitution.
Making a Lorentz transformation, I need to show
[itex]g_{ab}e^a_\alpha \Lambda^\alpha_\gamma e^b_\beta \Lambda^\beta_\delta = \eta_{\gamma \delta}[/itex]
Now I need to get those [itex]\Lambda[/itex]'s over to the other side where I can use the property [itex]\Lambda^T \eta \Lambda = \eta[/itex]. But how do I move them over? Do I just flip the indices on them or do I need to invert the matrices themselves? Either way, I don't see how it is going to give me what I want...

Thanks.
 
  • #26
latentcorpse said:
Today we were discussing the vierbein. This involved the formula

[itex]g_{ab}e^a_\alpha e^b_\beta = \eta_{\alpha \beta}[/itex]

Now, I don't really understand what is going on with the indices here. We appear to have mixed and matched abstract indices and basis indices. All we said about them was that [itex]\alpha, \beta[/itex] are tangent space indices. Even that confuses me though because they are "lowered" indices and so surely they should be cotangent space indices. Can you explain what is going on here?

The indices are in the right place. [tex]e^a_\alpha[/tex] are the inverse vierbeins. The vierbein is a map between the tangent bundle and a vector bundle constructed from the vector representation of the Lorentz group (whose sections are copies of Minkowski space):

[tex]e: TM \rightarrow V, ~~~~ \zeta^a \rightarrow {e^\alpha}_a \zeta^a.~~~(*) [/tex]

Then

[tex](\zeta,\chi) = g_{ab} \zeta^a \chi^b = \eta_{\alpha\beta} {e^\alpha}_a {e^\beta}_b \zeta^a \chi^b.[/tex]

We define [tex]e^a_\alpha[/tex] according to

[tex] e^a_\alpha e^\alpha_b = \delta^a_b.[/tex]

These are indeed maps from

[tex]e^{-1}: T^*M \rightarrow V^*. [/tex]

So what is the point of the vierbein? Is it purely a function that relates the curved metric to the flat metric by the above equation?

It doesn't just relate the curved metric to the flat metric, it allows us to express all tensors
in terms of flat coordinates, which can be very useful in doing computations. Furthermore, we can use the decomposition of the vector bundle used in (*) in a product of spin bundles, [tex] V = S^+\times S^-[/tex], to write a vierbein for spin fields. This will allow, for example, the definition of a covariant derivative for spinors on a curved manifold.

And then we said that under local Lorentz transformations the metric is invariant, but that [itex]e^a_\alpha \rightarrow e^a_\alpha \Lambda^\alpha_\beta[/itex]. I tried to prove this invariance by substitution.
Making a Lorentz transformation, I need to show
[itex]g_{ab}e^a_\alpha \Lambda^\alpha_\gamma e^b_\beta \Lambda^\beta_\delta = \eta_{\gamma \delta}[/itex]
Now I need to get those [itex]\Lambda[/itex]'s over to the other side where I can use the property [itex]\Lambda^T \eta \Lambda = \eta[/itex]. But how do I move them over? Do I just flip the indices on them or do I need to invert the matrices themselves? Either way, I don't see how it is going to give me what I want...

Thanks.

[tex]g_{ab}e^a_\alpha \Lambda^\alpha_\gamma e^b_\beta \Lambda^\beta_\delta= (\Lambda^T \eta \Lambda )_{\gamma\delta}[/tex]
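If it helps to see the index bookkeeping concretely, here is a 2d numerical sketch of my own (random frame, assumed signature (-,+)): build a metric from a vierbein, check the defining relation with the inverse vierbein, and check that it survives [itex]e^a_\alpha \rightarrow e^a_\alpha \Lambda^\alpha_\beta[/itex] for a boost.

[code]
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0])

E    = rng.normal(size=(2, 2))   # vierbein e^alpha_a (row = flat alpha, column = curved a)
g    = E.T @ eta @ E             # g_ab = eta_{alpha beta} e^alpha_a e^beta_b
Einv = np.linalg.inv(E)          # inverse vierbein e^a_alpha

print(np.allclose(Einv.T @ g @ Einv, eta))   # g_ab e^a_alpha e^b_beta = eta_{alpha beta}

phi = 0.37                       # a local Lorentz boost in 2d
L = np.array([[np.cosh(phi), np.sinh(phi)],
              [np.sinh(phi), np.cosh(phi)]])
print(np.allclose(L.T @ eta @ L, eta))                   # Lambda^T eta Lambda = eta
print(np.allclose((Einv @ L).T @ g @ (Einv @ L), eta))   # still eta after e -> e Lambda
[/code]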
 
  • #27
fzero said:
The indices are in the right place. [tex]e^a_\alpha[/tex] are the inverse vierbeins. The vierbein is a map between the tangent bundle and a vector bundle constructed from the vector representation of the Lorentz group (whose sections are copies of Minkowski space):

[tex]e: TM \rightarrow V, ~~~~ \zeta^a \rightarrow {e^\alpha}_a \zeta^a.~~~(*) [/tex]

Then

[tex](\zeta,\chi) = g_{ab} \zeta^a \chi^b = \eta_{\alpha\beta} {e^\alpha}_a {e^\beta}_b \zeta^a \chi^b.[/tex]

We define [tex]e^a_\alpha[/tex] according to

[tex] e^a_\alpha e^\alpha_b = \delta^a_b.[/tex]

These are indeed maps from

[tex]e^{-1}: T^*M \rightarrow V^*. [/tex]

Good post! That's cleared up what these things are quite a lot!
What, however, is a vector bundle? As far as I'm aware, all vectors I encountered in the GR course I took last year lived in the tangent space [itex]T_p(M)[/itex] so I'm a bit confused as to what it is?

And so the indices are meant to be a mix and match of abstract and basis indices since the abstract ones relate the curved metric (or indeed tensor) to the corresponding object in our local Lorentz frame - is this correct?

fzero said:
It doesn't just relate the curved metric to the flat metric, it allows us to express all tensors
in terms of flat coordinates, which can be very useful in doing computations. Furthermore, we can use the decomposition of the vector bundle used in (*) in a product of spin bundles, [tex] V = S^+\times S^-[/tex], to write a vierbein for spin fields. This will allow, for example, the definition of a covariant derivative for spinors on a curved manifold.
So when you say that we can express all tensors in terms of flat coordinates, how would that work on say [itex]T^{\alpha \beta}{}_\gamma[/itex]?
Will it just be [itex]T^{ab}{}_c = e^a_\alpha e^b_\beta e^\gamma_c T^{\alpha \beta}{}_\gamma[/itex]?
fzero said:
[tex]g_{ab}e^a_\alpha \Lambda^\alpha_\gamma e^b_\beta \Lambda^\beta_\delta= (\Lambda^T \eta \Lambda )_{\gamma\delta}[/tex]
So here you used the vierbein identity for the metric to rearrange it, but how did you justify moving that [itex]\Lambda^T[/itex] to the very left? Does everything commute with everything else in this scenario?

Later on in this same lecture though he says that:
A spinor is recognised by how it transforms under Lorentz transformations [itex]\psi \rightarrow \psi' \simeq ( 1 + \frac{1}{4} \Lambda_{ab} \gamma^{ab} + \dots ) \psi[/itex]
In curved spaces, [itex]\psi[/itex] ( a spinor) transforms under local Lorentz transform as [itex]\psi \rightarrow \psi' = ( 1 + \frac{1}{4} \Lambda_{\alpha \beta} \gamma^{\alpha \beta} + \dots ) \psi[/itex]

A couple of things confuse me here:
(i) Why is the second equation an exact equality and the first one an approximation? Maybe this is just that they were both meant to be approximate and I copied it down wrong though?
(ii) I thought in all this discussion about the indices on the vierbein that we established that the abstract indices referred to the curved spacetime and the basis (Greek) indices refer to the flat spacetime. However, here when he's talking about the curved one he uses basis indices and they seem to be the wrong way around?
 
  • #28
latentcorpse said:
Good post! That's cleared up what these things are quite a lot!
What, however, is a vector bundle? As far as I'm aware, all vectors I encountered in the GR course I took last year lived in the tangent space [itex]T_p(M)[/itex] so I'm a bit confused as to what it is?

You might want to find a convenient reference for bundles (start with wikipedia and then find some geometry text), but I can give a slight explanation.

Let's start with the tangent bundle. First we have a manifold [tex]M[/tex] of dimension [tex]n[/tex]. The tangent space is [tex]\mathbb{R}^n[/tex]. At a point [tex]p[/tex] of [tex]M[/tex] there is an associated vector [tex]v(p)[/tex]. However as we move from [tex]p[/tex] to another point [tex]p'[/tex], [tex]v(p)[/tex] is parallel-transported to some new vector [tex]v'(p')[/tex]. The tangent bundle is the space of points in [tex]M[/tex] together with the tangent vectors defined at those points, together with some additional conditions. Locally we have a product structure [tex]M\times \mathbb{R}^n[/tex] but because of the parallel transport the full space is not simply a product.

A vector bundle is an analogous structure where instead of the tangent space, we have a more general vector space [tex]V[/tex] over [tex]M[/tex]. In the case of the vierbeins, the vector space is the vector representation of the Lorentz group on [tex]M[/tex].

And so the indices are meant to be a mix and match of abstract and basis indices since the abstract ones relate the curved metric (or indeed tensor) to the corresponding object in our local Lorentz frame - is this correct?

Well the vierbein is a map from tangent vectors to an orthonormal basis. The local Lorentz group has a natural action on the basis vectors.

So when you say that we can express all tensors in terms of flat coordinates, how would that work on say [itex]T^{\alpha \beta}{}_\gamma[/itex]?
Will it just be [itex]T^{ab}{}_c = e^a_\alpha e^b_\beta e^\gamma_c T^{\alpha \beta}{}_\gamma[/itex]?

Pretty much.

So here you used the vierbein identity for the metric to rearrange it, but how did you justify moving that [itex]\Lambda^T[/itex] to the very left? Does everything commute with everything else in this scenario?

You just have to follow the indices. You have [tex]{\Lambda^\alpha}_\gamma \eta_{\alpha\beta}[/tex] and that is [tex]\Lambda^T \eta[/tex]. You can flesh that out by raising and lowering indices appropriately.
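Fleshed out in one line (everything here is a number, i.e. a component, so the factors can be reordered freely):

[tex]g_{ab}\, e^a_\alpha \Lambda^\alpha{}_\gamma\, e^b_\beta \Lambda^\beta{}_\delta = \left(g_{ab}\, e^a_\alpha\, e^b_\beta\right) \Lambda^\alpha{}_\gamma \Lambda^\beta{}_\delta = \eta_{\alpha\beta}\, \Lambda^\alpha{}_\gamma \Lambda^\beta{}_\delta = (\Lambda^T \eta\, \Lambda)_{\gamma\delta} = \eta_{\gamma\delta}[/tex]

so the defining relation [itex]g_{ab} e^a_\alpha e^b_\beta = \eta_{\alpha\beta}[/itex] is preserved by the local Lorentz transformation.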
 
  • #29
fzero said:
You might want to find a convenient reference for bundles (start with wikipedia and then find some geometry text), but I can give a slight explanation.

Let's start with the tangent bundle. First we have a manifold [tex]M[/tex] of dimension [tex]n[/tex]. The tangent space is [tex]\mathbb{R}^n[/tex]. At a point [tex]p[/tex] of [tex]M[/tex] there is an associated vector [tex]v(p)[/tex]. However as we move from [tex]p[/tex] to another point [tex]p'[/tex], [tex]v(p)[/tex] is parallel-transported to some new vector [tex]v'(p')[/tex]. The tangent bundle is the space of points in [tex]M[/tex] together with the tangent vectors defined at those points, together with some additional conditions. Locally we have a product structure [tex]M\times \mathbb{R}^n[/tex] but because of the parallel transport the full space is not simply a product.

A vector bundle is an analogous structure where instead of the tangent space, we have a more general vector space [tex]V[/tex] over [tex]M[/tex]. In the case of the vierbeins, the vector space is the vector representation of the Lorentz group on [tex]M[/tex].



Well the vierbein is a map from tangent vectors to an orthonormal basis. The local Lorentz group has a natural action on the basis vectors.



Pretty much.



You just have to follow the indices. You have [tex]{\Lambda^\alpha}_\gamma \eta_{\alpha\beta}[/tex] and that is [tex]\Lambda^T \eta[/tex]. You can flesh that out by raising and lowering indices appropriately.

Ok. Great. Any ideas for that spinor transformation stuff?
 
  • #30
latentcorpse said:
Later on in this same lecture though he says that:
A spinor is recognised by how it transforms under Lorentz transformations [itex]\psi \rightarrow \psi' \simeq ( 1 + \frac{1}{4} \Lambda_{ab} \gamma^{ab} + \dots ) \psi[/itex]
In curved spaces, [itex]\psi[/itex] ( a spinor) transforms under local Lorentz transform as [itex]\psi \rightarrow \psi' = ( 1 + \frac{1}{4} \Lambda_{\alpha \beta} \gamma^{\alpha \beta} + \dots ) \psi[/itex]

A couple of things confuse me here:
(i) Why is the second equation an exact equality and the first one an approximation? Maybe this is just that they were both meant to be approximate and I copied it down wrong though?

It's semantics. You could write it as exact with the ellipsis or as approximate and just keep the linear part.

(ii) I thought in all this discussion about the indices on the vierbein that we established that the abstract indices referred to the curved spacetime and the basis (Greek) indices refer to the flat spacetime. However, here when he's talking about the curved one he uses basis indices and they seem to be the wrong way around?

The first expression he writes down is for flat space. The second one is correct for curved space because we're using the flat coordinates.
 
  • #31
fzero said:
The first expression he writes down is for flat space. The second one is correct for curved space because we're using the flat coordinates.

I still don't follow. I thought we were using the vierbein to put Latin indices in the curved case and basis indices for the flat case. Surely this would mean the first equation would be for curved space and the second for flat.
 
  • #32
latentcorpse said:
I still don't follow. I thought we were using the vierbein to put Latin indices in the curved case and basis indices for the flat case. Surely this would mean the first equation would be for curved space and the second for flat.

There are two problems with trying to define spinors on a curved manifold. The first is that a general curved manifold is not invariant under Lorentz transformations, so you can't even define the transformation of a spinor in curved indices. You can define tensors because you have a [tex]GL(n)[/tex] group acting on the tangent space, but [tex]GL(n)[/tex] does not have spinor representations. So you must introduce local frames to define spinors.
 
  • #33
fzero said:
There are two problems with trying to define spinors on a curved manifold. The first is that a general curved manifold is not invariant under Lorentz transformations, so you can't even define the transformation of a spinor in curved indices. You can define tensors because you have a [tex]GL(n)[/tex] group acting on the tangent space, but [tex]GL(n)[/tex] does not have spinor representations. So you must introduce local frames to define spinors.

OK. This makes sense to me. Can you explain the choice of indices in those two equations though? Like, why does he choose Latin for one and Greek for the other?
 
  • #34
latentcorpse said:
OK. This makes sense to me. Can you explain the choice of indices in those two equations though? Like, why does he choose Latin for one and Greek for the other?

I expect that it's because you're using Latin indices for spacetime indices, whether spacetime is Minkowski or curved.
 
  • #35
fzero said:
I expect that it's because you're using Latin indices for spacetime indices, whether spacetime is Minkowski or curved.

So to summarise: the transformation in Minkowski space (the first one I wrote down) had Latin indices because in Minkowski space we can define spinor transformations, and this will hold in any basis so we can move to abstract indices.

However, in a curved spacetime we cannot define spinor transformations, so we need to move to a local flat frame (where we use Greek indices), and the objects we use in the second transformation I wrote down will be related to their corresponding curved-space quantities using vierbeins?
 
