# Deriving Dirac matrices

gulsen
I'm trying to linearize the Klein-Gordon equation, following Dirac's Nobel lecture:
$$(E^2 - (pc)^2 - (mc^2)^2) \psi = 0$$
$$(E - \alpha pc - \beta mc^2) (E + \alpha pc + \beta mc^2) \psi = 0$$
Expanding the product and matching terms yields:
- $$\alpha$$ and $$\beta$$ commute with E and p
- $$\alpha^2 = \beta^2 = 1$$
- $$\alpha \beta + \beta \alpha = 0$$
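Spelling out the expansion the list above summarizes (the factors must carry opposite signs so that the cross terms in $$E \alpha pc$$ and $$E \beta mc^2$$ cancel):

$$(E - \alpha pc - \beta mc^2)(E + \alpha pc + \beta mc^2)\psi = \left[ E^2 - \alpha^2 (pc)^2 - \beta^2 (mc^2)^2 - (\alpha\beta + \beta\alpha)\, pc\, mc^2 \right]\psi$$

which reproduces $$E^2 - (pc)^2 - (mc^2)^2$$ exactly when $$\alpha^2 = \beta^2 = 1$$ and $$\alpha\beta + \beta\alpha = 0$$.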

I have been trying to solve for $$\alpha$$ and $$\beta$$. Here is what I tried: I assumed both are 2x2 matrices, with each element itself being a matrix. That gave me 12 equations, and I doubt 12 equations are sufficient to solve a non-linear system with 8 unknown matrices.

I tried looking at several QFT books, and in all of them $$\alpha$$ and $$\beta$$ just come out of thin air. I haven't found the derivation anywhere yet :grumpy:
Anyone willing to point a path?

----
note to whoever keeps moving my threads to the homework forum: this is again not a homework question. I'd have posted it in the homework forum if it were. And as far as I know, not every physics question that involves mathematics is homework.

## Answers and Replies

Homework Helper
Gold Member
gulsen said:
> I'm trying to linearize the Klein-Gordon equation, following Dirac's Nobel lecture... Anyone willing to point a path?

Because they square to one, you know their eigenvalues are +1 or -1. You know they must be Hermitian (so the Hamiltonian is Hermitian). You can also show that they are traceless: for example, $$\mathrm{tr}\,\alpha = \mathrm{tr}(\alpha\beta^2) = \mathrm{tr}(\beta\alpha\beta) = \mathrm{tr}(-\alpha\beta^2) = -\mathrm{tr}\,\alpha$$ using cyclicity of the trace and $$\beta\alpha = -\alpha\beta$$. Tracelessness with eigenvalues +1 or -1 forces even dimensionality. Even with all that information, the choice of alpha and beta is not unique, but it is easy to get the standard representation from these conditions.
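As a concrete check of these properties, here is a short NumPy verification. The block forms below are the standard (Dirac) textbook representation, stated rather than derived:

```python
import numpy as np

# Standard Dirac representation: alpha_i has sigma_i in the off-diagonal
# blocks, beta is diag(I, -I).  We check the properties listed above.
s = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_1
     np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_2
     np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_3
I2, Z2 = np.eye(2), np.zeros((2, 2))

alphas = [np.block([[Z2, si], [si, Z2]]) for si in s]  # alpha_1..3
beta = np.block([[I2, Z2], [Z2, -I2]])                 # beta

for m in alphas + [beta]:
    assert np.allclose(m, m.conj().T)           # Hermitian
    assert np.allclose(m @ m, np.eye(4))        # squares to 1
    assert np.isclose(np.trace(m), 0)           # traceless
for a in alphas:
    assert np.allclose(a @ beta + beta @ a, 0)  # anticommutes with beta
```

All assertions pass, confirming that 4x4 is large enough to satisfy every condition simultaneously.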

Perturbation
Write down the $\alpha_i$ first, then find the anticommutation relations among them and between them and $\beta$. You then have an algebra, so you can work out a representation of it. The most common one is the chiral representation, or at least that's the one I'm most familiar with.

If you look at just the $\alpha_i$s we have

$$\{\alpha_i,\alpha_j\}=-2\delta_{ij}$$

A representation of which is

$$\alpha_i=i\sigma_i$$

where $\sigma_i$ are the Pauli matrices; the relation above follows from their anticommutation relations, $\{\sigma_i,\sigma_j\} = 2\delta_{ij}$. To include $\beta$ you have to make the dimensionality of the representation bigger than 2, as there is no 2x2 Hermitian matrix for $\beta$ that anti-commutes with all three of the 2x2 $\alpha_i=i\sigma_i$.
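A quick NumPy check of the 2x2 representation $\alpha_i = i\sigma_i$ and its anticommutation relations (just a numerical sketch using the standard Pauli matrices):

```python
import numpy as np

# Pauli matrices and the proposed 2x2 representation alpha_i = i * sigma_i.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
alpha = [1j * s for s in sigma]

# Verify {alpha_i, alpha_j} = -2 * delta_ij.
for i in range(3):
    for j in range(3):
        anti = alpha[i] @ alpha[j] + alpha[j] @ alpha[i]
        expected = -2 * (i == j) * np.eye(2)
        assert np.allclose(anti, expected)
```

The representation works for the three $\alpha_i$ alone, which is exactly why the obstruction only appears once $\beta$ has to be included.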

Homework Helper
To work out ALL the possible representations, it helps to write things in Clifford algebra form. Let $$\alpha = (x,y,z)$$ and write $$\beta = t$$. This will simplify the notation.

You have that $x^2 = y^2 = z^2 = -t^2 = -1$, and that $x, y, z$ and $t$ all anticommute.

To define a representation, first choose a complex combination of {x,y,z,t} that squares to unity. For example, (ixy)(ixy) = - (xy)(xy) = + xx yy = (-1)(-1) = +1.

Next, choose another complex combination that squares to unity AND commutes with the first of them. For example, in addition to ixy, one might choose zt. Note that (zt)(zt) = -(zz)(tt) = -(-1)(+1) = +1.

The two choices you've made for "commuting roots of unity" define a total of four degrees of freedom. In the example we've been working, the four degrees of freedom are $\{1, ixy, zt, ixyzt\}$. Each of these squares to +1. Consequently, each carries eigenvalues of $\pm 1$. It is these four operators that our representation will diagonalize.
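This step can be checked numerically. As a concrete (assumed) representation with $x^2 = y^2 = z^2 = -1$, $t^2 = +1$, take the standard Dirac gamma matrices, $t = \gamma^0$ and $(x, y, z) = (\gamma^1, \gamma^2, \gamma^3)$; any faithful representation would do:

```python
import numpy as np

# Dirac gamma matrices as a representation of the Clifford algebra:
# t = gamma^0 squares to +1, x, y, z = gamma^(1,2,3) square to -1.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2), np.zeros((2, 2))
t = np.block([[I2, Z2], [Z2, -I2]])
x, y, z = [np.block([[Z2, si], [-si, Z2]]) for si in s]

I4 = np.eye(4)
A = 1j * x @ y   # ixy
B = z @ t        # zt
assert np.allclose(A @ A, I4) and np.allclose(B @ B, I4)  # roots of unity
assert np.allclose(A @ B, B @ A)                          # they commute

# The four operators (1 +- ixy)(1 +- zt)/4 are idempotent and sum to 1,
# i.e. they are the four diagonal projectors of the representation.
P = [(I4 + sa * A) @ (I4 + sb * B) / 4
     for sa in (+1, -1) for sb in (+1, -1)]
assert np.allclose(sum(P), I4)
for p in P:
    assert np.allclose(p @ p, p)
```

The four projectors picked out here are exactly the diagonal matrices with a single 1 that appear later in the construction.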

To define a representation, you must now choose two more complex combinations of {x,y,z,t}. The rule is that each of these two choices must come from outside the algebra defined by the degrees of freedom of the first two choices.

Continuing our example, ix squares to +1, and is outside the degrees of freedom of {ixy, zt}. And iyz also squares to +1 and is outside the degrees of freedom of {ixy,zt}. Finally, the product of ix and iyz = -xyz (which of course squares to +1) and is outside the degrees of freedom of {ixy,zt}. Therefore, for the last two square roots of unity, we can choose ix and iyz.

Now we've got four roots of unity, in two sequences, each of which could be used as the four simultaneously diagonalized operators in our algebra. But in this case, since we are working on a more or less standard representation, we will diagonalize the {ixy, zt} pair, and leave the {ix,iyz} pair to define the off diagonal elements.

Now the usual way of defining a representation is to give the matrices that correspond to x, y, z and t. But now that we have these two pairs of roots of unity, we can associate them with particular matrix elements.

Consider the following matrix product:
$$\left(\begin{array}{cccc}0&0&0&0\\0&1&0&0\\0&0&0&0\\0&0&0&0\end{array}\right)$$ $$\left(\begin{array}{cccc}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{array}\right)$$$$\left(\begin{array}{cccc}1&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{array}\right)$$
$$=\left(\begin{array}{cccc}0&0&0&0\\1&0&0&0\\0&0&0&0\\0&0&0&0\end{array}\right)$$

From the above example, it should be clear that it is possible to get a matrix that is all zero except for a single 1 in the (m,n) position by sandwiching the "all 1s" matrix between two diagonal matrices, one with a single 1 in the mth position on the diagonal, and the other with a single 1 in the nth position. Therefore, if we specify the "all 1s" matrix and all the diagonal matrices with a single 1, we can obtain arbitrary matrices and therefore define the representation.
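The sandwich trick is easy to verify for all 16 positions at once:

```python
import numpy as np

# E_mm @ J @ E_nn yields a single 1 in the (m, n) position, where J is
# the all-ones matrix and E_kk has a single 1 at (k, k) on the diagonal.
n = 4
J = np.ones((n, n))

def unit_diag(k):
    """Diagonal matrix with a single 1 in the k-th diagonal position."""
    d = np.zeros((n, n))
    d[k, k] = 1.0
    return d

for m in range(n):
    for q in range(n):
        sandwich = unit_diag(m) @ J @ unit_diag(q)
        expected = np.zeros((n, n))
        expected[m, q] = 1.0
        assert np.allclose(sandwich, expected)
```

So the matrix units E_mn, and with them any matrix, are recovered from the all-1s matrix and the diagonal projectors alone.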

But, the diagonal matrices with a single unit on the diagonal are just the diagonalized matrices for the operators that we choose to diagonalize. In this example, these matrices are defined by
$$(1\pm ixy)(1\pm zt)/4$$
where the $\pm$ signs are taken independently to give the four positions.

Now the thing that is special about the matrices with a single unit in the diagonal is that they are "primitive idempotents". That is, they square to themselves. The same is true for a matrix that has all its elements equal to 1/4. Therefore we can use our other two roots of unity to define the matrix that is all 1s:
$$4(1+ix)(1+iyz)/4$$

Note that in the above, we've chosen one particular choice of four possible choices of signs.

Thus for this representation, a matrix with a single one in the mn position is equivalent to the algebraic element
$$(1\pm ixy)(1\pm zt)\,(1+ix)(1+iyz)\,(1\pm ixy)(1\pm zt)/16$$
where the first pair of $\pm$ signs is chosen to give m and the second pair is chosen to give n (the factor of 1/16 restores the normalizations of the two idempotents).

If one desires to obtain the representation according to the usual definition, one simply solves the 16 equations. This is a lot easier than one might think because if one chooses values according to how I've described above, there will only be four equations that mention a particular vector, for example, x. And those four equations will invert trivially (which is why the standard reps of x,y,z,t of the Dirac algebra have only a single unit in each row or column and all other elements are zero).

Carl

JohnPhys
Have you tried looking at Mandl & Shaw's book? If I remember correctly, they go through and show how you can derive all of the general properties of either the alphas and betas or the Dirac matrices (I can't remember which). It was a pretty good reference, though.