# Question about factoring the Klein-Gordon equation

In summary, the author is trying to linearize the Klein-Gordon equation by finding a first-order operator that squares to the box operator. He was unable to compactify the problem into index notation, so he worked a specific example instead: rotating the matrix he starts with by ##\alpha## degrees does not produce the cancellation he expected, and he is not sure why.

#### BiGyElLoWhAt

Gold Member
TL;DR Summary
When trying to factor the Klein-Gordon equation, you need to make your cross terms go away. Either your matrices anticommute, or they are "orthogonal" in the sense ##AB = BA = 0##. Is orthogonality a valid solution?
Take the Klein-Gordon equation (using the older ##\Box^2## notation for the d'Alembertian, and suppressing the field it acts on):
##\Box^2 = m^2##
Say we want to linearize this equation; we look for a first-order operator that squares to ##\Box^2##:
##(A\partial_t - B\partial_x - C\partial_y - D\partial_z)^2 = \Box^2##

So we need ##-A^2 = B^2 = C^2 = D^2 = I##, as this gives back the second partial derivatives with the appropriate signs (I just put the minus sign on ##A## for simplicity), and we also need to get rid of our cross terms, as they don't appear in the box operator. The way I see it, one of two conditions can hold (the first is actually a special case of the second):
either ##AB = BA = 0## for every distinct pair, or ##AB + BA = 0##.
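As a sanity check on the anticommutation route (just an illustration, not part of the question): the standard Dirac matrices in the Dirac representation, built from the Pauli matrices, satisfy exactly these relations, up to the opposite overall sign convention for the metric. A minimal pure-Python check:

```python
# Illustrative check (pure Python, no dependencies): the standard Dirac
# gamma matrices, built from the Pauli matrices, satisfy
# {gamma^mu, gamma^nu} = 2 eta^{mu nu} I  with eta = diag(+,-,-,-).
# This is the usual sign convention, which differs from the post's
# choice (-A^2 = B^2 = C^2 = D^2 = I) by an overall sign.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def anticommutator(X, Y):
    n = len(X)
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] + YX[i][j] for j in range(n)] for i in range(n)]

# 2x2 building blocks: identity, zero, and the Pauli matrices.
I2 = [[1, 0], [0, 1]]
Z2 = [[0, 0], [0, 0]]
paulis = [[[0, 1], [1, 0]],          # sigma_x
          [[0, -1j], [1j, 0]],       # sigma_y
          [[1, 0], [0, -1]]]         # sigma_z

def block(TL, TR, BL, BR):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    return ([TL[i] + TR[i] for i in range(2)] +
            [BL[i] + BR[i] for i in range(2)])

neg = lambda X: [[-x for x in row] for row in X]
gamma = [block(I2, Z2, Z2, neg(I2))]                  # gamma^0
gamma += [block(Z2, s, neg(s), Z2) for s in paulis]   # gamma^1..3

eta = [1, -1, -1, -1]
for mu in range(4):
    for nu in range(4):
        anti = anticommutator(gamma[mu], gamma[nu])
        want = 2 * eta[mu] if mu == nu else 0
        assert all(anti[i][j] == (want if i == j else 0)
                   for i in range(4) for j in range(4))
print("all 16 anticommutators check out")
```

Here ##\gamma^0## is ##\mathrm{diag}(I, -I)## and each ##\gamma^i## has the corresponding Pauli matrix in its off-diagonal blocks; any similarity transform of this set works equally well.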

The simplest solution, to me, seems to be the first. If you can effectively brute-force your way to finding one of the matrices, then you can just treat them like (pseudo?)tensors and make them orthogonal to one another.

On that note, I'm not sure how to address the subtleties of orthogonality/rotations in 4d. These matrices aren't going to live in spacetime, right? Since I'm using standard matrix multiplication here, they should exist in something like ##\mathbb{R}^4##. The idea would then be to construct a 4d Euclidean rotation matrix from the six plane-rotation generators, using ±90 degrees in the various permutations of the rotations.

However, in 3d, rotations are non-commutative and can also be represented by quaternions, as ##q(\frac{\theta}{2})\, V\, q^*(\frac{\theta}{2})##. I don't see how this carries over to 4d. Perhaps the solution is the octonions; however, iirc, not only are the octonions non-commutative, they are also non-associative. This seems to imply that there is no good way to construct my matrices A, B, C, D from the planar rotation generators.
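On the generator point: a 4d plane rotation can be written down directly (rotate in the ##(a,b)## coordinate plane, fix the other two axes), and there are ##\binom{4}{2} = 6## such planes. A small sketch (the helper name `plane_rotation` is mine, not standard) showing that rotations in disjoint planes commute while rotations in planes sharing an axis generally do not:

```python
import math

def plane_rotation(n, a, b, theta):
    """n x n matrix rotating by theta in the (a, b) coordinate plane,
    leaving the remaining axes fixed."""
    R = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    R[a][a] = R[b][b] = math.cos(theta)
    R[a][b] = -math.sin(theta)
    R[b][a] = math.sin(theta)
    return R

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def close(X, Y, tol=1e-12):
    return all(abs(X[i][j] - Y[i][j]) < tol
               for i in range(len(X)) for j in range(len(X)))

t = math.pi / 3
R_wx = plane_rotation(4, 0, 1, t)   # rotation in the w-x plane
R_yz = plane_rotation(4, 2, 3, t)   # rotation in the y-z plane (disjoint from w-x)
R_xy = plane_rotation(4, 1, 2, t)   # rotation in the x-y plane (shares the x axis)

# Disjoint planes commute; planes sharing an axis generally don't.
assert close(matmul(R_wx, R_yz), matmul(R_yz, R_wx))
assert not close(matmul(R_wx, R_xy), matmul(R_xy, R_wx))
```

As a side note, 4d rotations can in fact be parametrized by a pair of unit quaternions acting as ##V \mapsto p\,V\,q##, so quaternions do cover the 4d case; octonions aren't needed. Whether that helps the factorization is a separate question.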

Intuitively, it seems like it should work, and any family of solutions that I can find should satisfy my factorization requirements.
What I've (kind of) tried is:
*Not sure how to compactify this equation into index notation off the top of my head, so I'll just provide a non-trivial example*
Take basis vectors w, x, y, z, then
##R_{wy} = \left ( \begin{array}{cccc}
\cos\alpha & 0 & i\sin\alpha & 0 \\
0 & 1 & 0 & 0 \\
i\sin\alpha & 0 & \cos\alpha & 0 \\
0 & 0 & 0 & 1 \\
\end{array} \right )##
Then the thought is to take a matrix that squares to ##I## (but isn't ##I## itself) and simply rotate it three times to get four mutually orthogonal matrices. I'm not entirely sure what I need to keep track of in order to properly perform a 4d rotation; with quaternions, for example, you multiply on the left by your half-angle rotation and on the right by its conjugate.

Would this not give me a solution? If not, why not? If so, why is the anti-commutative condition preferentially taken over the orthogonality condition?

Edit*
So I worked through a specific example, as I had suggested, and didn't get 0 out like I thought I would. I'm not sure why, though; maybe someone can elaborate. This might just be a poor choice of rotation plane, actually.
I used the rotation matrix above and let ##\alpha## be 90 degrees.
##\left ( \begin{array}{cccc}
0 & 0 & i & 0 \\
0 & 1 & 0 & 0 \\
i & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
\end{array} \right )
\left ( \begin{array}{cccc}
0 & 0 & i & 0 \\
0 & 0 & 0 & i \\
-i & 0 & 0 & 0 \\
0 & -i & 0 & 0 \\
\end{array} \right ) =
\left ( \begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & i \\
0 & 0 & -1 & 0 \\
0 & -i & 0 & 0 \\
\end{array} \right )##
When I take my RHS and multiply it on the right by the second matrix on the LHS, I get back the rotation matrix that I started with, not the zero matrix. I will try a different plane; now that I look at it, just visually, the matrix I rotated seems like it might live in the very plane I rotated it in.

Similar results for the w-x plane.
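For what it's worth, the product above can be reproduced in a few lines, and doing so also explains why right-multiplying the result gives back the rotation matrix: the second matrix squares to the identity, so ##(RM)M = RM^2 = R## automatically, and getting ##R## back tells you nothing about the choice of plane. A quick pure-Python check, with the matrices copied verbatim from the post:

```python
# Reproduce the 4x4 product from the edit above (pure Python).
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

i = 1j
R = [[0, 0, i, 0],       # R_wy with alpha = 90 degrees
     [0, 1, 0, 0],
     [i, 0, 0, 0],
     [0, 0, 0, 1]]
M = [[0, 0, i, 0],       # the second matrix in the product
     [0, 0, 0, i],
     [-i, 0, 0, 0],
     [0, -i, 0, 0]]

P = matmul(R, M)
I4 = [[1 if r == c else 0 for c in range(4)] for r in range(4)]

assert P != [[0] * 4 for _ in range(4)]   # the cross term does not vanish
assert matmul(M, M) == I4                 # M^2 = I ...
assert matmul(P, M) == R                  # ... so (R M) M = R, as observed
```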

Are you following Dirac or dealing with another situation ?

The matrices you are defining are simple to come by. Before doing that, simply multiply both sides by ##A##, and then ##B##, etc. This yields four equations which look simple to solve. Of course, to be useful, one would need to show that things rotate reasonably. I don't think they will with your approach.

Moderator's note: Thread moved to Quantum Physics forum.

anuttarasammyak said:
Are you following Dirac or dealing with another situation ?
Well, I'm sort of following Dirac. I know that in the end I should end up with 4x4 matrices, or 2x2 matrices whose components are the Pauli matrices. I'm effectively taking the same approach and trying to come up with a solution, pretending that I don't already know (and can't look up) the answer. Basically, I'm trying to find the intuition behind the solution.
Paul Colby said:
Matrices you are defining are simple to come by. Before doing that, simply multiply both sides by ##A##, and then ##B## etc. This yield 4 equations which look simple to solve. Of course, to be useful one would need to show things rotate reasonably. I don’t think they will with your approach.
Which equation are you referring to? ##AB = BA = 0##? If I multiply ##AB = BA## on the left by ##A##, I get
##A^2B = ABA \to -B = ABA##
(using ##A^2 = -I##). The same manipulation gives
##A = BAB##
##B = CBC = DBD = -ABA##
etc., for all permutations.
I suppose this implies that left and right multiplication by a matrix effectively undo each other:
##A^{-1}(ABA)A^{-1} = B##

Not sure what to do with that.
I am starting to wonder whether rotations/orthogonality are actually well defined like this in 4d.
Looking ahead a little, the matrices I solve for should be the Dirac matrices, which act on spinors, and spinors don't rotate like ordinary vectors. However, starting from the Klein-Gordon equation and simply trying to factor it, it's not obvious to me how I could know that ahead of time.
As mentioned previously, the intuitive approach (to me) would be to make everything orthogonal: cross terms go to zero, matrices square to the identity, problem solved. However, the Dirac matrices are (per Wikipedia) anti-commuting, not orthogonal. So if the orthogonal solutions do exist, they clearly were not chosen as the preferred solutions.

I can't see how your matrices are well defined. On the one hand you have ##B^2 = I##, which implies ##B## has no null vectors, while on the other you have ##BC = 0##, which implies every vector in the image of ##C## lies in the null space of ##B##, forcing ##C = 0##. Your definitions are inconsistent.
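The contradiction can be spelled out in one line of algebra: if ##B^2 = I##, then ##B## is its own inverse, so ##BC = 0## forces ##C = (BB)C = B(BC) = B \cdot 0 = 0##, which is incompatible with ##C^2 = \pm I##. A tiny sketch of the same point in code (the example matrix is mine, chosen only to illustrate):

```python
# If B^2 = I then B is invertible (it's its own inverse), so BC = 0
# has only the trivial solution C = 0, contradicting C^2 = +/- I.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[0, 1], [1, 0]]                 # any matrix with B^2 = I will do
I2 = [[1, 0], [0, 1]]
assert matmul(B, B) == I2            # B is its own inverse

# Suppose BC = 0 for some C. Then C = (BB)C = B(BC) = B * 0 = 0:
Z = [[0, 0], [0, 0]]
C = matmul(B, Z)                     # = B * (BC) when BC = 0
assert C == Z                        # the only candidate C is the zero matrix
```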

BiGyElLoWhAt
Ahh, I overlooked that I need a null vector in order to satisfy ##AB = BC = \dots = 0##. So these orthogonal solutions just don't exist. Thanks; I knew I was missing something silly, but I expected it to be in my rotations.
Rookie move on my part.