Understanding solutions of Dirac equation

Markus Kahn
Homework Statement
Full disclaimer: this post is the same as the one at the following link, with some slight modifications (I'm the author of the original): https://physics.stackexchange.com/questions/469386/understanding-solutions-of-the-dirac-equation


In one of the lectures that I'm currently taking we encountered the Dirac equation. The general solution was given as
$$\psi(x) = \sum_{s} \int \frac{d^3\mathbf{p}}{(2\pi)^3\, 2\omega_p} \left[ a_s(p)\, u^s(p)\, e^{-ip\cdot x} + b_s^*(p)\, v^s(p)\, e^{+ip\cdot x} \right],$$
where
$$u^s(p) = \begin{pmatrix} \sqrt{\sigma \cdot p}\,\xi^{s} \\ \sqrt{\bar{\sigma} \cdot p}\,\xi^{s} \end{pmatrix} \quad\text{and}\quad v^s(p) = \begin{pmatrix} \sqrt{\sigma \cdot p}\,\xi^{s} \\ -\sqrt{\bar{\sigma} \cdot p}\,\xi^{s} \end{pmatrix}.$$
Note that we defined ##\sigma^\mu \equiv (1,\vec{\sigma})## and ##\bar\sigma^\mu \equiv (1,-\vec\sigma)## and ##s\in\{+,-\}## for
$$\xi^+ \equiv \begin{pmatrix}1\\0\end{pmatrix},~\xi^-\equiv\begin{pmatrix}0\\1\end{pmatrix}.$$

My problem is that I'm a bit confused about how to evaluate the expression ##\sqrt{p\cdot\sigma}\,\xi^s##. If I understood correctly, we have ##p\cdot \sigma = p_\mu\sigma^\mu##, which makes this expression a matrix. But how am I supposed to take the square root now?

This expression can also be found in Peskin, chapter 3, p. 46.
Relevant Equations
All given above.
some notes:
There was actually no proof given that ##u^s(p)## or ##v^s(p)## solve the Dirac equation, only a statement that one could prove it using the identity
$$(\sigma\cdot p)(\bar\sigma\cdot p)=p^2=m^2.$$
We were using the Weyl representation of the ##\gamma##-matrices, in case this is relevant. I think I can prove the statement if someone could explain to me how to evaluate the expression ##\sqrt{p_\mu\sigma^\mu}\,\xi^s##. In Peskin's QFT book I found the explanation

"where it is understood that in taking the square root of a matrix, we take the positive root of each eigenvalue."[pp. 46]
but honestly I can't figure out how this is supposed to work for ##p_1\neq p_2\neq 0##.

I'd like to stress that I'm not interested in alternative ways of expressing the solutions; I'd like to understand why we can write them in this specific form.
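
As a quick check of that identity: in Peskin's conventions ##p\cdot\sigma = p_\mu\sigma^\mu = E\,\mathbb{1} - \vec p\cdot\vec\sigma## and ##p\cdot\bar\sigma = E\,\mathbb{1} + \vec p\cdot\vec\sigma##, so using ##(\vec p\cdot\vec\sigma)^2 = |\vec p|^2\,\mathbb{1}## and the on-shell condition ##E^2 = |\vec p|^2 + m^2##,
$$(\sigma\cdot p)(\bar\sigma\cdot p) = (E\,\mathbb{1} - \vec p\cdot\vec\sigma)(E\,\mathbb{1} + \vec p\cdot\vec\sigma) = (E^2 - |\vec p|^2)\,\mathbb{1} = m^2\,\mathbb{1}.$$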
 
Markus Kahn said:
some notes:
There was actually no proof given that ##u^s(p)## or ##v^s(p)## solve the Dirac equation, only a statement that one could prove it using the identity
$$(\sigma\cdot p)(\bar\sigma\cdot p)=p^2=m^2.$$
We were using the Weyl representation of the ##\gamma##-matrices, in case this is relevant. I think I can prove the statement if someone could explain to me how to evaluate the expression ##\sqrt{p_\mu\sigma^\mu}\,\xi^s##. In Peskin's QFT book I found the explanation

"where it is understood that in taking the square root of a matrix, we take the positive root of each eigenvalue."[pp. 46]
but honestly I can't figure out how this is supposed to work for ##p_1\neq p_2\neq 0##.

I'd like to stress that I'm not interested in alternative ways of expressing the solutions; I'd like to understand why we can write them in this specific form.

Well, you can verify that when you square the quantity:

##A + B \overrightarrow{\sigma} \cdot \overrightarrow{p}##

you get ##A^2 + 2 A B\, \overrightarrow{\sigma} \cdot \overrightarrow{p} + B^2 |\overrightarrow{p}|^2## (using ##(\overrightarrow{\sigma} \cdot \overrightarrow{p})^2 = |\overrightarrow{p}|^2##)

So if you choose ##A## and ##B## appropriately, you can get this to equal ##E + \overrightarrow{\sigma} \cdot \overrightarrow{p}##.

So we can write: ##\sqrt{E + \overrightarrow{\sigma} \cdot \overrightarrow{p}}## as

##\sqrt{\frac{E+m}{2}} + \sqrt{\frac{1}{2(E+m)}} \overrightarrow{\sigma} \cdot \overrightarrow{p}##

(there are other solutions with minus signs in various places). I'm not sure how this relates to the hint about eigenvalues, though.
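
If it helps, here is a quick numerical sanity check of that closed form. This is only a sketch (numpy, with ##m=1## and an arbitrary test momentum; none of the names below come from the lecture notes or from Peskin):

```python
# Check that M = sqrt((E+m)/2) * I + (sigma . p) / sqrt(2(E+m))
# squares to E*I + sigma . p for a generic 3-momentum.
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

m = 1.0
p = np.array([0.3, -0.7, 1.2])            # arbitrary test momentum, all components non-zero
E = np.sqrt(m**2 + p @ p)                 # on-shell energy
sigma_dot_p = p[0]*sx + p[1]*sy + p[2]*sz

M = np.sqrt((E + m) / 2) * I2 + sigma_dot_p / np.sqrt(2 * (E + m))

print(np.allclose(M @ M, E * I2 + sigma_dot_p))   # prints True
```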
 
Thank you for the reply! I could reproduce all of your calculations, so now the only question is what the connection to the eigenvalues is, as suggested by Peskin.
 
Markus Kahn said:
Thank you for the reply! I could reproduce all of your calculations, so now the only question is what the connection to the eigenvalues is, as suggested by Peskin.

You can sort-of connect it with eigenvalues in this way:

You have a 2x2 matrix ##A## and you want to take its square-root. You can try to write ##A## in the form:

##A = U D U^{-1}##

where ##D## is a diagonal matrix whose entries are the eigenvalues of ##A##. In that case, we can write

##\sqrt{A} = U \sqrt{D} U^{-1}##

where ##\sqrt{D}## just means replacing each diagonal entry of ##D## by its square-root. We can verify that

##\sqrt{A} \sqrt{A} = U \sqrt{D} U^{-1} U \sqrt{D} U^{-1} = U \sqrt{D} \sqrt{D} U^{-1} = U D U^{-1} = A##

So if we knew the matrix ##U## to use, then we could use the eigenvalues to take the square-root.
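
To make that concrete, here is a small numpy sketch of exactly this recipe (again with ##m=1## and an arbitrary test momentum, which are my own choices). Since ##A = E + \vec\sigma\cdot\vec p## is Hermitian with distinct eigenvalues ##E \pm |\vec p|##, ##U## can be taken to be the matrix of its eigenvectors, and the result agrees with the closed form from the earlier post:

```python
# Matrix square root via diagonalisation: A = U D U^{-1}, sqrt(A) = U sqrt(D) U^{-1}
import numpy as np

m = 1.0
p = np.array([0.3, -0.7, 1.2])                 # arbitrary test momentum
E = np.sqrt(m**2 + p @ p)
# sigma . p written out as an explicit 2x2 matrix
sdp = np.array([[p[2], p[0] - 1j*p[1]],
                [p[0] + 1j*p[1], -p[2]]])
A = E * np.eye(2) + sdp

evals, U = np.linalg.eig(A)                    # eigenvalues are E + |p| and E - |p|
sqrtA = U @ np.diag(np.sqrt(evals)) @ np.linalg.inv(U)

closed_form = np.sqrt((E + m) / 2) * np.eye(2) + sdp / np.sqrt(2 * (E + m))
print(np.allclose(sqrtA @ sqrtA, A))           # True: it really squares to A
print(np.allclose(sqrtA, closed_form))         # True: same matrix as the closed form
```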
 
Does this work if some of the eigenvalues are negative? I always thought that this was only possible if ##A## is positive definite...
 
Markus Kahn said:
Does this work if some of the eigenvalues are negative? I always thought that this was only possible if ##A## is positive definite...

Well, if the eigenvalues are negative, then the square-root will necessarily have imaginary eigenvalues. For example, one square-root of the matrix ##\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}## is:

##\begin{pmatrix} i & 0 \\ 0 & i \end{pmatrix}##

(there is more than one square-root of a matrix)
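
In the case at hand this doesn't come up, though: ##p\cdot\sigma = E - \vec\sigma\cdot\vec p## and ##p\cdot\bar\sigma = E + \vec\sigma\cdot\vec p## have eigenvalues ##E \mp |\vec p|##, and ##E = \sqrt{|\vec p|^2 + m^2} > |\vec p|## for ##m > 0##, so all the eigenvalues are positive and the "positive root of each eigenvalue" prescription is unambiguous. A quick numpy sketch of that (arbitrary test momentum, my own choice):

```python
# The eigenvalues of E*I - sigma.p and E*I + sigma.p are E -/+ |p|,
# both positive for m > 0, so no negative eigenvalues appear in u^s(p), v^s(p).
import numpy as np

m = 1.0
p = np.array([0.3, -0.7, 1.2])        # arbitrary test momentum
E = np.sqrt(m**2 + p @ p)
# sigma . p as an explicit 2x2 matrix
sdp = np.array([[p[2], p[0] - 1j*p[1]],
                [p[0] + 1j*p[1], -p[2]]])

for A in (E * np.eye(2) - sdp, E * np.eye(2) + sdp):
    print(np.linalg.eigvalsh(A))      # [E - |p|, E + |p|], both > 0
```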
 