Understanding solutions of the Dirac equation

In summary, the Dirac equation is a relativistic wave equation proposed by physicist Paul Dirac in 1928 that describes spin-1/2 particles such as the electron. It has important implications in quantum mechanics and led to the prediction of antimatter. This thread is about its plane-wave solutions: how the spinors ##u^s(p)## and ##v^s(p)## are built from square roots of the ##2\times 2## matrices ##\sigma\cdot p## and ##\bar\sigma\cdot p##, and how such a matrix square root is actually evaluated.
  • #1
Markus Kahn
Homework Statement
Full disclaimer: This post is the same as the one in the following link with some slight modifications (I'm the author of the original) https://physics.stackexchange.com/questions/469386/understanding-solutions-of-the-dirac-equation


In one of the lectures that I'm currently taking we encountered the Dirac equation. The general solution was given as
$$\psi(x) = \sum_s \int \frac{d^3\mathbf{p}}{(2\pi)^3\, 2\omega_{\mathbf{p}}} \left[ a_s(p)\, u^s(p)\, e^{-ip\cdot x} + b_s^*(p)\, v^s(p)\, e^{+ip\cdot x} \right],$$
where
$$u^s(p)=\begin{pmatrix} \sqrt{\sigma\cdot p}\,\xi^s \\ \sqrt{\bar\sigma\cdot p}\,\xi^s \end{pmatrix} \quad\text{and}\quad v^s(p)=\begin{pmatrix} \sqrt{\sigma\cdot p}\,\xi^s \\ -\sqrt{\bar\sigma\cdot p}\,\xi^s \end{pmatrix}.$$
Note that we defined ##\sigma^\mu \equiv (1,\vec{\sigma})## and ##\bar\sigma^\mu \equiv (1,-\vec\sigma)## and ##s\in\{+,-\}## for
$$\xi^+ \equiv \begin{pmatrix}1\\0\end{pmatrix},~\xi^-\equiv\begin{pmatrix}0\\1\end{pmatrix}.$$

My problem is now that I'm a bit confused about how to evaluate the expression ##\sqrt{p\cdot\sigma}\,\xi^s##. If I understood correctly, we have ##p\cdot\sigma = p_\mu\sigma^\mu##, which makes this expression a matrix. But how am I supposed to take the square root now?
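For concreteness (assuming the usual mostly-minus metric, so that ##p_\mu = (E,-\vec p)## on shell), writing the expression out as an explicit matrix gives
$$p\cdot\sigma = p_\mu\sigma^\mu = E\,\mathbb{1} - \vec p\cdot\vec\sigma = \begin{pmatrix} E-p_3 & -p_1+ip_2 \\ -p_1-ip_2 & E+p_3 \end{pmatrix},$$
where ##p_{1,2,3}## are the components of ##\vec p##, but I still don't see how to take the square root of a non-diagonal matrix like this.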

This expression can also be found in Peskin, chapter 3, p. 46.
Relevant Equations
All given above.
some notes:
There was actually no proof given why ##u^s(p)## or ##v^s(p)## should solve the Dirac equation, only a statement that one could prove it using the identity
$$(\sigma\cdot p)(\bar\sigma\cdot p)=p^2=m^2.$$
We were using the Weyl representation of the ##\gamma##-matrices, in case that is relevant. I think I can prove the statement if someone could explain to me how one should evaluate the expression ##\sqrt{p_\mu\sigma^\mu}\,\xi^s##. In Peskin's QFT book I found the explanation

"where it is understood that in taking the square root of a matrix, we take the positive root of each eigenvalue." [p. 46]
but honestly I can't figure out how this is supposed to work when ##p_1## and ##p_2## are nonzero.

I'd like to stress here that I'm not interested in alternative ways of expressing the solutions; I'd like to understand why we can write them in this specific form.
 
  • #2
Markus Kahn said:
some notes:
There was actually no proof given why ##u^s(p)## or ##v^s(p)## should solve the Dirac equation, only a statement that one could prove it using the identity
$$(\sigma\cdot p)(\bar\sigma\cdot p)=p^2=m^2.$$
We were using the Weyl representation of the ##\gamma##-matrices, in case that is relevant. I think I can prove the statement if someone could explain to me how one should evaluate the expression ##\sqrt{p_\mu\sigma^\mu}\,\xi^s##. In Peskin's QFT book I found the explanation

"where it is understood that in taking the square root of a matrix, we take the positive root of each eigenvalue." [p. 46]
but honestly I can't figure out how this is supposed to work when ##p_1## and ##p_2## are nonzero.

I'd like to stress here that I'm not interested in alternative ways of expressing the solutions; I'd like to understand why we can write them in this specific form.

Well, you can verify that when you square the quantity:

##A + B \overrightarrow{\sigma} \cdot \overrightarrow{p}##

you get ##A^2 + 2 A B \overrightarrow{\sigma} \cdot \overrightarrow{p} + B^2 |\overrightarrow{p}|^2##, since ##(\overrightarrow{\sigma} \cdot \overrightarrow{p})^2 = |\overrightarrow{p}|^2##.

So if you choose ##A## and ##B## such that ##A^2 + B^2 |\overrightarrow{p}|^2 = E## and ##2AB = 1##, you can get this to equal ##E + \overrightarrow{\sigma} \cdot \overrightarrow{p}##.

So we can write: ##\sqrt{E + \overrightarrow{\sigma} \cdot \overrightarrow{p}}## as

##\sqrt{\frac{E+m}{2}} + \sqrt{\frac{1}{2(E+m)}} \overrightarrow{\sigma} \cdot \overrightarrow{p}##

(there are other solutions with minus signs in various places). I'm not sure how this relates to the hint about eigenvalues, though.
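Explicitly, using the on-shell relation ##E^2 = |\overrightarrow{p}|^2 + m^2##, the choice above does satisfy both conditions:
$$A^2 + B^2 |\overrightarrow{p}|^2 = \frac{E+m}{2} + \frac{E^2 - m^2}{2(E+m)} = \frac{E+m}{2} + \frac{E-m}{2} = E, \qquad 2AB = 2\sqrt{\frac{E+m}{2}\cdot\frac{1}{2(E+m)}} = 1.$$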
 
  • #3
Thank you for the reply! I could reproduce all of your calculations, so now the only question is what the connection to the eigenvalues is, as suggested by Peskin.
 
  • #4
Markus Kahn said:
Thank you for the reply! I could reproduce all of your calculations, so now the only question is what the connection to the eigenvalues is, as suggested by Peskin.

You can sort-of connect it with eigenvalues in this way:

You have a 2x2 matrix ##A## and you want to take its square-root. You can try to write ##A## in the form:

##A = U D U^{-1}##

where ##D## is a diagonal matrix whose entries are the eigenvalues of ##A##. In that case, we can write

##\sqrt{A} = U \sqrt{D} U^{-1}##

where ##\sqrt{D}## just means replacing each diagonal entry of ##D## by its square-root. We can verify that

##\sqrt{A} \sqrt{A} = U \sqrt{D} U^{-1} U \sqrt{D} U^{-1} = U \sqrt{D} \sqrt{D} U^{-1} = U D U^{-1} = A##

So if we knew the matrix ##U## to use, then we could use the eigenvalues to take the square-root.
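If you want to see the two prescriptions agree, here is a small numerical sketch (Python/NumPy; the mass and momentum values are just arbitrary test numbers) that builds ##E + \overrightarrow{\sigma}\cdot\overrightarrow{p}##, takes its square root via the eigendecomposition above, and compares the result with the closed-form expression from post #2:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# arbitrary test values for the mass and 3-momentum (on shell: E^2 = m^2 + |p|^2)
m = 1.0
p = np.array([0.3, -0.7, 0.5])
E = np.sqrt(m**2 + p @ p)

sigma_dot_p = p[0] * sx + p[1] * sy + p[2] * sz
M = E * I2 + sigma_dot_p                     # the matrix E + sigma.p from post #2

# square root via eigendecomposition: M = U D U^dagger, sqrt(M) = U sqrt(D) U^dagger
# (M is Hermitian, so eigh returns real eigenvalues and a unitary U)
evals, U = np.linalg.eigh(M)
sqrt_M_eig = U @ np.diag(np.sqrt(evals)) @ U.conj().T

# closed form from post #2: sqrt((E+m)/2) + sqrt(1/(2(E+m))) * sigma.p
A = np.sqrt((E + m) / 2)
B = np.sqrt(1 / (2 * (E + m)))
sqrt_M_closed = A * I2 + B * sigma_dot_p

print(np.allclose(sqrt_M_eig @ sqrt_M_eig, M))   # True: it squares back to M
print(np.allclose(sqrt_M_eig, sqrt_M_closed))    # True: both prescriptions agree
```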
 
  • #5
Does this work if some of the eigenvalues are negative? I always thought that this was only possible if ##A## is positive definite...
 
  • #6
Markus Kahn said:
Does this work if some of the eigenvalues are negative? I always thought that this was only possible if ##A## is positive definite...

Well, if the eigenvalues are negative, then the square root will necessarily have imaginary eigenvalues. For example, one square root of the matrix ##\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}## is:

##\begin{pmatrix} i & 0 \\ 0 & i \end{pmatrix}##

(there is more than one square-root of a matrix)
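In the Dirac case this doesn't actually happen, though. Assuming the usual mostly-minus metric, on shell we have ##p\cdot\sigma = E\,\mathbb{1} - \vec p\cdot\vec\sigma##, whose eigenvalues are
$$E \mp |\vec p\,| \;\geq\; E - |\vec p\,| = \frac{m^2}{E + |\vec p\,|} > 0 \quad (m > 0),$$
so both ##\sigma\cdot p## and ##\bar\sigma\cdot p## are positive definite and the square roots in ##u^s(p)## and ##v^s(p)## come out real.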
 

FAQ: Understanding solutions of the Dirac equation

1. What is the Dirac equation?

The Dirac equation is a relativistic wave equation that describes the behavior of spin-1/2 particles, such as electrons, in quantum mechanics. It was developed by physicist Paul Dirac in 1928 and is considered to be one of the most fundamental equations in physics.

2. How does the Dirac equation differ from the Schrödinger equation?

The Schrödinger equation describes the behavior of non-relativistic particles, while the Dirac equation takes into account the effects of special relativity. The Dirac equation also includes spin, which is not accounted for in the Schrödinger equation.

3. What are the solutions of the Dirac equation?

The solutions of the Dirac equation are wave functions that describe the probability of finding a particle at a certain position and time. They are four-component spinors: the components encode the two spin states of the particle together with those of its antiparticle, and they can be used to calculate various physical properties of the particle.

4. How does the Dirac equation contribute to our understanding of quantum mechanics?

The Dirac equation provides a more complete and accurate description of the behavior of particles at the quantum level, taking into account both special relativity and spin. It has been used to make predictions and calculations in various fields, such as particle physics and solid-state physics.

5. Are there any practical applications of the Dirac equation?

Yes, the Dirac equation has practical consequences in modern science and technology. Its prediction of the positron underlies positron emission tomography (PET) in medical imaging, the relativistic effects it describes are important in the chemistry of heavy elements, and Dirac-like equations govern electrons in materials such as graphene. It also plays a crucial role in the study of high-energy particle collisions in particle accelerators.
