Undergrad When should one eigenvector be split into two (same span)?

Summary
The discussion centers on the eigenvectors of the operator defined by the matrix B, whose eigenvalues are +b (doubly degenerate) and -b. The eigenvectors corresponding to the eigenvalue b are linear combinations of two basis vectors, (1,0,0) and (0,1,i), which span a two-dimensional eigenspace. The confusion is whether it is acceptable to split these eigenvectors into further components; the thread clarifies that while the solution can be written in many forms, only basis elements that actually lie within the eigenspace are meaningful. Ultimately, the representation should be chosen to ease calculation while preserving the structure of the eigenspace: splitting a general eigenvector into separate vectors is permissible precisely when each piece itself belongs to the eigenspace.
PerilousGourd
This question was inspired by 3c) on https://people.phys.ethz.ch/~muellrom/qm1_2012/Solutions4.pdf

Given the operator
$$
\hat{B} = \left(\matrix{b&0&0\\0&0&-ib\\0&ib&0}\right)
$$

I correctly find that the eigenvalues are ##\lambda = b, \pm b##.
To find the eigenvectors for b, I do the following

$$
\left(\matrix{b&0&0\\0&0&-ib\\0&ib&0}\right) \left(\matrix{x\\y\\z}\right) = b \left(\matrix{x\\y\\z}\right)
$$

$$
bx = bx \qquad y = -iz \qquad z = iy
$$
$$
\hat{x} = \left(\matrix{t\\-iz\\iy}\right) = \left(\matrix{t\\y\\iy}\right)
$$

The pdf then seems to split this into two eigenvectors
$$
\hat{x}_1 = \left(\matrix{t\\0\\0}\right) = \left(\matrix{1\\0\\0}\right) \qquad \text{and} \qquad \hat{x}_2 = \left(\matrix{0\\y\\iy}\right) = y\left(\matrix{0\\1\\i}\right)
$$
which 'span the eigenspace' of ##\lambda = b##. Why is this allowed (splitting one eigenvector into several), and when should it be done?

Would it be technically acceptable to divide it further into ##(1,0,0)##, ##y(0,1,0)## and ##y(0,0,i)##? My current guess is that doing this would be acceptable but just not practical, and that the eigenvector here was split into two purely because the ##t## makes it difficult to factor out the ##y##. Is this right, or is there a deeper meaning I'm missing? (All these eigenvectors are pre-normalization.)
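As a quick numerical sanity check on the eigenvalues (not part of the linked solution; it uses the arbitrary illustrative choice ##b = 1##), one can diagonalize the matrix with NumPy:

```python
import numpy as np

b = 1.0  # illustrative choice; any nonzero real b gives the same structure
B = np.array([[b, 0, 0],
              [0, 0, -1j * b],
              [0, 1j * b, 0]])

# The spectrum is {+b, +b, -b}: the eigenvalue +b is doubly degenerate.
eigvals = sorted(np.round(np.linalg.eigvals(B).real, 10).tolist())
print(eigvals)  # → [-1.0, 1.0, 1.0]
```

The degeneracy of +b is exactly why its eigenspace is two-dimensional rather than a single line.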
 
You appear to have misunderstood what is going on.
The pdf author starts by finding the operator has 2 eigenvalues ... these are +b and -b.
The next step is to find out the conditions that x,y,z must satisfy if they are to be the components of an eigenvector that points to value +b.
The result is 3 simultaneous equations that the author places, for convenience, into vector form.
This vector is not an eigenvector.

The next step is to solve the equations to get the actual eigenvectors.
There are two vectors that satisfy the conditions: these are the sought-after eigenvectors.

The author has, at no time, split any eigenvectors up.
 
Simon Bridge said:
The result is 3 simultaneous equations that the author places, for convenience, into vector form.
This vector is not an eigenvector.
...
There are two vectors that satisfy the conditions: these are the sought-after eigenvectors.

Are you sure? The highlighted wording in the question makes me think otherwise.

[attached screenshot of the highlighted passage in the pdf solution]
 
PerilousGourd said:
Why is this allowed (separation of one eigenvector into multiple) and when should it be done?
It's about finding a basis for the null space of the matrix
$$
\left(\matrix{b&0&0\\0&0&-ib\\0&ib&0}\right) - \left(\matrix{b&0&0\\0&b&0\\0&0&b}\right) = \left(\matrix{0&0&0\\0&-b&-ib\\0&ib&-b}\right)
$$
Using row reduction, you should find that any vector in the null space of the above matrix is
$$
\left(\matrix{x\\y\\z}\right) = x\left(\matrix{1\\0\\0}\right) + y\left(\matrix{0\\1\\i}\right)
$$
which means the basis vectors are ##(1,0,0)^T## and ##(0,1,i)^T##, and they also turn out to be orthogonal already.
PerilousGourd said:
Would it be technically acceptable to divide it further into (1,0,0), y(0,1,0) and y(0,0,i)?
You can't do that because the last two vectors are not in the null space of the above matrix.
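A minimal numerical check of this claim (a sketch, using the illustrative choice ##b = 1##): the two basis vectors are annihilated by ##\hat{B} - b\hat{I}##, while ##(0,1,0)^T## is not.

```python
import numpy as np

b = 1.0  # illustrative value
B = np.array([[b, 0, 0],
              [0, 0, -1j * b],
              [0, 1j * b, 0]])
M = B - b * np.eye(3)  # the null space of M is the eigenspace for eigenvalue b

v1 = np.array([1, 0, 0], dtype=complex)   # basis vector (1,0,0)
v2 = np.array([0, 1, 1j])                 # basis vector (0,1,i)
u  = np.array([0, 1, 0], dtype=complex)   # a "piece" of v2

print(np.allclose(M @ v1, 0))  # True: v1 lies in the null space
print(np.allclose(M @ v2, 0))  # True: v2 lies in the null space
print(np.allclose(M @ u, 0))   # False: (0,1,0) alone is not an eigenvector
```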
 
The solution
PerilousGourd said:
$$
\hat{x} = \left(\matrix{t\\-iz\\iy}\right) = \left(\matrix{t\\y\\iy}\right)
$$
contains two free variables: ##t## and ##y##. So any vector that can be written in that form, for any ##t,y\in\mathbb{C}##, is an eigenvector with eigenvalue ##b##. That tells us that the space of eigenvectors ('eigenspace') corresponding to eigenvalue ##b## is two-dimensional. To characterize that space we find a basis, which must have two elements. The obvious choice is the pair ## \vec v_1=\left(\matrix{1\\0\\0}\right),\ \vec v_2=\left(\matrix{0\\1\\i}\right)##, so that the solution above is equal to ##t\vec v_1+y\vec v_2##.
PerilousGourd said:
Would it be technically acceptable to divide it further into (1,0,0), y(0,1,0) and y(0,0,i)?
You can certainly write the solution as ##t\left(\matrix{1\\0\\0}\right)+y\left(\matrix{0\\1\\0}\right)+y\left(\matrix{0\\0\\i}\right)## but the second and third items are not in the eigenspace, so writing it in that way has no use. The aim is to write the solution as a sum of the basis elements of the eigenspace, and there can be only two basis elements because the eigenspace is two-dimensional.
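To illustrate this numerically (a sketch, with arbitrary illustrative values for ##b##, ##t## and ##y##): every combination ##t\vec v_1 + y\vec v_2## is an eigenvector with eigenvalue ##b##, for any complex ##t## and ##y##.

```python
import numpy as np

b = 1.0
t, y = 2.0 - 1.0j, 0.5 + 3.0j  # arbitrary complex coefficients
B = np.array([[b, 0, 0],
              [0, 0, -1j * b],
              [0, 1j * b, 0]])

v1 = np.array([1, 0, 0], dtype=complex)
v2 = np.array([0, 1, 1j])
x = t * v1 + y * v2            # the general solution (t, y, iy)

print(np.allclose(B @ x, b * x))       # True: an eigenvector for any t, y
print(np.isclose(np.vdot(v1, v2), 0))  # True: the basis vectors are orthogonal
```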
 
I don't see anything in post #3 that contradicts post #2.
I guess you can think of the intermediate step as an eigenvector if you like... but it is more a collection of valid eigenvectors. Pick 2, any 2. Remember what you need them to be able to do.

The bottom line is that you can choose whatever representation you like... so pick the one that makes the maths easier. Hence finding simultaneous eigenkets.
 