When should one eigenvector be split into two (same span)?


Discussion Overview

The discussion revolves around the concept of eigenvectors and their representation in the context of a specific operator in quantum mechanics. Participants explore the conditions under which one eigenvector can be split into multiple vectors that span the same eigenspace, particularly focusing on the operator defined in the thread.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions the splitting of an eigenvector into two separate vectors, seeking clarification on when this is permissible.
  • Another participant asserts that the original vector form presented is not an eigenvector but rather a representation of conditions for eigenvectors.
  • Some participants discuss the basis for the null space of the operator and how it relates to the eigenvectors, suggesting that the basis can be represented by two orthogonal vectors.
  • There is a proposal that while it is possible to express the solution in terms of multiple vectors, only certain combinations are valid within the eigenspace.
  • One participant emphasizes that the eigenspace corresponding to the eigenvalue is two-dimensional, and thus only two basis elements are necessary to describe it.
  • Another participant suggests that the representation of eigenvectors can be flexible, depending on what makes the mathematical treatment easier.

Areas of Agreement / Disagreement

Participants express differing views on the interpretation of the eigenvector representation and whether it is appropriate to split eigenvectors into multiple forms. There is no consensus on the necessity or implications of such splitting, indicating ongoing debate.

Contextual Notes

Some participants note that the representation of eigenvectors involves free variables, which contributes to the dimensionality of the eigenspace. The discussion highlights the importance of understanding the conditions that define valid eigenvectors and their relationships within the eigenspace.

Who May Find This Useful

This discussion may be of interest to students and practitioners in quantum mechanics, linear algebra, and those studying eigenvalue problems in mathematical physics.

PerilousGourd
This question was inspired by 3c) on https://people.phys.ethz.ch/~muellrom/qm1_2012/Solutions4.pdf

Given the operator
$$
\hat{B} = \left(\matrix{b&0&0\\0&0&-ib\\0&ib&0}\right)
$$

I correctly find that the eigenvalues are ##\lambda = b, \pm b##.
To find the eigenvectors for ##\lambda = b##, I do the following:

$$
\left(\matrix{b&0&0\\0&0&-ib\\0&ib&0}\right) \left(\matrix{x\\y\\z}\right) = b \left(\matrix{x\\y\\z}\right)
$$

$$
bx = bx \hspace{10em} y = -iz \hspace{10em} z = iy
$$
$$
\hat{x} = \left(\matrix{t\\-iz\\iy}\right) = \left(\matrix{t\\y\\iy}\right)
$$

The PDF then seems to split this into two eigenvectors
$$
\hat{x}_1 = \left(\matrix{t\\0\\0}\right) = \left(\matrix{1\\0\\0}\right) \hspace{5em} \text{and} \hspace{5em} \hat{x}_2 = \left(\matrix{0\\y\\iy}\right) = y\left(\matrix{0\\1\\i}\right)
$$
which 'span the eigenspace' of ##\lambda = b##. Why is this allowed (splitting one eigenvector into multiple) and when should it be done?

Would it be technically acceptable to divide it further into ##(1,0,0)##, ##y(0,1,0)## and ##y(0,0,i)##? My current guess is that doing this would be acceptable but just not practical, and that the eigenvector here was divided into two only because the ##t## makes it difficult to factor the ##y## out. Is this right, or is there a deeper meaning I'm missing? (All these eigenvectors are pre-normalization.)
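As a quick numerical sanity check (not part of the PDF solution), here is a small numpy sketch with the arbitrary real choice ##b = 1##; the eigenvalue ##b## comes out doubly degenerate, which is the two-dimensionality discussed in the replies below.

```python
# Sanity check (not from the PDF): set b = 1 and diagonalize B numerically.
import numpy as np

b = 1.0
B = np.array([[b, 0, 0],
              [0, 0, -1j * b],
              [0, 1j * b, 0]])

vals, vecs = np.linalg.eigh(B)   # B is Hermitian for real b, so eigh applies
print(vals)                      # [-1.  1.  1.]  -> eigenvalue b appears twice

# The two columns returned for eigenvalue +b span the same plane as
# (1,0,0) and (0,1,i)/sqrt(2), even if numpy picks a different basis.
```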
 
You appear to have misunderstood what is going on.
The PDF author starts by finding that the operator has two eigenvalues ... these are +b and -b.
The next step is to find out the conditions that x,y,z must satisfy if they are to be the components of an eigenvector that points to value +b.
The result is three simultaneous equations that the author places, for convenience, into a vector form.
This vector is not an eigenvector.

The next step is to solve the equations to get the actual eigenvectors.
There are two vectors that satisfy the conditions: these are the sought-after eigenvectors.

The author has, at no time, split any eigenvectors up.
 
Simon Bridge said:
The result is three simultaneous equations that the author places, for convenience, into a vector form.
This vector is not an eigenvector.
...
There are two vectors that satisfy the conditions: these are the sought-after eigenvectors.

Are you sure? The highlighted wording in the question makes me think otherwise.

[Screenshot of the highlighted passage from the PDF solution]
 
PerilousGourd said:
Why is this allowed (separation of one eigenvector into multiple) and when should it be done?
It's about finding a basis for the null space of the matrix
$$
\left(\matrix{b&0&0\\0&0&-ib\\0&ib&0}\right) - \left(\matrix{b&0&0\\0&b&0\\0&0&b}\right) = \left(\matrix{0&0&0\\0&-b&-ib\\0&ib&-b}\right)
$$
Using the row reduction technique, you should find that any vector in the null space of the above matrix can be written as
$$
\left(\matrix{x\\y\\z}\right) = x\left(\matrix{1\\0\\0}\right) + y\left(\matrix{0\\1\\i}\right)
$$
This means the basis vectors are ##(1,0,0)^T## and ##(0,1,i)^T##, and they also happen to be orthogonal already.
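For completeness, the row reduction behind that result is short:
$$
\left(\matrix{0&0&0\\0&-b&-ib\\0&ib&-b}\right)
\;\xrightarrow{R_2 \,\to\, -R_2/b}\;
\left(\matrix{0&0&0\\0&1&i\\0&ib&-b}\right)
\;\xrightarrow{R_3 \,\to\, R_3 - ib\,R_2}\;
\left(\matrix{0&0&0\\0&1&i\\0&0&0}\right)
$$
which leaves the single condition ##y + iz = 0## (equivalently ##z = iy##), with ##x## and ##y## free; hence the two basis vectors above.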
PerilousGourd said:
Would it be technically acceptable to divide it further into (1,0,0), y(0,1,0) and y(0,0,i)?
You can't do that because the last two vectors are not in the null space of the above matrix.
 
The solution
PerilousGourd said:
$$
\hat{x} = \left(\matrix{t\\-iz\\iy}\right) = \left(\matrix{t\\y\\iy}\right)
$$
contains two free variables: ##t## and ##y##. So any vector that can be written in that form, for any ##t,y\in\mathbb{C}##, is an eigenvector with eigenvalue ##b##. That tells us that the space of eigenvectors ('eigenspace') corresponding to eigenvalue ##b## is two-dimensional. To characterize that space we find a basis, which must have two elements. The obvious choice is the pair ## \vec v_1=\left(\matrix{1\\0\\0}\right),\ \vec v_2=\left(\matrix{0\\1\\i}\right)##, so that the solution above is equal to ##t\vec v_1+y\vec v_2##.
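Explicitly, for the solution quoted above,
$$
t\left(\matrix{1\\0\\0}\right) + y\left(\matrix{0\\1\\i}\right) = \left(\matrix{t\\y\\iy}\right).
$$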
PerilousGourd said:
Would it be technically acceptable to divide it further into (1,0,0), y(0,1,0) and y(0,0,i)?
You can certainly write the solution as ##t\left(\matrix{1\\0\\0}\right)+y\left(\matrix{0\\1\\0}\right)+y\left(\matrix{0\\0\\i}\right)## but the second and third items are not in the eigenspace, so writing it in that way has no use. The aim is to write the solution as a sum of the basis elements of the eigenspace, and there can be only two basis elements because the eigenspace is two-dimensional.
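To illustrate that last point numerically (again with the arbitrary choice ##b = 1##): applying ##\hat{B}## to ##(0,1,0)^T## or ##(0,0,i)^T## does not return a multiple of the input, while ##(0,1,i)^T## comes back scaled by ##b##.

```python
# Illustration (arbitrary choice b = 1): (0,1,0) and (0,0,i) are not
# eigenvectors of B, but their sum (0,1,i) is.
import numpy as np

b = 1.0
B = np.array([[b, 0, 0],
              [0, 0, -1j * b],
              [0, 1j * b, 0]])

for v in (np.array([0, 1, 0], dtype=complex),
          np.array([0, 0, 1j]),
          np.array([0, 1, 1j])):
    print(v, "->", B @ v)

# B @ (0,1,0) = (0, 0, i)   -- not proportional to (0,1,0)
# B @ (0,0,i) = (0, 1, 0)   -- not proportional to (0,0,i)
# B @ (0,1,i) = (0, 1, i)   -- equals b * (0,1,i): an eigenvector
```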
 
I don't see anything in post #3 that contradicts post #2.
I guess you can think of the intermediate step as an eigenvector if you like... but it is more a description of a collection of valid eigenvectors. Pick two, any two. Remember what you need them to be able to do.

The bottom line is you can choose whatever representation you like... so pick the one that makes the maths easier. Hence finding simultaneous eigenkets.
 
