A straightforward matrix eigenvalue/eigenvector problem (I'm a bit rusty)

jeebs
here's the problem:

"In a two dimensional vector space, consider the operator whose matrix is written as
\sigma_x = \left(\begin{array}{cc}0&1\\1&0\end{array}\right).

in an orthonormal basis {|1>, |2>}.

Calculate the eigenvalues and normalised eigenvectors for the operator in this basis."

I'm uncertain about what I'm being asked to do here. This operator clearly has not been expressed in the basis of its own eigenstates; otherwise it would have non-zero elements (i.e. its eigenvalues) in the (1,1) and (2,2) positions, and zeros elsewhere, right?
I know how to find these eigenvalues and eigenstates, and have done so by calling the eigenstates \{|\phi_j>\} and solving \sigma_x |\phi_j> = \lambda_j |\phi_j>, where |\phi_j> = \left(\begin{array}{c}\phi_1\\\phi_2\end{array}\right).

I do this and out pop the eigenvalues \lambda = +1 , -1 and the eigenvectors |\phi_a> = \frac{1}{\sqrt{2}} \left(\begin{array}{c}1\\1\end{array}\right) and |\phi_b> = \frac{1}{\sqrt{2}} \left(\begin{array}{c}1\\-1\end{array}\right).
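As a quick numerical cross-check of this calculation, here is a sketch using numpy (the matrix and results follow the problem statement; the variable names are mine):

```python
import numpy as np

# sigma_x as given, written in the {|1>, |2>} basis
sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

# eigh handles Hermitian matrices and returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(sigma_x)
print(eigvals)  # [-1.  1.]

# each column of eigvecs is a normalised eigenvector: sigma_x v = lambda v
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(sigma_x @ v, lam * v)
    assert np.isclose(np.linalg.norm(v), 1.0)
```

Every entry of the eigenvector matrix has magnitude 1/\sqrt{2}, matching the hand calculation up to an overall sign.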

This is where my confusion arises, because the problem asks me to "calculate normalised eigenvectors for the operator in this basis". Does this mean I am somehow supposed to write the eigenvectors |\phi_{a,b}> in terms of the basis vectors |1>, |2> that \sigma_x was originally expressed in?

If so, how do I manage this, given that I am not told what |1>, |2> are?
Or am I mistaken: are they actually what I have already calculated? If so, I am very confused about how these problems work.

Thanks.
 
jeebs said:
here's the problem:
Does this mean I am somehow supposed to write the eigenvectors |\phi_{a,b}> in terms of the basis vectors |1>, |2> that \sigma_x was originally expressed in?

If so, how do I manage this, given that I am not told what |1>, |2> are?
Or am I mistaken - are they actually what I have already calculated? If so, I am very confused about how these problems work.

Thanks.
You have already (implicitly) written the eigenvectors |\phi_{a,b}> in terms of |1> and |2> (assuming you did the calculations correctly). For instance:
|\phi_a>=\frac{1}{\sqrt{2}}(|1>+|2>)
You don't need to know anything more about |1> and |2>.
 
facenian said:
You have already (implicitly) written the eigenvectors |\phi_{a,b}> in terms of |1> and |2> (assuming you did the calculations correctly). For instance:
|\phi_a>=\frac{1}{\sqrt{2}}(|1>+|2>)
You don't need to know anything more about |1> and |2>.

I don't think I understand. Or maybe I do. Am I right in thinking that if you have an n x n matrix representing some operator, you can express that operator in ANY orthonormal basis of n basis vectors?
And the \sigma_x I was given made the eigenvectors I calculated come out the way they did, but if \sigma_x had been written in some other orthonormal basis, I would do the same type of calculation and end up with different-looking eigenvectors?

I've had this idea in my head that for one operator matrix there was only one way to write its eigenvectors, no matter what basis the matrix was made from. That must be wrong... if any of my ramblings make sense? I still feel confused...

Anyway, how did you arrive at the conclusion that |\phi_a>=\frac{1}{\sqrt{2}}(|1>+|2>)?
What would that make |\phi_b>?
 
jeebs said:
I don't think I understand. Or maybe I do. Am I right in thinking that if you have an n x n matrix representing some operator, you can express that operator in ANY orthonormal basis of n basis vectors?
And the \sigma_x I was given made the eigenvectors I calculated come out the way they did, but if \sigma_x had been written in some other orthonormal basis, I would do the same type of calculation and end up with different-looking eigenvectors?
You have the basic idea. The matrix is the representation of the operator with respect to some basis. The operator lives in some abstract operator land, and once you choose a basis, then you can find the corresponding matrix that represents it. Similarly, the vectors live in their vector space, but you don't have their coordinate representation until you decide which basis you're going to use.
 
jeebs said:
I've had this idea in my head that for one operator matrix there was only one way to write its eigenvectors, no matter what basis the matrix was made from. That must be wrong... if any of my ramblings make sense? I still feel confused...
There is only one set of eigenvectors in the space; however, there are many ways to express each eigenvector, one for each basis you choose.
jeebs said:
anyway, how have you arrived at the conclusion that |\phi_a>=\frac{1}{\sqrt{2}}(|1>+|2>)?
what would that make|\phi_b>?
This question suggests that you know how to do the arithmetic but have not yet interpreted what you are doing. I suggest you study the fundamentals of linear algebra: what is a linear space? what is a basis? what is a linear transformation? etc.
The answer is |\phi_b>=\frac{1}{\sqrt{2}}(|1>-|2>)
 
facenian said:
This question suggests that you know how to do the arithmetic but have not yet interpreted what you are doing.

This is probably true, although I am familiar with the things you mentioned. I do know what a linear vector space is, what a basis is, and so on; I'm just not very well practiced with problems like this one yet.
 
The concept you need to understand is that the matrices are only representations of the operators and vectors. It's like writing down numbers. The same number can have different representations depending on which base you're working in. For example, the number ten written in base 5 is 20, whereas in base 2 it's 1010. Both 20_5 and 1010_2 represent the same number, but the actual digits depend on what base you're using. You can separate the abstract idea of "ten" from its representation in a specific number system. You have a similar situation here with operators and vectors. They can be represented by arrays of numbers, but what those numbers actually are depends on which basis you're working in.

In this problem, you were given the matrix that represents an operator relative to a basis. When you used that matrix to find the eigenvectors of the operator, the vectors were automatically written with respect to the same basis.

With respect to the original \vert 1 \rangle and \vert 2 \rangle basis, you have

\begin{align*}
\sigma_x &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}_o \\
\vert 1 \rangle &= \begin{pmatrix} 1 \\ 0 \end{pmatrix}_o \\
\vert 2 \rangle &= \begin{pmatrix} 0 \\ 1 \end{pmatrix}_o \\
\vert {+1} \rangle &= \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}_o \\
\vert {-1} \rangle &= \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}_o
\end{align*}

Relative to the basis comprised of the eigenvectors, you'd have

\begin{align*}
\sigma_x &= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}_e \\
\vert 1 \rangle &= \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}_e \\
\vert 2 \rangle &= \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}_e \\
\vert {+1} \rangle &= \begin{pmatrix} 1 \\ 0 \end{pmatrix}_e \\
\vert {-1} \rangle &= \begin{pmatrix} 0 \\ 1 \end{pmatrix}_e
\end{align*}

Note that in either basis, it's true that \sigma_x\vert {+1}\rangle = \vert {+1}\rangle, \sigma_x\vert {-1}\rangle = (-1)\vert {-1}\rangle, \sigma_x\vert 1\rangle = \vert 2\rangle, and \sigma_x \vert 2\rangle = \vert 1\rangle. Relationships between vectors will be the same, independent of the basis.
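The passage between these two tables is an ordinary change of basis, which can be sketched numerically with numpy. Here the columns of the (hypothetical, illustration-only) matrix P are the eigenvectors |+1> and |-1> written in the original basis:

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)

# sigma_x in the original {|1>, |2>} basis
sigma_o = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

# columns of P are the eigenvectors |+1>, |-1> in the original basis
P = np.array([[s,  s],
              [s, -s]])

# change of basis: the operator is diagonal in its own eigenbasis
sigma_e = np.linalg.inv(P) @ sigma_o @ P  # ≈ diag(1, -1) up to rounding

# the old basis kets, re-expressed in the eigenbasis
ket1_e = np.linalg.inv(P) @ np.array([1.0, 0.0])  # (1/sqrt2,  1/sqrt2)
ket2_e = np.linalg.inv(P) @ np.array([0.0, 1.0])  # (1/sqrt2, -1/sqrt2)
```

This reproduces both tables: the same operator and kets, two different coordinate representations.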
 
vela said:
The concept you need to understand is that the matrices are only representations of the operators and vectors. It's like writing down numbers. The same number can have different representations depending on which base you're working in. For example, the number ten written in base 5 is 20, whereas in base 2 it's 1010. Both 20_5 and 1010_2 represent the same number, but the actual digits depend on what base you're using. You can separate the abstract idea of "ten" from its representation in a specific number system. You have a similar situation here with operators and vectors. They can be represented by arrays of numbers, but what those numbers actually are depends on which basis you're working in.

In this problem, you were given the matrix that represents an operator relative to a basis. When you used that matrix to find the eigenvectors of the operator, the vectors were automatically written with respect to the same basis.

Right, thanks, that was a nice way of putting it, I'm clear on this now.

vela said:
With respect to the original \vert 1 \rangle and \vert 2 \rangle basis, you have

\begin{align*}
\sigma_x &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}_o \\
\vert 1 \rangle &= \begin{pmatrix} 1 \\ 0 \end{pmatrix}_o \\
\vert 2 \rangle &= \begin{pmatrix} 0 \\ 1 \end{pmatrix}_o \\
\vert {+1} \rangle &= \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}_o \\
\vert {-1} \rangle &= \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}_o
\end{align*}

I see that here, the
\begin{align*}
\vert {+1} \rangle &= \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}_o \\
\vert {-1} \rangle &= \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}_o
\end{align*}
are the same as what I calculated in my attempt; I just wrote them with the 1/\sqrt{2} factored out. However, I'm not sure how you determined what
\begin{align*}
\vert 1 \rangle &= \begin{pmatrix} 1 \\ 0 \end{pmatrix}_o \\
\vert 2 \rangle &= \begin{pmatrix} 0 \\ 1 \end{pmatrix}_o
\end{align*}
are. I mean, they are the same as the columns of that matrix, but is this just a coincidence, given that none of the vectors are the same as the matrix columns when you did everything for the second matrix? What's going on here?
 
Actually, I just did the whole eigenvalue-equation procedure again using the new representation of the matrix and the corresponding eigenvectors, and found the same eigenvalues as the first time round (+1 and -1), and the eigenvectors came out the same as the |1> and |2> that you stated earlier.
So, then, I can say that the eigenvalues of an operator never change regardless of the basis its matrix representation is constructed in, and this would make sense (in a QM context), given that eigenvalues correspond to physical quantities, which have definite values in the real world that should not change depending on how I try to calculate them?
 
jeebs said:
However, I'm not sure how you determined what
\begin{align*}
\vert 1 \rangle &= \begin{pmatrix} 1 \\ 0 \end{pmatrix}_o \\
\vert 2 \rangle &= \begin{pmatrix} 0 \\ 1 \end{pmatrix}_o
\end{align*}
are. I mean, they are the same as the columns of that matrix, but is this just a coincidence, given that none of the vectors are the same as the matrix columns when you did everything for the second matrix? What's going on here?
When you write an n-tuple of coordinates

\vert \psi \rangle = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix}

it's just another way of writing

\vert \psi \rangle = c_1 \vert \phi_1 \rangle + c_2 \vert \phi_2 \rangle + \cdots + c_n \vert \phi_n \rangle

where \{\vert \phi_i \rangle\} is the basis you're working in.
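This shorthand is easy to make concrete in numpy. In the sketch below the standard unit vectors stand in for the basis kets |\phi_1>, |\phi_2> (an assumption for illustration only; any orthonormal pair would do):

```python
import numpy as np

# concrete stand-ins for the basis kets
phi1 = np.array([1.0, 0.0])
phi2 = np.array([0.0, 1.0])

# the column of coordinates (c1, c2) is shorthand for c1|phi_1> + c2|phi_2>
c1, c2 = 1.0 / np.sqrt(2.0), 1.0 / np.sqrt(2.0)
psi = c1 * phi1 + c2 * phi2
print(psi)  # [0.70710678 0.70710678]
```

The point is that the n-tuple by itself carries no meaning until you know which basis its entries multiply.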

jeebs said:
Actually, I just did the whole eigenvalue-equation procedure again using the new representation of the matrix and the corresponding eigenvectors, and found the same eigenvalues as the first time round (+1 and -1), and the eigenvectors came out the same as the |1> and |2> that you stated earlier.
So, then, I can say that the eigenvalues of an operator never change regardless of the basis its matrix representation is constructed in, and this would make sense (in a QM context), given that eigenvalues correspond to physical quantities, which have definite values in the real world that should not change depending on how I try to calculate them?
Yes, the eigenvalues are characteristics of the operators, independent of the basis you choose.
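This invariance is easy to check numerically. A sketch, with an arbitrary rotation (angle chosen purely for illustration) standing in for "some other orthonormal basis":

```python
import numpy as np

sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])

# re-express the operator in a rotated orthonormal basis
theta = 0.3  # arbitrary angle, for illustration
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
sigma_rot = U.T @ sigma_x @ U

# the matrix entries change, but the eigenvalues do not
print(np.linalg.eigvalsh(sigma_x))    # [-1.  1.]
print(np.linalg.eigvalsh(sigma_rot))  # [-1.  1.] to floating-point precision
```

The rotated matrix has completely different entries, yet its spectrum is still {+1, -1}, as the basis-independence argument predicts.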
 