Solving QM-like Problem (Shankar, Coupled Mass)

In summary, the conversation discusses solving a coupled-mass problem using techniques from QM, following the mathematical introduction of Shankar. The equation in question is obtained by plugging the normal-mode solutions into an expansion over an orthogonal basis. The conversation then turns to the use of eigenvectors and eigenvalues in solving the problem, with the suggestion to set up the eigenvector matrix equation directly and solve for its elements.
  • #1
This is solving a coupled mass problem using the techniques used in QM, it's in the mathematical introduction of Shankar.

This equation was obtained by using the solution ##x_I(t) = x_I(0)\cos(\omega _I t)## (and likewise for ##x_{II}##) and plugging it into ##\left| x(t) \right \rangle = \left| I \right \rangle x_I (t)+\left| II \right \rangle x_{II} (t)##

$$\left| x(t) \right \rangle = \left| I \right \rangle x_I (0)\cos(\omega _I t) + \left| II \right \rangle x_{II} (0) \cos(\omega _{II} t)$$

where the kets ##\left| I \right \rangle## and ##\left| II \right \rangle## form an orthonormal basis, and this turns into:

$$\left| x(t) \right \rangle = \left| I \right \rangle \langle I \left| x(0) \right \rangle \cos(\omega _I t) + \left| II \right \rangle \langle II \left| x(0) \right \rangle \cos(\omega _{II} t) $$

Where did these inner products come from?

Edit: ##\langle I \left| x(0) \right \rangle## is just the projection of ##x(0)## onto the basis ket ##\left|I \right \rangle##, right? So this is just rewriting the equation in terms of ##x(0)##?
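
To make that step explicit (just filling in the algebra): acting with ##\langle I |## on ##\left| x(0) \right \rangle = \left| I \right \rangle x_I(0) + \left| II \right \rangle x_{II}(0)## and using ##\langle I | I \rangle = 1##, ##\langle I | II \rangle = 0## gives

$$\langle I \left| x(0) \right \rangle = x_I(0), \qquad \langle II \left| x(0) \right \rangle = x_{II}(0),$$

so the coefficients ##x_I(0)## and ##x_{II}(0)## can be replaced by these inner products.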

Since I think I've figured out my original question, I'd like to pose a new one. My linear algebra isn't very strong, and I'm having problems with the following.

##\left| \ddot{x}(t) \right \rangle = \Omega \left| x(t) \right \rangle##, $$\Omega = \begin{bmatrix} -\frac{2k}{m} & \frac{k}{m} \\ \frac{k}{m} & -\frac{2k}{m} \end{bmatrix} $$

We want to use the basis that diagonalizes ##\Omega##, so we need to find its eigenvectors.

##\Omega \left| I \right \rangle = - \omega ^2 \left| I \right \rangle ##

How does one go about finding the eigenvalues and eigenvectors? The general formula is ##\det(\Omega - \omega I)=0##.
 
  • #2
Yeah, basically what Shankar does is represent the state of the system by |x(t)> and then decompose it onto the basis {|I>, |II>}, where he defines x1(t) := <I|x(t)> and x2(t) := <II|x(t)>. This is basically just the two-dimensional version of inserting the identity operator I = Σi|i><i|, where the |i>'s are the orthonormal basis kets. (Shankar equation 1.6.7, probably one of the most important you'll ever learn.) In that example you only have a two-dimensional phase space, so {|i>} = {|I>, |II>}.
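
If it helps to see the identity trick concretely, here is a minimal numerical sketch (not from Shankar; the two kets below happen to be the normal modes of this particular problem, but any orthonormal pair in R^2 would do):

[code]
import numpy as np

# Minimal sketch of "inserting the identity" I = sum_i |i><i|.
# The two orthonormal kets below are the normal modes of this problem,
# but any orthonormal pair in R^2 would work just as well.
ket_I  = np.array([1.0,  1.0]) / np.sqrt(2)
ket_II = np.array([1.0, -1.0]) / np.sqrt(2)

# Sum of outer products |I><I| + |II><II| -- should be the 2x2 identity
identity = np.outer(ket_I, ket_I) + np.outer(ket_II, ket_II)
print(np.allclose(identity, np.eye(2)))   # True

# Acting on an arbitrary |x> just re-expresses it as
# |I><I|x> + |II><II|x>, i.e. components along each basis ket
x = np.array([0.3, -0.7])
print(identity @ x)                       # same vector as x
[/code]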
 
  • #3
Jolb said:
Yeah, basically what Shankar does is represent the state of the system by |x(t)> and then decompose it onto the basis {|I>, |II>}, where he defines x1(t) := <I|x(t)> and x2(t) := <II|x(t)>. This is basically just the two-dimensional version of inserting the identity operator I = Σi|i><i|, where the |i>'s are the orthonormal basis kets. (Shankar equation 1.6.7, probably one of the most important you'll ever learn.) In that example you only have a two-dimensional phase space, so {|i>} = {|I>, |II>}.

Yes, ##\left| I \right \rangle \langle I \left| v \right \rangle## is the projection of ##v## onto ##\left| I \right \rangle##: the scalar ##\langle I \left| v \right \rangle## times the basis ket ##\left| I \right \rangle##.
 
  • #4
Astrum said:
This is solving a coupled mass problem using the techniques used in QM, it's in the mathematical introduction of Shankar.

This equation was obtained by using the solution ##x_I(t) = x_I(0)\cos(\omega _I t)## (and likewise for ##x_{II}##) and plugging it into ##\left| x(t) \right \rangle = \left| I \right \rangle x_I (t)+\left| II \right \rangle x_{II} (t)##

$$\left| x(t) \right \rangle = \left| I \right \rangle x_I (0)\cos(\omega _I t) + \left| II \right \rangle x_{II} (0) \cos(\omega _{II} t)$$

where the kets ##\left| I \right \rangle## and ##\left| II \right \rangle## form an orthonormal basis, and this turns into:

$$\left| x(t) \right \rangle = \left| I \right \rangle \langle I \left| x(0) \right \rangle \cos(\omega _I t) + \left| II \right \rangle \langle II \left| x(0) \right \rangle \cos(\omega _{II} t) $$

Where did these inner products come from?

Edit: ##\langle I \left| x(0) \right \rangle## is just the projection of ##x(0)## onto the basis ket ##\left|I \right \rangle##, right? So this is just rewriting the equation in terms of ##x(0)##?

Since I think I've figured out my original question, I'd like to pose a new one. My linear algebra isn't very strong, and I'm having problems with the following.

##\left| \ddot{x}(t) \right \rangle = \Omega \left| x(t) \right \rangle##, $$\Omega = \begin{bmatrix} -\frac{2k}{m} & \frac{k}{m} \\ \frac{k}{m} & -\frac{2k}{m} \end{bmatrix} $$

We want to use the basis that diagonalizes ##\Omega##, so we need to find its eigenvectors.

##\Omega \left| I \right \rangle = - \omega ^2 \left| I \right \rangle ##

How does one go about finding the eigenvalues and eigenvectors? The general formula is ##\det(\Omega - \omega I)##.
Well the general formula is ##\det(\Omega - \omega I)=0##
The more straightforward way to attack this problem is to simply write down the definition of an eigenvector and then solve for its elements. [Once you have solved for the elements, plug each back in to find the corresponding eigenvalues.]

So for your 2x2 matrix Ω, you should set up the matrix equation
[tex]\Omega\begin{pmatrix}a \\ b\end{pmatrix}=\omega\begin{pmatrix}a \\ b\end{pmatrix}[/tex] for a constant [itex]\omega[/itex]. This is the definition of an eigenvector of [itex]\Omega[/itex]. [[itex]\omega[/itex] is called the "eigenvalue", and in general there is one eigenvalue for each eigenvector.] If you plug in your matrix for [itex]\Omega[/itex] and do the matrix multiplication, you'll get two equations for a and b. Solve this system of equations and you should be able to find the eigenvectors. Solving this system of equations is equivalent to solving that determinant equation you said earlier, but if you don't understand determinants very well, it is much more intuitive to do it in this straightforward manner.
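
As a quick numerical sanity check (not part of the book; k = m = 1 is just chosen for illustration), numpy's symmetric eigensolver gives the same eigenvalues and eigenvectors the procedure above produces:

[code]
import numpy as np

# Cross-check of the eigenvector procedure with k = m = 1,
# so Omega = [[-2, 1], [1, -2]].
k = m = 1.0
Omega = np.array([[-2*k/m,   k/m],
                  [  k/m,  -2*k/m]])

# Omega is real symmetric, so eigh is the appropriate solver
eigvals, eigvecs = np.linalg.eigh(Omega)
print(eigvals)   # [-3. -1.]  -- these play the role of -omega^2
print(eigvecs)   # columns are the normalized eigenvectors

# Verify the defining relation Omega v = (eigenvalue) v for each column
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(Omega @ v, lam * v))   # True, True
[/code]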
 
  • #5
Jolb said:
Well the general formula is ##\det(\Omega - \omega I)=0##
The more straightforward way to attack this problem is to simply write down the definition of an eigenvector and then solve for its elements. [Once you have solved for the elements, plug each back in to find the corresponding eigenvalues.]

So for your 2x2 matrix Ω, you should set up the matrix equation
[tex]\Omega\begin{pmatrix}a \\ b\end{pmatrix}=\omega\begin{pmatrix}a \\ b\end{pmatrix}[/tex] for a constant [itex]\omega[/itex]. This is the definition of an eigenvector of [itex]\Omega[/itex]. [[itex]\omega[/itex] is called the "eigenvalue", and in general there is one eigenvalue for each eigenvector.] If you plug in your matrix for [itex]\Omega[/itex] and do the matrix multiplication, you'll get two equations for a and b. Solve this system of equations and you should be able to find the eigenvectors. Solving this system of equations is equivalent to solving that determinant equation you said earlier, but if you don't understand determinants very well, it is much more intuitive to do it in this straightforward manner.

Doing the matrix multiplication -

$$\begin{bmatrix} (-2k/m)a + (k/m)b \\ (k/m)a-(2k/m)b \end{bmatrix} = -\omega ^2 \begin{bmatrix} a \\ b \end{bmatrix}$$

This gives us two equations with 3 unknowns, and there is only one eigenvalue here. This doesn't make sense to me.
 
  • #6
Alright, well I'll try and show you how it's done. To make my life easier, I'll do the equivalent problem of finding the eigenvalues and eigenvectors of the matrix:[tex]\begin{bmatrix}-2 && 1 \\ 1 && -2\end{bmatrix}[/tex]

I'll write the eigenvalue equation as[tex]
\begin{bmatrix}-2 && 1 \\ 1 && -2\end{bmatrix}\begin{pmatrix}a \\ b\end{pmatrix} = c \begin{pmatrix} a \\ b\end{pmatrix}[/tex].
The bottom line of the matrix equation gives [itex]a-2b=cb[/itex]. Thus [itex]a=(c+2)b[/itex]. So we have shown that the vector [itex]\begin{pmatrix}(c+2)b \\ b\end{pmatrix}[/itex] should be an eigenvector. So we can plug this back into the eigenvalue equation:
[tex]c\begin{pmatrix}(c+2)b \\ b\end{pmatrix}=\begin{bmatrix}-2 && 1 \\ 1 && -2\end{bmatrix}\begin{pmatrix}(c+2)b \\ b\end{pmatrix} =\begin{pmatrix}-2cb-3b\\ cb \end{pmatrix}[/tex]

Reading across the top, we see the equation [tex]c(c+2)b=-2cb-3b,[/tex] or, dividing through by [itex]b[/itex], [tex]0=c^2+4c+3=(c+3)(c+1),[/tex] so the two eigenvalues are [itex]c=-3[/itex] and [itex]c=-1[/itex]. Now we plug these two values back into the eigenvalue equation to find the eigenvectors corresponding to each eigenvalue. For [itex]c=-3[/itex] we have:
[tex]
\begin{bmatrix}-2 && 1 \\ 1 && -2\end{bmatrix}\begin{pmatrix}a \\ b\end{pmatrix} = -3 \begin{pmatrix} a \\ b\end{pmatrix}[/tex] which implies [itex]b=-a[/itex]. This means any vector of the form [itex]\begin{pmatrix}a \\ -a \end{pmatrix}[/itex] is an eigenvector with eigenvalue -3. We want our states to be normalized though, so we pick [itex]a=\frac{1}{\sqrt{2}}[/itex].

See if you can find the other eigenvector (the one corresponding to [itex]c=-1[/itex]) for yourself, and then see if you can use your answers to solve the original problem with the k's and m's.
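
For anyone who wants to verify their hand work afterwards, here is a short symbolic cross-check with sympy (just a sketch, not part of the thread):

[code]
import sympy as sp

# Symbolic cross-check of the hand calculation above.
c = sp.symbols('c')
M = sp.Matrix([[-2, 1], [1, -2]])

# Characteristic polynomial det(M - c*I) should factor as (c + 3)(c + 1)
print(sp.factor((M - c * sp.eye(2)).det()))   # (c + 1)*(c + 3)

# For c = -3 the normalized eigenvector found above is (1, -1)/sqrt(2);
# M v + 3 v should come out to the zero vector
v = sp.Matrix([1, -1]) / sp.sqrt(2)
print(sp.simplify(M * v + 3 * v))             # Matrix([[0], [0]])
[/code]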
 
