Solving QM-like Problem (Shankar, Coupled Mass)

  • Context: Graduate
  • Thread starter: Astrum
  • Tags: Coupled Mass, Shankar
Discussion Overview

The discussion revolves around solving a coupled mass problem using quantum mechanics techniques as presented in Shankar's mathematical introduction. Participants explore the mathematical formulation of the problem, including the representation of states and the determination of eigenvalues and eigenvectors for a specific matrix.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant describes the process of representing the state of the system using |x(t)> and decomposing it onto an orthogonal basis {|I>, |II>}, referencing Shankar's work.
  • Another participant explains the projection of |x(0)> onto the basis |I> and |II> and discusses the implications of this projection in the context of the problem.
  • There is a question about how to find the eigenvalues and eigenvectors of the matrix Ω, with one participant stating the general formula for eigenvalues as det(Ω - ωI) = 0.
  • Another participant suggests a more straightforward approach to finding eigenvectors by setting up the matrix equation and solving for its elements.
  • One participant expresses confusion regarding the number of equations and unknowns when attempting to solve for eigenvalues and eigenvectors, noting that there seems to be only one eigenvalue.
  • A later reply provides a worked example using a simplified matrix to illustrate the process of finding eigenvalues and eigenvectors, arriving at two eigenvalues and corresponding eigenvectors.

Areas of Agreement / Disagreement

Participants generally agree on the methods of representing the state and the approach to finding eigenvalues and eigenvectors, but there is some confusion and disagreement regarding the implications of the equations and the number of eigenvalues present.

Contextual Notes

Some participants express uncertainty about linear algebra concepts, particularly regarding determinants and eigenvalue problems. There are unresolved questions about the relationship between the number of equations and the eigenvalues in the context of the specific matrix being discussed.

Astrum
This is a coupled mass problem solved using the techniques of QM; it's from the mathematical introduction of Shankar.

This equation was obtained by using the solution ##x_i(t) = x_i(0)\cos(\omega _i t)## and plugging it into ##\left| x(t) \right \rangle = \left| I \right \rangle x_I (t)+\left| II \right \rangle x_{II} (t)##

$$\left| x(t) \right \rangle = \left| I \right \rangle x_I (0)\cos(\omega _I t) + \left| II \right \rangle x_{II} (0) \cos(\omega _{II} t)$$

Where the kets of I and II are an orthogonal basis, and this turns into:

$$\left| x(t) \right \rangle = \left| I \right \rangle \langle I \mid x(0) \rangle \cos(\omega _I t) + \left| II \right \rangle \langle II \mid x(0) \rangle \cos(\omega _{II} t) $$

Where did these inner products come from?

Edit: ##\langle I \mid x(0) \rangle## is just the projection of ##x(0)## onto the basis ket ##\left|I \right \rangle##, right? So this is just reworking the equation in terms of ##x(0)##?
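The projection step can be checked numerically. Here is a minimal sketch (my own illustration, not from Shankar): the particular basis vectors and initial state below are arbitrary choices, picked only to show that expanding on an orthonormal basis and summing the components reproduces the original vector.

```python
import numpy as np

# An orthonormal basis for R^2, analogous to |I> and |II>
# (these particular vectors are just an illustrative choice).
I_ket = np.array([1.0, -1.0]) / np.sqrt(2)
II_ket = np.array([1.0, 1.0]) / np.sqrt(2)

# An arbitrary initial state x(0).
x0 = np.array([2.0, 3.0])

# Projection coefficients <I|x(0)> and <II|x(0)>.
cI = I_ket @ x0
cII = II_ket @ x0

# Reconstruct x(0) from its components on the basis:
# |x(0)> = |I><I|x(0)> + |II><II|x(0)>
x0_reconstructed = cI * I_ket + cII * II_ket
print(np.allclose(x0, x0_reconstructed))  # True
```

This is exactly the two-dimensional identity insertion: summing |i⟩⟨i|x(0)⟩ over the basis gives back |x(0)⟩.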

Since I think I've figured out my original question, I'd like to pose a new one. My linear algebra isn't very strong, and I'm having problems with the following.

##\left| \ddot{x}(t) \right \rangle = \Omega \left| x(t) \right \rangle##, $$\Omega = \begin{bmatrix} -\frac{2k}{m} & \frac{k}{m} \\ \frac{k}{m} & -\frac{2k}{m} \end{bmatrix} $$

We want to use the basis that diagonalizes ##\Omega##, so we need to find its eigenvectors.

##\Omega \left| I \right \rangle = - \omega ^2 \left| I \right \rangle ##

How does one go about finding the eigenvalues and eigenvectors? The general formula is ##\det(\Omega - \omega I)=0##.
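As a numerical cross-check of the determinant route, here is a sketch (my addition, not part of the thread). It sets k/m = 1 for concreteness, so the matrix entries become -2 and 1:

```python
import numpy as np

# Omega with k/m = 1 (an illustrative choice of units).
k_over_m = 1.0
Omega = np.array([[-2.0 * k_over_m, k_over_m],
                  [k_over_m, -2.0 * k_over_m]])

# numpy solves det(Omega - lam*I) = 0 internally.
eigvals, eigvecs = np.linalg.eig(Omega)

# Sort for a deterministic order.
order = np.argsort(eigvals)
eigvals = eigvals[order]
eigvecs = eigvecs[:, order]

print(eigvals)  # approximately [-3., -1.]
```

Since the eigenvalue equation here is ##\Omega \left| I \right \rangle = -\omega^2 \left| I \right \rangle##, each eigenvalue equals ##-\omega^2##, so the normal-mode frequencies satisfy ##\omega^2 = 3k/m## and ##\omega^2 = k/m##.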
 
Last edited:
Yeah, basically what Shankar does is represent the state of the system by |x(t)> and then decompose it onto the basis {|I>, |II>}, where he defines x1(t):=<I|x(t)> and x2(t):=<II|x(t)>. This is basically just the two-dimensional version of inserting the identity operator I = Ʃi|i><i|, where the |i>'s are the orthonormal basis kets. (Shankar equation 1.6.7, probably one of the most important you'll ever learn.) In that example you only have a two-dimensional space, so {|i>}={|I>, |II>}.
 
Last edited:
Jolb said:
Yeah, basically what Shankar does is represent the state of the system by |x(t)> and then decompose it onto the basis {|I>, |II>}, where he defines x1(t):=<I|x(t)> and x2(t):=<II|x(t)>. This is basically just the two-dimensional version of inserting the identity operator I = Ʃi|i><i|, where the |i>'s are the orthonormal basis kets. (Shankar equation 1.6.7, probably one of the most important you'll ever learn.) In that example you only have a two-dimensional space, so {|i>}={|I>, |II>}.

Yes, ##\left| I \right \rangle \langle I \mid v \rangle## is the projection of ##v## onto ##\left| I \right \rangle##: the scalar ##\langle I \mid v \rangle## times the basis ket ##\left| I \right \rangle##.
 
Astrum said:
This is a coupled mass problem solved using the techniques of QM; it's from the mathematical introduction of Shankar.

This equation was obtained by using the solution ##x_i(t) = x_i(0)\cos(\omega _i t)## and plugging it into ##\left| x(t) \right \rangle = \left| I \right \rangle x_I (t)+\left| II \right \rangle x_{II} (t)##

$$\left| x(t) \right \rangle = \left| I \right \rangle x_I (0)\cos(\omega _I t) + \left| II \right \rangle x_{II} (0) \cos(\omega _{II} t)$$

Where the kets of I and II are an orthogonal basis, and this turns into:

$$\left| x(t) \right \rangle = \left| I \right \rangle \langle I \mid x(0) \rangle \cos(\omega _I t) + \left| II \right \rangle \langle II \mid x(0) \rangle \cos(\omega _{II} t) $$

Where did these inner products come from?

Edit: ##\langle I \mid x(0) \rangle## is just the projection of ##x(0)## onto the basis ket ##\left|I \right \rangle##, right? So this is just reworking the equation in terms of ##x(0)##?

Since I think I've figured out my original question, I'd like to pose a new one. My linear algebra isn't very strong, and I'm having problems with the following.

##\left| \ddot{x}(t) \right \rangle = \Omega \left| x(t) \right \rangle##, $$\Omega = \begin{bmatrix} -\frac{2k}{m} & \frac{k}{m} \\ \frac{k}{m} & -\frac{2k}{m} \end{bmatrix} $$

We want to use the basis that diagonalizes ##\Omega##, so we need to find its eigenvectors.

##\Omega \left| I \right \rangle = - \omega ^2 \left| I \right \rangle ##

How does one go about finding the eigenvalues and eigenvectors? The general formula is ##\det(\Omega - \omega I)##.
Well, the general formula is ##\det(\Omega - \omega I)=0##.
The more straightforward way to attack this problem is to simply write down the definition of an eigenvector and then solve for its elements. [Once you have solved for the elements, plug each back in to find the corresponding eigenvalues.]

So for your 2x2 matrix Ω, you should set up the matrix equation
[tex]\Omega\begin{pmatrix}a \\ b\end{pmatrix}=\omega\begin{pmatrix}a \\ b\end{pmatrix}[/tex] for a constant [itex]\omega[/itex]. This is the definition of an eigenvector of [itex]\Omega[/itex]. [[itex]\omega[/itex] is called the "eigenvalue", and in general there is one eigenvalue for each eigenvector.] If you plug in your matrix for [itex]\Omega[/itex] and do the matrix multiplication, you'll get two equations for a and b. Solve this system of equations and you should be able to find the eigenvectors. Solving this system of equations is equivalent to solving that determinant equation you said earlier, but if you don't understand determinants very well, it is much more intuitive to do it in this straightforward manner.
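To make the "straightforward manner" concrete, here is a plain-Python sketch (my own addition, again setting k/m = 1 so Ω becomes [[-2, 1], [1, -2]]): for a trial eigenvalue c, the two equations from the matrix multiplication are consistent, and hence admit a nonzero eigenvector, only for special values of c.

```python
# For the matrix [[-2, 1], [1, -2]] (Omega with k/m = 1),
# test whether a trial eigenvalue c admits a nonzero eigenvector.
def is_eigenvalue(c, tol=1e-12):
    # From the second row: a - 2b = c*b  =>  a = (c + 2) * b.
    # Pick b = 1 (any nonzero b gives the same conclusion).
    b = 1.0
    a = (c + 2.0) * b
    # First row: -2a + b must equal c*a for c to be a true eigenvalue.
    return abs((-2.0 * a + b) - c * a) < tol

print(is_eigenvalue(-1.0))  # True
print(is_eigenvalue(-3.0))  # True
print(is_eigenvalue(-2.0))  # False: no nonzero eigenvector for c = -2
```

This is exactly the substitution route: solve one equation for a in terms of b, then demand the other equation hold too.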
 
Last edited:
Jolb said:
Well, the general formula is ##\det(\Omega - \omega I)=0##.
The more straightforward way to attack this problem is to simply write down the definition of an eigenvector and then solve for its elements. [Once you have solved for the elements, plug each back in to find the corresponding eigenvalues.]

So for your 2x2 matrix Ω, you should set up the matrix equation
[tex]\Omega\begin{pmatrix}a \\ b\end{pmatrix}=\omega\begin{pmatrix}a \\ b\end{pmatrix}[/tex] for a constant [itex]\omega[/itex]. This is the definition of an eigenvector of [itex]\Omega[/itex]. [[itex]\omega[/itex] is called the "eigenvalue", and in general there is one eigenvalue for each eigenvector.] If you plug in your matrix for [itex]\Omega[/itex] and do the matrix multiplication, you'll get two equations for a and b. Solve this system of equations and you should be able to find the eigenvectors. Solving this system of equations is equivalent to solving that determinant equation you said earlier, but if you don't understand determinants very well, it is much more intuitive to do it in this straightforward manner.

Doing the matrix multiplication:

$$\begin{bmatrix} (-2k/m)a + (k/m)b \\ (k/m)a-(2k/m)b \end{bmatrix} = -\omega ^2 \begin{bmatrix} a \\ b \end{bmatrix}$$

This gives us two equations with three unknowns, and there seems to be only one eigenvalue here. This doesn't make sense to me.
 
Alright, well I'll try and show you how it's done. To make my life easier, I'll do the equivalent problem of finding the eigenvalues and eigenvectors of the matrix: [tex]\begin{bmatrix}-2 && 1 \\ 1 && -2\end{bmatrix}[/tex]

I'll write the eigenvalue equation as[tex] \begin{bmatrix}-2 && 1 \\ 1 && -2\end{bmatrix}\begin{pmatrix}a \\ b\end{pmatrix} = c \begin{pmatrix} a \\ b\end{pmatrix}[/tex].
The bottom line of the matrix equation gives [itex]a-2b=cb[/itex]. Thus [itex]a=(c+2)b[/itex]. So we have shown that the vector [itex]\begin{pmatrix}(c+2)b \\ b\end{pmatrix}[/itex] should be an eigenvector. So we can plug this back into the eigenvalue equation:
[tex]c\begin{pmatrix}(c+2)b \\ b\end{pmatrix}=\begin{bmatrix}-2 && 1 \\ 1 && -2\end{bmatrix}\begin{pmatrix}(c+2)b \\ b\end{pmatrix} =\begin{pmatrix}-2cb-3b\\ cb \end{pmatrix}[/tex]

Reading across the top, we see the equation [tex]c(c+2)b=-2cb-3b[/tex] or, dividing through by b (which must be nonzero, else the vector vanishes), [tex]0=c^2+4c+3=(c+3)(c+1)[/tex] so the two eigenvalues are [itex]c=-3[/itex] and [itex]c=-1[/itex]. Now we plug these two values back into the eigenvalue equation to find the eigenvectors corresponding to each eigenvalue. For [itex]c=-3[/itex] we have:
[tex] \begin{bmatrix}-2 && 1 \\ 1 && -2\end{bmatrix}\begin{pmatrix}a \\ b\end{pmatrix} = -3 \begin{pmatrix} a \\ b\end{pmatrix}[/tex] which implies [itex]b=-a[/itex]. This means any vector of the form [itex]\begin{pmatrix}a \\ -a \end{pmatrix}[/itex] is an eigenvector with eigenvalue -3. We want our states to be normalized though, so we pick [itex]a=\frac{1}{\sqrt{2}}[/itex].

See if you can find the other eigenvector (the one corresponding to [itex]c=-1[/itex]) for yourself, and then see if you can use your answers to solve the original problem with the k's and m's.
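The worked example can be verified numerically. A small sketch (my addition, not part of the original reply; it checks only the c = -3 eigenvector derived above, leaving the c = -1 case as the exercise intends):

```python
import numpy as np

M = np.array([[-2.0, 1.0],
              [1.0, -2.0]])

# Normalized eigenvector derived above for c = -3: (a, -a) with a = 1/sqrt(2).
v = np.array([1.0, -1.0]) / np.sqrt(2)

print(np.allclose(M @ v, -3.0 * v))        # True: M v = -3 v
print(np.isclose(np.linalg.norm(v), 1.0))  # True: normalized

# For the original Omega the entries carry a factor k/m, so this eigenvalue
# becomes -3k/m; with Omega|I> = -omega^2 |I>, that gives omega^2 = 3k/m.
```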
 
Last edited:
