Time Evolution and Hamiltonian Problem

Homework Help Overview

The problem involves a physical system represented in a three-dimensional state space, with a Hamiltonian matrix provided. The task is to find the time evolution of the system's state by determining eigenvalues and eigenvectors, expanding the initial state in terms of these eigenstates, and applying time evolution principles.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning

Approaches and Questions Raised

  • Participants discuss finding eigenvalues and eigenvectors of the Hamiltonian, with one participant expressing uncertainty about the correctness of their results. There is a focus on normalizing eigenvectors and constructing projection matrices from them. Questions arise regarding the properties of projection matrices and their relation to the identity matrix.

Discussion Status

The discussion is ongoing, with participants providing hints and guidance on verifying eigenvector properties and constructing projection matrices. Some participants are exploring the implications of orthonormality and completeness of the eigenvectors, while others seek clarification on the matrix representation of projections.

Contextual Notes

Participants are working under the constraints of homework guidelines, which may limit the type of assistance they can provide. There is an emphasis on understanding the mathematical concepts involved rather than simply obtaining solutions.

phil ess

Homework Statement



Consider a physical system with a three-dimensional state space. In this space the Hamiltonian is represented by the matrix:

[tex]H = \hbar\omega \left( \begin{array}{ccc} 0 & 0 & 2 \\ 0 & 1 & 0 \\ 2 & 0 & 0 \end{array} \right)[/tex]

The state of the system at t = 0 in coordinate representation is:

[tex]s(t=0) = \left( \begin{array}{c} \sqrt{2} \\ 1 \\ 1 \end{array} \right)[/tex]

Find the state s(t ≠ 0) by doing the following steps:

i) Find the eigenvectors and eigenvalues of the Hamiltonian.
ii) Expand the initial state in eigenstates of the Hamiltonian.
iii) Use your knowledge of the time evolution of the eigenstates to find the state of the system s(t).

The Attempt at a Solution



i) I can get the eigenvalues:

[tex]\chi(\lambda) = \left| \begin{array}{ccc} -\lambda & 0 & 2 \\ 0 & 1-\lambda & 0 \\ 2 & 0 & -\lambda \end{array} \right| = 0[/tex]

which gives:

[tex]\lambda = 2, -2, 1[/tex] (in units of [tex]\hbar\omega[/tex])

and then the eigenvectors are:

[tex]\left( \begin{array}{c} 1 \\ 0 \\ 1 \end{array} \right), \quad \left( \begin{array}{c} 1 \\ 0 \\ -1 \end{array} \right), \quad \left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right)[/tex]

Is this right?

ii) Now I am not sure exactly how to expand the initial state in eigenstates of the Hamiltonian.

Any hints are appreciated!
 
phil ess said:
Is this right?
Basically, yes. But you really should know how to check for yourself: just apply H to each of these vectors. If the result is proportional to the original vector, then it is an eigenvector, and the proportionality factor is the eigenvalue. (There is one slight issue: eigenvectors are typically defined to be unit-normalized.)
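If a numerical sanity check helps, here is a minimal NumPy sketch of that procedure (my own illustration, working in units of ħω so that H is just the bare matrix):

```python
import numpy as np

# Hamiltonian in units of hbar*omega
H = np.array([[0, 0, 2],
              [0, 1, 0],
              [2, 0, 0]])

# candidate eigenvectors from the attempt above
candidates = [np.array([1, 0, 1]),
              np.array([1, 0, -1]),
              np.array([0, 1, 0])]

for v in candidates:
    Hv = H @ v
    k = np.argmax(np.abs(v))   # pick a component where v is nonzero
    lam = Hv[k] / v[k]         # proportionality factor = eigenvalue
    print(lam, np.allclose(Hv, lam * v))
# prints 2.0 True, -2.0 True, 1.0 True
```

Each vector comes back proportional to itself, with factors 2, −2, 1, confirming the eigenvalues above.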

phil ess said:
ii) Now I am not sure how exactly to expand the initial states in eigenstates of the hamiltonian
Use the fact that the eigenvectors form a complete 3-D basis. (Well, first you should verify that this is true.) So, you can construct an identity matrix out of projection matrices, which you can in turn construct from the eigenvectors. (This is where the unit-normalization comes in handy.) Then, Iv = v, where I is the identity matrix and v is any vector. Are you familiar with bra-ket notation?
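The orthonormality part of that verification is quick to do numerically; a minimal NumPy sketch (the eigenvector ordering here is my own choice):

```python
import numpy as np

# normalized eigenvectors (for eigenvalues +2, -2, +1 in units of hbar*omega)
s = 1 / np.sqrt(2)
basis = [np.array([s, 0.0, s]),
         np.array([s, 0.0, -s]),
         np.array([0.0, 1.0, 0.0])]

# orthonormality: the Gram matrix of dot products v_i . v_j
# should be the identity (Kronecker delta)
gram = np.array([[a @ b for b in basis] for a in basis])
print(np.allclose(gram, np.eye(3)))  # True
```

Three orthonormal vectors in a 3-D space automatically form a complete basis.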
 
Right ok so first I normalize the eigenvectors:

[tex]\left( \begin{array}{c} 1/\sqrt{2} \\ 0 \\ 1/\sqrt{2} \end{array} \right), \quad \left( \begin{array}{c} 1/\sqrt{2} \\ 0 \\ -1/\sqrt{2} \end{array} \right), \quad \left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right)[/tex]

And now I want to construct projection matrices from these? As far as I know, a projection matrix has to be Hermitian and idempotent, right? Or does it only have to be Hermitian for an orthogonal projection matrix?

Well either way this is what I put together:

[tex]A = \left( \begin{array}{ccc} 1/\sqrt{2} & 0 & 1/\sqrt{2} \\ 0 & 1 & 0 \\ 1/\sqrt{2} & 0 & -1/\sqrt{2} \end{array} \right)[/tex]

Which is Hermitian, but it certainly isn't idempotent, since A² = I.

Obviously I am missing something important! Thanks for the help so far!
 
Use the fact that the eigenvectors are orthonormal (and again, you should verify this if you have not done so already). I'll call them v₊₁, v₊₂, and v₋₂. Then, for example:

v₊₁ (v₊₁ · v₊₁) = v₊₁ (1) = v₊₁

v₊₁ (v₊₁ · v₊₂) = v₊₁ (0) = 0

v₊₁ (v₊₁ · v₋₂) = v₊₁ (0) = 0

So, v₊₁ (v₊₁ · v) projects v onto the v₊₁ "direction". You can write this as a 3×3 matrix acting on v. That matrix is the projection matrix for the eigenvector v₊₁. You can find two more matrices for the other two eigenvectors in the same way. (The reason that I asked you about bra-ket notation is that it has a very simple notational implementation: e.g. |+1><+1| is the projector for |+1>.) So, you should find 3 projection matrices. The sum of these three projection matrices will actually be the 3×3 identity matrix! This is a very important concept.
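In matrix form that recipe is just the outer product of each normalized eigenvector with itself; a short NumPy sketch of the construction (variable names are mine):

```python
import numpy as np

s = 1 / np.sqrt(2)
v_p2 = np.array([s, 0.0, s])      # eigenvalue +2
v_m2 = np.array([s, 0.0, -s])     # eigenvalue -2
v_p1 = np.array([0.0, 1.0, 0.0])  # eigenvalue +1

# projector |v><v| is the outer product v v^T (real vectors, no conjugation)
projectors = [np.outer(v, v) for v in (v_p2, v_m2, v_p1)]

for P in projectors:
    assert np.allclose(P @ P, P)  # idempotent
    assert np.allclose(P, P.T)    # Hermitian (real symmetric here)

# completeness: the three projectors sum to the 3x3 identity
assert np.allclose(sum(projectors), np.eye(3))
```

Note each individual outer product is idempotent and Hermitian, unlike the matrix A above, which stacked the eigenvectors as rows of a single matrix.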

BTW, please let me know if the notation is difficult to read; if so, I will switch to LaTeX.
 
I understand what you're saying when you say "v₊₁ (v₊₁ · v) projects v onto the v₊₁ direction", but I don't know how to write this as a 3×3 matrix.

The way I've been taught to make a projection matrix is using:

Proj_V x = A(AᵀA)⁻¹Aᵀ x

where A is a matrix constructed from basis vectors of V, and the eigenvectors I have are basis vectors for ℝ³.

But when I do this I just get the identity matrix back, which obviously isn't right. Could you explain how you construct the projection matrix your way? Thanks for all the help so far; I am still struggling with this.

EDIT: "The sum of these three projection matrices will actually be the 3x3 identity matrix! This is a very important concept"

Oh is this why I got the identity matrix when I used the formula above?
 
phil ess said:
I understand what you're saying when you say "v₊₁ (v₊₁ · v) projects v onto the v₊₁ direction", but I don't know how to write this as a 3×3 matrix.
Write those relations that I gave to you in component form. Hint: the dot product can be written in component form as:

v · w = ∑ⱼ vⱼwⱼ

and the action of a matrix on a vector can be written in component form as:

(Av)ᵢ = ∑ⱼ Aᵢⱼvⱼ

So, the row vector is like a row of a matrix ...

phil ess said:
where A is a matrix constructed from basis vectors of V, and the eigenvectors I have are basis vectors for ℝ³.
Possibly you only need to recognize that V = ℝ³ in order to resolve your confusion. (Just in case your browser doesn't support that one, that is V = R^3.) I'm not sure what exactly you are projecting onto there (V is some subspace?), or how you are constructing A "from the basis vectors of V". But, anyway, the method that I suggest is a much simpler concept, once you figure it out.
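For what it's worth, that formula can be checked numerically: with all three eigenvectors as the columns of A, it projects onto all of ℝ³ and so returns the identity, while keeping a single column recovers a single-eigenvector projector. A NumPy sketch (column ordering is my arbitrary choice):

```python
import numpy as np

s = 1 / np.sqrt(2)
# all three normalized eigenvectors as columns of A
A = np.array([[s,    s,   0.0],
              [0.0,  0.0, 1.0],
              [s,   -s,   0.0]])

# columns span all of R^3, so A (A^T A)^-1 A^T is the identity
P_all = A @ np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(P_all, np.eye(3)))  # True

# keep only one column to project onto that single eigenvector
a = A[:, [0]]  # 3x1 matrix
P_one = a @ np.linalg.inv(a.T @ a) @ a.T
print(np.allclose(P_one, np.outer(A[:, 0], A[:, 0])))  # True
```

So the formula and the outer-product construction agree; the identity appeared only because A was built from the whole basis.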

phil ess said:
EDIT: "The sum of these three projection matrices will actually be the 3x3 identity matrix! This is a very important concept"

Oh is this why I got the identity matrix when I used the formula above?
Probably. It isn't clear to me what you've done so far.
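For completeness: once the expansion coefficients cₙ = vₙ · s(0) are known, step (iii) of the problem just attaches a phase e^(−iEₙt/ħ) to each term. A sketch in NumPy (my own illustration), with energies in units of ħω so the phase for each term is e^(−iλₙτ) with τ = ωt dimensionless:

```python
import numpy as np

s = 1 / np.sqrt(2)
eigvecs = [np.array([s, 0.0, s]),      # lambda = +2
           np.array([s, 0.0, -s]),     # lambda = -2
           np.array([0.0, 1.0, 0.0])]  # lambda = +1
lambdas = [2, -2, 1]                   # energies in units of hbar*omega

s0 = np.array([np.sqrt(2), 1.0, 1.0])  # initial state, as given
coeffs = [v @ s0 for v in eigvecs]     # expansion coefficients c_n

def state(tau):
    """s(t) = sum_n c_n exp(-i lambda_n tau) v_n, with tau = omega*t."""
    return sum(c * np.exp(-1j * lam * tau) * v
               for c, lam, v in zip(coeffs, lambdas, eigvecs))

# at t = 0 the expansion must reproduce the initial state
assert np.allclose(state(0.0), s0)
```

As a consistency check, the norm of state(τ) stays constant in time, as it must for unitary evolution.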
 
