
Compute e^(itH)

  1. Sep 7, 2014 #1
    1. The problem statement, all variables and given/known data
    Compute ##e^{itH}## (using Wolfram is an option), where H is:

    ##H = \begin{pmatrix}
    0 & -1 & 0 & 0 & 0& 0\\
    -1 & 0 & -1 & 0 & 0& 0\\
    0 & -1 & 0 & -1 & 0& 0\\
    0 & 0 & -1 & 0 & -1& 0\\
    0 & 0 & 0 & -1 & 0& -1\\
    0 & 0 & 0 & 0 & -1& 0\\
    \end{pmatrix}##

    2. Relevant equations



    3. The attempt at a solution

    I tried putting this into Wolfram, but I couldn't get it to work. So I then tried to get the eigenvector matrix and its transpose to do as suggested on this page. But when I did ##S^{-1}HS## I got another filthy matrix that I couldn't extract any information from.

    All of the following steps in this problem seem easy enough; I just can't figure this part out.

    Thanks for any help.
     
  3. Sep 8, 2014 #2

    Matterwave

    User Avatar
    Science Advisor
    Gold Member

    If you get the eigenvectors of H and do ##S^{-1}HS##, the resulting matrix should be diagonal, with the eigenvalues of H on the diagonal. So in fact all you have to do to get ##e^{itH}## after that is to exponentiate the diagonal matrix (take ##e^{it\lambda}## for each eigenvalue ##\lambda##), and then conjugate with S again to rotate it back to the original basis: ##e^{itH} = S\,e^{itD}\,S^{-1}##, where ##D = S^{-1}HS##.
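
    For concreteness, here is a minimal NumPy/SciPy sketch of that recipe (not from the original post; the time value t is arbitrary and expm is used only as a cross-check):

    [code]
    import numpy as np
    from scipy.linalg import expm

    # The 6x6 Hamiltonian from post #1: zeros on the diagonal, -1 on the off-diagonals.
    H = np.diag([-1.0] * 5, k=1) + np.diag([-1.0] * 5, k=-1)
    t = 1.0  # arbitrary time

    # H is real symmetric, so eigh returns real eigenvalues and orthonormal eigenvectors.
    w, S = np.linalg.eigh(H)   # columns of S are eigenvectors; S^{-1} H S = diag(w)

    # Exponentiate the diagonal entries, then rotate back: e^{itH} = S e^{it diag(w)} S^{-1}.
    exp_itH = S @ np.diag(np.exp(1j * t * w)) @ S.T   # S is orthogonal, so S^{-1} = S^T

    # Cross-check against SciPy's general-purpose matrix exponential.
    print(np.allclose(exp_itH, expm(1j * t * H)))     # expect True
    [/code]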
     
  4. Sep 8, 2014 #3

    jedishrfu

    Staff: Mentor

    Where did you get this problem? Is it math or physics related?

    Also your link appears to be broken.
     
  5. Sep 8, 2014 #4

    ShayanJ

    User Avatar
    Gold Member

    The link is missing a colon. Just remove the http// part!
    Anyway
    A good point you can use is that the trace of a matrix is independent of the basis you write it in. The trace of your matrix is zero, so it should be zero in any other basis too, including the one that makes the matrix diagonal. And because we know that the diagonal elements are the eigenvalues and their order doesn't matter, we can say that the diagonal form of H should be
    ##\begin{pmatrix}
    a & 0 & 0 & 0 & 0 & 0\\
    0 & -a & 0 & 0 & 0 & 0\\
    0 & 0 & b & 0 & 0 & 0\\
    0 & 0 & 0 & -b & 0 & 0\\
    0 & 0 & 0 & 0 & c & 0\\
    0 & 0 & 0 & 0 & 0 & -c\\
    \end{pmatrix}##
    So you only need to find three eigenvalues with distinct absolute values; then you can write down the diagonal form of H and simply exponentiate it.
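
    As a quick numerical sanity check of that structure (assuming NumPy; not part of the original post), the spectrum does come out as three ± pairs:

    [code]
    import numpy as np

    # Same 6x6 H as in post #1.
    H = np.diag([-1.0] * 5, k=1) + np.diag([-1.0] * 5, k=-1)
    w = np.sort(np.linalg.eigvalsh(H))

    print(np.round(w, 3))            # approx. [-1.802 -1.247 -0.445  0.445  1.247  1.802]
    print(np.allclose(w, -w[::-1]))  # True: the eigenvalues occur in +/- pairs
    [/code]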
     
  6. Sep 8, 2014 #5
    This is for a physics problem, but I suppose I could have put this part in math as well. It is the Hamiltonian for an electron on a six-site chain.

    I've gotten eigenvalues from online calculators; doing all of this by hand seems like a huge time sink. I don't get exact values from the calculators, and I think that may be why I wasn't getting a diagonal matrix before (all of the values were pretty close to zero, just not exactly zero).

    This is what I have then:

    The eigenvector matrix:
    ##S= \begin{pmatrix}
    -0.232 & 0.232 & 0.521 & 0.521 & -0.418 & 0.418\\
    0.418 & 0.418 & -0.232 & 0.232 & 0.521 & 0.521\\
    -0.521 & 0.521 & -0.418 & -0.418 & -0.232 & 0.232\\
    0.521 & 0.521 & 0.418 & -0.418 & -0.232 & -0.232\\
    -0.418 & 0.418 & 0.232 & 0.232 & 0.521 & -0.522\\
    0.232 & 0.232 & -0.521 & 0.521 & -0.418 & -0.418\\
    \end{pmatrix} ##

    Multiplied against:

    ##e^{-itH_{diagonal}} = \begin{pmatrix}
    e^{-i1.802t} & 0 & 0 & 0 & 0& 0\\
    0 & e^{i1.802t} & 0 & 0 & 0& 0\\
    0 & 0 & e^{-i0.445t} & 0 & 0& 0\\
    0 & 0 & 0 & e^{i0.445t} & 0& 0\\
    0 & 0 & 0 & 0 & e^{-i1.247t}& 0\\
    0 & 0 & 0 & 0 & 0& e^{i1.247t}\\
    \end{pmatrix}##

    then multiplied against the inverse of S:

    ##S^{-1}= \begin{pmatrix}
    -0.232 & 0.418 & -0.521 & 0.521 & -0.418 & 0.232\\
    0.232 & 0.418 & 0.521 & 0.521 & 0.418 & 0.232\\
    0.521 & -0.232 & -0.418 & 0.418 & 0.232 & -0.521\\
    0.521 & 0.232 & -0.418 & -0.418 & 0.232 & 0.521\\
    -0.418 & 0.521 & -0.232 & -0.232 & 0.521 & -0.418\\
    0.418 & 0.521 & 0.232 & -0.232 & -0.521 & -0.418\\
    \end{pmatrix}##

    And, after all that, I'm done?
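
    One way to check the end result (a sketch, not the poster's actual calculation; it assumes NumPy/SciPy and uses the ##e^{-iHt}## sign convention of the matrices above) is to rebuild the product ##S\,e^{-itD}\,S^{-1}## numerically and compare it with a library matrix exponential:

    [code]
    import numpy as np
    from scipy.linalg import expm

    H = np.diag([-1.0] * 5, k=1) + np.diag([-1.0] * 5, k=-1)
    t = 2.7  # arbitrary time

    w, S = np.linalg.eigh(H)                                  # eigenvalues and eigenvector matrix
    U = S @ np.diag(np.exp(-1j * t * w)) @ np.linalg.inv(S)   # S * exp(-it D) * S^{-1}
    print(np.allclose(U, expm(-1j * t * H)))                  # expect True

    # Repeating the product with everything rounded to three decimals, as in the
    # hand-copied matrices above, should still agree with expm to roughly three decimals.
    S3, w3 = np.round(S, 3), np.round(w, 3)
    U3 = S3 @ np.diag(np.exp(-1j * t * w3)) @ np.linalg.inv(S3)
    print(np.abs(U3 - expm(-1j * t * H)).max())
    [/code]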
     
  7. Sep 8, 2014 #6

    Matterwave

    User Avatar
    Science Advisor
    Gold Member

    You should be done, if that was done correctly, yes. haha
     
  8. Sep 8, 2014 #7

    AlephZero

    User Avatar
    Science Advisor
    Homework Helper

    If you are trying to get an "exact" result rather than doing it numerically, the fact that ##H^2## is the diagonal matrix (1,2,2,2,2,1) should simplify things quite a bit.

    You could just write it out as an infinite series, group the odd and even powers together, and then sum the two series of diagonal matrices as in the first example in your link.
     
  9. Sep 8, 2014 #8
    So then I would just need to sum ##\frac{(-it)^n}{n!}## and ##\frac{(-i2t)^n}{n!}## from 0 to infinity, giving me a diagonal matrix of just the following. (I incorrectly wrote the exponential before; it should have been ##e^{-iHt}##. I forgot the negative.)

    ##e^{-itH_{diagonal}} = \begin{pmatrix}
    e^{-it} & 0 & 0 & 0 & 0& 0\\
    0 & e^{-i2t} & 0 & 0 & 0& 0\\
    0 & 0 & e^{-i2t} & 0 & 0& 0\\
    0 & 0 & 0 & e^{-i2t} & 0& 0\\
    0 & 0 & 0 & 0 & e^{-i2t}& 0\\
    0 & 0 & 0 & 0 & 0& e^{-it}\\
    \end{pmatrix}##

    If that's the case, the ##H^2## property is something worth knowing. I don't recall it from linear algebra (I don't recall a lot, actually).
     
  10. Sep 9, 2014 #9

    AlephZero

    User Avatar
    Science Advisor
    Homework Helper

    That's the right idea but not quite the right answer. Let ##D## be the diagonal matrix ##\{1,\sqrt{2},\dots, 1\}## so ##D^2 = H^2##.

    ##e^{-iHt} = \cos Ht - i\sin Ht##.

    For the real part, ##\cos Ht = 1 - \frac{H^2t^2}{2!} + \dots = 1 - \frac{D^2t^2}{2!} + \dots## so the sums are cosine functions of the diagonals of ##D##.

    For the imaginary part, ##\sin Ht = Ht - \frac{H^3t^3}{3!} + \dots = H\left(t - \frac{D^2t^3}{3!} + \dots\right)##.
    To make that into a sine function of D, you have to tweak it into
    ##HD^{-1}\left(Dt - \frac{D^3t^3}{3!} + \dots\right)##.
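
    The split into real and imaginary parts used above can be checked numerically on its own (a sketch assuming SciPy's matrix cosine and sine; not from the thread):

    [code]
    import numpy as np
    from scipy.linalg import expm, cosm, sinm

    H = np.diag([-1.0] * 5, k=1) + np.diag([-1.0] * 5, k=-1)
    t = 1.3  # arbitrary time

    lhs = expm(-1j * t * H)
    rhs = cosm(t * H) - 1j * sinm(t * H)   # matrix cosine/sine, defined by their power series
    print(np.allclose(lhs, rhs))           # expect True
    [/code]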
     
  11. Sep 9, 2014 #10

    AlephZero

    User Avatar
    Science Advisor
    Homework Helper

    I don't think that property has a "special" name. I spotted it from the chess-board pattern of zeros and nonzeros in ##H##. That means there will be a pattern of zero terms in ##H^2##, whatever value the non-zero terms in ##H## have.
     
  12. Sep 10, 2014 #11

    vela

    User Avatar
    Staff Emeritus
    Science Advisor
    Homework Helper
    Education Advisor

    Mathematica yields a different result for ##H^2##:
    $$H^2 = \begin{bmatrix}
    1 & 0 & 1 & 0 & 0 & 0 \\
    0 & 2 & 0 & 1 & 0 & 0 \\
    1 & 0 & 2 & 0 & 1 & 0 \\
    0 & 1 & 0 & 2 & 0 & 1 \\
    0 & 0 & 1 & 0 & 2 & 0 \\
    0 & 0 & 0 & 1 & 0 & 1
    \end{bmatrix}$$
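
    For completeness, a one-line NumPy check (not from the thread) reproduces this banded, non-diagonal form:

    [code]
    import numpy as np

    H = np.diag([-1.0] * 5, k=1) + np.diag([-1.0] * 5, k=-1)
    print((H @ H).astype(int))   # matches the H^2 matrix above
    [/code]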
     