
Solve the particle in a box problem using matrix mechanics?

  1. Mar 25, 2017 #1
How do we solve the particle in a box (infinite potential well) problem using matrix mechanics rather than using the Schrödinger equation? The Schrödinger equation for this particular problem is a simple ordinary differential equation and is easy for me to follow. The solution has the following form,\begin{equation}\psi_n(x)= \sqrt{\dfrac{2}{L}}\sin\left(\dfrac{n\pi x}{L}\right)\end{equation}
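The corresponding energy eigenvalues (a standard result, quoted here for completeness) are $$E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}, \qquad n = 1, 2, 3, \ldots$$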
However, I am interested in solving this problem using the methods of matrix mechanics, using bras and kets. As I understand it, the Dirac notation ##\langle \vec{x}|\psi\rangle## is equivalent to the above notation for ##\psi_n(x)##, or \begin{equation}\langle \vec{x}|\psi\rangle\equiv\psi_n(x)\end{equation}
Both notations express the idea of a function; however, the term on the left in Equation (2) (the Dirac notation) indicates (to me) a dot product of two vectors, whereas the term on the right is an expression for a continuous function obtained by solving the Schrödinger equation.

    I would like to attempt to solve this problem using the methods of matrix mechanics and not the Schrodinger Equation.

    Here is a description of the problem...
    To get the state vector(s) ##\psi## for the particle in the box, using the methods of matrix mechanics, how should I start?

    Here is the Hamiltonian
    \begin{equation}-\dfrac{\hbar^2}{2m} \dfrac{d^2\psi(x)}{dx^2} = E\psi(x)\end{equation}

    And here is the momentum operator \begin{equation}\hat{p}=-\imath\hbar \frac{\partial}{\partial x} = -\imath\hbar\nabla\end{equation}

    And I think this is the Hamiltonian Operator
    \begin{equation}\hat{H}=-\dfrac{\hbar^2}{2m} \nabla^2- E\end{equation}

I think what I have to do at this point is find the eigenvalues and eigenvectors of the Hamiltonian operator (Equation (5)).

    This is where I get lost. How do I set up that matrix for the particle in the box?
     
    Last edited: Mar 25, 2017
  3. Mar 25, 2017 #2

    vanhees71

Science Advisor

    I don't think that you can solve this academic problem in matrix mechanics since it's not clear how to incorporate the boundary conditions, which is easy in the position representation (wave mechanics).
     
  4. Mar 25, 2017 #3
    Here are the boundary conditions....

The box is one-dimensional and has length ##L##. The probability of finding the particle outside the box is zero, which implies that \begin{equation}\psi(x=0)=\psi(x=L) = 0\end{equation} or, using brakets, \begin{equation}\langle x=0|\psi\rangle = \langle x=L|\psi\rangle = 0 \end{equation}Can you be more specific about what it is about this problem that makes it impossible to solve using matrix mechanics? Is it because it is only one-dimensional? How could we change it to make it solvable using matrix mechanics?

    How would I implement those boundary conditions in matrix mechanics?
     
    Last edited: Mar 25, 2017
  5. Mar 25, 2017 #4

    PeterDonis


    Staff: Mentor

    No, it indicates an infinite family of such dot products, since the bra ##\langle \vec{x} \vert## does not refer to a single (dual) vector, but to the infinite family of such vectors that constitute the position basis.

    That is not the Hamiltonian. It's the eigenvalue equation for the free particle Hamiltonian (which is not the Hamiltonian for this problem, since there is a nonzero potential).

    Not quite. The first term on the LHS is correct, but the second is not. The potential energy as a function of ##x## is what should appear there, not the eigenvalue ##E##.

    At least, that's what the Hamiltonian should look like in the Schrodinger formalism. But in the matrix formalism, you need a matrix, not a differential operator. The key question is this: how many rows/columns should this matrix have? Answering this question will tell you why this particular problem is not easily solved in the matrix mechanics formalism.
     
  6. Mar 26, 2017 #5

    vanhees71

Science Advisor

Of course, these are the boundary conditions, and in the position representation it's a pretty easily solved problem (although there's more mathematics in it than is usually taught in QM1, e.g., the fact that there exists no momentum observable, i.e., ##-\mathrm{i} \partial_x## is only Hermitian but not self-adjoint).

As I said, I have no idea how to solve this in matrix mechanics, because you cannot easily implement these boundary conditions. But why would one want to solve it with matrix mechanics? The choice of the right coordinate system is already key to the solution of classical-mechanics problems. So it is in quantum mechanics: here the position representation is the natural choice, and thus the problem is solved with it most easily.
     
  7. Mar 26, 2017 #6
Since there are an infinite number of possible energy states, I would say that there should be an infinite number of rows and columns in the matrix. Are you trying to tell me that matrix mechanics cannot solve a problem when there are an infinite number of eigenvalues? If so, why not?
     
    Last edited: Mar 26, 2017
  8. Mar 26, 2017 #7
There are two reasons why I would like to solve this with matrix mechanics (MM)...

First, I have read, many times, that the Schrödinger method and the Heisenberg method (matrix mechanics) are equivalent methods for solving quantum mechanical problems. If that is true, then I should expect to be able to solve the simplest of all problems using both methods. However, I was not able to find any example of solving the particle in a box problem using matrix mechanics.

The second reason is that I am still perplexed by the matrix mechanics method in general. I still have absolutely no idea where the initial state vector ##|\psi\rangle## comes from. As I said in a previous thread, it looks to me like the MM method depends upon the state vector rather than discovering what it is. I know this is wrong, but I do not yet see how to start the MM method without already knowing what ##\psi## is. Starting with the particle in the box problem, I thought, would be a good way to expose the MM method from its roots.

    What is the simplest problem that can be solved using the MM method? If you point me in that direction I will look that up.
     
    Last edited: Mar 26, 2017
  9. Mar 26, 2017 #8

    PeterDonis


    Staff: Mentor

    Yes, exactly.

    No, I'm just saying it's not at all easy.

    Try an example where there are just two eigenvalues, such as the spin of a spin-1/2 particle.
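    As a minimal numerical sketch of that two-eigenvalue case (in units where ##\hbar = 1##), here is the matrix of ##\hat{S}_x## in the ##S_z## basis diagonalized in R:

    Code (Text):
    # Spin-1/2: the observable S_x as a 2x2 matrix in the S_z basis (hbar = 1).
    # Its eigenvalues should come out as +1/2 and -1/2.
    Sx <- 0.5 * matrix(c(0, 1,
                         1, 0), nrow = 2, byrow = TRUE)

    res <- eigen(Sx)     # solve the 2x2 eigenvalue problem
    print(res$values)    # 0.5 -0.5
    print(res$vectors)   # columns are the eigenvectors (the |+x>, |-x> states)
    Here the whole "matrix mechanics" calculation is a two-by-two eigenvalue problem, which is why this is the natural first example.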
     
  10. Mar 26, 2017 #9

    vanhees71

Science Advisor

There are a few eigenvalue problems you can solve with the abstract Dirac formalism only, i.e., by only using the commutator relations of the involved observables. The key to the solution lies in the underlying symmetry principles (imho 99% of physics is applied Lie-group theory anyway ;-)). The examples that come to my mind are:

(a) position and momentum: You use the fact that the momentum component in a fixed direction generates translations, i.e., assuming that there's one eigenvector of ##\hat{x}## with eigenvalue ##x##, you get eigenvectors for any eigenvalue ##x \in \mathbb{R}##. One should of course be aware that this is the typical physicist's abuse of mathematics: the position and also the momentum eigenvectors in QT are not proper vectors in the Hilbert space, but live in the dual of the domain of these operators, i.e., they are generalized vectors in the sense of distributions.

(b) energy eigenvalue problem for the harmonic oscillator: Here you find ladder operators, such that you can build the energy eigenvectors from the ground state, and you also get the eigenvalues ##E_{n}=(n+1/2)\hbar \omega##, where ##n \in \mathbb{N}_0## (a numerical sketch of this construction appears after this list).

(c) eigenvalue problem of the angular-momentum algebra, where you can show that you can diagonalize ##\hat{\vec{J}}^2## and one component simultaneously (usually ##\hat{J}_z## is used), and where ladder operators again come into use (this is not an accident but due to the fact that the two-dimensional symmetric harmonic oscillator has a dynamical SU(2) symmetry, and the Lie algebra of SU(2) is just the Lie algebra of the rotation group, which is nothing else than the angular-momentum algebra, because angular momentum generates rotations).

(d) eigenvalue problem of the Hamiltonian of the hydrogen atom: It's solved due to the fact that it has an SO(4) dynamical symmetry, whose Lie algebra is equivalent to a direct sum of two su(2) algebras, and you can just reuse what you've learnt in considering (c). Funnily enough, for ##E<0## you get a representation of SO(4), for ##E=0## of the Galilei group, and for ##E>0## of the proper orthochronous Lorentz group.
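
    As a concrete illustration of (b), here is a minimal numerical sketch (units with ##\hbar = m = \omega = 1##, number basis truncated to ##N## states) that builds ##\hat{x}## and ##\hat{p}## as matrices from the ladder operator and diagonalizes ##\hat{H} = (\hat{p}^2 + \hat{x}^2)/2##:

    Code (Text):
    # Matrix mechanics for the harmonic oscillator (units hbar = m = omega = 1).
    # Build the ladder operator a in the number basis, truncated to N states,
    # then form x and p as matrices and diagonalize H = (p^2 + x^2)/2.
    N <- 30
    a <- matrix(0, N, N)
    for (n in 1:(N - 1)) a[n, n + 1] <- sqrt(n)   # a|n> = sqrt(n)|n-1>

    x <- (a + t(a)) / sqrt(2)                     # x = (a + a^dagger)/sqrt(2)
    p <- (a - t(a)) / (sqrt(2) * 1i)              # p = (a - a^dagger)/(i sqrt(2))

    H  <- (p %*% p + x %*% x) / 2                 # Hamiltonian as an N x N matrix
    ev <- sort(Re(eigen(H)$values))
    print(ev[1:5])    # 0.5 1.5 2.5 3.5 4.5, i.e. E_n = n + 1/2
    Apart from the truncation to finite ##N## (which distorts only the highest few eigenvalues), this is exactly the kind of infinite-matrix calculation of the original matrix mechanics.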
     
  11. Mar 26, 2017 #10

    Nugatory


    Staff: Mentor

    You'll find a worked example for a spin-1/2 particle (PeterDonis is right about that being one of the simplest examples) here: http://scipp.ucsc.edu/~dine/ph101/bands.pdf

It's given to you, in exactly the same way that it is given to you when you solve a problem using the Schrödinger formalism. Either way, the problem is "Given a quantum system in this initial state at time zero and subject to this Hamiltonian, what is its future state?". You'll see this in section 2 of the notes I linked above, in the sentence that starts "At ##t=0##, .....".
    For that matter, it's the same in classical physics: If I ask you to calculate where a fired cannonball is going to land, Newton's laws give you all the physical knowledge and mathematical machinery you need to calculate the trajectory from arbitrary initial conditions, but I still have to give you the initial position and velocity of the cannonball as it leaves the cannon before you can calculate where it will land.
     
    Last edited: Mar 26, 2017
  12. Mar 26, 2017 #11
    Ah, I finally understand! The initial state vector represents the initial condition. That makes sense. The matrix mechanics method has to start somewhere.

And I suppose this is part of the problem with using MM to solve the particle in the box problem? We do not know the initial condition, i.e., the initial energy state for the particle in the box? (That is a question, not a statement.)

With the Schrödinger equation we can use boundary conditions instead of initial conditions to solve for the wave function.

    How do we translate the boundary conditions for the particle in a box to an initial state vector?

    If we knew the initial state vector could we then solve the particle-in-the-box problem easily using MM?

    (I am reading the article you linked to)
     
  13. Mar 26, 2017 #12

    Nugatory


    Staff: Mentor

You don't know it when you use the Schrödinger formalism either. See below.
You don't, whether you're using the Schrödinger or the Heisenberg formalism. Review the steps you went through to solve this problem using the Schrödinger formalism: First you solved the time-independent Schrödinger equation to find the eigenvalues and eigenfunctions of the Hamiltonian operator, and you used the boundary conditions for that. But that didn't give you the state of the particle; it gave you the basis vectors that you can use to write any given state of the particle. In general that state will be a superposition of those vectors.

Next, you took the initial state (which you've been given as part of the complete problem statement) and wrote it in terms of those eigenvectors. Sometimes this is really easy. For example, in the particle-in-a-box problem we're often told that the particle has been prepared with a particular energy (or equivalently, momentum); then the initial state is just the corresponding energy eigenfunction and its time evolution is trivial. However, the initial condition might be something like: the particle is localized at the left side of the box and moving to the right at a given speed (both of which can only be specified to within the limits allowed by the uncertainty principle). In this case, we would treat the initial state as a Gaussian superposition of many different energy eigenfunctions, and its time evolution would show it moving back and forth in the box.
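
    A minimal numerical sketch of that last scenario (a toy illustration of my own, not from the notes linked above; units chosen so that ##\hbar = 2m = L = 1##, which makes ##E_n = (n\pi)^2##): expand a right-moving Gaussian in the box eigenfunctions and watch ##\langle x \rangle## move.

    Code (Text):
    # Expand a localized, right-moving Gaussian in the box eigenfunctions
    # psi_n(x) = sqrt(2/L) sin(n pi x / L), with hbar = 2m = L = 1 so E_n = (n pi)^2.
    L <- 1; Nx <- 500; Nn <- 50
    x  <- seq(0, L, length.out = Nx)
    dx <- x[2] - x[1]

    phi0 <- exp(-((x - 0.25) / 0.05)^2 / 2) * exp(25i * x)   # localized, moving right
    phi0 <- phi0 / sqrt(sum(abs(phi0)^2) * dx)               # normalize

    psi <- function(n) sqrt(2 / L) * sin(n * pi * x / L)      # box eigenfunctions
    cn  <- sapply(1:Nn, function(n) sum(psi(n) * phi0) * dx)  # c_n = <psi_n|phi_0>

    mean_x <- function(t) {                  # expectation value <x> at time t
      M  <- sapply(1:Nn, function(n) cn[n] * exp(-1i * (n * pi)^2 * t) * psi(n))
      ph <- rowSums(M)                       # the superposition at time t
      Re(sum(Conj(ph) * x * ph) * dx)
    }
    print(sapply(c(0, 0.002, 0.004), mean_x))   # the packet center moves to the right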
     
  14. Mar 26, 2017 #13
Yes, I agree that solving the Schrödinger equation did not give us the initial state; it gave us the eigenvalues and eigenvectors with which we can express any given state of the particle. That's a lot. (You are not suggesting that we have to solve the Schrödinger equation first to get the eigenvalues and eigenvectors before we can use the methods of matrix mechanics, are you?)

But the question I am struggling with is: why can't we get the same eigenvalues and eigenvectors using matrix mechanics? Why is it so difficult to solve this simple problem using matrix mechanics? You have shown me that we did not need the initial state to be able to solve the Schrödinger equation, and that we do need the initial state vector to solve it using MM. Is that what is missing?

    In MM we have a matrix of simultaneous equations of some sort, one equation per row. What does each row in the matrix represent? Why is it so difficult to fill in the values for this matrix?
     
  15. Mar 26, 2017 #14

    PeterDonis


    Staff: Mentor

    This is almost correct. The only wrong word is the word "energy". In the Schrodinger formalism you were using the position representation. If you use the matrix formalism with the same representation, which is implied by the bra ##\langle \vec{x} \vert## you wrote down in the OP, then that implies that the rows/columns of the matrix should not represent an infinite number of possible energy states, but an infinite number of possible...what kind of states?

Answering the above question will tell you why.
     
  16. Mar 27, 2017 #15
Your answer (or rather question) is not what I am looking for. I suppose you are trying to say that the rows represent positions in some way. I am not looking for such a general answer; I am looking for the actual values that we would calculate to start populating the matrix. We do not have to calculate all the values, just the first few elements of the first few rows. That will tell me everything I need to know. The problem is the particle in the well. We know the Hamiltonian for that problem, and we know the matrix equation ##H\psi=E\psi##. How do we form the matrix which will lead us to the eigenvalues for the problem? What are the knowns and what are the unknowns? I think almost everything except the eigenvalues has to be known in order to solve this problem using the matrix method. How do we incorporate the boundary conditions into the matrix equation?

In order to use the MM method, I am coming to believe that we have to have some actual measurements of the particle's position and, I guess, momentum. Using the Schrödinger equation, we were able to determine the eigenvalues in an abstract way, knowing only the boundary conditions. The Schrödinger equation allowed us to predict the eigenvalues. The MM method does not seem to me to be a way to predict, but rather a way to confirm. If this sounds vague, you are right. I know something is missing, but I do not know what it is yet...

If I assume the MM method requires us to measure the position and momentum of the particle in the box, how many measurements would we have to make, and how dense would they have to be? And what would happen if we only "sampled" the positions and momenta, say with only 100 measurements? I am guessing that taking 100 measurements would result in a 100x100 matrix. But what would that do for us? We know there are an infinite number of eigenvalues. So would we learn anything if we only took 100 measurements? Would that matrix's eigenvalues relate to the real eigenvalues in any meaningful way?

As I think about the measurement scenario, if we sampled the positions and momenta of the particle in the box, the measurements would have to be made when the particle was in different energy states for them to have any value. If all the measurements were made with the particle in the same energy state, the rows in the resulting matrix would be "degenerate". Each row in the matrix would have to correspond to a measurement made when the particle was in a different energy state. This seems to me to be a very difficult requirement for a single particle in the well.
     
    Last edited: Mar 27, 2017
  17. Mar 27, 2017 #16

    hilbert2

Science Advisor

    The 1D time independent Schrödinger equation can actually be converted to a tridiagonal matrix eigenvalue equation by replacing the wavefunction ##\psi (x)## with a vector of discrete values ##\psi_n = \psi(n\Delta x )## and then changing the second derivative operator ##\frac{d^2}{dx^2}## to its finite difference version ##\frac{\psi_{n-1}-2\psi_n + \psi_{n+1}}{[\Delta x]^2}##. As far as I know, this kind of discretization comes with a built-in boundary condition that the wavefunction vanishes at the endpoints of its domain (as it does in the infinite well problem).
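    Written out (with ##V = 0## inside the well and units where ##\hbar^2/2m = 1##), that discretization turns ##\hat{H}\psi = E\psi## into the tridiagonal matrix equation
    $$\frac{1}{\Delta x^2}
    \begin{pmatrix}
    2 & -1 & & \\
    -1 & 2 & -1 & \\
     & \ddots & \ddots & \ddots \\
     & & -1 & 2
    \end{pmatrix}
    \begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \\ \psi_N \end{pmatrix}
    = E
    \begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \\ \psi_N \end{pmatrix},$$
    where the "missing" ##\psi_0## and ##\psi_{N+1}## terms in the first and last rows are precisely the boundary condition ##\psi(0) = \psi(L) = 0##.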

    http://www.dartmouth.edu/~pawan/final project.pdf
     
  18. Mar 27, 2017 #17

    vanhees71

Science Advisor

I think we should first get an understanding of what's meant by "matrix mechanics". To understand different representations of QT, it's good to use the representation-free Hilbert-space formalism, i.e., for physicists, the Dirac bra-ket formalism.

It starts with an abstract Hilbert space with vectors written as ##|\psi \rangle## and a scalar product written as ##\langle \phi|\psi \rangle##. The ##|\psi \rangle## can be used to represent a pure state of a quantum system. Then you have observables, which are represented in the formalism as self-adjoint operators ##\hat{A}## with a domain (which by definition equals the domain of their adjoint), which is in general only a dense subspace of the Hilbert space. A (perhaps generalized) eigenvector of ##\hat{A}## with eigenvalue ##a \in \mathbb{R}## is denoted as ##|a \rangle##. It obeys by definition
    $$\hat{A} |a \rangle=a |a \rangle.$$
To define a representation you choose an arbitrary complete set of commuting self-adjoint operators (also called a complete compatible set of observables), ##\hat{A}_1,\ldots, \hat{A}_N##. There are common eigenvectors of such sets of operators, and complete means that given the eigenvalues ##(a_1,\ldots,a_N)##, the eigenvector is (up to a non-zero factor) uniquely defined (in other words, the common eigenspaces for given eigenvalues ##(a_1,\ldots,a_N)## are one-dimensional).

These eigenvectors also form complete orthonormal sets (with the proper normalization). Matrix mechanics is now defined in the case of a completely discrete set of eigenvalues, and then all the eigenvectors are proper normalizable eigenvectors, i.e., you can choose them such that
    $$\langle a_1,\ldots,a_N|a_1',\ldots,a_N' \rangle=\delta_{a_1,a_1'} \cdots \delta_{a_N,a_N'}$$
    and
    $$\sum_{a_1,\ldots,a_N} |a_1,\ldots a_N \rangle \langle a_1,\ldots a_N|=\hat{1}.$$
This means you can uniquely describe any state by its components
    $$\psi_{a_1,\ldots,a_N} = \langle a_1,\ldots,a_N|\psi \rangle$$
    and any operator by its matrix elements
    $$A_{a_1,\ldots,a_N;a_1',\ldots,a_N'} = \langle a_1,\ldots,a_N|\hat{A}|a_1',\ldots,a_N' \rangle.$$
It is then easy to show, by using the completeness relation several times, that the components of ##|\phi \rangle=\hat{A} |\psi \rangle## are given by
    $$\phi_{a_1,\ldots,a_N}= \sum_{a_1',\ldots,a_N'} A_{a_1,\ldots,a_N;a_1',\ldots, a_N'} \psi_{a_1',\ldots,a_N'}.$$
    This is like matrix-vector multiplication, and the composition of two operators maps to matrix products, and so on. That's why it's called "matrix mechanics".
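
    As a toy illustration of that last point (a two-dimensional example of my own, with basis ##|1\rangle, |2\rangle##): states become columns of components ##\psi_a = \langle a|\psi\rangle##, operators become matrices ##A_{ab} = \langle a|\hat{A}|b\rangle##, and ##\hat{A}|\psi\rangle## is literally a matrix-vector product:

    Code (Text):
    # Two-dimensional toy example: basis {|1>, |2>}.
    A   <- matrix(c(2, 1i,
                   -1i, 2), nrow = 2, byrow = TRUE)  # A_ab = <a|A|b>, Hermitian
    psi <- c(1, 0)                                   # components of |psi> = |1>

    phi <- A %*% psi    # components of |phi> = A|psi>
    print(phi)          # (2, -1i), i.e. |phi> = 2|1> - i|2>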

So, to define your energy-eigenvalue problem for a particle in a rigid box in terms of matrix mechanics, you first have to find an appropriate basis in which to work, and then you should be able to calculate the Hamiltonian's matrix elements with respect to this basis. This is not so easy. In fact, the only "natural" basis that comes to my mind are indeed the energy eigenstates themselves, and to find them the position representation is good. Here the basis vectors are the generalized position eigenvectors ##|x \rangle## (I consider the 1D case for simplicity). These live in the dual of the domain of the position operator, and ##x## can take all real values (or here, in this pretty artificial problem, values restricted to the interval ##[0,L]##). Then the "orthonormalization and completeness relations" have to be generalized to
    $$\langle x|x' \rangle=\delta(x-x'), \quad \int_0^L \mathrm{d} x |x \rangle \langle x|=\hat{1},$$
    and you have to use that in the position representation (i.e., matrix mechanics) the Hamiltonian is given as the differential operator
$$\hat{H}=-\frac{\hbar^2}{2m} \partial_x^2.$$
It lives on a proper subspace of the square-integrable functions, which represent the states via ##\psi(x)=\langle x|\psi \rangle##. The Hilbert space is further specified (and that's what you can do only in the position representation) by ##\psi(0)=\psi(L)=0##. With this you can solve the eigenvalue problem for ##\hat{H}##, and then use the eigenvectors of ##\hat{H}## for other matrix-mechanics calculations.
     
  19. Mar 27, 2017 #18

    hilbert2

Science Advisor

Here's some R code that forms the Hamiltonian matrix for the discretized square-well problem and solves for the eigenstate ##n=3## (the ground state is ##n=1##), plotting the probability density as output:

    Code (Text):
    L <- 1                    # Length of the domain
    N <- 100                  # Number of discrete points
    dx <- L / N
    A <- matrix(0, nrow = N, ncol = N)   # Hamiltonian matrix
    n_state <- 3              # Number of the eigenstate to be calculated

    # Fill the Hamiltonian with the finite-difference elements appropriate for
    # an infinite square well (V = 0 inside the well)
    for (m in 1:N) {
      for (n in 1:N) {
        if (m == n)                   A[m, n] <-  2 / dx^2
        if (m == n - 1 || m == n + 1) A[m, n] <- -1 / dx^2
      }
    }

    v   <- eigen(A)           # Solve the eigensystem; eigen() returns the
    vec <- v$vectors          # eigenvalues in DECREASING order, so the n-th
                              # lowest state is column N - n + 1
    soln  <- vec[, N - n_state + 1]   # wavefunction values of the chosen state
    xaxis <- (1:N) * L / N

    jpeg(file = "plot.jpg")   # Plot the probability density
    plot(xaxis, abs(soln)^2)
    lines(xaxis, abs(soln)^2)
    dev.off()
    The plot that this code produces looks just like you'd expect from a square of a sine function.
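
    One quick sanity check worth running on top of this (my addition, in the same units as the code, i.e. ##\hbar^2/2m = 1##): compare the lowest numerical eigenvalues with the analytic ##E_n = (n\pi/L)^2##.

    Code (Text):
    # Compare the lowest numerical eigenvalues with the analytic E_n = (n pi / L)^2.
    # eigen() returns eigenvalues in decreasing order, so the ground state is last.
    num_E <- rev(v$values)[1:3]
    ana_E <- ((1:3) * pi / L)^2
    print(rbind(num_E, ana_E))   # agreement to within a couple of percent
    The residual discrepancy is the discretization error, which shrinks as N grows.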

     
  20. Mar 27, 2017 #19
Thank you for the time it took to compose your reply. I have made comments to express (I hope) what I think you mean. I am sure I misinterpreted a lot, but I think I also got much of it. Thanks.

    The Dirac bra-ket formalism is a representation free way to talk about quantum mechanics.

    You are defining, in simple terms the properties of a Hilbert Space.

The observables are represented by operators, which are square matrices. The eigenvalues of the matrices (observables) are the allowed values for the observable. The associated eigenvector is the quantum state associated with that eigenvalue.

This is where you choose the basis, isn't it?

    The eigenvectors are orthogonal.

    and normalized.

    We can express any state as a superposition of the eigenvectors of the observables.

    Eureka!
    This is where the matrix elements are calculated!
    The matrix elements are obtained from the eigenvectors that were defined by the observable operators.

    I think I know what basis means, but I am not sure I know what appropriate basis means.

    I do not understand what you are saying here.

You are saying that the basis for this problem appears to be the position eigenvectors, which are delta functions. I am not sure why you say they live in the dual of the domain of the position operator.

I think what you are saying is that we have to use the position basis to solve this problem, but the position basis is not easy to work with because there are an infinite number of eigenvalues and the eigenvectors are delta functions. But we should be able to calculate the elements of ##\hat{A}## using the formula from above.
     
  21. Mar 27, 2017 #20
    Thank you for posting the link to this article. http://www.dartmouth.edu/~pawan/final project.pdf

The thing that jumps out at me from both the paper and the code you provided is that the solution is completely independent of the energy! From the code it is obvious that the matrix is composed entirely of spatial coordinates (because the potential, ##V(x)##, is zero). The code makes it very clear that the eigenvalues are the eigenvalues of the operator, which, in this case, is simply the second derivative operator.

Having said that, one thing I noticed in the code is that there is no dependence at all on the energy. There should have been an energy term on the diagonal of the matrix, but there is none. So I went back to look at the paper and found the following quote from page 5:
What justification is there for removing ##E## from the diagonal entries? I do not understand that. What do the eigenvalues represent if we subtract ##E## from the diagonal elements?
     