## What makes matrix mechanics matrix mechanics?

The important point is that all pictures of time evolution are unitarily equivalent by construction. This ensures that the physical content of the quantum theoretical formalism is independent of the choice of picture. Neither the state-representing vectors nor the eigenvectors of the operators representing observables are measurable quantities; only the probabilities
$$P_{\psi}(a)=|\langle a| \psi \rangle|^2$$
are. The time dependence of the mathematical quantities $|\psi \rangle$ and $\hat{A}$ (and, following from this, $| a \rangle$) can be chosen quite arbitrarily. Which choice is best usually depends on the problem you want to solve; the final result does not depend on it at all.

The wave function in the "$a$ basis", $\psi(a,t)=\langle a|\psi \rangle$, is, by the way, a picture-independent quantity.
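A quick numerical illustration of this picture independence (a toy NumPy sketch; the random Hermitian matrix standing in for a Hamiltonian is an assumption, not any particular system): a change of picture applies the same unitary to both the state vector and the observable's eigenvector, so the Born-rule probability cannot change.

```python
import numpy as np

rng = np.random.default_rng(0)

# A normalized state and an observable eigenvector in C^4
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)
a = np.zeros(4, dtype=complex)
a[1] = 1.0

# Born rule: P_psi(a) = |<a|psi>|^2
p = abs(np.vdot(a, psi)) ** 2

# A change of picture applies the same unitary U = exp(-iHt)
# to |a> and |psi>, so the probability is unchanged.
H = rng.standard_normal((4, 4))
H = (H + H.T) / 2                      # toy Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * 0.7)) @ V.conj().T

p_transformed = abs(np.vdot(U @ a, U @ psi)) ** 2
print(np.isclose(p, p_transformed))    # True
```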
The Heisenberg picture and the Schrödinger picture have very little to do with the difference between matrix mechanics and wave mechanics. (Edit: "wave mechanics" is what some people call the quantum theory of a single spin-0 particle influenced by a potential, i.e. the theory that's taught in introductory classes. I think it's essentially the same as Schrödinger's theory.)

My understanding* is that Schrödinger was working with the semi-inner product space of square-integrable functions, while Heisenberg was working with an inner product space of "column vectors with infinitely many rows". (Yes, this seems rather ill-defined, but Heisenberg wasn't trying to do rigorous mathematics.) If you define two members of Schrödinger's inner product space to be equivalent if they differ only on a set of measure zero, and then consider the set of equivalence classes, you can easily give it the structure of an inner product space. This inner product space turns out to be a separable Hilbert space. The rows of Heisenberg's "∞×1 matrices" can be thought of as the components, in some specific orthonormal basis, of a vector in a separable Hilbert space. This is the sense in which von Neumann's Hilbert space approach "unifies" the two original approaches of Heisenberg and Schrödinger.

The two "pictures", on the other hand, are only different opinions about whether the time-dependent factors in an expression like ##\langle\alpha|e^{iHt}A e^{-iHt}|\alpha\rangle## should be considered part of the state vectors or part of the operators, i.e. should we say that ##|\alpha\rangle## is the state and ##e^{iHt}A e^{-iHt}## is the observable (Heisenberg picture), or that ##e^{-iHt}|\alpha\rangle## is the state and ##A## the observable (Schrödinger picture)?

*) I could be wrong, since I haven't read any of the original sources. I'm really just guessing based on what I know about Hilbert spaces and the stuff that's being taught in introductory QM classes.

 Quote by ardie Diagonalising a matrix is equivalent to rearranging its eigenbasis so that the basis vectors are all linearly independent, and thereby equivalent to solving a set of differential equations for the coefficients to find a unique solution for each one. For more information you may refer to a mathematical methods for physics and engineering text; I recommend the Cambridge one, which has a whole section on eigenfunction methods for differential equations. A differential equation of order n can always be solved provided n coefficients are given; that should be a clue. In your method you first choose a guess eigenbasis and insert it in the matrix; if the matrix can be diagonalised then you have found the unique solution, else you try a different combination. So you skip over having to solve each differential equation to find the exact coefficient of the eigenbasis, because you simply do not have all the information about the system, correct?
This is of course true. This is the method of solving systems of differential equations that I learned in my "differential equations with linear algebra" class my sophomore year of college. I had always thought that this (the use of matrix algebra to solve differential equations, where each eigenvector of the Fock operator is an "orbital" with a particular energy) was just the simplest way of doing "Schrödinger wave mechanics", because I was THINKING of the orbitals in the way that Schrödinger/Born specified. My question was: is this actually "matrix mechanics" because I'm using matrix algebra, and if not, how WOULD I solve these problems (or any others) with matrix mechanics?
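For what it's worth, the diagonalisation procedure being described can be sketched in a few lines (a toy example, not an SCF code; a random Hermitian matrix plays the Hamiltonian and ħ = 1): expanding the initial vector in the eigenbasis reduces the coupled system i dψ/dt = Hψ to one trivial phase factor per eigenvector, and a brute-force integration of the same system gives the same answer.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                      # toy Hermitian "Hamiltonian"
psi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi0 /= np.linalg.norm(psi0)

# Diagonalise once: H = V diag(E) V^dagger
E, V = np.linalg.eigh(H)

# Eigenbasis method: each eigencomponent just picks up a phase,
# psi(t) = sum_n c_n exp(-i E_n t) |n>
t = 1.3
c0 = V.conj().T @ psi0
psi_t = V @ (np.exp(-1j * E * t) * c0)

# Cross-check by brute-force time stepping of i dpsi/dt = H psi
psi = psi0.copy()
dt = 1e-3
for _ in range(int(round(t / dt))):
    k = -1j * (H @ psi)                        # midpoint (RK2) step
    psi = psi + dt * (-1j * (H @ (psi + 0.5 * dt * k)))

print(np.allclose(psi, psi_t, atol=1e-4))      # True
```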

 Quote by Naty1 I've been out of school WAAAAAAAY too long to be able to dive into such details....so they remain 'above my paygrade'.... But I like your question and have wondered about the equivalency of the formulations beyond the general for some time. The extraction of physicality from mathematical formulations provides a lifetime of interesting issues.
Cheers As with so many times back in school or in research group meetings today, this is one of those questions I was reticent to ask because I thought I'd look stupid, and I'm very glad to find out that my lack of understanding (of this topic, at least) doesn't make me so.

 Quote by kith Maybe this helps a bit. If I have a system with initial state vector |ψ(0)> and want to calculate expectation values for another time t, I can do this in two different ways: 1) Apply the time evolution operator to the state and get a new vector |ψ(t)> = U(t)|ψ(0)>. From there I can calculate expectation values by <ψ(t)|A|ψ(t)>. 2) Apply the time evolution operator to the observable/matrix and get a new observable/matrix A(t) = U+(t)AU(t). From there I can calculate expectation values by <ψ(0)|A(t)|ψ(0)>. In terms of equations, (1) corresponds to solving the Schrödinger equation, which is an equation for state vectors. (2) corresponds to solving the Heisenberg equation, which is an equation for observables/matrices.
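kith's two recipes can be checked numerically in a few lines (a toy sketch; the random Hermitian H and A are placeholders for a real Hamiltonian and observable, with ħ = 1): evolving the state or evolving the observable is just a different grouping of the same unitary sandwich, so the expectation values agree.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
H = rng.standard_normal((n, n)); H = (H + H.T) / 2   # toy Hamiltonian
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # toy observable
psi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi0 /= np.linalg.norm(psi0)

# U(t) = exp(-iHt), built from the eigendecomposition of H
E, V = np.linalg.eigh(H)
t = 0.8
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

# (1) Schrödinger picture: evolve the state, keep A fixed
psi_t = U @ psi0
ev_s = np.vdot(psi_t, A @ psi_t).real

# (2) Heisenberg picture: evolve the observable, keep the state fixed
A_t = U.conj().T @ A @ U
ev_h = np.vdot(psi0, A_t @ psi0).real

print(np.isclose(ev_s, ev_h))   # True
```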
 Quote by ardie In the Heisenberg picture the states are constant in time and the evolution is carried by the operators, thereby allowing you to put the equations in tensorial (matrix) form; as the gradient of a state tensor is always of second rank, this simplifies the calculations provided you know the state tensor and the initial conditions. In the Schrödinger picture, one assumes the operators of the system are constant in time, for example a bound particle that may not escape in the course of a measurement. So long as the Hamiltonian remains the same, the states (the solutions) will remain the same; but if the Hamiltonian is time-dependent, the states will also change accordingly. Now you may combine these two tensors to form the tensorial (matrix) object that you need to solve, and it is a well-known theory in linear algebra; it's slightly less intuitive and less common to use this method, though. So everyone has stuck to the way Heisenberg solved things, because it was already very common among mathematicians to work in vector algebra.
This question is NOT about the differences in time evolution of a quantum system as represented by the Heisenberg PICTURE or the Schrödinger PICTURE; it's about the two approaches to pioneer quantum mechanics, Schrödinger's "wave mechanics" dealing with differential equations versus Heisenberg's "matrix mechanics" dealing with the algebra of matrices. My confusion was that I had always thought about things in terms of wave functions and orbitals (or the Kohn-Sham equivalents), but was using matrix algebra to solve them. I now want to know which method I'm actually using, and how I would solve these problems if I were using the other (necessarily equivalent) method.

 Quote by Fredrik The Heisenberg picture and the Schrödinger picture have very little to do with the difference between matrix mechanics and wave mechanics. […]
So the two spaces of Schrödinger and Heisenberg are isomorphic and isometric, identical as Hilbert spaces; but what is the difference FUNCTIONALLY, for solving problems (such as self-consistent field problems), between the two?
 The mathematician, of course, tends to recast hard problems as problems that have been solved before. There is a combination of theories behind why you can solve the wave equations in this manner, and, going back to the beginning of what I was trying to say, it really boils down to the information you are able to supply or extract from your system experimentally. If you really want to know it for sure, I recommend you take some lessons in partial differential equations, familiarise yourself with the Sturm-Liouville equations and solutions, and then work with Green's functions; here is the relevant page if you are interested: http://en.wikipedia.org/wiki/Green's_function It so happens that Green's functions solve any linear differential operator, and it so happens that matrices are linear mathematical objects (not by coincidence!). This is matrix mechanics, but only if your Hamiltonian has at most second-order derivatives; any higher and you would need tensor algebra to solve the equations of motion. It has not a lot to do with matrices themselves, but matrices happen to take the shape of the tensors that we use, as the Hamiltonian is of second order in almost every case. The matrix algebra that we use simplifies things a lot, of course. You will not get far solving just linear equations, rather systems of linear equations, and to do so you need both the things you have learnt in the Schrödinger and the Heisenberg pictures.

 Quote by ardie The mathematician, of course, tends to recast hard problems as problems that have been solved before. […]
I can't tell if you're understanding what I'm asking about or not. I'm NOT saying "I don't know how to solve problem X, can someone please tell me what I need to learn to do so". I have studied differential equations and the Sturm-Liouville equation and the properties of its solutions under different initial/boundary conditions. I have gone through the exercises of solving Schrödinger's equation for particular potentials and all of that, and have also found the orbitals that minimize the energy for a system using matrix algebra (SCF problems and the like). My question is simply, in terms of the different approaches of the pioneer quantum mechanical methods: what makes one way of solving, say, an SCF problem "matrix mechanics" as opposed to "wave mechanics"?

You've made a couple of allusions to the idea that what can be taken from experiments determines which method you use; could you please expand on that aspect? What situations (that is, given what experimental values for what properties) would lead you to use "matrix mechanics" instead of "wave mechanics", and vice versa?
 I really do think it's obvious. For example, in the Stern-Gerlach experiment you know the state of the initial particle (its spin), as you prepare the particles in a particular way, and you know the state will change because they go through the potential; therefore the state is time-dependent and is given by solving the relevant matrix of equations, which takes into account two Hamiltonians. In the case of hydrogen molecules, you make the gross assumption that the system is in equilibrium, so the states are time-independent, and thereby you can solve using linear differential equations. In most cases you are sending particles through potentials and you already know their states are going to change. You work out what the states are going to be from prior experiments, then you work out what the potential looks like from prior experiments, and then you put your particle through and solve the matrix of equations.

 Quote by Fredrik The Heisenberg picture and the Schrödinger picture have very little to do with the difference between matrix mechanics and wave mechanics.
I always thought of it like this: wave mechanics is Schrödinger-picture QM in the position basis, and matrix mechanics is Heisenberg-picture QM in the energy basis. I haven't read Heisenberg's paper in detail, but IIRC his first big insight there is that the classical amplitude of an oscillating electron, x(t), has to be replaced by quantities x(n,n-a) exp(iω(n,n-a)t), which are the matrix elements of the position observable in the Heisenberg picture in the energy basis. Or am I missing your point?
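This oscillation of the energy-basis matrix elements is easy to verify numerically (a toy sketch with ħ = 1; a random Hermitian H and X stand in for a real Hamiltonian and position operator): the Heisenberg-picture matrix elements satisfy X(t)ₙₘ = Xₙₘ exp(i(Eₙ − Eₘ)t), oscillating at the Bohr frequencies.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
H = rng.standard_normal((n, n)); H = (H + H.T) / 2   # toy Hamiltonian
X = rng.standard_normal((n, n)); X = (X + X.T) / 2   # toy "position" operator

E, V = np.linalg.eigh(H)
Xe = V.conj().T @ X @ V          # X in the energy eigenbasis

# Heisenberg evolution X(t) = U^dagger X U with U = exp(-iHt)
t = 0.6
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
Xt = U.conj().T @ X @ U
Xt_e = V.conj().T @ Xt @ V       # ... expressed in the energy basis

# Each element picks up the phase exp(i (E_n - E_m) t)
phases = np.exp(1j * np.subtract.outer(E, E) * t)
print(np.allclose(Xt_e, Xe * phases))   # True
```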

 Quote by Einstein Mcfly So the two spaces of Schrödinger and Heisenberg are isomorphic and isometric, identical as Hilbert spaces; but what is the difference FUNCTIONALLY, for solving problems (such as self-consistent field problems), between the two?
I still think that matrix mechanics is essentially the same as the Heisenberg picture. So the difference would be that you are solving Heisenberg's equation of motion instead of Schrödinger's. In their first paper, Born, Heisenberg and Jordan give a detailed solution of the harmonic oscillator using matrix mechanics.
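For concreteness, here is a sketch of the harmonic oscillator done the matrix-mechanics way, in the spirit of that paper (a NumPy toy with ħ = m = ω = 1; the basis truncation N is an arbitrary choice): the Hamiltonian is built purely from the x and p matrices, with no differential equation in sight, and its spectrum comes out as E_n = n + 1/2.

```python
import numpy as np

# Truncated ladder operator for the harmonic oscillator (hbar = m = omega = 1)
N = 40                                         # basis size (truncation)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
x = (a + a.T) / np.sqrt(2)                     # position matrix
p = (a - a.T) / (1j * np.sqrt(2))              # momentum matrix

# Matrix-mechanics Hamiltonian: built from matrices, not a wave equation
H = (p @ p + x @ x) / 2
E = np.linalg.eigvalsh(H).real
print(np.allclose(E[:10], np.arange(10) + 0.5))   # True: E_n = n + 1/2

# The canonical commutator [x, p] = i holds except in the last row/column,
# an artifact of truncating the infinite matrices.
comm = x @ p - p @ x
print(np.allclose(comm[:N-1, :N-1], 1j * np.eye(N - 1)))  # True
```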

 Quote by kith I always thought of it like this: wave mechanics is Schrödinger-picture QM in the position basis, and matrix mechanics is Heisenberg-picture QM in the energy basis. […]
In an SCF calculation I'm using one-electron eigenfunctions of the Fock operator, so in that sense I'm doing matrix mechanics? Is this what you're saying? Once those are found, I take the coefficients and use them to construct the orbitals in 3D space; but since I did the calculation in the energy basis, it was matrix mechanics all along? Am I right?

 Quote by Einstein Mcfly In an SCF calculation I'm using one-electron eigenfunctions of the Fock operator, so in that sense I'm doing matrix mechanics? […]
No, I didn't say that every calculation in the energy basis is matrix mechanics. I don't know much about quantum chemistry and have never heard of SCF. But if you start with an equation for states/vectors/eigenfunctions, you are working in the Schrödinger picture. If you work in the position or momentum basis, you are also doing wave mechanics.

(I wrote another post just before yours. Maybe the Wikipedia article about matrix mechanics also helps.)
 OK, there's one detail that I have ignored that may be what you were looking for: wave mechanics failed to comply with the relativistic energy. At the time, Dirac solved this by introducing matrices that solve the wave equation when inserted in place of just the single wave function; for details I recommend: http://en.wikipedia.org/wiki/Dirac_equation The idea is that in the new tensorial equation the wave function must be a tensor itself, thereby introducing matrix mechanics into wave physics.

 Quote by ardie OK, there's one detail that I have ignored that may be what you were looking for: wave mechanics failed to comply with the relativistic energy. […]
No, this is not my question. My question is: starting from first principles, looking to calculate the total energy of an atom or molecule, how would I accomplish this using "matrix mechanics", and how would I accomplish this using "wave mechanics"? I know I must get the same answer regardless, but they are distinct techniques. I know how I actually DO this in calculations, but I don't know to which method what I do belongs, or whether it's some mixture of both.