What makes matrix mechanics matrix mechanics?

In summary, the conversation discusses the differences between Heisenberg's and Schrödinger's formulations of quantum mechanics. Heisenberg's formulation deals with matrix representations of operators, while Schrödinger's involves differential equations. The main question is whether using matrix algebra to solve a time-independent problem in quantum chemistry counts as matrix mechanics or is simply using matrices within Schrödinger wave mechanics. It is clarified that all of quantum chemistry is based on the Schrödinger equation, but the confusion lies in whether the procedure is genuinely different between the two formulations. One participant claims the Heisenberg picture is the one realized in experiments and that the Schrödinger picture is rarely used, a claim other posters dispute.
  • #1
Einstein Mcfly
Hello all.

I haven't worked hard enough yet (reading the original papers etc) to really get at the differences here, but I thought I'd ask the local experts if there was more to it than I already know. From what I've gathered so far, Heisenberg's formulation dealt only with matrix representations of operators expressed in some basis and derived observables from them (I assume by diagonalizing them). There were no "orbitals" in the sense of functions defined in space with particular "shapes", because all he cared about was finding a way to get the spectra correct. If I'm not right up to this point, please correct me...

So, now when I'm, say, doing quantum chemistry and I have the Fock operator or some such thing and I'm diagonalizing it to get orbitals and energies and doing the rinse and repeat needed to converge, am I doing "matrix mechanics"? Is phrasing the problem in terms of matrices rather than a differential equation all it takes, or am I just using the algebra of matrices to do Schrodinger wave mechanics? I know that they're necessarily equivalent, but it's the differences in perspective that I'm interested in.

Thanks for any helpful comments.
 
  • #2
In the Heisenberg picture, the states are constant in time and the evolution is carried by the operators, thereby allowing you to put the equations in tensorial (matrix) form; as the gradient of a state tensor is always of second rank, this simplifies the calculations provided you know the state tensor and the initial conditions.
In the Schrödinger picture, one assumes the operators of the system are constant in time, for example a bound particle that may not escape in the course of a measurement. So long as the Hamiltonian remains the same, the states (the solutions) will remain the same. But if the Hamiltonian is time-dependent, the states will also change accordingly. Now you may combine these two tensors to form the tensorial (matrix) object that you need to solve; this is well-known theory in linear algebra, though it's slightly less intuitive and uncommon to use this method.
So everyone has stuck to the way Heisenberg solved things because it was already very common among mathematicians to work in vector algebra.
 
  • #3
ardie said:
So everyone has stuck to the way Heisenberg solved things because it was already very common among mathematicians to work in vector algebra.

What? The Heisenberg picture is barely mentioned in many standard QM books. In fact, when Heisenberg came up with his theory, no one in the physics community knew what matrices were (not even him). The Schroedinger picture, on the other hand, involved tons of differential equations, which everyone knew well. That's why the Schroedinger picture stuck.
 
  • #4
ardie said:
In the Heisenberg picture, the states are constant in time and the evolution is carried by the operators, thereby allowing you to put the equations in tensorial (matrix) form; as the gradient of a state tensor is always of second rank, this simplifies the calculations provided you know the state tensor and the initial conditions.
In the Schrödinger picture, one assumes the operators of the system are constant in time, for example a bound particle that may not escape in the course of a measurement. So long as the Hamiltonian remains the same, the states (the solutions) will remain the same. But if the Hamiltonian is time-dependent, the states will also change accordingly. Now you may combine these two tensors to form the tensorial (matrix) object that you need to solve; this is well-known theory in linear algebra, though it's slightly less intuitive and uncommon to use this method.
So everyone has stuck to the way Heisenberg solved things because it was already very common among mathematicians to work in vector algebra.

I learned these pictures in class, but it was always just a matter of whether you folded the time evolution operators into the states (SP) or the operators (HP). My question was whether or not solving the problem using matrix algebra for a time-independent case (variationally minimizing the energy by diagonalizing the Fock matrix in a self-consistent procedure, for example) is what is meant by Matrix Mechanics, or am I just using matrices to do Schrodinger wave mechanics?

Is this a proper question? Is it clear what I'm getting at? There have been a lot of reads on this thread but only a few replies...
 
  • #5
I don't see why you'd be using matrix mechanics. All of quantum chemistry is based on the Schroedinger equation, and in fact HF theory is about finding an approximate solution to the (time-independent) Schroedinger equation for a many-electron atom (or molecule). Just because you're writing everything down in a finite basis set so you can solve it on a computer doesn't mean you're doing matrix mechanics (I think); everything is still derived from the SE.
 
  • #6
Amok said:
I don't see why you'd be using matrix mechanics. All of quantum chemistry is based on the Schroedinger equation, and in fact HF theory is about finding an approximate solution to the (time-independent) Schroedinger equation for a many-electron atom (or molecule). Just because you're writing everything down in a finite basis set so you can solve it on a computer doesn't mean you're doing matrix mechanics (I think); everything is still derived from the SE.

I think you're seeing where my confusion comes in. Obviously solving these problems using matrix mechanics is 100% equivalent to solving the SE, so what is the real difference IN THE PROCEDURE that makes it one or the other?
 
  • #7
The Heisenberg picture is the one that is physically realisable in real experiments. In reality the particle is moving through some potential which leads to changes in its basis, and we can do experiments to find out what that potential looks like. We can make some guesses about what the state looks like at the beginning and thereby manage to solve the problems at hand for the final states. The Schrödinger picture is almost never used, unless you can manage to lock the system into a standstill (time-independent cases).
 
  • #8
ardie said:
The Schrödinger picture is almost never used

Almost never used by whom?
 
  • #9
Einstein: Seems like you have the background to read Wikipedia's Heisenberg MATRIX MECHANICS and then SCHRODINGER EQUATION articles and maybe answer your question.

For example, the former has this description:

...Heisenberg, after a collaboration with Kramers,[4] began to understand that the transition probabilities were not quite classical quantities, because the only frequencies that appear in the Fourier series should be the ones that are observed in quantum jumps, not the fictional ones that come from Fourier-analyzing sharp classical orbits. He replaced the classical Fourier series with a matrix of coefficients, a fuzzed-out quantum analog of the Fourier series. Classically, the Fourier coefficients give the intensity of the emitted radiation, so in quantum mechanics the magnitude of the matrix elements of the position operator were the intensity of radiation in the bright-line spectrum.
If this example is helpful, check out the articles. [I'd answer your question if I could!]
 
  • #10
ardie said:
The Heisenberg picture is the one that is physically realisable in real experiments. In reality the particle is moving through some potential which leads to changes in its basis, and we can do experiments to find out what that potential looks like. We can make some guesses about what the state looks like at the beginning and thereby manage to solve the problems at hand for the final states. The Schrödinger picture is almost never used, unless you can manage to lock the system into a standstill (time-independent cases).

The Schrodinger picture and the Heisenberg picture are entirely equivalent...
 
  • #11
Naty1 said:
Einstein: Seems like you have the background to read Wikipedia's Heisenberg MATRIX MECHANICS and then SCHRODINGER EQUATION articles and maybe answer your question.

For example, the former has this description:




If this example is helpful, check out the articles. [I'd answer your question if I could!]

Yes indeed, I already read through those. They didn't really clarify anything vis a vis what is DONE functionally in a matrix mechanics calculation.

From the responses in this thread, it seems like I'm not the only one who's confused about this.
 
  • #12
Amok said:
Almost never used by whom?
I wouldn't say never, but practicing chemists definitely prefer matrix mechanics.
 
  • #13
Jorriss said:
I wouldn't say never, but practicing chemists definitely prefer matrix mechanics.

It's great that you're responding so confidently, because I'm a practicing quantum chemist and haven't known whether what I'm doing is MM or SWM. What I do in QC calculations is choose some basis functions, construct linear combinations of them, store the coefficients in a matrix, diagonalize that matrix to get molecular orbitals, and rinse and repeat until the energy reaches a minimum (hopefully) and stops changing.

You're saying that what I'm doing here is matrix mechanics. Now here's the big question: If I wanted to do the same thing using the Schrodinger wave picture, what would I do instead?
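For concreteness, here is a toy version of that loop in Python (a sketch, not a real Hartree-Fock code): the 2×2 "Fock" build below is entirely made up, a fixed core matrix plus a density-dependent term standing in for the two-electron part, and the basis is assumed orthonormal. It only illustrates the diagonalize / rebuild density / repeat structure.

[code=python]
import numpy as np

# Hypothetical 2x2 "core" matrix; the numbers are arbitrary.
H_core = np.array([[-2.0, -0.5],
                   [-0.5, -1.0]])

def build_fock(D):
    # Stand-in for the real Fock build: core part plus a term that
    # depends on the current density matrix D (not real HF integrals).
    return H_core + 0.3 * D

n_occ = 1                          # occupy the lowest orbital
D = np.zeros_like(H_core)          # initial density guess
E_old = np.inf

for iteration in range(50):
    F = build_fock(D)
    eps, C = np.linalg.eigh(F)     # diagonalize: orbital energies and coefficients
    C_occ = C[:, :n_occ]           # coefficients of the occupied orbital(s)
    D = C_occ @ C_occ.T            # rebuild the density matrix
    E = eps[:n_occ].sum()          # crude "energy", used only as a convergence check
    if abs(E - E_old) < 1e-10:
        print(f"converged after {iteration + 1} iterations, E = {E:.8f}")
        break
    E_old = E
[/code]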
 
  • #14
ardie said:
The Heisenberg picture is the one that is physically realisable in real experiments. In reality the particle is moving through some potential which leads to changes in its basis, and we can do experiments to find out what that potential looks like. We can make some guesses about what the state looks like at the beginning and thereby manage to solve the problems at hand for the final states. The Schrödinger picture is almost never used, unless you can manage to lock the system into a standstill (time-independent cases).

This certainly isn't true; the two are physically equivalent. The following is from page 3 of Zettili's textbook:

The second formulation, called wave mechanics, was due to Schrödinger (1926); it is a generalization of the de Broglie postulate. This method, more intuitive than matrix mechanics, describes the dynamics of microscopic matter by means of a wave equation, called the Schrödinger equation; instead of the matrix eigenvalue problem of Heisenberg, Schrödinger obtained a differential equation. The solutions of this equation yield the energy spectrum and the wave function of the system under consideration. In 1927 Max Born proposed his probabilistic interpretation of wave mechanics: he took the square moduli of the wave functions that are solutions to the Schrödinger equation and he interpreted them as probability densities.

These two ostensibly different formulations—Schrödinger’s wave formulation and Heisenberg’s matrix approach—were shown to be equivalent. Dirac then suggested a more general formulation of quantum mechanics which deals with abstract objects such as kets (state vectors), bras, and operators. The representation of Dirac’s formalism in a continuous basis—the position or momentum representations—gives back Schrödinger’s wave mechanics. As for Heisenberg’s matrix formulation, it can be obtained by representing Dirac’s formalism in a discrete basis. In this context, the approaches of Schrödinger and Heisenberg represent, respectively, the wave formulation and the matrix formulation of the general theory of quantum mechanics.
 
  • #15
Diagonalising a matrix is equivalent to rearranging its eigenbasis such that they are all linearly independent, and thereby equivalent to solving a set of differential equations for their coefficients to find unique solutions for each one.
For more information you may refer to a mathematical methods for physics and engineering text; I recommend the Cambridge version, which has a whole section on eigenfunction methods for differential equations. A differential equation of rank n can always be solved provided n coefficients are given; that should be a clue. In your methods you first choose a guess eigenbasis and insert it in the matrix; if the matrix can be diagonalised then you have found the unique solution, else you try a different combination. So you skip over having to solve each differential equation to find the exact coefficient of the eigenbasis, because you simply do not have all the information about the system, correct?
 
Last edited:
  • #16
From the responses in this thread, it seems like I'm not the only one who's confused about this

I've been out of school WAAAAAAAY too long to be able to dive into such details...so they remain 'above my paygrade'...

But I like your question and have wondered about the equivalency of the formulations beyond the general for some time. The extraction of physicality from mathematical formulations provides a lifetime of interesting issues.
 
  • #17
Einstein Mcfly said:
I think you're seeing where my confusion comes in. Obviously solving these problems using matrix mechanics is 100% equivalent to solving the SE, so what is the real difference IN THE PROCEDURE that makes it one or the other?
Maybe this helps a bit. If I have a system with initial state vector |ψ(0)> and want to calculate expectation values at another time t, I can do this in two different ways:
1) Apply the time evolution operator to the state and get a new vector |ψ(t)> = U(t)|ψ(0)>. From there I can calculate expectation values by <ψ(t)|A|ψ(t)>.
2) Apply the time evolution operator to the observable/matrix and get a new observable/matrix A(t) = U†(t)AU(t). From there I can calculate expectation values by <ψ(0)|A(t)|ψ(0)>.

In terms of equations, (1) corresponds to solving the Schrödinger equation, which is an equation for state vectors. (2) corresponds to solving the Heisenberg equation, which is an equation for observables/matrices.
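A minimal numerical check of (1) and (2), assuming a made-up two-level Hamiltonian and observable:

[code=python]
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.3],
              [0.3, 2.0]])              # some Hermitian Hamiltonian
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])              # some Hermitian observable
psi0 = np.array([1.0, 0.0], dtype=complex)
t = 0.7

U = expm(-1j * H * t)                   # time evolution operator U(t), with hbar = 1

# (1) Schroedinger picture: evolve the state, keep A fixed
psi_t = U @ psi0
ev_s = np.vdot(psi_t, A @ psi_t).real

# (2) Heisenberg picture: evolve the observable, keep the state fixed
A_t = U.conj().T @ A @ U
ev_h = np.vdot(psi0, A_t @ psi0).real

print(ev_s, ev_h)                       # identical up to round-off
[/code]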
 
  • #18
The important point is that all pictures of time evolution are unitarily equivalent by definition. This ensures that the physical meaning of the quantum theoretical formalism is independent of the choice of picture. Neither state-representing vectors nor eigenvectors of operators representing observables are measurable quantities, but only the probabilities
[tex]P_{\psi}(a)=|\langle a| \psi \rangle|^2.[/tex]
The time dependence of the mathematical quantities [itex]|\psi \rangle[/itex], [itex]\hat{A}[/itex] (and, following from this, [itex]| a \rangle[/itex]) can be chosen quite arbitrarily. Usually the problem you want to solve determines which choice is best. For the final result it doesn't matter at all.

The wave function in the "a basis", [itex]\psi(a,t)=\langle a|\psi \rangle[/itex] is, by the way, a picture independent quantity.
 
  • #19
The Heisenberg picture and the Schrödinger picture have very little to do with the difference between matrix mechanics and wave mechanics. (Edit: "Wave mechanics" is what some people call the quantum theory of a single spin-0 particle influenced by a potential, i.e. the theory that's taught in introductory classes. I think it's essentially the same as Schrödinger's theory).

My understanding* is that Schrödinger was working with the semi-inner product space of square-integrable functions, while Heisenberg was working with an inner product space of "column vectors with infinitely many rows". (Yes, this seems rather ill-defined, but Heisenberg wasn't trying to do rigorous mathematics).

*) I could be wrong, since I haven't read any of the original sources. I'm really just guessing based on what I know about Hilbert spaces and the stuff that's being taught in introductory QM classes.

If you define two members of Schrödinger's inner product space to be equivalent if they differ only on a set of measure zero, and then consider the set of equivalence classes, you can easily give it the structure of an inner product space. This inner product space turns out to be a separable Hilbert space.

The rows of Heisenberg's "∞×1 matrices" can be thought of as the components, in some specific orthonormal basis, of a vector in a separable Hilbert space.

This is the sense in which von Neumann's Hilbert space approach "unifies" the two original approaches by Heisenberg and Schrödinger.

The two "pictures" on the other hand are only different opinions about whether the time dependent factors in an expression like ##\langle\alpha|e^{iHt}A e^{-iHt}|\alpha\rangle## should be considered part of the state vectors or part of the operators, i.e. should we say that ##|\alpha\rangle## is the state and ##e^{iHt}A e^{-iHt}## is the observable (Heisenberg picture) or should we say that ##e^{-iHt}|\alpha\rangle## is the state and ##A## the observable (Schrödinger)?
 
Last edited:
  • #20
ardie said:
Diagonalising a matrix is equivalent to rearranging its eigenbasis such that they are all linearly independent, and thereby equivalent to solving a set of differential equations for their coefficients to find unique solutions for each one.
For more information you may refer to a mathematical methods for physics and engineering text; I recommend the Cambridge version, which has a whole section on eigenfunction methods for differential equations. A differential equation of rank n can always be solved provided n coefficients are given; that should be a clue. In your methods you first choose a guess eigenbasis and insert it in the matrix; if the matrix can be diagonalised then you have found the unique solution, else you try a different combination. So you skip over having to solve each differential equation to find the exact coefficient of the eigenbasis, because you simply do not have all the information about the system, correct?

This is of course true. This is the method of solving systems of differential equations that I learned in my "differential equations with linear algebra" class my sophomore year of college. This (the use of matrix algebra to solve differential equations where each eigenvector of the Fock operator is an "orbital" with a particular energy) I had always thought was just the simplest way of doing "Schrodinger wave mechanics", because I was THINKING of the orbitals in the way that Schrodinger/Born specified. My question was, is this actually "Matrix Mechanics" because I'm using matrix algebra and, if not, how WOULD I solve these problems (or any other) with matrix mechanics?
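As a rough illustration of the contrast being asked about, the sketch below solves one toy problem both ways (assumed units ħ = m = ω = 1, with a made-up quartic term so neither route is trivial): once by discretizing the differential operator on a position grid, which is wave mechanics in spirit, and once by writing x and p as matrices in the oscillator energy basis and diagonalizing, which is closer to the original matrix-mechanics procedure. The low-lying eigenvalues agree, as they must.

[code=python]
import numpy as np

lam = 0.1   # hypothetical quartic coupling: H = p^2/2 + x^2/2 + lam*x^4

# "Wave mechanics" route: finite-difference the differential operator on a grid.
N_grid = 1200
x = np.linspace(-8.0, 8.0, N_grid)
h = x[1] - x[0]
main = 1.0 / h**2 + 0.5 * x**2 + lam * x**4       # kinetic diagonal + potential
off = np.full(N_grid - 1, -0.5 / h**2)            # kinetic off-diagonal
H_grid = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E_wave = np.linalg.eigvalsh(H_grid)[:4]

# "Matrix mechanics" route: x and p as matrices in the oscillator energy basis.
N_basis = 60
a = np.diag(np.sqrt(np.arange(1, N_basis)), 1)    # lowering operator
x_op = (a + a.T) / np.sqrt(2.0)
p_op = 1j * (a.T - a) / np.sqrt(2.0)
x2 = x_op @ x_op
H_mat = (p_op @ p_op) / 2 + x2 / 2 + lam * (x2 @ x2)
E_matrix = np.linalg.eigvalsh(H_mat)[:4]

print(E_wave)
print(E_matrix)   # the two spectra agree closely for the low-lying states
[/code]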
 
  • #21
Naty1 said:
I've been out of school WAAAAAAAY too long to be able to dive into such details...so they remain 'above my paygrade'...

But I like your question and have wondered about the equivalency of the formulations beyond the general for some time. The extraction of physicality from mathematical formulations provides a lifetime of interesting issues.

Cheers:smile: As with so many times back in school or in research group meetings today, this is one of those questions that I was reticent to ask because I thought I'd look stupid, but I'm very glad to find out that my lack of understanding (of this topic, at least) doesn't make me so.
 
  • #22
kith said:
Maybe this helps a bit. If I have a system with initial state vector |ψ(0)> and want to calculate expectation values at another time t, I can do this in two different ways:
1) Apply the time evolution operator to the state and get a new vector |ψ(t)> = U(t)|ψ(0)>. From there I can calculate expectation values by <ψ(t)|A|ψ(t)>.
2) Apply the time evolution operator to the observable/matrix and get a new observable/matrix A(t) = U†(t)AU(t). From there I can calculate expectation values by <ψ(0)|A(t)|ψ(0)>.

In terms of equations, (1) corresponds to solving the Schrödinger equation, which is an equation for state vectors. (2) corresponds to solving the Heisenberg equation, which is an equation for observables/matrices.

ardie said:
In the Heisenberg picture, the states are constant in time and the evolution is carried by the operators, thereby allowing you to put the equations in tensorial (matrix) form; as the gradient of a state tensor is always of second rank, this simplifies the calculations provided you know the state tensor and the initial conditions.
In the Schrödinger picture, one assumes the operators of the system are constant in time, for example a bound particle that may not escape in the course of a measurement. So long as the Hamiltonian remains the same, the states (the solutions) will remain the same. But if the Hamiltonian is time-dependent, the states will also change accordingly. Now you may combine these two tensors to form the tensorial (matrix) object that you need to solve; this is well-known theory in linear algebra, though it's slightly less intuitive and uncommon to use this method.
So everyone has stuck to the way Heisenberg solved things because it was already very common among mathematicians to work in vector algebra.

This question is NOT about the differences in time evolution of a quantum system as represented by the Heisenberg PICTURE or the Schrodinger PICTURE; it's about the two pioneering approaches to quantum mechanics, Schrodinger's "wave mechanics" dealing with differential equations versus Heisenberg's "matrix mechanics" dealing with the algebra of matrices. My confusion was that I had always thought about things in terms of wave functions and orbitals (or the Kohn-Sham equivalents), but was using matrix algebra to solve them. I now want to know which method I'm actually using and how I would solve these problems if I were using the other (necessarily equivalent) method.
 
  • #23
Fredrik said:
The Heisenberg picture and the Schrödinger picture have very little to do with the difference between matrix mechanics and wave mechanics. (Edit: "Wave mechanics" is what some people call the quantum theory of a single spin-0 particle influenced by a potential, i.e. the theory that's taught in introductory classes. I think it's essentially the same as Schrödinger's theory).

My understanding* is that Schrödinger was working with the semi-inner product space of square-integrable functions, while Heisenberg was working with an inner product space of "column vectors with infinitely many rows". (Yes, this seems rather ill-defined, but Heisenberg wasn't trying to do rigorous mathematics).

*) I could be wrong, since I haven't read any of the original sources. I'm really just guessing based on what I know about Hilbert spaces and the stuff that's being taught in introductory QM classes.

If you define two members of Schrödinger's inner product space to be equivalent if they differ only on a set of measure zero, and then consider the set of equivalence classes, you can easily give it the structure of an inner product space. This inner product space turns out to be a separable Hilbert space.

The rows of Heisenberg's "∞×1 matrices" can be thought of as the components, in some specific orthonormal basis, of a vector in a separable Hilbert space.

This is the sense in which von Neumann's Hilbert space approach "unifies" the two original approaches by Heisenberg and Schrödinger.

The two "pictures" on the other hand are only different opinions about whether the time dependent factors in an expression like ##\langle\alpha|e^{iHt}A e^{-iHt}|\alpha\rangle## should be considered part of the state vectors or part of the operators, i.e. should we say that ##|\alpha\rangle## is the state and ##e^{iHt}A e^{-iHt}## is the observable (Heisenberg picture) or should we say that ##e^{-iHt}|\alpha\rangle## is the state and ##A## the observable (Schrödinger)?

So the two spaces of Schrodinger and Heisenberg are isomorphic and isometric and are identical in the details of Hilbert space, but what is the difference FUNCTIONALLY for solving problems (such as self-consistent field problems) between the two?
 
  • #24
The mathematician, of course, tends to put hard problems in terms of problems that have been solved before. There is a combination of theories behind why you can solve the wave equations in this manner, and going back to the beginning of what I was trying to say, it really boils down to the information you are able to supply or extract from your system experimentally.
If you really want to know it for sure, I recommend you take some lessons in partial differential equations, familiarise yourself with the Sturm-Liouville equations and solutions, and then work with Green's functions. Here is the relevant page if you are in fact interested:
http://en.wikipedia.org/wiki/Green's_function
It so happens that Green's functions solve any linear differential operator, and it so happens that matrices are linear mathematical objects (not by coincidence!).
This is matrix mechanics, but only if your Hamiltonian has a maximum of second-order derivatives; any higher and you would need tensor algebra to solve the equations of motion. It has not a lot to do with matrices themselves, but matrices so happen to take the shape of the tensors that we use, as the Hamiltonian is of second order in almost every case. The matrix algebra that we use simplifies things a lot, of course.
You will not be able to get far with solving just linear equations, but systems of linear equations, and to do so you need both the things you have learned in the Schrödinger and the Heisenberg pictures.
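For what it's worth, one point in the post above can be made concrete: a linear differential operator, once discretized, is literally a matrix, and its inverse plays the role of the Green's function. A small sketch (with an arbitrarily chosen grid size) for -d²/dx² on (0, 1) with zero boundary conditions, whose exact Green's function is G(x, x') = min(x, x')(1 - max(x, x')):

[code=python]
import numpy as np

# Discretize L = -d^2/dx^2 on (0, 1) with u(0) = u(1) = 0.
N = 200
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)                  # interior grid points

L = (np.diag(np.full(N, 2.0))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2      # second-difference matrix

G_numeric = np.linalg.inv(L) / h                # (L^-1)_ij ~ h * G(x_i, x_j)

# Exact Green's function of -u'' = f with these boundary conditions
X, Xp = np.meshgrid(x, x, indexing="ij")
G_exact = np.minimum(X, Xp) * (1.0 - np.maximum(X, Xp))

print(np.max(np.abs(G_numeric - G_exact)))      # tiny: the discrete inverse
                                                # reproduces the kernel on the grid
[/code]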
 
  • #25
ardie said:
The mathematician, of course, tends to put hard problems in terms of problems that have been solved before. There is a combination of theories behind why you can solve the wave equations in this manner, and going back to the beginning of what I was trying to say, it really boils down to the information you are able to supply or extract from your system experimentally.
If you really want to know it for sure, I recommend you take some lessons in partial differential equations, familiarise yourself with the Sturm-Liouville equations and solutions, and then work with Green's functions. Here is the relevant page if you are in fact interested:
http://en.wikipedia.org/wiki/Green's_function
It so happens that Green's functions solve any linear differential operator, and it so happens that matrices are linear mathematical objects (not by coincidence!).
This is matrix mechanics, but only if your Hamiltonian has a maximum of second-order derivatives; any higher and you would need tensor algebra to solve the equations of motion. It has not a lot to do with matrices themselves, but matrices so happen to take the shape of the tensors that we use, as the Hamiltonian is of second order in almost every case. The matrix algebra that we use simplifies things a lot, of course.
You will not be able to get far with solving just linear equations, but systems of linear equations, and to do so you need both the things you have learned in the Schrödinger and the Heisenberg pictures.

I can't tell if you're understanding what I'm asking about or not. I'm NOT saying "I don't know how to solve problem X, can someone please tell me what I need to learn to do so". I have studied differential equations and the Sturm-Liouville equation and the properties of its solutions under different initial/boundary conditions. I have gone through the exercises of solving Schrodinger's equation for particular potentials and all of that, and have also found the orbitals that minimize the energy for a system using matrix algebra (SCF problems and the like). My question is simply, in terms of the different pioneering approaches to quantum mechanics, what makes one way of solving, say, an SCF problem "matrix mechanics" as opposed to "wave mechanics"?

You've made a couple of allusions to how what can be taken from experiments determines which method you use; could you please focus on that aspect? What situations (that is, given what experimental values for what properties) would lead you to use "matrix mechanics" instead of "wave mechanics" and vice versa?
 
  • #26
I really, really think it's obvious. For example, for the Stern-Gerlach experiment, you know the state of the initial particle (spin) since you prepare them in a particular way, and you know the state will change because they will go through the potential; therefore the state is time-dependent and is given by solving the relevant matrix of equations, which takes into account two Hamiltonians. In the case of the hydrogen molecules, you make the gross assumption that the system is in equilibrium, therefore the states are time-independent, and thereby you can solve using linear differential equations.
In most cases you are sending particles through potentials and you already know their states are going to change. You work out what the states are going to be by prior experiments, and then you work out what the potential looks like by prior experiments, and then you put your particle through and solve the matrix of equations.
 
  • #27
Fredrik said:
The Heisenberg picture and the Schrödinger picture have very little to do with the difference between matrix mechanics and wave mechanics.
I always thought of it like this: Wave mechanics is Schrödinger picture QM in the position basis and matrix mechanics is Heisenberg picture QM in the energy basis. I haven't read Heisenberg's paper in detail, but IIRC his first big insight there is that the classical amplitude of an oscillating electron, x(t), has to be replaced by the quantities x(n, n-a) exp(iω(n, n-a)t), which are the matrix elements of the position observable in the Heisenberg picture in the energy basis. Or am I missing your point?
 
Last edited:
  • #28
Einstein Mcfly said:
So the two spaces of Schrodinger and Heisenberg are isomorphic and isometric and are identical in the details of Hilbert space, but what is the difference FUNCTIONALLY for solving problems (such as self-consistent field problems) between the two?
I still think that matrix mechanics is essentially the same as the Heisenberg picture. So the difference would be that you are solving Heisenberg's equation of motion instead of Schrödinger's. In their first paper, Born, Heisenberg and Jordan give a detailed solution of the harmonic oscillator using matrix mechanics.
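For anyone curious what that looks like in modern notation, here is a small sketch of the matrix-mechanics harmonic oscillator (assumed units ħ = m = ω = 1): build x and p as matrices in the energy basis from the ladder operator, check the canonical commutation relation, and confirm that H = p²/2 + x²/2 comes out diagonal with entries n + 1/2. The matrices are truncated, so the last diagonal entries are off, as expected for a finite cut of infinite matrices.

[code=python]
import numpy as np

N = 8                                    # truncation of the infinite matrices
a = np.diag(np.sqrt(np.arange(1, N)), 1) # lowering operator in the energy basis

x = (a + a.T) / np.sqrt(2.0)             # position matrix
p = 1j * (a.T - a) / np.sqrt(2.0)        # momentum matrix

comm = x @ p - p @ x                     # should be i*identity (last entry spoiled)
H = (p @ p) / 2 + (x @ x) / 2            # diagonal with entries n + 1/2 (last one off)

print(np.round(np.diag(comm), 3))
print(np.round(np.diag(H).real, 3))
[/code]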
 
  • #29
kith said:
I always thought of it like this: Wave mechanics is Schrödinger picture QM in the position basis and matrix mechanics is Heisenberg picture QM in the energy basis. I haven't read Heisenberg's paper in detail, but IIRC his first big insight there is that the classical amplitude of an oscillating electron, x(t), has to be replaced by the quantities x(n, n-a) exp(iω(n, n-a)t), which are the matrix elements of the position observable in the Heisenberg picture in the energy basis. Or am I missing your point?

In an SCF calculation, I'm using one-electron eigenfunctions of the Fock operator, so in that sense I'm doing matrix mechanics? Is this what you're saying? Once those are done, I take the coefficients and use them to construct the orbitals in 3D space, but since I did the calculation in the energy basis, it was matrix mechanics all along? Am I right?
 
  • #30
Einstein Mcfly said:
In an SCF calculation, I'm using one-electron eigenfunctions of the Fock operator, so in that sense I'm doing matrix mechanics? Is this what you're saying? Once those are done, I take the coefficients and use them to construct the orbitals in 3D space, but since I did the calculation in the energy basis, it was matrix mechanics all along? Am I right?
No, I didn't say that every calculation in the energy basis is matrix mechanics. I don't know much about quantum chemistry and have never heard of SCF. But if you start with an equation for states/vectors/eigenfunctions, you are working in the Schrödinger picture. If you work in the position or momentum basis, you are also doing wave mechanics.

(I have written another post just before your post. Maybe the Wikipedia article about matrix mechanics also helps.)
 
  • #31
OK, there's one detail that I have ignored that may have been what you were looking for. Wave mechanics failed to comply with the relativistic energy. At the time, Dirac solved this by introducing matrices that solve the wave equation if inserted, instead of just the single wave function. For details I recommend: http://en.wikipedia.org/wiki/Dirac_equation
The idea is that in the new tensorial equation, the wave function must be a tensor itself, thereby introducing matrix mechanics to wave physics.
 
  • #32
ardie said:
OK, there's one detail that I have ignored that may have been what you were looking for. Wave mechanics failed to comply with the relativistic energy. At the time, Dirac solved this by introducing matrices that solve the wave equation if inserted, instead of just the single wave function. For details I recommend: http://en.wikipedia.org/wiki/Dirac_equation
The idea is that in the new tensorial equation, the wave function must be a tensor itself, thereby introducing matrix mechanics to wave physics.
No, this is not my question. My question is: starting from first principles and looking to calculate the total energy of an atom or molecule, how would I accomplish this using "matrix mechanics" and how would I accomplish this using "wave mechanics"? I know I must get the same answer regardless, but they are distinct techniques. I know how I DO do this in calculations, but I don't know to which method what I do belongs, or whether it's some mixture of both.
 

1. What is matrix mechanics?

Matrix mechanics is a mathematical formalism used in quantum mechanics to describe the behavior of particles at the atomic and subatomic level. It was developed by Werner Heisenberg, Max Born, and Pascual Jordan in 1925 and is one of the two main formulations of quantum mechanics, along with wave mechanics.

2. What makes matrix mechanics different from other formulations of quantum mechanics?

Matrix mechanics differs from other formulations of quantum mechanics, such as wave mechanics, in that it does not use the concept of a wave function to describe the behavior of particles. Instead, it uses matrices to represent physical quantities, such as position and momentum, and their corresponding operators.

3. How does matrix mechanics work?

In matrix mechanics, the state of a particle is described by a vector in a mathematical space called the Hilbert space. Physical quantities, such as position and momentum, are represented by matrices and their corresponding operators act on the state vector to give the possible outcomes of a measurement. The probabilities of these outcomes are then calculated using the principles of quantum mechanics.
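As a generic numerical illustration of this answer (the matrix and state below are made up and not tied to any particular physical system):

[code=python]
import numpy as np

# A Hermitian matrix standing in for some observable, and a normalized state vector.
A = np.array([[1.0, 0.5],
              [0.5, -1.0]])
psi = np.array([0.6, 0.8], dtype=complex)

outcomes, eigvecs = np.linalg.eigh(A)        # possible measurement outcomes and eigenvectors

# Born rule: P(a) = |<a|psi>|^2 for each eigenvector |a>
probs = np.abs(eigvecs.conj().T @ psi) ** 2

for a, P in zip(outcomes, probs):
    print(f"outcome {a:+.4f} with probability {P:.4f}")
print("sum of probabilities:", probs.sum())  # 1.0, since psi is normalized
[/code]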

4. What are the advantages of using matrix mechanics?

Matrix mechanics has several advantages over other formulations of quantum mechanics. Because it works directly with operators represented in a discrete basis, it is well suited to systems with a finite or countable set of states (such as spin) and lends itself naturally to numerical linear algebra. It also allows physical quantities to be calculated without an explicit wave function, which can be difficult to determine for some systems.

5. What are the limitations of matrix mechanics?

While matrix mechanics is a powerful tool for understanding the behavior of particles at the quantum level, it has some limitations. It is not as intuitive as other formulations, such as wave mechanics, and can be difficult to visualize, since it does not provide a picture of the state as a function in space. As such, it is often used in conjunction with the wave formulation, with the choice depending on the problem at hand.
