Matrix Mechanics and non-linear least squares analogy?

In summary, the conversation discusses the use of non-linear least squares curve fitting to fit a Gaussian curve to a set of data, where the solution is found using well-defined methods of linear algebra. The conversation then shifts to how Matrix Mechanics is used in quantum physics, starting from the analogy of fitting a model matrix to observed results. It is explained that matrix mechanics represents operators as matrices of matrix elements in a basis, with the state of the system represented as a column vector of coefficients. The conversation ends with a discussion of the eigenstates of the Hamiltonian and their relationship to the Schrodinger Equation.
  • #1
mike1000
I have some experience with non-linear least squares curve fitting. For instance, if I want to fit a Gaussian curve to a set of data, I would use a non-linear least squares technique. A "model" matrix is implemented and combined with the observed data. The solution is found by applying well defined methods of Linear Algebra.

Here is a link to a non-linear least squares method I have implemented many times in the past.
http://mathworld.wolfram.com/NonlinearLeastSquaresFitting.html
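For concreteness, here is a minimal sketch of this kind of fit in Python; the data, parameter names, and starting guesses are illustrative assumptions, not taken from the thread:

```python
# A minimal sketch of the Gaussian fit described above (data, parameter
# names, and starting guesses are illustrative assumptions).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    """Model: amplitude a, center mu, width sigma."""
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# Synthetic "observed" data: a Gaussian plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 100)
y = gaussian(x, 2.0, 0.5, 1.2) + 0.05 * rng.normal(size=x.size)

# Non-linear least squares fit, starting from the guess p0.
popt, pcov = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0])
print("fitted (a, mu, sigma):", popt)
```

Each iteration linearizes the model around the current parameters, so the Jacobian plays the role of the "model" matrix and each update reduces to a linear least-squares solve.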

I bring this up because I am trying to understand how Matrix Mechanics is used in quantum physics. The above curve-fitting example is the closest I can come, from my own experience, to understanding what might be going on.

Does matrix mechanics depend on the observed results? Is Matrix Mechanics, as used in the quantum world, fitting the observed results to a "model" matrix of some kind (density/scatter matrix?), and the eigenvalues of the combined system (observed results + model matrix) are the solutions?

Is this a possible analogy? I am not looking to be right, I am just trying to figure out how Matrix Mechanics is actually used. Does it operate on a "wave function"? And if so, where does that wave function come from? It is my understanding that Matrix Mechanics and the Schrodinger Equation are completely different but equivalent methods, but, to my understanding, only the Schrodinger Equation makes use of a wave equation.
 
  • #2
I am not completely familiar with the history of quantum mechanics, but I don't think the analogy of "fitting" is the right one to understand the reasoning of Heisenberg. I think that Wikipedia has a pretty good description of that: https://en.wikipedia.org/wiki/Matrix_mechanics

As for the modern usage of matrices, it can be understood simply by considering a discrete complete basis set ##\{ \phi_i \}##, such as the eigenstates of the Hamiltonian,
$$
\hat{H} \phi_i = E_i \phi_i
$$
Then the state of the system can be expressed exactly as
$$
\psi = \sum_i c_i \phi_i
$$
Taking the basis set to be orthonormal, one gets the coefficients using
$$
\int \phi_i^* \psi d\tau = \sum_j c_j \int \phi_i^* \phi_j d \tau = c_i
$$
The wave function can then be represented simply by the set of coefficients ##\{ c_i \}##.

Likewise, one can calculate for any operator ##\hat{A}##
$$
A_{ij} = \int \phi_i^* \hat{A} \phi_j d \tau
$$
The values ##A_{ij}## can be taken as the elements of a matrix, and likewise ##\psi## can be expressed as a column vector of the coefficients ##\{ c_i \}##. It is then easy to show that the action of the operator on the state, ##\hat{A} \psi##, is equivalently described by the matrix-vector product ##\mathrm{A} \bar{\psi}##.
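A minimal numerical sketch of this construction (the particle-in-a-box basis, the position operator, and the state are illustrative choices, not from the post):

```python
# Matrix elements A_ij of an operator in a discrete orthonormal basis,
# and the equivalence of operator action and matrix-vector product.
import numpy as np

L, N, M = 1.0, 8, 2000                       # box length, basis size, grid size
x = np.linspace(0.0, L, M)
dx = x[1] - x[0]

def inner(f, g):
    """Crude quadrature for the inner product: integral of f(x) g(x) dx."""
    return np.sum(f * g) * dx

# Orthonormal basis phi_i(x) = sqrt(2/L) sin(i pi x / L), i = 1..N
phi = np.array([np.sqrt(2.0 / L) * np.sin(i * np.pi * x / L)
                for i in range(1, N + 1)])

# Matrix elements A_ij = <phi_i | x | phi_j> of the position operator
A = np.array([[inner(phi[i], x * phi[j]) for j in range(N)]
              for i in range(N)])

c = np.random.default_rng(1).normal(size=N)  # coefficients of some psi
psi = c @ phi                                # psi(x) = sum_i c_i phi_i(x)

# Projecting (x psi) back onto the basis agrees with the product A c
proj = np.array([inner(phi[i], x * psi) for i in range(N)])
print(np.allclose(proj, A @ c))              # True
```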
 
  • #3
When you perform a quantum experiment, do you know the eigenstates of the Hamiltonian BEFORE you do the experiment, or do the results of the experiment determine the eigenstates? I thought the eigenstates were the unknowns that are to be determined.
 
  • #4
mike1000 said:
When you perform a quantum experiment, do you know the eigenstates of the Hamiltonian BEFORE you do the experiment, or do the results of the experiment determine the eigenstates? I thought the eigenstates were the unknowns that are to be determined.
The eigenstates are given by the Hamiltonian, hence by the characteristics of the physical system (and the relevant environment). The state of the system can be known or unknown, depending on how it is prepared.

Take for instance the Stern-Gerlach experiment. The relevant eigenstates are the spin-up and spin-down states of the atom. For a given atom, its state can be known (for example, because the atoms are coming out of another Stern-Gerlach apparatus before entering the SG apparatus that we are analyzing) or unknown (thermal source of atoms). In the latter case, the density operator formalism is required.
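For the unpolarized case, a standard concrete example (added here for reference): the density operator of a completely unpolarized spin-1/2 beam is the maximally mixed state,
$$\hat{\rho} = \tfrac{1}{2} |{\uparrow}\rangle\langle{\uparrow}| + \tfrac{1}{2} |{\downarrow}\rangle\langle{\downarrow}| = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$
which predicts probability 1/2 for each output port of the analyzing SG apparatus.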
 
  • #5
How do you get the eigenstates of the Hamiltonian?

So what is the purpose of the Schrodinger Equation if you already know the eigenstates and, I suppose, the wave function?
 
  • #6
mike1000 said:
How do you get the eigenstates of the Hamiltonian?
So what is the purpose of the Schrodinger Equation if you already know the eigenstates and, I suppose, the wave function?
1) After we prepare a system in a given state (which may or may not be an eigenstate of the Hamiltonian or any other observable) it doesn't (in general) stay in that state; Schrodinger's equation allows us to calculate what state it will be in at any given time in the future. Look at the time-dependent form of the equation to see how this works.
2) We may be able to measure the energy, but until we've solved Schrodinger's equation for the system we won't know the wavefunction that corresponds to that energy. An example would be electron orbitals; we knew the energy levels of the hydrogen atom many years before we knew solutions to Schrodinger's equation that had those energy levels as eigenvalues.
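To make point 1 concrete (a standard result, added for reference): if the initial state is expanded in energy eigenstates as ##\psi(0) = \sum_i c_i \phi_i##, the time-dependent Schrodinger equation gives each coefficient a rotating phase,
$$\psi(t) = \sum_i c_i\, e^{-\mathrm{i} E_i t/\hbar}\, \phi_i,$$
so a single eigenstate only picks up an overall phase (a stationary state), while a superposition genuinely changes in time.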
 
  • #7
I do not know why it took me so long to understand this. I think what you are saying, and what the eigenfunction nature of the problem is saying, is that the system can only exist in one of the predefined states exposed by the Hamiltonian (or in a linear combination of those predefined states). While the mathematics may belie what I am about to say, I think the eigenfunction dependency of the quantum world actually makes things easier to analyze, not harder (assuming you are well versed in the mathematics, which I am currently not!). I say this because once you find the wave function for any allowed state, it should be easy to find the wave functions of all the allowed states, because they are all just scalar multiples of the same base functions (eigenfunctions)? Or do I have this messed up as well? (I have a tendency to jump to conclusions which, in retrospect, are somewhat true, but not exactly true.)
 
  • #8
mike1000 said:
the system can only exist in one of the predefined states exposed by the Hamiltonian (or in a linear combination of those predefined states)
Usually (generally/always?) the solutions of the SE (eigenfunctions of the Hamiltonian) form a basis, and any state can be expressed as a linear combination of the eigenfunctions. So that alone isn't a restriction.
 
  • #9
BvU said:
Usually (generally/always?) the solutions of the SE (eigenfunctions of the Hamiltonian) form a basis, and any state can be expressed as a linear combination of the eigenfunctions. So that alone isn't a restriction.

So how do you get the eigenfunctions of the Hamiltonian operator? At some point the results of the experiment must come into play.
 
  • #10
Hehe, you solve ##H\Psi = E\Psi##!

[edit] I realize I'm repeating DrClaude... sorry.
 
  • #11
mike1000 said:
the system can only exist in one of the predefined states exposed by the Hamiltonian.
Ah, no... That is most emphatically not how it works. The energy eigenstates (and any other eigenbasis from any other operator) provide a basis I can use to describe the state, rather as I can describe which way the wind is blowing after I've chosen a "north/south" and "east/west" basis. However, choosing that basis doesn't mean that the wind can only blow in those two directions: ##5\hat{N}-5\hat{E}## is a moderate breeze blowing towards the northwest.
mike1000 said:
it should be easy to find the wave functions of all the allowed states, because they are all just scalar multiples of the same base functions (eigenfunctions)?
Nope. The allowed states can be written as sums of scalar multiples of the different eigenfunctions, just as the allowed wind speed vectors can be written as sums of scalar multiples of the different base vectors ##\hat{N}## and ##\hat{E}##.
 
  • #12
Thank you as always. I do understand that an allowed state can be a linear combination of the basis states.

Where do the actual results of an experiment get incorporated into the solution process? Are they incorporated, in some way, into the Hamiltonian operator?
 
  • #13
I got it now.

It is an eigenvalue problem, ##Hx = \lambda x##, or ##(H - \lambda I)x = 0##.

All we have to do is find the eigenvalues of the above equation and their associated eigenvectors. The eigenvectors are the basis for the wave functions. (I do realize that finding the eigenvalues of large matrices is never an easy process, akin to finding the roots of polynomials.)
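As a concrete toy illustration of that eigenvalue problem, a minimal sketch assuming a particle in a box with ##\hbar = m = 1## (not from the linked paper):

```python
# H x = lambda x for a particle in a box: the Hamiltonian is discretized
# by finite differences and diagonalized numerically.
import numpy as np

N = 500                                 # interior grid points
L = 1.0                                 # box width
dx = L / (N + 1)

# H = -(1/2) d^2/dx^2 as a tridiagonal matrix (Dirichlet boundaries)
H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

E, V = np.linalg.eigh(H)                # eigenvalues E and eigenvectors V
print(E[:3])                            # ~ n^2 pi^2 / 2 = 4.93, 19.7, 44.4
```

The lowest eigenvalues approach the known particle-in-a-box energies ##n^2\pi^2/2## as the grid is refined.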

The key to the entire process must be in setting up the Hamiltonian operator.

Here is a link to a paper describing the unbelievably complex process of setting up the Hamiltonian for an electron in an electromagnetic field.

http://folk.uio.no/helgaker/talks/Hamiltonian.pdf
 
  • #14
mike1000 said:
Where do the actual results of an experiment get incorporated into the solution process?
The results of QM calculations can be compared to experiments, checking that all necessary elements are included in the Hamiltonian.
mike1000 said:
The key to the entire process must be in setting up the Hamiltonian operator.
Yes. This is no different than classical analytical mechanics, where one sets up the Hamiltonian (or Lagrangian) for the desired physical system.
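For example, in standard textbook form, for a single particle in a potential one writes down the classical Hamiltonian and promotes it to an operator,
$$H = \frac{\vec{p}^{\,2}}{2m} + V(\vec{x}) \quad\longrightarrow\quad \hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(\vec{x}),$$
using the position-representation substitution ##\vec{p} \to -\mathrm{i}\hbar\vec{\nabla}##.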

mike1000 said:
Here is a link to a paper describing the unbelievably complex process of setting up the Hamiltonian for an electron in an electromagnetic field.

http://folk.uio.no/helgaker/talks/Hamiltonian.pdf
That's at a higher level than the overall discussion in this thread.
 
  • #15
I still do not have this right.

As far as I know, we cannot find the eigenvalues of the matrix, H, because the Hamiltonian involves the gradient operator, ∇, which must operate on the unknown vector x. There is no way to complete the matrix H. To find the eigenvalues of H, all elements of H must be resolved into complex numbers; there can be no unresolved terms such as ∇.
 
  • #16
mike1000 said:
we cannot find the eigenvalues of the matrix, H, because the Hamiltonian involves the gradient operator, ∇,

You're mixing up the Heisenberg and Schrodinger representations. H is a matrix in the Heisenberg representation, but in that representation, there is no "gradient operator" ##\nabla##, and the matrix H does not contain it. Matrices operate on vectors, not functions, and ##\nabla## operates on functions.

In the Schrodinger representation, H is not a matrix, it's a differential operator, and contains the operator ##\nabla##; but H operates on functions, not vectors, and so the eigenvalue problem is the problem of finding functions ##\psi(x)## that obey the eigenvalue equation ##H \psi(x) = \lambda \psi(x)##. This is just solving the appropriate differential equation.
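As a worked instance of that differential eigenvalue problem, take the standard infinite square well (added here for concreteness): with ##V = 0## for ##0 < x < L## and infinitely high walls,
$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\,\psi, \qquad \psi(0) = \psi(L) = 0,$$
which is solved by
$$\psi_n(x) = \sqrt{\frac{2}{L}}\sin\frac{n\pi x}{L}, \qquad E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}, \qquad n = 1, 2, 3, \ldots$$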
 
  • #17
Thanks.
Can you please tell me what the equivalent matrix equation is in the Heisenberg representation?

(I do know that ∇ operates on functions, but I was under the impression that the elements of the vector X could be functions)
 
  • #18
mike1000 said:
Can you please tell me what the equivalent matrix equation is in the Heisenberg representation?

It's ##H \vert \psi \rangle = \lambda \vert \psi \rangle##, where ##H## is the Hamiltonian matrix and ##\vert \psi \rangle## is a state vector.

mike1000 said:
I was under the impression that the elements of the vector X could be functions

No, they're complex numbers. But there might be an infinite set of them for each vector, depending on the Hilbert space in question.
 
  • #19
Thank you for that. I found a Feynman lecture on the Hamiltonian Matrix. I would like to post a link to it for future reference.

http://www.feynmanlectures.caltech.edu/III_08.html
 
  • #20
It turns out that the gradient operator is defined for kets, ##|\Psi\rangle##.

[attached image, link broken: an excerpt applying the gradient operator directly to the ket ##|\Psi\rangle##]
 
  • #21
Throw away the book this comes from. It's just wrong!

A ket ##|\psi \rangle## does not depend on ##\vec{x}##. The wave function depends on ##\vec{x}## and is defined by
$$\psi(\vec{x})=\langle \vec{x}|\psi \rangle,$$
where ##|\vec{x} \rangle## is the set of generalized position eigenvectors, ##\vec{x} \in \mathbb{R}^3##. Then indeed you have
$$\hat{\vec{p}} \psi(\vec{x}) = \langle \vec{x} |\hat{\vec{p}} \psi \rangle=-\mathrm{i} \hbar \vec{\nabla} \psi(\vec{x}),$$
and then everything makes sense!
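As a quick consistency check (standard result, added for reference): the generalized momentum eigenfunctions in the position representation are ##\langle \vec{x}|\vec{p} \rangle = (2\pi\hbar)^{-3/2} e^{\mathrm{i} \vec{p}\cdot\vec{x}/\hbar}##, and indeed
$$-\mathrm{i}\hbar\vec{\nabla} \langle \vec{x}|\vec{p} \rangle = \vec{p}\, \langle \vec{x}|\vec{p} \rangle.$$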
 
  • #22
In your last equation, what allowed you to switch the order of ##\hat{\vec{p}}## and ##\hat{\vec{x}}##?

Isn't $$\hat{\vec{p}} \psi(\vec{x}) = \hat{\vec{p}}\langle \vec{x}|\psi \rangle$$ ?

And, isn't ##\langle \vec{x}|\psi \rangle## a scalar?

Ballentine calls ##\langle \vec{x}|\psi \rangle## a functional (I think)
 
  • #23
I get this, ##\psi(\vec{x})=\langle \vec{x}|\psi \rangle##.

However, I do not think you need the middle part of the last equation, do you, because $$\hat{\vec{p}} \psi(\vec{x}) = -\mathrm{i} \hbar \vec{\nabla} \psi(\vec{x}),$$ follows from direct application of the momentum operator.

But, can't you take this one step further by recognizing the following... $$-\mathrm{i} \hbar \vec{\nabla} \psi(\vec{x}) = -i\hbar | \psi \rangle$$
 
  • #24
To be very precise, you have to distinguish the operators acting on the abstract kets, like the momentum operator acting on the state ket, ##\hat{\vec{p}}|\psi \rangle##, and the operator defined in a specific representation, i.e., with respect to a basis. The analogy in linear algebra is that you have an abstract linear operator (like a rotation) and its representing matrix in a basis.

Your last line is again utterly wrong. You cannot equate a wave function (i.e., the representation of the state vector with respect to the generalized position basis) with the abstract ket vector. The right notation is
$$\hat{\vec{p}}\psi(\vec{x})=-\mathrm{i} \hbar \vec{\nabla} \psi(\vec{x}).$$
It's defined as I wrote above: it's the position representation of the vector ##\hat{\vec{p}} |\psi \rangle##, i.e., ##\langle \vec{x}|\hat{\vec{p}} \psi \rangle##.
 
  • #25
I suspect what you are trying to tell me is very important. To me it is also something very subtle that I must be missing. However, I do not think the last equation I wrote is equating the wave function with the abstract ket vector. I think I am equating the gradient of the wave function with the abstract ket vector.

I am thinking of the function ##\Psi(\vec{x})## as the dot product of the vector ##\vec{x}## with the state vector, as in ##\Psi(\vec{x}) = x_1 a_1 + x_2 a_2 + \ldots + x_n a_n##, where the ##a_i## are the components of the ket vector ##|\Psi\rangle##. If you take the gradient of that function, aren't we left with the ket vector ##|\Psi\rangle##?

As I was writing the above paragraph, I think I understood what you are trying to tell me. You are saying that the ket vector ##|\Psi\rangle## cannot be associated with any basis vectors. Is that what you are saying? It has to be an abstract vector (i.e., not associated with any basis; and if you take the gradient, you are associating the vector with some basis).
 
  • #26
What I'm saying is that you cannot equate wave functions (or the result of acting with an operator on a wave function) with an abstract ket.

Take an analogy: in usual Euclidean analytic geometry you introduce vectors as arrows connecting two points, ##\vec{a}##. These are geometrical objects. On the other hand, you can define a basis of this vector space of arrows, and by definition you can then decompose any vector in a unique way into its components with respect to this basis,
$$\vec{a}=\sum_{i=1}^n a_i \vec{b}_i.$$
This provides a one-to-one mapping between the "geometrical vectors" ##\vec{a}## and the vectors provided by the ##n##-tuples of real numbers, ##\vec{a} \leftrightarrow (a_1,a_2,\ldots,a_n)##. It's a one-to-one mapping, but it doesn't make sense to set a geometrical vector ("an arrow") equal to an ##n##-tuple of real numbers.
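A minimal sketch of this analogy in code (illustrative numbers): the same "arrow" has different component tuples in different orthonormal bases, so the vector and its ##n##-tuple are related one-to-one but are not the same object:

```python
# One arrow, two bases, two different component tuples.
import numpy as np

a = np.array([3.0, 4.0])                       # the arrow, in a fixed frame

b1 = np.eye(2)                                 # standard basis (rows)
theta = np.pi / 4                              # basis rotated by 45 degrees
b2 = np.array([[np.cos(theta),  np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]])

comp1 = b1 @ a                                 # components a_i = <b_i, a>
comp2 = b2 @ a                                 # a different tuple, same arrow
print(comp1, comp2)

# Either tuple reconstructs the same arrow from its own basis
print(np.allclose(comp1 @ b1, a), np.allclose(comp2 @ b2, a))
```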
 
  • #27
I am obviously missing something very important...

What is the difference between ##\hat{\vec{p}}|\psi \rangle## and ##|\hat{\vec{p}}\psi \rangle##?
 
  • #28
There is no difference in these two notations. It just says you act with the momentum operator on the ket.
 
  • #29
vanhees71 said:
There is no difference in these two notations. It just says you act with the momentum operator on the ket.

What do you mean by "abstract ket"? And is ##|\hat{\vec{p}}\psi \rangle## an abstract ket? If not, can you please tell me why not?
 
  • #30
Can I make the following equivalence? $$\phi = |\hat{\vec{p}}\psi\rangle$$ where ##\phi## is a new state vector? And can I then write it as $$|\phi\rangle$$? And if I do that, is ##|\phi\rangle## an abstract ket designating a state vector?
 
  • #31
Yes,
$$|\phi \rangle = \hat{\vec{p}} |\psi \rangle$$
is a correct notation. Note that these are three vectors, because ##\hat{\vec{p}}## are the three operators for the three components of momentum.
 

1. What is Matrix Mechanics?

Matrix Mechanics is a mathematical framework used in quantum mechanics to describe the behavior and properties of systems at the atomic and subatomic level. It uses matrices to represent observables and column vectors to represent the state of a system, from which the probabilities of different measurement outcomes are calculated.
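For instance, a standard fact stated here for concreteness: if the state is expanded in the eigenbasis of the measured observable as ##\psi = \sum_i c_i \phi_i##, the probability of the outcome associated with ##\phi_i## is given by the Born rule,
$$P(i) = |c_i|^2, \qquad \sum_i |c_i|^2 = 1.$$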

2. How does Matrix Mechanics relate to non-linear least squares?

Matrix Mechanics and non-linear least squares are both mathematical tools used to analyze and model complex systems. While Matrix Mechanics is used in quantum mechanics, non-linear least squares is used in various fields such as statistics, physics, and engineering. Both rely heavily on matrices and linear algebra, but only non-linear least squares is a fitting (optimization) technique; Matrix Mechanics is a formulation of quantum dynamics, not a data-fitting procedure.

3. What are the limitations of using Matrix Mechanics and non-linear least squares analogy?

One limitation of Matrix Mechanics is that it is a framework for quantum mechanical systems, and its calculations become computationally intensive for larger systems. Non-linear least squares also has limitations, such as sensitivity to the initial parameter guesses, the possibility of converging to a local rather than a global minimum, and sensitivity to outliers in the data.

4. How is the accuracy of Matrix Mechanics and non-linear least squares analogy evaluated?

The accuracy of Matrix Mechanics is evaluated by comparing the calculated probabilities of different outcomes to the experimental results. In non-linear least squares, the accuracy is evaluated by measuring the goodness of fit, such as the sum of squared errors or the coefficient of determination.

5. Can Matrix Mechanics and non-linear least squares analogy be used together?

Yes, they can appear together in practice: least squares methods are routinely used to fit experimental data to theoretical predictions derived from quantum mechanics. However, the two methods have different underlying principles; one is a dynamical framework and the other a data-analysis technique, so neither can be directly applied to the other's domain.
