Matrix or differential operator?

1Kris said:
Hi,
I've been reading a couple of different books on quantum mechanics and have come a mathematical difficulty. I understand that the Hamiltonian is an operator but in some books, it represented as a matrix and in others as a differential operator? How can they both be equivalent approaches?

I understand that they share some of the same properties, e.g. non-commutativity. But how can both give the same results? I think (although I may be wrong) that I am asking about the relation between a function and a vector.

Thanks
 
See this post (or any book on linear algebra) for the relationship between matrices and linear operators between finite-dimensional vector spaces.
 
Thanks for the speedy and clear response. Does that suggest that any linear operator $A$ can be written as a matrix, and if so, is there a general method?
 
1Kris said:
Thanks for the speedy and clear response. Does that suggest that any linear operator $A$ can be written as a matrix, and if so, is there a general method?
Yes it does, when the vector spaces are finite-dimensional. The stuff I wrote about there is the method. I decided to edit my post, but you replied while I was typing, so I'm posting the edited version here instead. See the quote below (or any book on linear algebra) for the relationship between matrices and linear operators between finite-dimensional vector spaces.
Fredrik said:
Suppose $A:U\rightarrow V$ is linear, that $\{u_j\}$ is a basis for $U$, and $\{v_i\}$ is a basis for $V$. Consider the equation $y=Ax$, and expand in basis vectors:

$$y=y_i v_i$$

$$Ax=A(x_j u_j)=x_j Au_j= x_j (Au_j)_i v_i$$

I'm using the Einstein summation convention: since we're always supposed to sum over the indices that appear exactly twice, we can remember that without writing any summation sigmas (and since the operator is linear, it wouldn't matter if we put the summation sigma to the left or right of the operator). Now define $A_{ij}=(Au_j)_i$. The above implies that

$$y_i=x_j(Au_j)_i=A_{ij}x_j$$

Note that this can be interpreted as a matrix equation in component form. $y_i$ is the $i$th component of $y$ in the basis $\{v_i\}$. $x_j$ is the $j$th component of $x$ in the basis $\{u_j\}$. $A_{ij}$ is row $i$, column $j$, of the matrix of $A$ in the pair of bases $\{u_j\}$, $\{v_i\}$.
Also, consider the special case $U=V$, with an orthonormal basis $\{u_i\}$. (Set $v_i=u_i$.) Then

$$\langle u_i,Au_j\rangle=\langle u_i,(Au_j)_k u_k\rangle=(Au_j)_k\langle u_i,u_k\rangle=(Au_j)_k\delta_{ik}=(Au_j)_i=A_{ij}$$
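A small numerical sketch of this construction may help. The operator below is made up purely for illustration; the point is that its matrix can be recovered column by column from its action on an orthonormal basis, exactly as $A_{ij}=\langle u_i, Au_j\rangle$ prescribes:

```python
import numpy as np

def apply_A(x):
    # An arbitrary linear operator on R^3, defined without any matrix:
    # A(x0, x1, x2) = (2*x0 + x1, x1 - x2, 3*x2)
    return np.array([2*x[0] + x[1], x[1] - x[2], 3*x[2]])

basis = np.eye(3)  # standard orthonormal basis u_1, u_2, u_3

# Column j of the matrix is A applied to u_j;
# entry (i, j) is the inner product <u_i, A u_j>.
A = np.column_stack([apply_A(u) for u in basis])

x = np.array([1.0, 2.0, 3.0])
assert np.allclose(A @ x, apply_A(x))  # the matrix reproduces the operator
print(A)
```

Once the matrix is built from the basis vectors, it reproduces the operator's action on every vector, since any vector is a linear combination of the basis.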
 
1Kris said:
I think (although I may be wrong) that I am asking about the relation between a function and a vector.

Using discrete approximations may help in seeing what's going on.

Suppose you are interested in some functions $f:[0,1]\to\mathbb{C}$.

You can approximate the interval $[0,1]$ by the discrete set

$$\big\{0,\tfrac{1}{N},\tfrac{2}{N},\ldots,\tfrac{N-1}{N}\big\}$$

with some large $N$. Then the functions $f:[0,1]\to\mathbb{C}$ can be approximated as functions

$$f:\big\{0,\tfrac{1}{N},\tfrac{2}{N},\ldots,\tfrac{N-1}{N}\big\}\to\mathbb{C}.$$

But now these new functions are simply vectors in an $N$-dimensional vector space:

$$(v_1,v_2,\ldots, v_N) \approx \Big(f(0), f\big(\tfrac{1}{N}\big), f\big(\tfrac{2}{N}\big),\ldots, f\big(\tfrac{N-1}{N}\big)\Big)$$

Any operator of the form $A:D(A)\to \mathbb{C}^{[0,1]}$, with some domain $D(A)\subset \mathbb{C}^{[0,1]}$, can now be approximated by an $N\times N$ matrix.

This approach gives some insight into what it means for function spaces to be infinite-dimensional vector spaces.
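The discretization above can be sketched in a few lines. Here the differential operator $d/dx$ is replaced by an $N\times N$ finite-difference matrix acting on the sampled function values; the grid size $N$, the test function, and the periodic boundary condition are illustrative choices, not part of the argument above:

```python
import numpy as np

N = 1000
x = np.arange(N) / N          # grid points 0, 1/N, ..., (N-1)/N
f = np.sin(2 * np.pi * x)     # a test function, sampled as an N-vector

# Forward-difference matrix: (D f)_k = N * (f_{k+1} - f_k) ~ f'(x_k).
D = N * (np.eye(N, k=1) - np.eye(N))
D[-1, 0] = N                  # wrap around (periodic boundary condition)

df = D @ f                    # matrix-vector product approximates d/dx
exact = 2 * np.pi * np.cos(2 * np.pi * x)
print(np.max(np.abs(df - exact)))  # error shrinks as N grows
```

So on the discretized space, "apply the differential operator" literally becomes "multiply by a matrix", and the approximation improves as $N\to\infty$, which is the sense in which the matrix picture and the differential-operator picture describe the same thing.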
 
So, roughly speaking, are we saying that each value of a function can be thought of as one of infinitely many components of a vector?
 
1Kris said:
So, roughly speaking, are we saying that each value of a function can be thought of as one of infinitely many components of a vector?

Yes. The argument becomes the index, and the value of the function at that argument becomes the component with that index.
 