Understanding the Notation of Matrices in Quantum Mechanics

rsaad

Homework Statement



The problem is that I am unable to understand the proof. I can follow the individual steps, but I do not see how they relate to the theorem. It is probably because I do not understand the matrix notation, specifically the part involving k.
It is given that
$$I=(\delta_{ij})=\begin{pmatrix}1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & 1\end{pmatrix}.$$
So now how do I relate k with it?

So before you explain to me what's going on, please elaborate on the notation.
Thank you.
 

Attachments

  • question.png
If you mean that you don't understand how to go from the first line to the second, all they've done is to insert the identity operator in the form ##1=\sum_k|k\rangle\langle k|##. If you mean that you don't understand how to go from the second line to the third, then you need to learn about the relationship between linear operators and matrices.
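
In case it helps, here is a generic illustration of where the index k comes from; this is an assumption about what the attachment contains, not a quote from it. For any two operators S and T and an orthonormal basis ##\{|k\rangle\}##,
$$\langle i|ST|j\rangle=\langle i|S\Big(\sum_k|k\rangle\langle k|\Big)T|j\rangle=\sum_k\langle i|S|k\rangle\langle k|T|j\rangle=\sum_k S_{ik}T_{kj},$$ which is exactly the rule for multiplying the matrices with entries ##S_{ik}=\langle i|S|k\rangle## and ##T_{kj}=\langle k|T|j\rangle##. The index k just labels the basis vectors that are summed over.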

Let U and V be vector spaces. Let ##T:U\to V## be linear. Let ##A=(u_1,\dots,u_n)## and ##B=(v_1,\dots,v_m)## be ordered bases for U and V respectively. The matrix [T] of the linear operator T with respect to the pair (A,B) of ordered bases is defined by
$$[T]_{ij}=(Tu_j)_i.$$ The right-hand side is interpreted as "the ith component of the vector ##Tu_j## in the ordered basis B".
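
As a small made-up example (not taken from your attachment), suppose n=m=2 and
$$Tu_1=2v_1+v_2,\qquad Tu_2=-v_1+3v_2.$$ Reading off the components ##(Tu_j)_i## and placing them in column j gives
$$[T]=\begin{pmatrix}2 & -1\\ 1 & 3\end{pmatrix},$$ so the jth column of [T] is just the list of components of ##Tu_j## in the basis B.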

You should find it very easy to verify that if B is an orthonormal ordered basis, we have ##[T]_{ij}=\langle v_i,Tu_j\rangle##.
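
A sketch of that verification, assuming the inner product is linear in its second argument (the convention used in the formula above): expand ##Tu_j## in the basis B and use orthonormality,
$$\langle v_i,Tu_j\rangle=\Big\langle v_i,\sum_k (Tu_j)_k v_k\Big\rangle=\sum_k (Tu_j)_k\langle v_i,v_k\rangle=\sum_k (Tu_j)_k\delta_{ik}=(Tu_j)_i=[T]_{ij}.$$ Note that the dummy index k shows up here only because of the sum over basis vectors.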

The reason for the definition of [T] can be seen by doing a simple calculation. Suppose that Tx=y. I won't write any summation sigmas, since we can remember to do a sum over each index that appears twice.
$$\begin{align}y &=y_i v_i\\
Tx &= T(x_j u_j)=x_jT(u_j)=x_j(Tu_j)_i v_i.\end{align}$$ Since the v_i are linearly independent, this implies that ##y_i=(Tu_j)_i x_j##. This can be interpreted as a matrix equation [y]=[T][x], if we define [y] and [x] in the obvious ways, and [T] as above. (Recall that the definition of matrix multiplication is ##(AB)_{ij}=A_{ik}B_{kj}##).
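
If you want to see this numerically, here is a quick sanity check I sketched (my own illustration, not part of the original argument), using the standard bases of ##\mathbb R^3## and ##\mathbb R^4## so that components are just array entries:
Code:
import numpy as np

rng = np.random.default_rng(0)

n, m = 3, 4                      # dim U = 3, dim V = 4
T = rng.standard_normal((m, n))  # matrix of the linear map in the standard bases
x = rng.standard_normal(n)       # component column [x]

# y_i = (Tu_j)_i x_j = T_ij x_j, written out with explicit sums
y_components = np.array([sum(T[i, j] * x[j] for j in range(n)) for i in range(m)])

# the same thing as a single matrix-vector product [y] = [T][x]
y_matrix = T @ x

print(np.allclose(y_components, y_matrix))  # True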

When U=V, it's convenient to choose A=B, and we can talk about the matrix of a linear operator with respect to an ordered basis, instead of with respect to a pair of ordered bases.

Notations like [T] are typically only used in explanations like this. I think most books would use T both for the linear operator and the corresponding matrix with respect to a pair of ordered bases.

Edit: It would be a good exercise for you to prove that if ##T:U\to V## and ##S:V\to W## are linear, then ##[S\circ T]=[S][T]##. This result is the main reason why matrix multiplication is defined the way it is.
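
If you want to check that exercise numerically before proving it, here is a sketch (again my own illustration, with standard bases so the matrices are just arrays): build the matrix of ##S\circ T## column by column from the definition ##[T]_{ij}=(Tu_j)_i## and compare it with the product [S][T].
Code:
import numpy as np

rng = np.random.default_rng(1)

# dim U = 2, dim V = 3, dim W = 4, all with their standard bases
T = rng.standard_normal((3, 2))   # matrix of T : U -> V
S = rng.standard_normal((4, 3))   # matrix of S : V -> W

def compose(x):
    """Apply the composition S o T to a vector x in U."""
    return S @ (T @ x)

# column j of [S o T] is the component list of (S o T) applied to the jth basis vector of U
basis_U = np.eye(2)
ST_matrix = np.column_stack([compose(basis_U[:, j]) for j in range(2)])

print(np.allclose(ST_matrix, S @ T))  # True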
 