Matrix Elements of Operators & Orthonormal Basis Sets

Summary
The discussion focuses on the calculation of matrix elements of operators and expansion coefficients in relation to orthonormal and non-orthonormal basis sets. It is established that the standard formulas for these calculations work only with orthonormal bases, and when dealing with non-orthonormal bases, orthogonalization is recommended. The process involves calculating the overlap matrix and forming its inverse square root to create an orthonormal basis. Additionally, it is noted that working directly in a non-orthogonal basis requires careful handling of covariant and contravariant components. The conversation also touches on practical applications, such as orthogonalizing measurement data in engineering contexts.
Amok
So, the rule for finding the matrix elements of an operator is:

\langle b_i|O|b_j\rangle

where the b's are vectors of the basis set. Does this rule work if the basis is not orthonormal? I was checking this with ordinary linear algebra in R^3 (finding the matrix elements of linear transformations), and it only seems to work with the canonical basis. The same goes for the rule that gives the coefficients of the expansion of a vector in a given basis:

|\psi\rangle = \sum_{i=1}^{\infty} c_i |\psi\rangle

with

c_i = \langle b_i|\psi\rangle
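The failure described above is easy to reproduce numerically. Here is a minimal NumPy sketch (the skew basis is made up for illustration): the rule c_i = ⟨b_i|ψ⟩ reconstructs ψ in the canonical basis but not in a non-orthonormal one.

```python
import numpy as np

# Two bases for R^3: the canonical (orthonormal) one and a skewed one.
ortho = np.eye(3)
skew = np.array([[1.0, 0.0, 0.0],
                 [1.0, 1.0, 0.0],
                 [0.0, 0.0, 2.0]])  # rows are basis vectors; not orthonormal

psi = np.array([2.0, 3.0, 4.0])

# Orthonormal case: c_i = <b_i|psi> recovers psi exactly.
c = ortho @ psi                      # inner product with each basis vector
print(np.allclose(c @ ortho, psi))   # True

# Non-orthonormal case: the same rule fails.
c = skew @ psi
print(np.allclose(c @ skew, psi))    # False
```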
 
Amok said:
So, the rule for finding the matrix elements of an operator is:
\langle b_i|O|b_j\rangle
Where the "b's" are vector of the basis set. Does this rule work if the basis is not orthonormal?
It works precisely when the basis is orthonormal, and your other formula is corrected to
Amok said:
|\psi\rangle = \sum_{i=1}^{\infty} c_i |b_i\rangle
 
And how would you go about finding matrix elements and expansion coefficients if the basis is not orthonormal?
 
Amok said:
And how would you go about finding matrix elements and expansion coefficients if the basis is not orthonormal?

In general, I would orthogonalize the basis.

But if the Hilbert space is finite-dimensional, I would convert to ordinary matrix notation, and then apply the standard rules of linear algebra.
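For the finite-dimensional case, here is one way that conversion can look in NumPy (the particular basis is made up for illustration): in a non-orthogonal basis the expansion coefficients solve the linear system S c = b, where S is the Gram (overlap) matrix and b_i = ⟨b_i|ψ⟩.

```python
import numpy as np

# Hypothetical non-orthogonal basis of R^3 (rows are the basis vectors).
B = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
psi = np.array([2.0, 3.0, 4.0])

# Overlap (Gram) matrix S_ij = <b_i|b_j> and the inner products <b_i|psi>.
S = B @ B.T
b = B @ psi

# Standard linear algebra: the expansion coefficients solve S c = b.
c = np.linalg.solve(S, b)
print(np.allclose(c @ B, psi))  # True: psi = sum_i c_i b_i
```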
 
A. Neumaier said:
In general, I would orthogonalize the basis.

But if the Hilbert space is finite-dimensional, I would convert to ordinary matrix notation, and then apply the standard rules of linear algebra.

Orthonormal bases are so cool :/
 
As A. Neumaier said, the easiest way of handling non-orthogonal basis sets is to orthogonalize them. This would usually be done by a symmetric orthogonalization:
Calculate the overlap matrix S_{\mu\nu}=\langle\mu|\nu\rangle, then form the inverse square root \mathbf{S}^{-1/2} (by diagonalization; S is Hermitian). The rows of \mathbf{S}^{-1/2} then give you the expansion vectors of an orthonormal basis. In this orthonormal basis all the nifty standard projection machinery works as expected.
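The symmetric (Löwdin) orthogonalization just described can be sketched in a few lines of NumPy; the basis here is a random full-rank set, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 6))         # 4 non-orthogonal basis vectors in R^6 (rows)

S = B @ B.T                          # overlap matrix S_ij = <b_i|b_j>

# Inverse square root via diagonalization (S is symmetric positive definite).
w, U = np.linalg.eigh(S)
S_inv_sqrt = U @ np.diag(1.0 / np.sqrt(w)) @ U.T

# Rows of S^{-1/2} are the expansion coefficients of the orthonormal vectors
# in the original basis: b'_i = sum_j (S^{-1/2})_ij b_j.
B_ortho = S_inv_sqrt @ B
print(np.allclose(B_ortho @ B_ortho.T, np.eye(4)))  # True: orthonormal
```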

Alternatively, you can work directly in the non-orthogonal basis. In that case, however, you need to distinguish between the covariant and the contravariant components of vectors and tensors. (Matrix elements would typically be calculated in an all-covariant form and then either translated into something else or have some of their indices contracted with contravariant quantities.) To convert between the two kinds of components you need the inverse overlap matrix \mathbf{S}^{-1} (for "raising" indices, i.e., converting covariant components to contravariant ones) and the overlap matrix \mathbf{S} itself (for "lowering" indices, i.e., converting contravariant components to covariant ones).
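The raising/lowering bookkeeping can be illustrated with a tiny NumPy sketch (the 2D basis is invented for the example): the covariant components of ψ are the plain inner products ⟨b_i|ψ⟩, and applying S^{-1} raises them to the contravariant components, which are the actual expansion coefficients.

```python
import numpy as np

B = np.array([[1.0, 0.0],
              [1.0, 1.0]])           # non-orthogonal basis of R^2 (rows)
S = B @ B.T                           # overlap matrix S_ij = <b_i|b_j>
psi = np.array([3.0, 5.0])

# Covariant components: plain inner products with the basis vectors.
c_cov = B @ psi

# Raising indices with S^{-1} yields the contravariant components,
# which are the expansion coefficients of psi in the basis.
c_contra = np.linalg.solve(S, c_cov)
print(np.allclose(c_contra @ B, psi))     # True

# Lowering indices with S itself recovers the covariant components.
print(np.allclose(S @ c_contra, c_cov))   # True
```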

There are, however, few cases in which this non-orthogonal formalism is called for.
 
Thank you guys.
 
cgk said:
As A. Neumaier said, the easiest way of handling non-orthogonal basis sets is to orthogonalize them. This would usually be done by a symmetric orthogonalization:
Calculate the overlap matrix S_{\mu\nu}=\langle\mu|\nu\rangle, then form the inverse square root \mathbf{S}^{-1/2} (by diagonalization; S is Hermitian). The rows of \mathbf{S}^{-1/2} then give you the expansion vectors of an orthonormal basis. In this orthonormal basis all the nifty standard projection machinery works as expected.

I am interested in using this approach to solve an engineering problem; I apologise in advance for my poor use of mathematical language. I take a set of measurements in 2 dimensions. Ideally the measurements would simply be Cartesian, but in practice the x- and y-axes are not orthogonal; the angle between them can vary from ~45 to ~135 degrees. The processing I wish to perform on the measurements requires that the data come from an orthogonal basis, so I want to orthogonalize the data first. I know the angle between the axes, but it can vary between sets of measurements. So my question is: how do I calculate the overlap matrix for this situation? It seems to me that, knowing this and taking its inverse square root, I can orthogonalize the data.
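For the 2D case described above, the overlap matrix follows directly from the definition S_ij = ⟨e_i|e_j⟩: assuming both measurement axes are unit length (rescale the data first if not), it is [[1, cos θ], [cos θ, 1]]. A minimal NumPy sketch, with the 60-degree angle and the helper names purely illustrative:

```python
import numpy as np

def overlap_matrix(theta_deg):
    """Overlap matrix of two unit-length measurement axes at angle theta.
    Assumes both axes are normalized; rescale the data first if they are not."""
    c = np.cos(np.radians(theta_deg))
    return np.array([[1.0, c],
                     [c, 1.0]])

# Sanity check: S^{-1/2} turns the skew axes into an orthonormal pair.
theta = 60.0
axes = np.array([[1.0, 0.0],
                 [np.cos(np.radians(theta)), np.sin(np.radians(theta))]])
S = overlap_matrix(theta)
w, U = np.linalg.eigh(S)
S_inv_sqrt = U @ np.diag(1.0 / np.sqrt(w)) @ U.T
new_axes = S_inv_sqrt @ axes
print(np.allclose(new_axes @ new_axes.T, np.eye(2)))  # True
```

One caveat worth noting: S^{-1/2} acts on the basis vectors; coordinates expressed in that basis transform with the inverse factor, so check which of the two your measured components are before applying the transformation to the data itself.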
 