Matrix Elements of Operators & Orthonormal Basis Sets

Discussion Overview

The discussion revolves around the calculation of matrix elements of operators and expansion coefficients in the context of orthonormal and non-orthonormal basis sets. Participants explore the implications of using non-orthonormal bases in quantum mechanics and linear algebra, as well as methods for orthogonalizing such bases.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants question whether the rule for finding matrix elements, \(\langle b_i|O|b_j\rangle\), applies when the basis is not orthonormal.
  • One participant asserts that the rule works specifically for orthonormal bases and suggests a correction to the formula for vector expansion.
  • Another participant inquires about methods for finding matrix elements and expansion coefficients in non-orthonormal bases.
  • Some participants propose orthogonalizing the basis as a solution, particularly in finite-dimensional Hilbert spaces, and suggest using standard linear algebra techniques.
  • A participant describes a method involving the overlap matrix and its inverse square root for orthogonalization, emphasizing the need to differentiate between covariant and contravariant components in non-orthogonal bases.
  • One participant expresses interest in applying orthogonalization techniques to an engineering problem involving non-orthogonal measurements and seeks guidance on calculating the overlap matrix.

Areas of Agreement / Disagreement

Participants generally agree that orthogonalization is a viable approach for handling non-orthonormal bases, but there is no consensus on the best method or the implications of using non-orthonormal bases in specific contexts.

Contextual Notes

Participants mention the dependence on the dimensionality of the Hilbert space and the variability of angles in practical measurements, which may affect the application of discussed methods.

Amok
So, the rule for finding the matrix elements of an operator is:

\langle b_i|O|b_j\rangle

Where the "b's" are vector of the basis set. Does this rule work if the basis is not orthonormal? Because I was checking this with regular linear algebra (in R3) (finding matrix elements of linear transformations) and this only seems to work with the canonical basis. The same goes for the rule that allows you to find the coefficients of the expansion of a vector in a given basis:

|\psi\rangle = \sum_{i=1}^{\infty} c_i |\psi\rangle

with

c_i = \langle b_i|\psi\rangle
 
Amok said:
So, the rule for finding the matrix elements of an operator is:
\langle b_i|O|b_j\rangle
Where the "b's" are vector of the basis set. Does this rule work if the basis is not orthonormal?
It works precisely when the basis is orthonormal, and your other formula is corrected to
Amok said:
|\psi\rangle = \sum_{i=1}^{\infty} c_i |b_i\rangle
 
And how would you go about finding matrix elements and expansion coefficients if the basis is not orthonormal?
 
Amok said:
And how would you go about finding matrix elements and expansion coefficients if the basis is not orthonormal?

In general, I would orthogonalize the basis.

But if the Hilbert space is finite-dimensional, I would convert to ordinary matrix notation, and then apply the standard rules of linear algebra.
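
To make "convert to ordinary matrix notation" concrete, here is a minimal NumPy sketch; the basis matrix B and the operator A are made-up examples for illustration, not anything from the thread:

```python
import numpy as np

# Non-orthonormal basis of R^3: the columns of B are the basis vectors b_1, b_2, b_3.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
S = B.T @ B                          # overlap matrix, S_ij = <b_i|b_j>

# Expansion coefficients of a vector v: solve S c = d with d_i = <b_i|v>.
v = np.array([1.0, 2.0, 3.0])
c = np.linalg.solve(S, B.T @ v)
print(np.allclose(B @ c, v))         # True: v = sum_i c_i b_i

# Matrix of an operator A in this basis: M = S^{-1} O, where O_ij = <b_i|A|b_j>.
# Only for an orthonormal basis (S = I) does M reduce to <b_i|A|b_j> itself.
A = np.diag([1.0, 2.0, 3.0])
M = np.linalg.solve(S, B.T @ A @ B)
print(np.allclose(A @ B, B @ M))     # True: A b_j = sum_i M_ij b_i
```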
 
A. Neumaier said:
In general, I would orthogonalize the basis.

But if the Hilbert space is finite-dimensional, I would convert to ordinary matrix notation, and then apply the standard rules of linear algebra.

Orthonormal bases are so cool :/
 
As A. Neumaier said, the easiest way of handling non-orthogonal basis sets is to orthogonalize them. This would usually be done by a symmetric orthogonalization:
Calculate the overlap matrix S_{\mu\nu}=\langle\mu|\nu\rangle, then form its inverse square root \mathbf{S}^{-1/2} (by diagonalization; S is symmetric). The rows of \mathbf{S}^{-1/2} then give you the expansion vectors of an orthonormal basis. In this orthonormal basis all the nifty standard projection stuff works as expected.
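
A minimal NumPy sketch of this symmetric (Löwdin) orthogonalization; the three basis vectors below are made-up numbers for illustration:

```python
import numpy as np

# Columns of B = non-orthonormal (but linearly independent) basis vectors.
B = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 1.0]])

S = B.T @ B                               # overlap matrix S_{mu nu} = <mu|nu>
w, U = np.linalg.eigh(S)                  # S is symmetric -> diagonalize it
S_inv_sqrt = U @ np.diag(w**-0.5) @ U.T   # inverse square root S^{-1/2}

# Columns of B_orth are the Loewdin-orthonormalized basis vectors,
# i.e. |i'> = sum_mu (S^{-1/2})_{mu i} |mu>.
B_orth = B @ S_inv_sqrt
print(np.allclose(B_orth.T @ B_orth, np.eye(3)))   # True: orthonormal
```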

Alternatively, you can also work directly in the non-orthogonal basis. In that case, however, you need to distinguish between the covariant and contravariant components of vectors and tensors. (Matrix elements would typically be calculated in an all-covariant form and then either translated into something else or have some of their indices contracted with contravariant quantities.) To convert between the co- and contravariant components you again need both the overlap matrix (for "lowering indices", i.e., converting contravariant indices to covariant ones) and its inverse (for "raising indices", i.e., converting covariant indices to contravariant ones).
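
A small sketch of this co-/contravariant bookkeeping, assuming a made-up basis of two unit vectors with overlap 0.5 (i.e. 60° apart):

```python
import numpy as np

S = np.array([[1.0, 0.5],            # overlap (metric) matrix, S_{mu nu} = <mu|nu>
              [0.5, 1.0]])
S_inv = np.linalg.inv(S)             # contravariant metric S^{mu nu}

c_contra = np.array([2.0, -1.0])     # contravariant components (expansion coefficients)
c_co = S @ c_contra                  # lowering: covariant components c_mu = <mu|psi>
print(np.allclose(S_inv @ c_co, c_contra))   # raising with S^{-1} recovers them

# Inner product of two vectors from their contravariant coefficients:
d_contra = np.array([0.5, 1.5])
inner = c_contra @ S @ d_contra      # <psi|phi> = sum_{mu,nu} c^mu S_{mu nu} d^nu
```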

There are, however, few cases in which this non-orthogonal formalism is called for.
 
Thank you guys.
 
cgk said:
As A. Neumaier said, the easiest way of handling non-orthogonal basis sets is to orthogonalize them. This would usually be done by a symmetric orthogonalization:
Calculate the overlap matrix S_{\mu\nu}=\langle\mu|\nu\rangle, then form its inverse square root \mathbf{S}^{-1/2} (by diagonalization; S is symmetric). The rows of \mathbf{S}^{-1/2} then give you the expansion vectors of an orthonormal basis. In this orthonormal basis all the nifty standard projection stuff works as expected.

I am interested in using this approach to solve an engineering problem. I apologise in advance for my poor use of mathematical language. I take a set of measurements in 2 dimensions. Ideally the measurements are simply Cartesian; however, in practice the x- and y-axes are not orthogonal, and the angle between them can vary from ~45 to ~135 degrees. The processing I wish to perform on the measurements requires that the data come from an orthogonal basis, so I want to orthogonalize the data first. I know the angle between the axes, but it can vary between sets of measurements. So my question is: how do I calculate the overlap matrix for this situation? It seems to me that knowing this, and taking its inverse square root, I can orthogonalize the data.
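
A sketch of what that overlap matrix could look like, under the assumption that the two measurement axes are unit length and only the angle between them is known (the 70° angle and the sample point are made-up numbers). Note that the data, being coordinates, transform with S^{1/2}, opposite to the basis vectors, which are mixed with S^{-1/2}; this is the covariant/contravariant distinction mentioned above:

```python
import numpy as np

theta = np.deg2rad(70.0)                  # assumed known angle between the two axes
S = np.array([[1.0, np.cos(theta)],       # overlap matrix for unit-length axes:
              [np.cos(theta), 1.0]])      # S_{mu nu} = cos(angle between axis mu and nu)

w, U = np.linalg.eigh(S)
S_half = U @ np.diag(np.sqrt(w)) @ U.T    # square root S^{1/2}

xy_skew = np.array([3.0, 4.0])            # a measurement: coordinates along the skew axes
xy_orth = S_half @ xy_skew                # same point in the Loewdin-orthogonalized axes
```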
 
