Definition of Orthogonal Matrix: Case 1 or 2?

sjeddie
Is the definition of an orthogonal matrix:

1. a matrix where all rows are orthonormal AND all columns are orthonormal

OR

2. a matrix where all rows are orthonormal OR all columns are orthonormal?

My textbook says it's AND (case 1), but if that's true, there's a problem:
Say we have a square matrix A and we find its eigenvectors; they are all distinct, so A is diagonalizable. We put the normalized eigenvectors of A as the columns of a matrix P, and (our prof told us) P becomes orthogonal, with P^{-1} = P^T. My question is: how did P become orthogonal straight away? By only normalizing its columns, how did we guarantee that its rows are also orthonormal?
 
It turns out that the rows of a square matrix are orthonormal if and only if the columns are orthonormal. Another way to express the condition that all columns are orthonormal is that A^T A = I (think about why this is). Then we see that if x \in \mathbb{R}^n, \| x \|^2 = x^T x = x^T ( A^T A ) x = ( A x )^T ( A x ) = \| A x \|^2, and therefore A is injective (it preserves norms, so Ax = 0 forces x = 0). Since we are working with finite-dimensional spaces, A must also be surjective, so for any v \in \mathbb{R}^n there exists w \in \mathbb{R}^n with v = Aw, and therefore A A^T v = A A^T A w = A w = v, so A A^T = I as well. You can check that this implies that the rows of A are orthonormal. The proof of the converse is similar.
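If it helps to see this concretely, here is a minimal numerical sketch (assuming NumPy; the particular matrix A is hypothetical and just for illustration, and numpy.linalg.eigh already returns the normalized eigenvectors of a symmetric matrix as the columns of P):

Code:
import numpy as np

# Symmetric example matrix (hypothetical, chosen only for the check).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh returns the normalized eigenvectors of a symmetric matrix as columns of P.
eigvals, P = np.linalg.eigh(A)

print(np.allclose(P.T @ P, np.eye(2)))              # columns orthonormal: True
print(np.allclose(P @ P.T, np.eye(2)))              # rows orthonormal too: True
print(np.allclose(P.T @ A @ P, np.diag(eigvals)))   # P^T A P is diagonal: True

Both checks come out True, which is exactly the statement that P^T P = I forces P P^T = I for a square matrix.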

Note that this argument relies on the finite-dimensionality of our vector space. If you move up to infinite-dimensional spaces, there may be operators T with T^*T = I but T T^* \neq I. This type of behavior is what makes functional analysis and operator algebras fun! :smile:
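To give one concrete example of that behavior: the unilateral shift S on \ell^2(\mathbb{N}), S(x_1, x_2, x_3, \dots) = (0, x_1, x_2, \dots), has adjoint S^*(x_1, x_2, x_3, \dots) = (x_2, x_3, x_4, \dots), so S^* S = I, while S S^*(x_1, x_2, x_3, \dots) = (0, x_2, x_3, \dots) kills the first coordinate and is therefore not the identity.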
 
There's actually an easier way to see that A^T A = I implies A is injective; I just tend to think in terms of isometries, as I wrote above. If v is such that Av = 0, then 0 = A^T 0 = A^T A v = v, so A is injective. Some may prefer this purely algebraic argument.
 
Ah I see, thank you rochfor1, the (A^T)(A) = I thing makes a lot of sense :)
 