Definition of Orthogonal Matrix: Case 1 or 2?

  • Thread starter: sjeddie
  • Tags: Matrix, Orthogonal
sjeddie
Is the definition of an orthogonal matrix:

1. a matrix where all rows are orthonormal AND all columns are orthonormal

OR

2. a matrix where all rows are orthonormal OR all columns are orthonormal?

My textbook says it is AND (case 1), but if that is true, there's a problem:
Say we have a square matrix A and we find its eigenvectors; they are all distinct, so A is diagonalizable. We put the normalized eigenvectors of A as the columns of a matrix P, and (our prof told us) P becomes orthogonal, with P^{-1} = P^T. My question is: how did P become orthogonal straight away? By only normalizing its columns, how did we guarantee that its rows are also orthonormal?
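(For concreteness, here is a minimal NumPy sketch of the situation being described, under the extra assumption that A is symmetric, which is the setting where the eigenvectors can actually be chosen orthogonal; the particular matrix and names are just mine for illustration.)

```python
import numpy as np

# An illustrative symmetric matrix (my own example, not from the thread).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# For a symmetric A, eigh returns orthonormal eigenvectors as the columns of P.
eigvals, P = np.linalg.eigh(A)

print(np.allclose(P.T @ P, np.eye(3)))     # columns orthonormal
print(np.allclose(P @ P.T, np.eye(3)))     # rows orthonormal too
print(np.allclose(np.linalg.inv(P), P.T))  # P^{-1} = P^T
```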
 
It turns out that the rows of a square matrix are orthonormal if and only if the columns are orthonormal. Another way to express the condition that all columns of A are orthonormal is A^T A = I (think about why this is). Then we see that if x \in \mathbb{R}^n, \parallel x \parallel^2 = x^T x = x^T ( A^T A ) x = ( A x )^T ( A x ) = \parallel A x \parallel^2; in particular A x = 0 forces x = 0, so A is injective. Since we are working with finite-dimensional spaces, A must also be surjective, so for any v \in \mathbb{R}^n there exists w \in \mathbb{R}^n with v = A w, and therefore A A^T v = A A^T A w = A w = v, so A A^T = I as well. You can check that this implies the rows of A are orthonormal. The proof of the converse is similar.
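(If it helps, here is a small NumPy sketch of my own checking the argument above: build a square matrix with orthonormal columns via a QR factorization, then verify that the rows come out orthonormal as well and that multiplication by it preserves norms.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A square matrix Q with orthonormal columns, obtained from a QR factorization
# of a random matrix (an illustrative construction, not part of the thread).
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))

print(np.allclose(Q.T @ Q, np.eye(5)))  # Q^T Q = I: columns orthonormal
print(np.allclose(Q @ Q.T, np.eye(5)))  # Q Q^T = I: rows orthonormal as well

# The isometry step of the argument: ||Q x|| = ||x|| for any x.
x = rng.standard_normal(5)
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))
```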

Note that this argument relies on the finite-dimensionality of our vector space. If you move up to infinite-dimensional spaces, there may be transforms T with T^* T = I but T T^* \neq I. This type of behavior is what makes functional analysis and operator algebras fun! :smile:
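(The standard example here is the unilateral shift on \ell^2, which satisfies T^* T = I but not T T^*. You can only mimic it in finite dimensions by giving up squareness; the sketch below is my own rectangular truncation of the shift, where A^T A = I holds but A A^T = I fails, which is exactly what squareness rules out.)

```python
import numpy as np

n = 5
# Rectangular "truncated shift": sends e_i in R^n to e_{i+1} in R^(n+1).
# (A finite-dimensional stand-in of mine for the unilateral shift on l^2.)
S = np.zeros((n + 1, n))
for i in range(n):
    S[i + 1, i] = 1.0

print(np.allclose(S.T @ S, np.eye(n)))      # True:  S^T S = I (columns orthonormal)
print(np.allclose(S @ S.T, np.eye(n + 1)))  # False: S S^T != I (rows are not)
```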
 
There's actually an easier way to see that A^T A = I implies A is injective; I just tend to think in terms of isometries, as I wrote above. If v is such that A v = 0, then 0 = A^T 0 = A^T A v = v, so A is injective. Some may prefer this purely algebraic argument.
 
Ah I see, thank you rochfor1, the (A^T)(A) = I thing makes a lot of sense :)
 