Why do orthogonal matrices have a specific number of independent parameters?

Thread summary: An n x n orthogonal matrix has n(n-1)/2 independent parameters: it can be written as a product of basic rotations, each of which fixes n-2 basis vectors and rotates the remaining two through one angle, and n(n-1)/2 is the number of ways to choose the pair of axes to rotate. A unitary matrix has complex entries, so each entry contributes a real and an imaginary degree of freedom; the count n^2-1 quoted in the question is the one for special unitary matrices (determinant fixed to 1), while a general n x n unitary matrix has n^2 independent parameters. The same counts follow from the orthogonality condition, since the orthogonal matrices are exactly the ones sent to the identity by the map taking A to A A^T. Part of the discussion is pinning down what counts as an "independent parameter".
Ed Quanta
What does it mean to say that an n x n orthogonal matrix has n(n-1)/2 independent parameters? And why is this so? Can this be shown using the orthogonality condition
$$\sum_i a_{ij}\,a_{ik} = \delta_{jk},$$
where $j, k = 1, 2, 3$ and $\delta_{jk} = 1$ when $j = k$, $\delta_{jk} = 0$ when $j \neq k$?



And with all this being said, why does an n x n unitary matrix have n^2-1 independent parameters? Can someone help clear some of this up?
 
I can't explain the unitary bit, but for the orthogonal one:

Any orthogonal matrix can be written as a product of basic (my terminology, not standard) rotation matrices with respect to the standard basis (plus possibly a reflection, but let's not worry about that here).

What are these? Well, what is a basic rotation? It fixes n-2 of the basis vectors and rotates the remaining two by some angle $\theta$, so each basic rotation contributes exactly one parameter. How many ways are there to pick 2 axes out of n? n(n-1)/2, which is where the count comes from. (There's a small numerical sketch of this below.)

Why are there more for unitary ones? Well, each entry has a real and an imaginary part, but I'm not going to attempt a more detailed explanation because I'll muck it up.

That's a start anyway, but I'd need to know how your book defines 'independent parameter'.
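Here is a quick numerical sketch of that counting (my own illustration, assuming NumPy; the helper name basic_rotation is made up, not standard): build an orthogonal matrix as a product of one basic rotation per pair of axes, and check that n(n-1)/2 angles is exactly what goes in.

```python
# Sketch: an n x n orthogonal matrix from n(n-1)/2 "basic" (Givens) rotations.
import numpy as np
from itertools import combinations

def basic_rotation(n, i, j, theta):
    """Rotation that fixes all basis vectors except e_i and e_j."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i], R[j, j] = c, c
    R[i, j], R[j, i] = -s, s
    return R

n = 4
pairs = list(combinations(range(n), 2))        # all ways to pick 2 axes from n
print(len(pairs), n * (n - 1) // 2)            # both print 6 for n = 4

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, size=len(pairs))   # one angle per pair
Q = np.eye(n)
for (i, j), theta in zip(pairs, angles):
    Q = Q @ basic_rotation(n, i, j, theta)

# Q built from n(n-1)/2 angles is orthogonal: Q Q^T = I (up to rounding)
print(np.allclose(Q @ Q.T, np.eye(n)))
```

For n = 4 this uses 6 angles, and the product still satisfies $Q Q^T = I$.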
 
See, the thing is, my crummy book never defined 'independent parameter'. What you said makes sense to me, but I think this can also be shown using the orthogonality condition.
 
Well, let's try using the orthogonality condition. Consider the space of all n x n matrices and map it into itself by the map taking $A$ to $A A^T$.

I suppose an orthogonal matrix is one whose inverse equals its transpose, right? So the orthogonal matrices are exactly the ones this map sends to the identity. Now the image of this map lies in the symmetric matrices ($A A^T$ is always symmetric), and those have dimension $\tfrac{1}{2}n(n+1)$. The domain has dimension $n^2$, hence the fiber over a point would be expected to have dimension $n^2 - \tfrac{1}{2}n(n+1) = \tfrac{1}{2}n(n-1)$.

This same approach should handle the unitary case too.
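To back that dimension count up numerically (again just a sketch, assuming NumPy): the derivative of the map $A \mapsto A A^T$ at the identity sends $H$ to $H + H^T$, whose image is the space of symmetric matrices. Its rank should come out as $\tfrac{1}{2}n(n+1)$, leaving $n^2 - \tfrac{1}{2}n(n+1) = \tfrac{1}{2}n(n-1)$ directions along the fiber of orthogonal matrices.

```python
# Sketch: rank of the derivative of f(A) = A A^T at the identity.
import numpy as np

n = 4
J = []
for k in range(n):
    for l in range(n):
        E = np.zeros((n, n))
        E[k, l] = 1.0
        J.append((E + E.T).ravel())      # derivative at I applied to the basis matrix E_kl
J = np.array(J).T                        # n^2 x n^2 matrix of the derivative

rank = np.linalg.matrix_rank(J)
print(rank, n * (n + 1) // 2)            # both print 10 for n = 4
print(n * n - rank, n * (n - 1) // 2)    # fiber dimension: both print 6
```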
 