What is the geometric interpretation of matrices A^{T}A and AA^{T}?

monea83
The matrices A^{T}A and AA^{T} come up in a variety of contexts. How should one think about them - is there a way to understand them intuitively, e.g. do they have a geometric interpretation?
 
monea83 said:
The matrices A^{T}A and AA^{T} come up in a variety of contexts. How should one think about them - is there a way to understand them intuitively, e.g. do they have a geometric interpretation?

In general there is no single interpretation that comes to mind. For some specific classes of matrices, though, the transpose is actually the inverse: orthogonal matrices satisfy A^{T}A = I, and among them the rotation matrices are the ones with determinant one.

Geometrically, rotation matrices preserve length. If you have a vector with its tail at the origin (in other words, a point), applying a rotation preserves the length of that vector; applied to a set of points, it preserves areas, volumes, and so on.
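A quick NumPy sketch of this (the angle and vector here are arbitrary choices, just for illustration): for a rotation matrix R, R^T R = I, so R^T is the inverse, lengths are preserved, and the determinant is one.

```python
import numpy as np

# A 2-D rotation by an arbitrary angle theta.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(R.T @ R, np.eye(2)))        # True: R is orthogonal, R^T = R^{-1}
v = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(R @ v),
                 np.linalg.norm(v)))          # True: length is preserved
print(np.isclose(np.linalg.det(R), 1.0))      # True: rotations have det 1
```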

There are of course other uses for transpose matrices, such as least squares, but off the top of my head I can't give you geometric descriptions or interpretations for those.
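For the least-squares case mentioned above, here is a small sketch (random data, purely illustrative): the least-squares solution of an overdetermined system Ax ≈ b satisfies the normal equations (A^T A) x = A^T b, which is one of the standard places A^T A shows up.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))   # overdetermined: 10 equations, 3 unknowns
b = rng.standard_normal(10)

# Normal equations: (A^T A) x = A^T b gives the least-squares solution,
# i.e. the x whose image A x is the projection of b onto the column space of A.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_normal, x_lstsq))  # True: both give the same solution
```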
 
monea83 said:
is there a way to understand them intuitively,
Lots of practice. :smile:



One thing I intuit about them is that A^{T}A is often the "right" thing to use when you might have used x^2 in an analogous situation with real numbers, or a norm in an analogous situation with a vector.

Also, you can think of them as being a way to turn a matrix into a symmetric, square matrix that does the "least damage" in some sense.

Of course, these are all algebraic ways to intuit them, rather than the geometric one you asked about. :frown:
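The "symmetric, square matrix" point above can be checked directly (random matrix for illustration): for any rectangular A, the product A^T A is square, symmetric, and positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))    # any rectangular matrix

G = A.T @ A                        # always square (3x3 here)
print(np.allclose(G, G.T))         # True: symmetric
eigvals = np.linalg.eigvalsh(G)
print(bool(np.all(eigvals >= -1e-12)))  # True: positive semidefinite
```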
 
I think about it this way:

For n = 3, for example, think of A as a linear transformation on R^3. Then A^T A defines a symmetric bilinear form on R^3 x R^3, giving you the scalar product of the two image vectors A(x1) and A(x2): x1^T (A^T A) x2 = ⟨A x1, A x2⟩.

In particular, when x1 = x2 = x, I think of it as a squared-length function on A(x) (hence positive semidefinite, and positive definite when A is invertible).
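This bilinear-form identity is easy to verify numerically (random matrix and vectors, purely for illustration): x1^T (A^T A) x2 equals the dot product of A x1 and A x2, and with x1 = x2 = x it gives the squared length of A x.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)

# A^T A encodes the inner product of the images: <A x1, A x2> = x1^T (A^T A) x2
print(np.isclose((A @ x1) @ (A @ x2), x1 @ (A.T @ A) @ x2))  # True

# With x1 = x2 = x it gives the squared length of A x
x = x1
print(np.isclose(x @ (A.T @ A) @ x, np.linalg.norm(A @ x)**2))  # True
```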
 
The eigenvalues of A^*A (and of AA^*) are the squares of the singular values of A. In the bases of the eigenvectors of A^*A and AA^*, the matrix A is 'almost' diagonal. This is the singular value decomposition.

The Singular value decomposition has some geometric implications, but I don't know whether this qualifies as a geometric explanation of A*A itself.
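A small numerical check of the relationship above (random matrix for illustration): the singular values of A are the square roots of the eigenvalues of A^T A.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

# Singular values come out in descending order; eigvalsh returns ascending,
# so reverse it before comparing.
sing = np.linalg.svd(A, compute_uv=False)
eig = np.linalg.eigvalsh(A.T @ A)[::-1]
print(np.allclose(sing**2, eig))  # True: sigma_i^2 = eigenvalues of A^T A
```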
 
One thing from differential geometry comes to mind: if \gamma : \mathbb{R}^k \to \mathbb{R}^N is a parametrization of a k-manifold M \subset \mathbb{R}^N, and A = [D\gamma] is its Jacobian, then the matrix A^T A is the metric induced on M by the embedding in \mathbb{R}^N, which is a _very_ geometric object.

This is just a reflection of the fact that A^T A is the matrix of inner products of the columns of A (which is a nice geometric interpretation in and of itself), and in our particular case, the columns of A are the basis vectors of the tangent space to M in the coordinates we've chosen.

This fact sometimes comes up in slightly disguised form in the context of multivariable calculus, in the formula for the volume element of a manifold with parametrization \gamma: dV_M = \sqrt{ \det([D\gamma]^T [D\gamma]) } \, dV_k, where dV_k is the volume element in \mathbb{R}^k. Since A^T A is the metric, this is just a version of the usual formula dV_M = \sqrt{g} \, dV_k, where g is the determinant of the metric.
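To make the metric interpretation concrete, here is a sketch with a familiar surface (the parametrization and evaluation point are my choices, for illustration only): for the unit sphere parametrized by \gamma(u, v) = (\cos u \cos v, \cos u \sin v, \sin u), the Jacobian A = [D\gamma] gives the induced metric A^T A = diag(1, cos^2 u), and \sqrt{\det(A^T A)} = \cos u is the familiar area element.

```python
import numpy as np

def jacobian(u, v):
    """Jacobian of gamma(u, v) = (cos u cos v, cos u sin v, sin u).

    Its columns are the tangent vectors d(gamma)/du and d(gamma)/dv."""
    du = np.array([-np.sin(u) * np.cos(v), -np.sin(u) * np.sin(v), np.cos(u)])
    dv = np.array([-np.cos(u) * np.sin(v),  np.cos(u) * np.cos(v), 0.0])
    return np.column_stack([du, dv])

u, v = 0.4, 1.1                     # an arbitrary point on the sphere
A = jacobian(u, v)
g = A.T @ A                          # induced metric: matrix of inner products

print(np.allclose(g, np.diag([1.0, np.cos(u)**2])))      # True: metric is diag(1, cos^2 u)
print(np.isclose(np.sqrt(np.linalg.det(g)), np.cos(u)))  # True: area element is cos u
```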
 