Understanding Matrix Transpose and Examples

  • Thread starter: FrankJ777
  • Tags: Matrix Transpose
FrankJ777
Hi

Could somebody please tell me what the use is for the transpose of a matrix, and maybe give an example if possible.

Thanks
 
If ##A## is a linear transformation from vector space ##U## to vector space ##V##, then ##A^T## is defined as the linear transformation from ##V## to ##U## such that ##\langle Au, v\rangle_V = \langle u, A^T v\rangle_U##, where ##u## is any vector in ##U##, ##v## is any vector in ##V##, ##\langle\,,\,\rangle_U## is an inner product on ##U##, and ##\langle\,,\,\rangle_V## is an inner product on ##V##.

Given a basis for ##U## and a basis for ##V##, ##A## would of course be represented by a matrix, and ##A^T## would be represented by the "transpose" matrix: the one obtained by swapping rows and columns. Notice that if ##U## is ##n##-dimensional and ##V## is ##m##-dimensional, the matrix representing ##A## has ##n## columns and ##m## rows, while the matrix representing ##A^T## has ##m## columns and ##n## rows.
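To make the row/column swap and the defining property concrete, here is a small numerical sketch in NumPy (the matrix and vectors are made-up examples, and the standard dot product is used as the inner product on both spaces):

```python
import numpy as np

# A maps U = R^3 to V = R^2: its matrix has 3 columns and 2 rows.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

u = np.array([1.0, -2.0, 0.5])   # a vector in U = R^3
v = np.array([3.0, -1.0])        # a vector in V = R^2

# A.T swaps rows and columns: it has 2 columns and 3 rows.
print(A.shape, A.T.shape)        # (2, 3) (3, 2)

# Defining property of the transpose: <Au, v> = <u, A^T v>
lhs = np.dot(A @ u, v)
rhs = np.dot(u, A.T @ v)
print(np.isclose(lhs, rhs))      # True
```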

Here's an application. Suppose ##A## is not "onto" ##V##; that is, the image of ##A## is a proper subspace of ##V##. Then we cannot, in general, solve the equation ##Ax = v##: such an ##x## exists only if ##v## happens to lie in the image of ##A## (strictly speaking, the image ##A(U)##). If ##v## does not, what we can do is find the ##u## in ##U## such that ##Au## is "closest" to ##v##. What do we mean by "closest"? Geometrically, the point of ##A(U)## (visualize it as a plane in 3 dimensions) closest to ##v## is the one at the base of a perpendicular dropped from ##v## to ##A(U)##.

That is, suppose ##Au## is the point in ##A(U)## closest to ##v##. Then ##v - Au## is the vector from ##Au## to ##v##, and we want it perpendicular to ##A(U)##. That means that if ##w## is any vector in ##U##, then ##Aw## is in ##A(U)## and so ##\langle Aw, v - Au\rangle_V = 0##. Now, using the defining property of the transpose, ##\langle w, A^T(v - Au)\rangle_U = 0##, and since ##w## can be any vector in ##U## we must have ##A^T(v - Au) = A^T v - A^T A u = 0##, or ##A^T A u = A^T v##, so ##u = (A^T A)^{-1} A^T v##. It is not necessarily true that ##A^T A## has an inverse (for example, take ##A = 0##), but it may have one even when ##A## does not; and if ##A## does have an inverse, then ##(A^T A)^{-1} A^T = A^{-1}##. The matrix ##(A^T A)^{-1} A^T## is referred to as a "generalized inverse" of ##A##.
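As a sanity check on that derivation, here is a small numerical sketch (the matrix and vector are made-up examples): ##u = (A^T A)^{-1} A^T v## makes the residual ##v - Au## perpendicular to the image of ##A##.

```python
import numpy as np

# A is not square, so it has no inverse, but A^T A does here.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])     # maps R^2 into R^3; the image is a plane
v = np.array([1.0, 2.0, 0.0])  # a vector not in the image of A

# u = (A^T A)^{-1} A^T v gives the point Au in the image closest to v
u = np.linalg.inv(A.T @ A) @ (A.T @ v)

# The residual v - Au is perpendicular to the image: A^T (v - Au) = 0
residual = v - A @ u
print(A.T @ residual)          # approximately [0, 0]
```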

Here's a specific application of that. Suppose we want to find a line ##y = ax + b## that passes through the points ##(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)##. Of course, a line is determined by two points, so in general you can't find a single line through ##n## points. We can, however, represent the problem as a matrix equation:
$$\left[\begin{array}{cc} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{array}\right]\left[\begin{array}{c} a \\ b \end{array}\right] = \left[\begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_n \end{array}\right]$$

Of course, that ##n## by ##2## matrix ##A## has no inverse, but ##A^T## is
$$\left[\begin{array}{cccc} x_1 & x_2 & \cdots & x_n \\ 1 & 1 & \cdots & 1 \end{array}\right]$$
and ##A^T A## is the ##2## by ##2## matrix
$$\left[\begin{array}{cc} \sum_{i=1}^n x_i^2 & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & n \end{array}\right]$$

The equation ##A^T A u = A^T v## would be
$$\left[\begin{array}{cc} \sum_{i=1}^n x_i^2 & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & n \end{array}\right]\left[\begin{array}{c} a \\ b \end{array}\right] = \left[\begin{array}{cccc} x_1 & x_2 & \cdots & x_n \\ 1 & 1 & \cdots & 1 \end{array}\right]\left[\begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_n \end{array}\right] = \left[\begin{array}{c} \sum_{i=1}^n x_i y_i \\ \sum_{i=1}^n y_i \end{array}\right]$$

You might recognize that as giving the formula for the "least squares" line: the line whose total squared vertical distance to the points is a minimum.
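Those normal equations can be solved directly. This sketch (with made-up data points) computes the slope and intercept that way and checks the answer against NumPy's built-in least-squares routine:

```python
import numpy as np

# Made-up data points that do not lie on any single line
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.2, 2.8])

# Design matrix A with rows [x_i, 1]
A = np.column_stack([x, np.ones_like(x)])

# Solve the normal equations: A^T A [a, b]^T = A^T y
a, b = np.linalg.solve(A.T @ A, A.T @ y)
print(a, b)

# NumPy's least-squares routine gives the same line
(a2, b2), *_ = np.linalg.lstsq(A, y, rcond=None)
```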
 