# What does the transpose mean?

## Main Question or Discussion Point

I'm sure there are a ton of ways to interpret what the transpose of a matrix represents. Could someone just give me a laundry list of interpretations? Thanks!

Here is what a transpose basically is:

Say you have a matrix X with some entry in location X_ij. If you transpose the matrix, that same entry ends up in location X_ji. You basically read the columns as rows, and the rows as columns.

For example say you have this matrix Y:
|A B|
|C D|

The transpose is denoted by a superscript T. So Y^T is then:
|A C|
|B D|

You don't need a square matrix to do a transposition.
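In code this is just an index swap; here is a quick NumPy sketch (with arbitrary numbers in place of A, B, C, D):

```python
import numpy as np

# The 2x2 example above, with numbers in place of A, B, C, D.
Y = np.array([[1, 2],
              [3, 4]])
# Entry (i, j) of Y is entry (j, i) of its transpose.
assert Y[0, 1] == Y.T[1, 0]

# Non-square matrices transpose too: a 2x3 matrix becomes 3x2.
Z = np.arange(6).reshape(2, 3)
assert Z.T.shape == (3, 2)
assert Z[0, 2] == Z.T[2, 0]
```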

Right, I understand the mathematics of it. I just don't understand the interpretation of it. I'm trying to picture the difference between a matrix and its transpose geometrically, but I don't have any insights.

If the transpose by itself is actually meaningless, and only the adjoint or Hermitian thing has a meaningful interpretation, then I probably don't want to know, because I don't know if my brain can handle the idea of complex quantities on any order higher than scalars.

Fredrik
Staff Emeritus
Gold Member
> I'm sure there are a ton of ways to interpret what the transpose of a matrix represents. Could someone just give me a laundry list of interpretations? Thanks!
I don't think there's a ton of things you can say about transposes in general. But if you know what a specific matrix "does", I'm sure you can figure out what the transpose does. For example, if R rotates a vector in space, then R^T is a rotation in the opposite direction.

> Right, I understand the mathematics of it. I just don't understand the interpretation of it. I'm trying to picture the difference between a matrix and its transpose geometrically, but I don't have any insights.
How do you picture the matrix geometrically? It seems that you have to do that before you can picture the difference.

> If the transpose by itself is actually meaningless, and only the adjoint or Hermitian thing has a meaningful interpretation, then I probably don't want to know, because I don't know if my brain can handle the idea of complex quantities on any order higher than scalars.
You should try to get over that as soon as possible. Complex matrices are actually easier to deal with than real ones.

> I don't think there's a ton of things you can say about transposes in general. But if you know what a specific matrix "does", I'm sure you can figure out what the transpose does. For example, if R rotates a vector in space, then R^T is a rotation in the opposite direction.

> How do you picture the matrix geometrically? It seems that you have to do that before you can picture the difference.

> You should try to get over that as soon as possible. Complex matrices are actually easier to deal with than real ones.

1. If you tell me that an orthogonal matrix (a real unitary matrix) is defined by the fact that its transpose is equal to its inverse, then I can look at how the rows and columns each form an orthonormal basis, and then I understand how orthogonal matrices represent rotations (and reflections). Therefore all transposes of rotation matrices rotate in the opposite direction. So in that case I have a pretty good grasp on what the transpose does, but in general, I still don't.

2. I picture a matrix as a basis, usually.

3. Okay, that's promising.
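The rotation claim in point 1 is easy to check numerically; a small sketch (NumPy, with an arbitrarily chosen angle):

```python
import numpy as np

theta = 0.7  # arbitrary angle
# Standard 2D rotation by +theta.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# Rotation by -theta, i.e. in the opposite direction.
R_opposite = np.array([[np.cos(-theta), -np.sin(-theta)],
                       [np.sin(-theta),  np.cos(-theta)]])

# The transpose is exactly the opposite rotation, and hence the inverse.
assert np.allclose(R.T, R_opposite)
assert np.allclose(R.T @ R, np.eye(2))
```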

AlephZero
Homework Helper
In one sense, the idea of a "transpose" is just an artefact of the way we think of matrices as rectangular arrays of stuff.

It might make more sense to you if you just think about the mathematical operations involved. For example, if you have column vectors X and Y, then X^T Y has a fairly obvious geometrical interpretation (it's the dot product), and X Y^T is a matrix of rank one, usually called the outer product of X and Y.

Both those basic operations occur frequently as the "building blocks" of more complicated matrix expressions.
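A sketch of both building blocks (NumPy, with arbitrary numeric vectors):

```python
import numpy as np

x = np.array([[1.0], [2.0], [3.0]])   # column vectors
y = np.array([[4.0], [0.0], [-1.0]])

# X^T Y is a 1x1 matrix: the dot product of the two vectors.
assert np.allclose(x.T @ y, np.dot(x.ravel(), y.ravel()))

# X Y^T is a 3x3 matrix of rank one: the outer product.
outer = x @ y.T
assert np.linalg.matrix_rank(outer) == 1
```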

mathwonk
Homework Helper
The transpose is the matrix of the induced map on the dual space. Read the last part of my math 4050 notes on my website, at Roy Smith, retired faculty, UGA math dept.

So mathwonk more or less described the point of the transpose, but let's state it in a simpler way (in the case of $$\mathbb{R}^n$$ with the usual dot product).

The transpose of a matrix A is the unique matrix A^T such that, for any vectors x and y (of the appropriate size), we have the equation
$$Ax \cdot y = x \cdot A^T y.$$
That's it; fundamentally there's nothing more to the transpose. (Using more general terminology, one might say that A^T is adjoint to A.) Note that since the dot product is symmetric, this also says that
$$x \cdot Ay = A^T x \cdot y.$$
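The defining equation is easy to test numerically; a quick sketch with random data (NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(3)

# Ax . y == x . (A^T y): the defining property of the transpose.
assert np.isclose(np.dot(A @ x, y), np.dot(x, A.T @ y))
```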

edit: This also explains the equations characterizing, say, orthogonal matrices. The point of an orthogonal matrix Q is that it preserves the inner product; that is, Qx · Qy = x · y for any x and y. Using the above equations characterizing the transpose, you get that Q^TQx · y = Qx · Qy = x · y for any x and y, so that Q^TQ = I; similarly, QQ^T = I, so Q^T = Q^{-1}, which is how orthogonal matrices are often defined.
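The orthogonal-matrix chain of equalities can be checked the same way (the QR factorization here is just a convenient way to manufacture an orthogonal Q):

```python
import numpy as np

rng = np.random.default_rng(1)
# QR factorization of a random matrix gives an orthogonal Q.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
x = rng.standard_normal(3)
y = rng.standard_normal(3)

# Q preserves the inner product: Qx . Qy == x . y ...
assert np.isclose(np.dot(Q @ x, Q @ y), np.dot(x, y))
# ... which forces Q^T Q = I, i.e. Q^T = Q^{-1}.
assert np.allclose(Q.T @ Q, np.eye(3))
```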

Landau
Let $$V,W$$ be finite dimensional vector spaces over the field $$k$$. The (algebraic) dual of $$V$$ is the vector space $$V^*$$ of all linear functionals $$V\to k$$, under pointwise addition and scalar multiplication. To any linear map $$f:V\to W$$ is associated its dual map: the linear map $$f^*:W^*\to V^*$$ defined by pre-composition: $$f^*(\phi):=\phi\circ f$$.
Note that $$(g f)^*=f^* g^*$$ and $$I_V^*=I_{V^*}$$ (where $$I_V$$ denotes the identity map on $$V$$), so 'taking the dual' is a contravariant functor from the category of finite dimensional k-vector spaces to itself.
If $$V$$ has basis $$(e_1,...,e_m)$$, the dual $$V^*$$ has a corresponding 'dual' basis $$(\phi_1,...,\phi_m)$$, where $$\phi_i(e_j):=[i=j]$$. (Here [i=j] is my favorite notation for what many people call the Kronecker delta $$\delta_{ij}$$, which is a number that equals 1 if i=j and equals 0 otherwise.)
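To connect this back to the transpose, here is a short sketch (it introduces a basis $$(u_1,...,u_n)$$ of $$W$$ with dual basis $$(\psi_1,...,\psi_n)$$, which the post above doesn't name): if $$f$$ has matrix $$A=(a_{ij})$$ in these bases, then the dual map $$f^*$$ has matrix $$A^T$$ in the dual bases.

```latex
% Let f(e_j) = \sum_i a_{ij} u_i, so A = (a_{ij}) is the matrix of f.
% Expand f^*(\psi_i) in the dual basis (\phi_j) of V^* by evaluating on e_j:
\[
  \bigl(f^*(\psi_i)\bigr)(e_j)
    = \psi_i\bigl(f(e_j)\bigr)
    = \psi_i\Bigl(\sum_k a_{kj}\, u_k\Bigr)
    = a_{ij},
\]
\[
  \text{hence}\quad f^*(\psi_i) = \sum_j a_{ij}\,\phi_j .
\]
% The (j, i) entry of the matrix of f^* is therefore a_{ij},
% i.e. that matrix is A^T.
```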