
- Thread starter marschmellow

- #1


- #2


Say you have a matrix X, and some component of it is in location X_{ij} (row i, column j). In the transpose, that component ends up in location X_{ji}.

For example say you have this matrix Y:

|A B|

|C D|

The transpose is denoted by a superscript T. So Y^{T} is:

|A C|

|B D|

You don't need a square matrix to do a transposition.
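To make the index swap concrete, here is a minimal sketch in plain Python (the helper name is mine, for illustration): the entry at (i, j) moves to (j, i), and nothing requires the matrix to be square.

```python
def transpose(m):
    """Transpose a matrix given as a list of rows: entry (i, j) moves to (j, i)."""
    return [[m[i][j] for i in range(len(m))] for j in range(len(m[0]))]

# The 2x2 example from above:
Y = [["A", "B"],
     ["C", "D"]]
print(transpose(Y))  # [['A', 'C'], ['B', 'D']]

# No square matrix needed: a 2x3 matrix transposes to a 3x2 matrix.
X = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(X))  # [[1, 4], [2, 5], [3, 6]]
```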

- #3


If the transpose by itself is actually meaningless, and only the adjoint or Hermitian thing has a meaningful interpretation, then I probably don't want to know, because I don't know if my brain can handle the idea of complex quantities on any order higher than scalars.

- #4

Fredrik

Staff Emeritus

Science Advisor

Gold Member


I don't think there's a ton of things you can say about transposes in general. But if you know what a specific matrix "does", I'm sure you can figure out what the transpose does. For example, if R rotates a vector in space, then R^{T} is a rotation in the opposite direction.

marschmellow said:
Right, I understand the mathematics of it. I just don't understand the interpretation of it. I'm trying to picture the difference between a matrix and its transpose geometrically, but I don't have any insights.

How do you picture the matrix geometrically? It seems that you have to do that before you can picture the difference.

marschmellow said:
If the transpose by itself is actually meaningless, and only the adjoint or Hermitian thing has a meaningful interpretation, then I probably don't want to know, because I don't know if my brain can handle the idea of complex quantities on any order higher than scalars.

You should try to get over that as soon as possible. Complex matrices are actually easier to deal with than real ones.
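Fredrik's rotation example is easy to verify numerically. Here is a small sketch in plain Python (the helper names are mine, not from the thread): for a 2-D rotation R(θ), the transpose equals R(−θ) entry by entry, and R^{T}R is the identity, so R^{T} undoes R.

```python
import math

def rotation(theta):
    """2-D rotation matrix for angle theta, as a list of rows."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s],
            [s,  c]]

def transpose(m):
    return [[m[i][j] for i in range(len(m))] for j in range(len(m[0]))]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

theta = 0.7
Rt = transpose(rotation(theta))
Rneg = rotation(-theta)

# R^T is the rotation by -theta, entry by entry.
assert all(abs(Rt[i][j] - Rneg[i][j]) < 1e-12 for i in range(2) for j in range(2))

# And R^T undoes R: their product is the identity (up to rounding).
P = matmul(Rt, rotation(theta))
assert abs(P[0][0] - 1) < 1e-12 and abs(P[0][1]) < 1e-12
```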

- #5


Fredrik said:
I don't think there's a ton of things you can say about transposes in general. But if you know what a specific matrix "does", I'm sure you can figure out what the transpose does. For example, if R rotates a vector in space, then R^{T} is a rotation in the opposite direction.

How do you picture the matrix geometrically? It seems that you have to do that before you can picture the difference.

You should try to get over that as soon as possible. Complex matrices are actually easier to deal with than real ones.

1. If you tell me that an orthogonal matrix is defined by the fact that its transpose is equal to its inverse, then I can look at how the rows and columns each form an orthonormal basis, and then I understand how orthogonal matrices represent rotations. Therefore the transpose of any rotation matrix rotates in the opposite direction. So in that case I have a pretty good grasp on what the transpose does, but in general, I still don't.

2. I picture a matrix as a basis, usually.

3. Okay, that's promising.

Thank you for your reply.

- #6

AlephZero

Science Advisor

Homework Helper


It might make more sense to you if you just think about the mathematical operations involved. For example, if you have column vectors X and Y, then X^{T}Y is a 1×1 matrix (the dot product of X and Y), and XY^{T} is a square matrix (the outer product).

Both those basic operations occur frequently as the "building blocks" of more complicated matrix expressions.
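Those two building blocks can be sketched in plain Python (illustrative code, not from the thread): X^{T}Y collapses to a single number, while XY^{T} produces a full matrix whose (i, j) entry is x_i·y_j.

```python
x = [1, 2, 3]
y = [4, 5, 6]

# X^T Y: a 1x1 result, i.e. the dot product of the two vectors.
inner = sum(xi * yi for xi, yi in zip(x, y))
print(inner)  # 32

# X Y^T: an n x n matrix whose (i, j) entry is x_i * y_j (the outer product).
outer = [[xi * yj for yj in y] for xi in x]
print(outer)  # [[4, 5, 6], [8, 10, 12], [12, 15, 18]]
```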

- #7

mathwonk

Science Advisor

Homework Helper

2020 Award


- #8


So mathwonk more or less described the point of the transpose, but let's state it in a simpler way (in the case of **R**^{n} with the usual dot product).

The transpose of a matrix A is the unique matrix A^{T} such that, for any vectors x and y (of the appropriate size), we have the equation

[tex]Ax \cdot y = x \cdot A^T y.[/tex]

That's it; fundamentally there's nothing more to the transpose. (Using more general terminology, one might say that A^{T} is *adjoint* to A.) Note that since the dot product is symmetric, this also says that

[tex]x \cdot Ay = A^T x \cdot y.[/tex]

edit: This also explains the equations characterizing, say, orthogonal matrices. The point of an orthogonal matrix Q is that it preserves the inner product; that is, Qx · Qy = x · y for any x and y. Using the above equations characterizing the transpose, you get that Q^{T}Qx · y = Qx · Qy = x · y for any x and y, so that Q^{T}Q = I; similarly, QQ^{T} = I, so Q^{T} = Q^{-1}, which is how orthogonal matrices are often defined.
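The defining equation is easy to check numerically. A small sketch in plain Python (the helper names are mine, for illustration): for any A, x, y of compatible sizes, the number Ax · y equals x · A^{T}y.

```python
def transpose(m):
    return [[m[i][j] for i in range(len(m))] for j in range(len(m[0]))]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

A = [[1, 2],
     [3, 4]]
x = [5, 6]
y = [7, 8]

lhs = dot(matvec(A, x), y)             # Ax . y
rhs = dot(x, matvec(transpose(A), y))  # x . A^T y
print(lhs, rhs)  # 431 431
assert lhs == rhs
```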



- #9

Landau

Science Advisor


Let [tex]V,W[/tex] be finite dimensional vector spaces over the field [tex]k[/tex]. The (algebraic) dual of [tex]V[/tex] is the vector space [tex]V^*[/tex] of all linear functionals [tex]V\to k[/tex], under pointwise addition and scalar multiplication. To any linear map [tex]f:V\to W[/tex] is associated its dual map: the linear map [tex]f^*:W^*\to V^*[/tex] defined by pre-composition: [tex]f^*(\phi):=\phi\circ f[/tex].

Note that [tex](g\circ f)^*=f^*\circ g^*[/tex] and [tex](I_V)^*=I_{V^*}[/tex] (where [tex]I_V[/tex] denotes the identity map on V), so 'taking the dual' is a contravariant functor from the category of finite dimensional k-vector spaces to itself.

If [tex]V[/tex] has basis [tex](e_1,...,e_m)[/tex], the dual [tex]V^*[/tex] has a corresponding 'dual' basis, [tex](\phi_1,...,\phi_m)[/tex], where [tex]\phi_i(e_j):=[i=j][/tex]. (Here [i=j] is my favorite notation for what many people call the Kronecker delta [itex]\delta_{i,j}[/itex], which is a number that equals 1 if i=j and equals 0 otherwise.)

If the linear map f has matrix M with respect to some bases, then f* has matrix M^T with respect to the corresponding dual bases.
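That last statement can be checked in coordinates. A sketch in plain Python (all names are mine, for illustration): store a functional φ by its coordinates in the dual basis, so φ(v) = Σ φ_i·v_i; then the dual-basis coordinates of f*(φ) = φ ∘ f come out as M^{T} applied to φ's coordinates.

```python
def transpose(m):
    return [[m[i][j] for i in range(len(m))] for j in range(len(m[0]))]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

M = [[1, 2],
     [3, 4]]                 # matrix of f with respect to some bases

def f(v):
    return matvec(M, v)

# A functional phi is stored by its dual-basis coordinates: phi(v) = p1*v1 + p2*v2.
phi = [5, 7]
def apply_functional(p, v):
    return sum(pi * vi for pi, vi in zip(p, v))

# Pull back phi along f: (f* phi)(v) = phi(f(v)).
# Its dual-basis coordinates are its values on the basis vectors e_1, e_2.
e = [[1, 0], [0, 1]]
pullback_coords = [apply_functional(phi, f(ej)) for ej in e]
print(pullback_coords)  # [26, 38]

# Those coordinates agree with M^T applied to phi's coordinates.
assert pullback_coords == matvec(transpose(M), phi)
```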
