# Matrix Transpose Function

In summary: Transposition is not a function of the multiplicative structure of matrices. The thread discusses whether the transpose of a matrix can be expressed in power series form, and it is pointed out that this is not possible, because any two functions of the same matrix with power series representations must commute, while a matrix generally does not commute with its own transpose. Other ways of expressing matrix functions, such as coordinate-wise definitions or as linear maps, are suggested as alternatives. Ultimately, viewing transposition as a linear function between two isomorphic but not identical spaces of vectors explains why it has no power series expression in the matrix itself.

TL;DR Summary
How is the transpose function of a matrix expressed?
One way to express a function of a matrix A is by a power series (a Taylor expansion). It is not too difficult to show that two functions f(A) and g(A) with such power series representations must commute, i.e. f(A)g(A) = g(A)f(A). But matrices typically do not commute with their own transpose, so presumably the transpose function does not have a convergent power series expansion? I had not previously appreciated that even simple matrix functions may not have a power series representation. Is there another way to express the matrix transpose function, or matrix functions in general?
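The commutation claim in the question is easy to check numerically. A minimal sketch using NumPy (not part of the original thread; the polynomial choices for f and g are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # a generic (almost surely non-normal) matrix

# Two functions of A given by (truncated) power series, i.e. polynomials in A:
f = A @ A              # f(A) = A^2
g = np.eye(3) + 2 * A  # g(A) = I + 2A

# Any two polynomials in the same matrix commute:
assert np.allclose(f @ g, g @ f)

# ...but a generic matrix does not commute with its own transpose:
assert not np.allclose(A @ A.T, A.T @ A)
```

The second assertion is exactly the obstruction in the question: if the transpose were a power series in A, it would have to commute with A.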

> One way to express a function of a matrix A is by a power series (a Taylor expansion). It is not too difficult to show that two functions f(A) and g(A) with such power series representations must commute, i.e. f(A)g(A) = g(A)f(A). But matrices typically do not commute with their own transpose, so presumably the transpose function does not have a convergent power series expansion? I had not previously appreciated that even simple matrix functions may not have a power series representation. Is there another way to express the matrix transpose function, ...
Yes. Transposition is a linear map, so your power series ends early, after the degree-one term: each output entry is a linear function of the input entries, ##(f(a_{ij}))_{kl} = f_{kl}(a_{ij}) = a_{lk}##.
> ... or matrix functions in general?
No. "Functions in general" means almost complete arbitrariness, so there is no common structure to exploit. The only meaningful way to express them is by coordinates: ##f(a_{ij})=f_{kl}(a_{11},\ldots , a_{nm})##, i.e. each entry of the output is some function of all entries of the input.
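Transposition itself fits this coordinate-wise scheme with the particular choice ##f_{kl}(a_{11},\ldots , a_{nm}) = a_{lk}##. A small illustrative sketch (the function name is my own, not from the thread):

```python
import numpy as np

def coordinate_function(A):
    """Define a matrix function entry by entry: each output entry f_{kl}
    may be an arbitrary function of all input entries a_{ij}.
    Here we choose f_{kl} = a_{lk}, which recovers the transpose."""
    n, m = A.shape
    out = np.empty((m, n))
    for k in range(m):
        for l in range(n):
            out[k, l] = A[l, k]  # the coordinate rule f_{kl} = a_{lk}
    return out

A = np.arange(6.0).reshape(2, 3)
assert np.array_equal(coordinate_function(A), A.T)
```

The point is that nothing about this definition uses matrix multiplication; it only uses the entries as an ##n\cdot m## tuple of numbers.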

You have a matrix, but you talk about analysis. From the analytical point of view, a matrix is simply an ##n\cdot m## tuple of numbers or variables; you cannot expect it to behave like a real or complex number. You have a linear function ##A\, : \,\mathbb{R}^n \longrightarrow \mathbb{R}^m##. If you want to treat the matrix itself as a variable, then you have to specify the space the matrix comes from, e.g. an algebraic group, and consider paths within this space, e.g. ##t \longmapsto t\cdot A##.

What is variable and what is constant?

Transposition is ##\tau\, : \,\mathbb{M}(n,m) \longrightarrow \mathbb{M}(m,n)##, i.e. a linear function between two isomorphic but not identical spaces, each of which can be viewed as a space of vectors of length ##n\cdot m##. Here the constants represent ##\tau## itself, and the variables are the ##n\times m## input entries and ##m\times n## output entries. As transposition is linear, it has a matrix representation ##\tau \in \mathbb{M}(nm,nm)## with ##(nm)^2## entries.
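That ##nm \times nm## matrix can be written down explicitly: after identifying ##\mathbb{M}(n,m)## with ##\mathbb{R}^{nm}## by column-stacking (the ##\operatorname{vec}## operation), transposition becomes a permutation matrix, usually called the commutation matrix. A sketch (function name and the column-major ##\operatorname{vec}## convention are my assumptions, not from the thread):

```python
import numpy as np

def commutation_matrix(n, m):
    """Build the (nm x nm) permutation matrix K with K @ vec(A) = vec(A.T)
    for any n x m matrix A, where vec stacks columns (Fortran order)."""
    K = np.zeros((n * m, n * m))
    for i in range(n):
        for j in range(m):
            # vec(A) places A[i, j] at position j*n + i;
            # vec(A.T) places it at position i*m + j.
            K[i * m + j, j * n + i] = 1.0
    return K

n, m = 2, 3
A = np.arange(n * m, dtype=float).reshape(n, m)
K = commutation_matrix(n, m)
vec = lambda M: M.flatten(order="F")  # column-stacking vec
assert np.array_equal(K @ vec(A), vec(A.T))
```

Since K is a permutation matrix, it has exactly one 1 in each row and column; the other ##(nm)^2 - nm## of its entries are 0, and it is linear in the entries of A, exactly as the argument above predicts.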