Why is matrix multiplication necessary for representing linear transformations?

Gear300
What would be the proof for matrix multiplication?...or just an explanation as to why it's done the way it's done.
 
It's done like that by definition, as far as I am aware. I don't really know why it's defined like that though...:rolleyes:
 
I see...I haven't really studied matrices much, so I'm not too sure what they are (from what I can tell, a matrix is a rectangular array of data or an organization of data). I'm just wondering how multiplying two rectangular arrays of data works.
 
They're defined to multiply in that manner so that they can be used to solve systems of linear equations. The manner of multiplication is also convenient in that it directly represents a linear transformation, which is useful even in studying nonlinear mathematics, since nonlinear maps are often approximated locally by linear ones. A transformation T is linear if T(s*u + v) = s*T(u) + T(v) for all vectors u, v and scalars s. I.e., if x is a vector written as a column of numbers, a linear transformation of x can always be represented by matrix multiplication from the left (Ax).
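
To make the connection with systems of equations concrete, here is a small worked example (added for illustration, not part of the original reply). The row-times-column rule is exactly what is needed so that
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix},$$
which means the system
$$\begin{cases} ax + by = e \\ cx + dy = f \end{cases}$$
can be written as the single matrix equation
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} e \\ f \end{pmatrix}.$$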
 
Let's think of a 2x2 matrix, which represents a linear transformation of the plane. I like to think of matrices as columns of numbers, not as rows. Then the left column of the matrix represents where the point (1,0) on the horizontal axis goes. The right column represents where the point (0,1) on the vertical axis goes.

Each time you multiply a matrix by another, you are following up one linear transformation by another. You can imagine how the (1,0) point moves after one transformation, and then how the resulting vector moves after the next. The rules for calculating products of matrices might be easier to think about that way.
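
For instance (a concrete example to illustrate the column picture, not from the original reply): a counterclockwise rotation of the plane by 90° sends (1,0) to (0,1) and (0,1) to (-1,0), so its matrix has those images as its columns,
$$R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}.$$
Following one 90° rotation by another should give a 180° rotation, and the matrix product agrees:
$$RR = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix},$$
which indeed sends (1,0) to (-1,0) and (0,1) to (0,-1).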
 
More abstractly, if you think of matrices as linear transformations R^n->R^m, then matrix multiplication corresponds to composition of linear maps. So A(B(v))=(AB)(v), where AB is the matrix product. This requires an (easy) proof.
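
A sketch of that easy proof, filled in here for completeness: write ##e_k## for the kth standard basis vector, so ##Be_k## is the kth column of B, with entries ##B_{jk}##. Applying A to it gives a vector whose ith entry is
$$\big(A(Be_k)\big)_i = \sum_j A_{ij} B_{jk},$$
and since a linear map is determined by what it does to the basis vectors, the matrix of the composition must have exactly these entries. That is precisely the usual row-times-column rule, ##(AB)_{ik} = \sum_j A_{ij} B_{jk}##.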
 