# Matrix multiplication

ajayguhan
Consider two square matrices A and B, each specifying a parallelepiped by three different vectors. The x, y, z components are written in column 1, column 2, and column 3 respectively, so each row of the matrix is one vector. Thus A and B are both of order 3×3.

Let C = AB. To get the c11 element of C, I take the dot product of row 1 of A and column 1 of B.
Row 1 of A contains the x, y, z components of one vector of A.
Column 1 of B contains the x components of all three vectors of B.

C specifies a parallelepiped whose volume is the product of the volumes of the parallelepipeds specified by A and B, since det A · det B = det AB = det C.
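This can be checked numerically with a small sketch (NumPy; the matrices below are arbitrary examples, not fixed by the thread):

```python
import numpy as np

# Rows of A and B are the three edge vectors of each parallelepiped
# (x, y, z components in columns 1, 2, 3).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])

C = A @ B  # matrix product

# c11 is the dot product of row 1 of A and column 1 of B
c11 = np.dot(A[0, :], B[:, 0])
print(c11, C[0, 0])  # the two agree

# det A * det B = det(AB): signed volumes multiply
print(np.isclose(np.linalg.det(A) * np.linalg.det(B), np.linalg.det(C)))
```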

c11 is the x component of one of the vectors of C.

How can the dot product of the x, y, z components of one vector of A with the x components of the three vectors of B give the x component of a vector of C?

I know that a matrix represents a linear map, and that multiplying two matrices means taking the composition of two linear functions; that is why matrix multiplication is defined the way it is.

But how can we prove it in terms of vectors? Since vectors are easy to visualize, a proof of matrix multiplication in terms of vectors could give us more intuition about it.

Homework Helper
hi ajayguhan!
> Row 1 of A contains the x, y, z components of one vector of A.
> Column 1 of B contains the x components of all three vectors of B.

a 3x3 matrix represents a deformation of space

consider a unit cube lined up with the axes …

the vectors of its three sides are (1,0,0) (0,1,0) and (0,0,1)

matrix A sends those three vectors to the vectors represented by the three rows of A

in other words: the cube is deformed into the parallelepiped with those three row vectors (and the whole "grid" of space is deformed in the same way)

now apply matrix B … it deforms the whole of space to match the parallelepiped defined by its three row vectors, and in particular it deforms the "A" parallelepiped

this double deformation of the unit cube is represented by the parallelepiped with the three row vectors of the matrix C = AB

(and yes, the volume of the C = AB parallelepiped is the product of the volumes of the A and B parallelepipeds)
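This picture can be sketched numerically. With the row-vector convention used above (a vector v is a row, and A sends v to v @ A, so the basis vectors land on the rows of A), doing A first and then B composes into a single matrix product; note that with the opposite column-vector convention the composite is written in the reverse order. The matrices here are arbitrary examples:

```python
import numpy as np

# Edge vectors of the unit cube, as rows: (1,0,0), (0,1,0), (0,0,1)
cube = np.eye(3)

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])

# Row convention: applying A sends row vector v to v @ A,
# so the cube's edges land on the rows of A.
after_A = cube @ A       # equals A itself
# Applying B next deforms those rows again:
after_both = after_A @ B  # equals the product A @ B
print(np.allclose(after_A, A))
print(np.allclose(after_both, A @ B))
```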
> How can the dot product of the x, y, z components of one vector of A with the x components of the three vectors of B give the x component of a vector of C?

the column vectors have nothing to do with it …

they are not related to the parallelepiped

ajayguhan
4×5 = 20 means we are adding 4 five times, or 5 four times.

A square matrix of order n represents an n-dimensional object.

So a 3×3 matrix represents a parallelepiped.
AB = C would then mean I'm adding one parallelepiped n times (where n is the volume of the other parallelepiped) to obtain a new parallelepiped.

Now assume that matrix multiplication is not defined. How would you get the x, y, z components of the three vectors specifying the parallelepiped of C?

Homework Helper
> Now assume that matrix multiplication is not defined. How would you get the x, y, z components of the three vectors specifying the parallelepiped of C?

i don't understand

we'd use the rules of matrix multiplication, but we wouldn't call it "matrix multiplication"

Staff Emeritus
ajayguhan, I strongly suggest you drop the notion of parallelepipeds as a mechanism for understanding matrix multiplication. If you insist on a geometric interpretation, it's better to look to the concept of a linear transformation. That is why Cayley defined matrix multiplication in this apparently weird form.

Even better is to just take matrix multiplication for what it is: Some apparently weird definition that is nonetheless incredibly applicable across a wide range of concepts in mathematics.

ajayguhan
C = AB is obtained by matrix multiplication, but we know C specifies a parallelepiped whose volume is the product of the volumes of the parallelepipeds specified by A and B.

Each element of C is obtained by the dot product of a row vector of A and a column vector of B.
Now assume that we don't know how to multiply matrices.

Imagine you have two parallelepipeds, one corresponding to A and another to B. You find the volume of one parallelepiped (i.e., its determinant); call it n. Now you add the other parallelepiped n times to obtain a new parallelepiped.

If you did this geometrically, you could obtain the components of the three vectors specifying the parallelepiped of C by measuring, with a ruler, their lengths and the angles they make with the axes.

But how would you find the components of the three vectors analytically from A and B?

The answer is that the x component of the first vector of C is obtained by the dot product of the x, y, z components of the first vector of A (i.e., row 1 of A) with the x components of the three vectors of B (i.e., column 1 of B).

Thus all the elements of C are found by taking dot products of the rows of A and the columns of B.

Now the question is: how can we prove that by taking dot products of the rows of A and the columns of B we obtain the elements of C? What is the proof?
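In index notation, the rule in question states that the entry in row i, column j of C is the dot product of row i of A with column j of B:

$$c_{ij} = \sum_{k=1}^{3} a_{ik}\, b_{kj}$$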

Homework Helper
> … how can we prove that by taking dot products of the rows of A and the columns of B we obtain the elements of C? What is the proof?

but what do the column vectors represent??

ajayguhan
By column vector I meant a column of B.

Tiny Tim, do you get my question now?

ajayguhan
D H, science is full of proofs; you never accept anything without proof. There may be no proof at this instant, but that doesn't mean it isn't there, and we shouldn't just accept the definition without one. We can think of various approaches to finding a proof. That's basically why I posted this thread: so that I might get some clue or hint about it, or the proof itself, from someone.

Homework Helper
> By column vector I meant a column of B.

yes, but what do the column vectors represent??

ajayguhan
The x components of the three vectors of B. In both A and B, the x, y, z components of each vector are specified in column 1, column 2, and column 3 respectively.

Bipolarity
There is a subtle reason why matrix multiplication is defined that way. It comes from the understanding of linear maps.

In linear algebra, it is shown that two linear maps are equal if and only if their actions on the basis vectors for the domain space are equal.

Given this result, it turns out that every linear map on a finite-dimensional vector space has a unique matrix representation (with respect to a chosen basis). This leads us to ask: what is the matrix representation of a composition of linear maps (which can itself be shown to be linear)?

It turns out that there is a unique way to define matrix multiplication so that matrices under multiplication are identified with linear maps under composition. This identification is a ring isomorphism, and it leads us to the standard definition of matrix multiplication.
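Concretely, writing the maps component-wise shows why the row-times-column rule is forced. If $y = Bx$ and $z = Ay$, then

$$z_i = \sum_j a_{ij}\, y_j = \sum_j a_{ij} \sum_k b_{jk}\, x_k = \sum_k \Big( \sum_j a_{ij}\, b_{jk} \Big) x_k,$$

so the matrix representing the composition must have $(i,k)$ entry $\sum_j a_{ij} b_{jk}$, i.e. the dot product of row $i$ of $A$ with column $k$ of $B$.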

BiP
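A minimal numerical sketch of this identification (the two maps and the standard basis of R³ are arbitrary choices for illustration): build the matrix of each linear map column by column from its action on the basis vectors, then check that the matrix of the composition equals the product of the matrices.

```python
import numpy as np

# Two linear maps on R^3, given as functions (arbitrary examples)
def f(v):
    x, y, z = v
    return np.array([x + 2*y, 3*z, x - y])

def g(v):
    x, y, z = v
    return np.array([2*x, y + z, x + y + z])

def matrix_of(h):
    """Matrix of a linear map: column j is h applied to basis vector e_j."""
    return np.column_stack([h(e) for e in np.eye(3)])

F, G = matrix_of(f), matrix_of(g)

# Matrix of the composition f∘g equals the product F @ G
FG = matrix_of(lambda v: f(g(v)))
print(np.allclose(FG, F @ G))
```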

ajayguhan
I already mentioned briefly in my first post what you said:
> I know that a matrix represents a linear map, and that multiplying two matrices means taking the composition of two linear functions; that is why matrix multiplication is defined the way it is.

My question is:
> But how can we prove it in terms of vectors? Since vectors are easy to visualize, a proof of matrix multiplication in terms of vectors could give us more intuition about it.

Staff Emeritus