Understanding Matrix Multiplication: A Vector-Based Explanation

The column vectors of a matrix describe how the matrix transforms space: each column corresponds to one of the standard basis vectors (i.e. (1,0,0), (0,1,0), and (0,0,1) for a 3x3 matrix) and records where that basis vector is sent by the transformation. The (i, j) entry of C = AB is the dot product of row i of A with column j of B. Since column j of B is the image of the j-th basis vector under B, this dot product gives the i-th component of A applied to that image, so column j of C records where the j-th basis vector ends up under the composite transformation. Taking the dot product for every combination of a row of A and a column of B therefore yields all the elements of C.
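As a concrete illustration, here is a minimal NumPy sketch (the matrices A and B are arbitrary examples, not taken from the thread) showing that each column of a matrix is the image of the corresponding basis vector, and that each entry of C = AB is a row-times-column dot product:

```python
import numpy as np

# Arbitrary 3x3 example matrices
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])
B = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [2., 0., 1.]])

# Column j of B is where B sends the j-th standard basis vector
e1 = np.array([1., 0., 0.])
print(np.allclose(B @ e1, B[:, 0]))                    # True

# Entry (i, j) of C = AB is (row i of A) . (column j of B)
C = A @ B
print(np.isclose(C[0, 0], np.dot(A[0, :], B[:, 0])))   # True

# Column j of C is A applied to column j of B, i.e. where the
# composite transformation sends the j-th basis vector
print(np.allclose(C[:, 0], A @ B[:, 0]))               # True
```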
  • #1
ajayguhan
Consider two square matrices A and B, each specifying a parallelepiped by three vectors. The x, y, z components of each vector are written in column 1, column 2, column 3 respectively. Thus A and B are of order 3x3.

Let C = AB. To get the c11 element of C, I take the dot product of row 1 of A and column 1 of B.
Row 1 contains the x, y, z components of one vector of A.
Column 1 contains the x components of the three vectors of B.

C specifies a parallelepiped whose volume is the product of the volumes of the parallelepipeds specified by A and B, since det A · det B = det(AB) = det C.
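A quick numeric check of this determinant identity, using NumPy with two arbitrary 3x3 example matrices:

```python
import numpy as np

# Arbitrary 3x3 example matrices
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])
B = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [2., 0., 1.]])

C = A @ B
# det A * det B = det(AB), so the (signed) volumes multiply
print(np.isclose(np.linalg.det(A) * np.linalg.det(B), np.linalg.det(C)))  # True
```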

c11 is the x component of one of the vectors of C.

How come the dot product of the x, y, z components of a vector of A with the x components of the three vectors of B gives the x component of a vector of C?

I know that a matrix represents a linear map, and that multiplying two matrices corresponds to composing two linear functions; that is why matrix multiplication is defined the way it is.

But how can we prove it in terms of vectors? Since vectors can be visualized, a proof of matrix multiplication in terms of vectors could give us more intuition about it.
 
  • #2
hi ajayguhan! :smile:
ajayguhan said:
Row 1 contains the x, y, z components of one vector of A.
Column 1 contains the x components of the three vectors of B.

a 3x3 matrix represents a deformation of space

consider a unit cube lined up with the axes …

the vectors of its three sides are (1,0,0) (0,1,0) and (0,0,1)

matrix A sends those three vectors to the vectors represented by the three rows of A

in other words: the cube is deformed into the parallelepiped with those three row vectors (and the whole "grid" of space is deformed in the same way)

now apply matrix B … it deforms the whole of space to match the parallelepiped defined by its three row vectors, and in particular it deforms the "A" parallelepiped

this double deformation of the unit cube is represented by the parallelepiped with the three row vectors of matrix C = AB

(and yes, the volume of the C = AB parallelepiped is the product of the volumes of the A and B parallelepipeds)
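a small numpy sketch of this picture, using two arbitrary example matrices (vectors are treated as rows, so a vector v is deformed to v @ M):

```python
import numpy as np

# arbitrary example matrices
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])
B = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [2., 0., 1.]])

e1 = np.array([1., 0., 0.])                    # first edge of the unit cube

print(np.allclose(e1 @ A, A[0]))               # A sends it to row 1 of A
print(np.allclose((e1 @ A) @ B, (A @ B)[0]))   # then B sends it to row 1 of AB

# and the volumes multiply
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
```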
ajayguhan said:
How come the dot product of the x, y, z components of a vector of A with the x components of the three vectors of B gives the x component of a vector of C?

the column vectors have nothing to do with it …

they are not related to the parallelepiped
 
  • #3
4 × 5 = 20 means we are adding 4 five times, or 5 four times.

A square matrix of order n represents an n-dimensional object.

So in the 3x3 case it represents a parallelepiped.
AB = C means I'm adding one parallelepiped n times (where n is the volume of the other parallelepiped) to obtain a new parallelepiped.

Now assume that matrix multiplication is not defined. How would you get the x, y, z components of the three vectors specifying the parallelepiped of C?
 
  • #4
ajayguhan said:
Now assume that matrix multiplication is not defined. How would you get the x, y, z components of the three vectors specifying the parallelepiped of C?

i don't understand :redface:

we'd use the rules of matrix multiplication, but we wouldn't call it "matrix multiplication"
 
  • #5
ajayguhan, I strongly suggest you drop the notion of parallelepipeds as a mechanism for understanding matrix multiplication. If you insist on some geometric interpretation, it's better to look to the concept of a linear transformation. That is why Cayley defined matrix multiplication in this apparently weird form.

Even better is to just take matrix multiplication for what it is: Some apparently weird definition that is nonetheless incredibly applicable across a wide range of concepts in mathematics.
 
  • #6
C = AB is obtained by matrix multiplication, but we know C specifies a parallelepiped whose volume is the product of the volumes of the parallelepipeds specified by A and B.

Each element of C is obtained by the dot product of a row vector of A and a column vector of B.
Now assume that we don't know how to multiply matrices.

Imagine you have two parallelepipeds, one corresponding to A and the other to B. You find the volume of one parallelepiped (i.e., its determinant); call it n. Now you add the other parallelepiped n times to obtain a new parallelepiped.

If you did this geometrically, you could obtain the components of the three vectors specifying the parallelepiped of C by measuring, with a ruler, their lengths and the angles they make with the axes.

But how would you find the components of the three vectors analytically from A and B?

The answer is that the x component of the first vector of C is obtained by the dot product of the x, y, z components of the first vector of A (i.e., row 1 of A) with the x components of the three vectors of B (i.e., column 1 of B).

Thus all the elements of C are found by taking dot products of the rows of A and the columns of B.

Now the question is: how can we prove that by taking dot products of the rows of A and the columns of B we obtain the elements of C? What is the proof? How can we prove it?
 
  • #7
ajayguhan said:
… how can we prove that by taking dot products of the rows of A and the columns of B we obtain the elements of C? What is the proof? How can we prove it?

but what do the column vectors represent?? :confused:
 
  • #8
I mentioned the columns of B as the column vectors.

Tiny Tim, do you get my question now?
 
  • #9
D H, science is full of proofs. You never accept anything without proof. There may be no proof at this instant, but that doesn't mean it isn't there, and we shouldn't just accept the definition without proof. We can think of various approaches to find the proof. That's why I posted this thread: so that I might get some clue or hint about it, or the proof itself, from some genius!
 
  • #10
ajayguhan said:
I mentioned the columns of B as the column vectors.

yes, but what do the column vectors represent?? :confused:
 
  • #11
The x components of the three vectors of B. For either matrix A or B, the x, y, z components of a vector are specified in column 1, column 2, column 3 respectively.
 
  • #12
There is a subtle reason why matrix multiplication is defined that way. It comes from the understanding of linear maps.

In linear algebra, it is shown that two linear maps are equal if and only if their actions on the basis vectors for the domain space are equal.

Given this result, it turns out that every linear map on a finite-dimensional vector space has a unique matrix representation. This leads us to ask, what is the matrix representation for a composition of linear maps (which can be shown to be linear)?

It turns out that there is a unique way we can define matrix multiplication so that there is an identification between matrices under multiplication and linear maps under composition. This is called a ring isomorphism, and it leads us to the standard definition of matrix multiplication.

BiP
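For concreteness, here is a sketch of that standard derivation, in the column-vector convention (where a matrix M represents the map x ↦ Mx, so column j of M is Me_j, the image of the j-th basis vector):

```latex
% AB represents "apply B, then A"; act on the j-th standard basis vector e_j:
\[
(AB)\,e_j \;=\; A(B\,e_j)
          \;=\; A\Bigl(\sum_k b_{kj}\,e_k\Bigr)
          \;=\; \sum_k b_{kj}\,(A\,e_k)
\]
% A e_k is column k of A, whose i-th entry is a_{ik}; reading off the i-th entry:
\[
(AB)_{ij} \;=\; \sum_k a_{ik}\,b_{kj}
          \;=\; (\text{row } i \text{ of } A)\cdot(\text{column } j \text{ of } B).
\]
```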
 
  • #13
I already mentioned, in short, what you said, in my first post:
ajayguhan said:
I know that a matrix represents a linear map, and that multiplying two matrices corresponds to composing two linear functions; that is why matrix multiplication is defined the way it is.

My question is:
how can we prove it in terms of vectors? Since vectors can be visualized, a proof of matrix multiplication in terms of vectors could give us more intuition about it.
 
  • #14
I'll be blunt. Your question makes no sense. What is the geometric meaning of the product of two parallelepipeds? You are assuming that this makes some kind of sense.

What does make sense is to look at matrix multiplication as representing a sequence of linear transformations. *That* is why matrix multiplication was defined the way it is defined. This definition turns out to have a much broader applicability, but that broader applicability ultimately traces back to the concept of linear maps.
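A small numeric illustration of this point, using NumPy with two arbitrary example matrices (the maps act on column vectors): composing the maps as functions agrees with multiplying their matrices.

```python
import numpy as np

# Arbitrary example matrices defining two linear maps on column vectors
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])
B = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [2., 0., 1.]])

f = lambda x: A @ x          # the linear map represented by A
g = lambda x: B @ x          # the linear map represented by B

x = np.array([1., -2., 3.])  # an arbitrary test vector

# Applying B and then A, as functions, matches multiplying by the single matrix AB
print(np.allclose(f(g(x)), (A @ B) @ x))   # True
```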
 
  • #15
Yeah, you're right, there is no meaning to it. I took the volume of one parallelepiped as n and added the other parallelepiped n times, but strictly and literally there is no meaning to the product of two parallelepipeds. So trying to prove matrix multiplication in terms of parallelepipeds was itself a wrong idea.

Isn't it quite fascinating that matrix multiplication has such wide applications? When I first learned it in 10th grade I was so bored and thought it was a waste of time studying it.

By the way, I thank everyone for their replies to this thread, especially D H.
 

What is matrix multiplication?

Matrix multiplication is an operation performed on two matrices to produce a new matrix. Each entry of the product is obtained by multiplying the entries of a row of the first matrix by the corresponding entries of a column of the second matrix and adding the products together.

How is matrix multiplication different from regular multiplication?

Matrix multiplication is different from regular multiplication because it combines entire rows and columns of two matrices rather than just two numbers. Additionally, the order of multiplication matters in matrix multiplication (AB is generally not equal to BA), whereas in regular multiplication it does not.
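A short sketch of both points, using NumPy with simple example matrices:

```python
import numpy as np

# Two simple 2x2 example matrices
A = np.array([[0, 1],
              [0, 0]])
B = np.array([[0, 0],
              [1, 0]])

print(A @ B)   # [[1 0], [0 0]]
print(B @ A)   # [[0 0], [0 1]]  -> AB != BA, so the order matters

# A (2x3) times a (3x4) is defined and gives a (2x4) result...
print((np.ones((2, 3)) @ np.ones((3, 4))).shape)   # (2, 4)

# ...but a (2x3) times a (2x3) is not defined (inner dimensions differ)
try:
    np.ones((2, 3)) @ np.ones((2, 3))
except ValueError as err:
    print("not defined:", err)
```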

What are the rules for matrix multiplication?

The rules for matrix multiplication are as follows:

  • The number of columns in the first matrix must be equal to the number of rows in the second matrix.
  • The resulting matrix will have the same number of rows as the first matrix and the same number of columns as the second matrix.
  • The order of multiplication matters. Multiplying AB is not the same as multiplying BA.
  • The product of two matrices is not always defined. It is only defined when the number of columns in the first matrix is equal to the number of rows in the second matrix.

What are some common applications of matrix multiplication?

Matrix multiplication has many applications in various fields such as physics, engineering, economics, and computer science. Some common applications include solving systems of linear equations, transforming geometric shapes, and performing data analysis and machine learning algorithms.
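For example, transforming a geometric shape amounts to multiplying its points by a matrix; here is a minimal sketch (the rotation angle and the square are arbitrary examples):

```python
import numpy as np

# Rotate the unit square by 90 degrees about the origin (arbitrary example)
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Corners of the unit square, one corner per column
square = np.array([[0., 1., 1., 0.],
                   [0., 0., 1., 1.]])

rotated = R @ square         # each column of the result is a rotated corner
print(np.round(rotated, 6))
# [[ 0.  0. -1. -1.]
#  [ 0.  1.  1.  0.]]
```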

What is the best way to perform matrix multiplication?

The best way to perform matrix multiplication is by using a computer or calculator. This allows for efficient and accurate calculations, especially for larger matrices. It is important to follow the rules of matrix multiplication and double-check the dimensions of the matrices before performing the operation.
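For reference, here is a minimal sketch of the definition in plain Python, checked against NumPy's built-in @ operator (the matrices are arbitrary examples):

```python
import numpy as np

def matmul(A, B):
    """Multiply matrices by the definition: C[i][j] = sum_k A[i][k] * B[k][j]."""
    rows_a, cols_a = len(A), len(A[0])
    rows_b, cols_b = len(B), len(B[0])
    if cols_a != rows_b:
        raise ValueError("columns of A must equal rows of B")
    return [[sum(A[i][k] * B[k][j] for k in range(cols_a)) for j in range(cols_b)]
            for i in range(rows_a)]

# Arbitrary example matrices (2x3 times 3x2 gives 2x2)
A = [[2, 1, 0],
     [0, 3, 1]]
B = [[1, 2],
     [0, 1],
     [2, 0]]

print(matmul(A, B))                # [[2, 5], [2, 3]]
print(np.array(A) @ np.array(B))   # same result from NumPy
```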
