Matrix Multiplication and Function Composition

In summary, the conversation covers linear algebra and the idea that matrix multiplication is a composition of functions. Matrices are used to represent linear transformations, and the thread references an article explaining how a linear transformation acts on basis vectors, along with the observation that g(x) = Bx. A helpful exercise is suggested for understanding why matrix multiplication is equivalent to composition of linear functions.
  • #1
Septimra
I am doing linear algebra and want to fully understand it, not just pass the class. I was recently taught matrix multiplication and decided to look up how it works. The good part is that I understand the concept: matrices are a way of representing linear transformations, so matrix multiplication is actually a composition of functions. That is why it is associative but not commutative.
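Those two properties are easy to check numerically. A minimal sketch in Python with NumPy, using the two matrices from the article discussed below (the third matrix C is an arbitrary choice, added only for the associativity check):

```python
import numpy as np

# The two matrices from the article: A represents f, B represents g
A = np.array([[2, 1],
              [4, 3]])
B = np.array([[1, 2],
              [1, 0]])
C = np.array([[0, 1],
              [1, 1]])  # arbitrary third matrix for the associativity check

# Not commutative: AB and BA generally differ
print(np.array_equal(A @ B, B @ A))              # False

# Associative: (AB)C equals A(BC)
print(np.array_equal((A @ B) @ C, A @ (B @ C)))  # True
```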

But i recently came across this article and I could not follow the math near the middle of the page.
http://nolaymanleftbehind.wordpress.com/2011/07/10/linear-algebra-what-matrices-actually-are/

the matrices that are being multiplied are

[ 2 1 ] [ 1 2 ]
[ 4 3 ] [ 1 0 ]

the basis vectors are w1 and w2

and w1 = [ 1 0 ]
and w2 = [ 0 1 ]

The author states that all that is needed is to see how the linear transformation affects the basis vectors.

Then it states that f(g(w1)) = f(w1+w2)
How does that work? Where on Earth do you plug in the w1?
Please help
 
  • #2
That would depend on what the author sees as f and g ... but the basic principle is that a linear transformation can be represented as a transformation of the coordinate system. A square in an oblique coordinate system looks the same as an oblique shape in a rectangular coordinate system.
 
  • #3
Septimra said:
But i recently came across this article and I could not follow the math near the middle of the page.
http://nolaymanleftbehind.wordpress.com/2011/07/10/linear-algebra-what-matrices-actually-are/

the matrices that are being multiplied are

[ 2 1 ] [ 1 2 ]
[ 4 3 ] [ 1 0 ]

the basis vectors are w1 and w2

and w1 = [ 1 0 ]
and w2 = [ 0 1 ]

The author states that all that is needed is to see how the linear transformation affects the basis vectors.

Then it states that f(g(w1)) = f(w1+w2)
How does that work? Where on Earth do you plug in the w1?
Please help
If we write elements of ##\mathbb R^2## as 2×1 matrices, the definition of ##g:\mathbb R^2\to\mathbb R^2## can be written as ##g(x)=Bx## for all ##x\in\mathbb R^2##. So
$$g(w_1)=Bw_1 =\begin{pmatrix}1 & 2\\ 1 & 0\end{pmatrix}\begin{pmatrix}1 \\ 0\end{pmatrix}=\begin{pmatrix}1\\ 1\end{pmatrix}=\begin{pmatrix}1\\ 0\end{pmatrix}+\begin{pmatrix}0\\ 1\end{pmatrix}=w_1+w_2.$$
You may find https://www.physicsforums.com/showthread.php?p=4402648#post4402648 useful.
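For instance, that computation can be checked numerically. A minimal sketch in Python with NumPy, writing A for the matrix of f and B for the matrix of g as in the article:

```python
import numpy as np

A = np.array([[2, 1],
              [4, 3]])   # matrix of f
B = np.array([[1, 2],
              [1, 0]])   # matrix of g
w1 = np.array([1, 0])
w2 = np.array([0, 1])

g_w1 = B @ w1                  # g(w1) = Bw1
print(g_w1)                    # [1 1], i.e. w1 + w2

# By linearity, f(g(w1)) = f(w1 + w2) = f(w1) + f(w2)
print(A @ g_w1)                # [3 7]
print(A @ w1 + A @ w2)         # [3 7], the same vector
```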
 
  • #4
Thank you a lot, I appreciate it. I now see what the author was saying.
But I still have one minor question. I thought the author was trying to prove that g(x) = Bx. I now see that I was mistaken. But could one of you prove this? How does g(x) = Bx if x is a vector?
 
  • #5
Septimra said:
How does g(x) = Bx if x is a vector?
x is an element of ##\mathbb R^2##. If we use the convention to write elements of ##\mathbb R^2## as 2×1 matrices, then we can just define ##g(x)=Bx## for all ##x\in\mathbb R^2##. If we instead use the convention to write elements of ##\mathbb R^2## in the standard ##(x_1,x_2)## notation for ordered pairs, the notation ##Bx## doesn't work, but we could e.g. define
$$g(x)=\left(\left(B\begin{pmatrix}x_1\\ x_2\end{pmatrix}\right)_1,\left(B\begin{pmatrix}x_1\\ x_2\end{pmatrix}\right)_2\right)$$ for all ##x\in\mathbb R^2##. This looks really awkward of course. This is why I chose to use the matrix notation instead of the ordered pair notation.

We could also define g by saying that it's the function defined by ##g(s,t)=(s+2t,s)## for all ##s,t\in\mathbb R##. The matrix of this function with respect to the standard ordered basis ##(e_1,e_2)## where ##e_1=(1,0)## and ##e_2=(0,1)##, has ##g(e_j)_i## on row i, column j, as explained in the FAQ post. This is the ith component of the vector we get when g takes e_j as input. For example, row 2, column 1, of this matrix is
$$g(e_1)_2=(g(1,0))_2=(1+2\cdot 0,1)_2=1.$$ Note that this is equal to ##B_{21}##, as it's supposed to be.
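The column-by-column construction described above can be sketched in Python with NumPy (the names g, e1, e2, M are just for illustration):

```python
import numpy as np

def g(s, t):
    # the example function from this post: g(s, t) = (s + 2t, s)
    return (s + 2 * t, s)

e1, e2 = (1, 0), (0, 1)

# Column j of the matrix of g is g(e_j); entry (i, j) is the
# i-th component of g(e_j)
M = np.column_stack([g(*e1), g(*e2)])
print(M)
# [[1 2]
#  [1 0]]  -- exactly B; in particular M[1, 0] == 1 == B_21
```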

If you want to understand how matrix multiplication is really composition of linear functions, then you should study the FAQ post and do this exercise: Let A and B be linear functions from ##\mathbb R^n## to ##\mathbb R^n##. Let ##[A]## and ##[B]## denote their matrix representations with respect to the standard basis for ##\mathbb R^n##, and let ##[A\circ B]## denote the matrix representation of ##A\circ B## with respect to the same basis. Prove that for all ##i,j\in\{1,\dots,n\}##, we have
$$[A\circ B]_{ij}=([A][B])_{ij}.$$ This result tells us that the matrix representation of ##A\circ B## is equal to the matrix product of the matrix representations of A and B.

Hint: The definition of matrix multiplication is ##(XY)_{ij}=\sum_k X_{ik}Y_{kj}##. You will also have to use the fact that every vector is a linear combination of basis vectors.
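As a sanity check before attempting the proof, the claim can be verified numerically for the 2×2 matrices from the article. A sketch in Python with NumPy, building the matrix of the composition column by column from its action on the standard basis vectors:

```python
import numpy as np

A = np.array([[2, 1], [4, 3]])   # [A], the matrix of a linear map A
B = np.array([[1, 2], [1, 0]])   # [B], the matrix of a linear map B

n = 2
basis = np.eye(n)   # columns are the standard basis vectors e_1, ..., e_n

# Column j of the matrix of the composition is (A o B)(e_j) = A(B(e_j))
comp = np.column_stack([A @ (B @ basis[:, j]) for j in range(n)])

print(np.array_equal(comp, A @ B))   # True: matrix of A o B == [A][B]
```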
 

What is matrix multiplication?

Matrix multiplication is a mathematical operation that takes two matrices as input and produces a new matrix as output. The entry in row i, column j of the product is obtained by multiplying the entries of row i of the first matrix with the corresponding entries of column j of the second matrix and summing up the products.
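A small worked example in Python with NumPy (the matrices X and Y are arbitrary choices for illustration):

```python
import numpy as np

X = np.array([[1, 2],
              [3, 4]])
Y = np.array([[5, 6],
              [7, 8]])

# Entry (i, j) of XY is row i of X dotted with column j of Y,
# e.g. (XY)[0][0] = 1*5 + 2*7 = 19
print(X @ Y)
# [[19 22]
#  [43 50]]
```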

What is the difference between matrix multiplication and function composition?

Matrix multiplication is an operation on matrices, whereas function composition is an operation on functions: it applies one function to the output of another. The two are closely related, since multiplying the matrices of two linear functions produces the matrix of their composition.
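A minimal sketch of function composition in plain Python (f, g, and compose are illustrative names, not from the thread):

```python
# Function composition: (f o g)(x) means f(g(x))
def f(x):
    return 2 * x + 1

def g(x):
    return x ** 2

def compose(f, g):
    """Return the composite function x -> f(g(x))."""
    return lambda x: f(g(x))

fg = compose(f, g)
print(fg(3))              # f(g(3)) = f(9) = 19
print(compose(g, f)(3))   # g(f(3)) = g(7) = 49 -- composition order matters
```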

Why is matrix multiplication important?

Matrix multiplication is important in many areas of mathematics, science, and engineering. It is used to solve systems of linear equations, transform geometric shapes, and perform operations on vectors. It is also a key component in various algorithms and calculations used in computer science and data analysis.

What are some properties of matrix multiplication?

Matrix multiplication is associative, meaning that the grouping of the factors does not affect the result: (AB)C = A(BC). It is also distributive over addition, meaning that the product of a matrix with the sum of two matrices equals the sum of the individual products: A(B + C) = AB + AC. However, matrix multiplication is not commutative: in general AB ≠ BA, so the order of the factors does affect the result.
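Each of these properties can be demonstrated numerically. A sketch in Python with NumPy (the three matrices are arbitrary choices for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

# Associative: (AB)C == A(BC)
print(np.array_equal((A @ B) @ C, A @ (B @ C)))    # True

# Distributive: A(B + C) == AB + AC
print(np.array_equal(A @ (B + C), A @ B + A @ C))  # True

# Not commutative: AB != BA in general
print(np.array_equal(A @ B, B @ A))                # False
```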

How do I perform matrix multiplication or function composition?

To perform matrix multiplication, you first need to make sure that the number of columns in the first matrix is equal to the number of rows in the second matrix. Then, for each entry of the output matrix, multiply the entries of the corresponding row of the first matrix with the entries of the corresponding column of the second matrix and sum up the products. To perform function composition, you substitute the output of one function into the input of another function and evaluate the resulting composite function.
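The procedure above can be sketched directly in plain Python, with matrices given as lists of rows (mat_mul is an illustrative name; the example multiplies the two matrices from the thread):

```python
def mat_mul(X, Y):
    """Multiply two matrices given as lists of rows.

    Requires that the number of columns of X equals the number of rows of Y.
    """
    assert len(X[0]) == len(Y), "inner dimensions must agree"
    # Entry (i, j) of the product is the dot product of
    # row i of X with column j of Y
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

print(mat_mul([[2, 1], [4, 3]], [[1, 2], [1, 0]]))
# [[3, 4], [7, 8]]
```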
