Show colsp(AB) is contained in colsp(A)

Thread starter: Robb

Homework Statement


Let A and B be matrices for which the product AB is defined. Show that the column space of AB is contained in the column space of A.

Homework Equations


Perform one elementary row operation on matrix A to obtain matrix B.

The Attempt at a Solution



##A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad B = 2A = \begin{pmatrix} 2 & 4 \\ 6 & 8 \end{pmatrix}, \qquad AB = \begin{pmatrix} 14 & 20 \\ 30 & 44 \end{pmatrix}##

Obviously the colsp(A) = colsp(B) given that B is a linear combination of A, but I don't see how colsp(AB) = colsp(A) and is therefore contained in A. Please advise.
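The worked example above can be sanity-checked numerically. Here is a minimal NumPy sketch (my own illustration, not part of the original post); the least-squares solve is just one convenient way to test whether a vector lies in the column space of A:

```python
import numpy as np

# Matrices from the attempt above
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = 2 * A
AB = A @ B  # AB is [[14, 20], [30, 44]], matching the attempt above

# Each column of AB should satisfy A x = (column of AB) for some x,
# i.e. it lies in the column space of A.
for k in range(AB.shape[1]):
    x, *_ = np.linalg.lstsq(A, AB[:, k], rcond=None)
    assert np.allclose(A @ x, AB[:, k])
```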
 
To make this easy, let's assume both matrices are ##n \times n##.

Suppose I block ##\mathbf B## by column. I can now do matrix vector multiplication one column at a time in ##\mathbf B##.

##\mathbf {AB} = \mathbf A \bigg[\begin{array}{c|c|c|c|c}
\mathbf b_1 & \mathbf b_2 &\cdots & \mathbf b_{n-1} & \mathbf b_{n}
\end{array}\bigg] = \bigg[\begin{array}{c|c|c|c|c} \mathbf A \mathbf b_1 & \mathbf A\mathbf b_2 &\cdots & \mathbf A\mathbf b_{n-1} & \mathbf A\mathbf b_{n}
\end{array}\bigg]##

Now let's look at the ##k##th column of the right-hand side. Block ##\mathbf A## by column and look at ##\mathbf A \mathbf b_k## -- what does that tell you?
Robb said:
Obviously the colsp(A) = colsp(B) given that B is a linear combination of A

I didn't find this obvious or correct. For example: ##\mathbf A## could be singular and ##\mathbf B## could be non-singular.

The relevant equation, in my view, is matrix-vector and matrix-matrix multiplication. I'm not sure what elementary row operations have to do with this. It seems like something is missing from the problem statement.
- - - -
edit: If you're going to use a blocked-multiplication argument and respond in the forums, you must use TeX/LaTeX. The forum sticky is here:

https://www.physicsforums.com/help/latexhelp/

I also like the GUI approach here:
https://www.codecogs.com/latex/eqneditor.php
 
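The column-blocking argument above can be sketched in NumPy (the random test matrices are my own illustration, not part of the thread): the ##k##th column of AB is ##\mathbf A \mathbf b_k##, which in turn is a linear combination of the columns of ##\mathbf A## with the entries of ##\mathbf b_k## as coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.integers(-5, 6, size=(n, n)).astype(float)
B = rng.integers(-5, 6, size=(n, n)).astype(float)

# The k-th column of AB equals A applied to the k-th column of B ...
for k in range(n):
    assert np.allclose((A @ B)[:, k], A @ B[:, k])

# ... and A @ b_k is the linear combination of the columns of A
# with the entries of b_k as coefficients.
for k in range(n):
    combo = sum(B[i, k] * A[:, i] for i in range(n))
    assert np.allclose(A @ B[:, k], combo)
```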
1. follows from the definition of matrix multiplication. I.e., when you multiply a matrix A by a column vector v, the result Av is the column vector which is the linear combination of the columns of A, using the entries of v as coefficients. Hence whenever you multiply a matrix A by a sequence of columns, the columns of B, the result AB is a matrix whose columns are all linear combinations of the columns of A. In particular, every linear combination of the columns of AB is also a linear combination of the columns of A. QED.

To say this more conceptually, the column space of A consists of all column vectors which can be written as Av for some column vector v. Similarly, a vector in the column space of AB is one that can be written as (AB)w for some w. But, since matrix multiplication is associative, any such vector (AB)w can also be written as A(Bw), so if we write v for Bw, the vector (AB)w in the column space of AB also has the form Av, hence belongs also to the column space of A.
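The associativity step in the conceptual argument above is easy to check numerically. A small NumPy sketch (the rectangular shapes are my own choice, to show the argument does not need square matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
w = rng.standard_normal(2)

# Associativity: (AB)w = A(Bw). Writing v = Bw, every vector (AB)w
# in colsp(AB) is also A @ v, hence lies in colsp(A).
v = B @ w
assert np.allclose((A @ B) @ w, A @ v)
```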
 
mathwonk said:
1. follows from the definition of matrix multiplication. I.e., when you multiply a matrix A by a column vector v, the result Av is the column vector which is the linear combination of the columns of A, using the entries of v as coefficients. Hence whenever you multiply a matrix A by a sequence of columns, the columns of B, the result AB is a matrix whose columns are all linear combinations of the columns of A. In particular, every linear combination of the columns of AB is also a linear combination of the columns of A. QED.

To say this more conceptually, the column space of A consists of all column vectors which can be written as Av for some column vector v. Similarly, a vector in the column space of AB is one that can be written as (AB)w for some w. But, since matrix multiplication is associative, any such vector (AB)w can also be written as A(Bw), so if we write v for Bw, the vector (AB)w in the column space of AB also has the form Av, hence belongs also to the column space of A.
Thank you!
 
StoneTemplePython said:
To make this easy, let's assume both matrices are ##n \times n##.

Suppose I block ##\mathbf B## by column. I can now do matrix vector multiplication one column at a time in ##\mathbf B##.

##\mathbf {AB} = \mathbf A \bigg[\begin{array}{c|c|c|c|c}
\mathbf b_1 & \mathbf b_2 &\cdots & \mathbf b_{n-1} & \mathbf b_{n}
\end{array}\bigg] = \bigg[\begin{array}{c|c|c|c|c} \mathbf A \mathbf b_1 & \mathbf A\mathbf b_2 &\cdots & \mathbf A\mathbf b_{n-1} & \mathbf A\mathbf b_{n}
\end{array}\bigg]##

Now let's look at the ##k##th column of the right-hand side. Block ##\mathbf A## by column and look at ##\mathbf A \mathbf b_k## -- what does that tell you?

I didn't find this obvious or correct. For example: ##\mathbf A## could be singular and ##\mathbf B## could be non-singular.

The relevant equation, in my view, is matrix-vector and matrix-matrix multiplication. I'm not sure what elementary row operations have to do with this. It seems like something is missing from the problem statement.
- - - -
edit: If you're going to use a blocked-multiplication argument and respond in the forums, you must use TeX/LaTeX. The forum sticky is here:

https://www.physicsforums.com/help/latexhelp/

I also like the GUI approach here:
https://www.codecogs.com/latex/eqneditor.php
Gracias!
 