Vector, Sub, Column, and Row Spaces

In summary, the conversation discusses the concepts of vector space, linear map, image, null space, and rank in the context of a linear algebra class. It explains how a linear map is a function with specific properties and how it can be represented by a matrix. It also discusses the importance of the image and null space of a linear map and how they relate to the column and row spaces of the corresponding matrix. The conversation concludes by mentioning the importance of understanding vector space before delving into the other concepts.
  • #1
erraticimpulse
There are so many concepts going on in my linear algebra class. Could someone help me understand what they mean? Particularly: vector space, subspace, column space, row space, dimension, basis, and rank. Thanks in advance!
 
  • #2
Worry about vector space first. If you don't know what that is, the rest don't matter. It's a structure that has an addition operation and a scalar multiplication operation that satisfy certain properties.

Your book most likely has a section where it gives the definition of a vector space. (It might be called an "abstract vector space".) It should have some exercises of the form "prove this is a vector space" and "prove this is not a vector space" -- you might want to do a few of those until you get the idea.
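
As a rough sketch of those properties (your book's exact list may differ slightly): for all vectors u, v, w and all scalars a, b,

[tex] u + v = v + u, \quad (u+v)+w = u+(v+w), \quad a(u+v) = au + av, \quad (a+b)v = av + bv, \quad a(bv) = (ab)v, \quad 1v = v, [/tex]

together with the existence of a zero vector (v + 0 = v) and additive inverses (v + (-v) = 0).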
 
  • #3
as luck would have it i am teaching that course right now, and we just reached that section. please let me try my version of that stuff on you.


there are two concepts,
1) vector space.
2) a linear map from one vector space to another.

for example, R^2 is a vector space, and projection of R^2 onto the x-axis is a linear map from R^2 to R^2.

Also rotation of R^2, through 60 degrees counterclockwise, is a linear map from R^2 to R^2.

(Any mapping of R^2 to R^2 that takes parallelograms to [maybe flattened] parallelograms is a linear map. e.g. projection onto a line through (0,0), rotation about (0,0), or reflection in a line through (0,0).)
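
(to make those two examples concrete -- the matrices are not spelled out above, but they are standard: orthogonal projection onto the x-axis and counterclockwise rotation through 60 degrees are multiplication by

[tex] P = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right), \qquad R = \left( \begin{array}{cc} \cos 60^\circ & -\sin 60^\circ \\ \sin 60^\circ & \cos 60^\circ \end{array} \right) = \left( \begin{array}{cc} 1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{array} \right) [/tex]

respectively.)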

given a linear map L:R^2-->R^2, we want to know two things:

1) which vectors in the target are actually hit by the map?

I.e. for which vectors b can we solve the equation L(X) = b?


2) If b is a vector for which L(X) = b has solutions, what are all those solutions X?


for example, if L:R^2-->R^2 is projection on the x axis, then only those vectors b which lie on the x-axis can be solved for in L(X)=b. and given such a vector b on the x axis, the solutions X of L(X) = b, consist of exactly all X's on the vertical line through b and perpendicular to the x axis.


The set of solutions X of the equation L(X) = 0, is called the null space of L. it is interesting because of the general principle, that the general solution to the equation L(X) = b, consists of any particular solution u such that L(u) = b, plus a general solution v of the equation L(X) = 0.

I.e. if L(u) = b and L(v) = 0, then L(u+v) = b also. thus if we know all solutions of L(X) = 0, then to find all solutions of L(X) = b we only need to know one solution.
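
for instance, in the projection example above, with b = (b1, 0) on the x-axis: u = (b1, 0) is one particular solution, the null space is the y-axis, and the full solution set is exactly the vertical line described earlier:

[tex] X = (b_1, 0) + (0, t), \qquad t \in \mathbb{R}. [/tex]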



Thus given any linear map L:R^2-->R^2, there are two important subspaces:

1) Im(L) = "image of L" = those b such that L(X) = b has a solution X.

2) N(L) = "nullspace of L" = those vectors v such that L(v) = 0.


But since we can always form the space perpendicular to any given space, these two important subspaces give rise also to two more spaces, the spaces perpendicular to the two given ones: N(L)perp and Im(L)perp.


Now a linear map L:R^2-->R^2 is always given by multiplying by some unique matrix, say A. I.e. given a linear map L, there is a matrix A such that L(X) = AX for all X.


Then notice that L(X) = 0 means simply that AX = 0, and if you know about dot products and matrix multiplication, this means that X is perpendicular to the rows of the matrix A. So in fact, N(L) = the space perpendicular to the row space of A,

i.e. N(L) = R(A)perp.

We also know that the product AX is simply the linear combination of the columns of A with coefficients from X, so only vectors which are linear combinations of the columns can have form AX = b.

Thus Im(L) = the column space C(A) of A.

Moreover, the equations which vanish on Im(L) are therefore the space C(A)perp.


Hence we have for each map L:R^n-->R^m, a matrix A such that L(X) = AX for all X.

Then R(A)perp = N(L) = the solutions of L(X) = 0.

C(A) = Im(L) = those b such that L(X) = b has a solution X.

C(A)perp = the equations that tell you which b can be solved for in L(X) = b.
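
a small numerical check of those relations (a sketch using numpy/scipy; the matrix A below is just an illustration, not from the posts above):

[code]
import numpy as np
from scipy.linalg import null_space

# a rank-one example: rows (1, -2) and (2, -4)
A = np.array([[1.0, -2.0],
              [2.0, -4.0]])

# N(L) = solutions of AX = 0; null_space returns an orthonormal basis (as columns)
N = null_space(A)
print(np.allclose(A @ N, 0))        # True: every row of A is perpendicular to N(A)

# C(A)perp = vectors perpendicular to the columns of A = null space of A transpose
Cperp = null_space(A.T)
print(np.allclose(Cperp.T @ A, 0))  # True: these "constraint rows" kill every column

# a b in the column space (e.g. the first column of A) passes the constraint test,
# so AX = b is solvable for it
b = A[:, 0]
print(np.allclose(Cperp.T @ b, 0))  # True
[/code]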


how's that?
 
  • #4
mathwonk said:
C(A)perp = the equations that tell you which b can be solved for in L(X) = b.


how's that?

I followed everything up until this. By the way try to use latex if you could, it's a lot easier on the eyes. I'm not very familiar with your terminology. By linear map I assume you're referring to bases (which have to be linearly independent and span a certain vector space).

"the equations that tell you which b can be solved for in L(X) = b." I don't have any idea what you mean here.

Everything else is pretty good though! :biggrin:
 
  • #5
A linear map is what you might have heard referred to as a linear transformation. Essentially a linear map [itex]T: V \longrightarrow W[/itex] where [itex]V[/itex] and [itex]W[/itex] are vector spaces (over [itex]\mathbb{R}[/itex], say) is a function with two special properties:

[tex] \alpha \in \mathbb{R}, \ v \in V \Longrightarrow T(\alpha v) = \alpha T(v) [/tex]

and

[tex] v, w \in V \Longrightarrow T(v+w) = T(v) + T(w)[/tex]

geometrically, this means, as mathwonk said, that the function [itex]T[/itex] sends parallelograms in [itex]V[/itex] to parallelograms in [itex]W[/itex].
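
For instance, for the projection T(x, y) = (x, 0) discussed above, both properties are easy to verify:

[tex] T(\alpha(x,y)) = (\alpha x, 0) = \alpha T(x,y), \qquad T((x_1,y_1)+(x_2,y_2)) = (x_1+x_2, 0) = T(x_1,y_1) + T(x_2,y_2). [/tex]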
 
  • #6
Now, the main question that mathwonk is addressing is, assuming [itex]T: V \longrightarrow W[/itex] is a linear transformation (where [itex]V, W[/itex] are (finite-dimensional) vector spaces), then, for some vector [itex]b \in W[/itex], when can we find some [itex]v \in V[/itex] such that [itex]T(v) = b[/itex], ie. when can we solve the equation

[tex]T(v) = b[/tex]

?

Hopefully this will help you to better follow the parts you didn't understand.
 
  • #7
mathwonk,

As luck would have it, I'm trying to fill in some of the many gaps in my knowledge of linear algebra (among other things :biggrin: ) so this thread is particularly welcome. If you feel inclined to try out your lecture notes for upcoming topics in a tutorial thread here, I'm sure others would appreciate it as well.

But I was confused by this:
mathwonk said:
for example, if L:R^2-->R^2 is projection on the x axis, then only those vectors b which lie on the x-axis can be solved for in L(X)=b. and given such a vector b on the x axis, the solutions X of L(X) = b, consist of exactly all X's on the vertical line through b and perpendicular to the x axis.
Why are all the X's on a vertical line? Can't any point on the x,y plane be mapped to points on the x-axis by letting
[tex]A = \left [ \begin{array}{lr} a_{11} &a_{12}\\ 0 &0 \end{array} \right ] [/tex]

so L(x) = b becomes
[tex]\left [ \begin{array}{lr} a_{11} &a_{12}\\ 0 &0 \end{array} \right ] \times \left [ \begin{array}{cc}x \\ y \end{array} \right ] = \left [ \begin{array}{cc} b_1 \\ 0 \end{array} \right ][/tex]

Or are we talking about two different things?

--------------------------------------------------------------

Also, am I correct in assuming the R in
R(A)perp
represents "Row(A)", and has nothing to do with the R's everyplace else in your post?
 
  • #8
L is specifically projection onto the first component: L_{11}=1, L_{ij}=0 otherwise.
 
  • #9
That went right over my head. :confused:

Aren't we talking about a linear transformation such that
[tex]L(\bold{x}) = \bold{b} \leftrightarrow A\bold{x} = \bold{b}[/tex]

for some matrix A?
 
  • #10
No, mathwonk said L was projection into the first coordinate, I don't see why that goes over your head. L is specified uniquely as the matrix

[tex] \left( \begin{array}{cc} 1 &0 \\ 0 &0 \end{array} \right)[/tex]

L(x,y) = (x,0)


Ker(L) is obviously [tex]\{ (0,y) | y \in \mathbb{R} \}[/tex]
 
  • #11
The reason, Matt, is that I and the originator of the thread are coming at this from the perspective of beginners in elementary linear algebra.

Apparently, with the benefit of your vastly superior knowledge, mathwonk's statement "if L:R^2-->R^2 is projection on the x axis" makes it obvious to you that
Matt Grime said:
L is specified uniquely as the matrix
[tex] \left( \begin{array}{cc} 1 &0 \\ 0 &0 \end{array} \right)[/tex]
I was thinking that he was using L(x) to refer to linear transformations in general, just as Data was using T(x).

Here's another example of something that's obvious to you, but meaningless to me:
Ker(L) is obviously [tex]\{ (0,y) | y \in \mathbb{R} \}[/tex]


If everything that's obvious to you was obvious to me, I wouldn't be here asking questions.

Forgive me.
 
  • #12
i should have said "orthogonal" projection on the x axis. i.e. along the line perpendicular to the x axis. that's what matt knew i meant because "we went to high school together" [in the sense used by robert de niro in "ronin".]


the equations telling you which b allow solutions for AX=b, form another matrix C such that AX=b has a solution if and only if Cb =0. thus the rows of C are perpendicular to those vectors b which are linear combinations of the columns. hence it suffices for the rows of C to be perpendicular to the columns of A.

hence the coefficients of the "constraint" equations which must be satisfied by b if AX=b is to be consistent, i.e. the rows of C, span the space C(A)perp.
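
in symbols (a restatement of the above, not extra content): if A is m by n of rank r, then

[tex] AX = b \ \text{is solvable} \iff Cb = 0, \quad \text{where } CA = 0 \text{ and the rows of } C \text{ span } C(A)^{\perp}, \ \dim C(A)^{\perp} = m - r. [/tex]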


i did not write up lectures for this course, but might write one tonight. i just make up these lectures spontaneously for you guys here.
 
  • #13
I think it just goes to show how conditioned you become to the "abuses of notation", i.e. why would anyone ever mean anything other than orthogonal projection when they say projection onto the x-coord. Sorry, Gnome, for presuming everyone reads things with that same underlying idea of an "error correcting code". The fact that L is as given could be deduced from the other things written in that paragraph, though; indeed, Mathwonk describes its kernel, I just put it in more abstract symbols.

Incidentally, the mathematical use of obvious means, approximately, that "it should become clear what the proof is after thinking about it for a while"; often people post questions without accepting that the mathematical time scale isn't a short one when it comes to solving things. Trivial means "oh, the proof is..." pops instantly into your head, and "is left as an exercise for the reader" means that the writer is too lazy to write out something that has now become "obvious" but takes a lot of effort to type out.



(Ronin is on TV here tonight, as it happens).
 
  • #14
It appears that you are talking about concepts related to orthogonality, which is covered in a chapter that I haven't gotten to yet. I get the gist of what you're saying; understanding the implications will have to wait until I read further.

Thanks.
 
  • #15
Oh, and one thing that I perhaps assumed you also knew (again, apologies) is what a projection matrix was. They are something very common you see, coming under the aegis of idempotents, matrices satisfying x^2=x, and they look like

[tex]\left( \begin{array}{cc} \text{I} & 0\\ 0&0 \end{array}\right) [/tex]

where I is some diagonal matrix with 1s on the diagonal.

They come from: suppose V < W (V a subspace of W). Let v_1,...,v_r be a basis of V, and complete it to a basis of W with elements w_{r+1},...,w_n; then let P be the map defined by P(v_i) = v_i, P(w_j) = 0. P has the above representation with respect to this basis.
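
A concrete instance of that construction (my illustration): take W = R^3 and V the xy-plane, with basis v_1 = (1,0,0), v_2 = (0,1,0), completed by w_3 = (0,0,1). Then P(v_1) = v_1, P(v_2) = v_2, P(w_3) = 0 gives

[tex] P = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right) [/tex]

which visibly satisfies P^2 = P.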
 
  • #16
matt,

"The fact that L is as given though could be deduced from the other things written in that paragraph" is true, but much easier to see from your perspective than from mine. It seemed to me that he started off talking about mappings in general, and used L in that context before mentioning projection.

To my (and I don't intend this as an argument but rather as a clarification of my understanding - or lack thereof) unsophisticated way of thinking, the example I gave:
[tex]\left [ \begin{array}{lr} a_{11} &a_{12}\\ 0 &0 \end{array} \right ] \times \left [ \begin{array}{cc}x \\ y \end{array} \right ] = \left [ \begin{array}{cc} b_1 \\ 0 \end{array} \right ][/tex]

looks like projection to me -- any input vector is being mapped to a vector along the x axis. So your mathematician's definition of obvious doesn't apply here; no amount of "thinking about it" would lead me to the realization that your projection explicitly means multiplication by
[tex] \left( \begin{array}{cc} 1 &0 \\ 0 &0 \end{array} \right)[/tex]

What's obvious is that I have much to learn.

Anyway, peace.
 
  • #17
ok here is an (ughhh!) example.

First another comment: a subspace V in R^n can be described in one of two complementary ways:

1) we give a set S = {v1,v2,...,vr} of vectors in V that "span V", i.e. every vector in V can be written as a linear combination of these vectors, i.e. as a1v1 + ... + arvr, for appropriate numbers a1,...,ar.

2) we give a set of equations which are satisfied precisely by those vectors w belonging to V. This is done in shorthand as a matrix A, such that Aw = 0 if and only if w belongs to V.


Then the basic problem of all mathematics is this:
if someone gives V in one of those ways, try to give V in the other way.

For example, if A is a matrix, look at the equation AX=b.

Then we "know", just from thinking hard about the meaning of matrix multiplication, that the only b's for which this equation can be solved are those b's which are linear combinations of the columns of A. I.e. the space of good b's is already given as the span of the columns of A, i.e. the good b's are the elements of the column space C(A). so we already have a description of type 1) for C(A).

But this is not much use to us: i.e. if someone walks up and hands us a b, it is hard to tell if it is or is not a linear combination of those columns. so we need to describe the column space C(A) the other way, by equations. I.e. we want equations, i.e. some matrix M, such that b is in C(A) if and only if Mb = 0. I.e. we want a description of type 2) for C(A).

Then this matrix M would have rows which are orthogonal to all the good b's, or equivalently, to the columns of A. Thus describing a space the "other way" means finding the SAME description of the "other space" i.e. of the space perpendicular to the given space.

I.e. a matrix M giving equations for C(A), i.e. giving a type 2) description of C(A), would actually have rows which give a type 1) description of the space perpendicular to C(A). thus giving a type 2) description for any space means giving a type 1) description of its "orthogonal complement".


now consider the solution space of AX=0, i.e. N(A) = nullspace of A. we already have equations for this space, namely the matrix of equations is A. I.e. we have a type 2) description of N(A), [because we have a type 1) description of its complement R(A), the row space].

But this does not help us FIND a solution of AX=0. That requires a type 1 description of N(A). so we solve by gaussian elimination.


Recap:

a matrix has two spaces associated to it with type 1) descriptions: the column space and the row space.

this gives us a type 1) description of the space C(A) of good b's, and a type 2 description of the null space N(A) of solutions of AX=0.

We want the opposite: a type 2) description of C(A), and a type 1) description of N(A).

both of these may be obtained simultaneously by gaussian elimination.

AS FOLLOWS: in the actual example coming right up (in the next post).


 
  • #18
matt grime said:
Incidentally, the mathematical use of obvious means, approximately, that "it should become clear what the proof is after thinking about it for a while"; often people post questions without accepting that the mathematical time scale isn't a short one when it comes to solving things. Trivial means "oh, the proof is..." pops instantly into your head, and "is left as an exercise for the reader" means that the writer is too lazy to write out something that has now become "obvious" but takes a lot of effort to type out.

PS: where does "it is straightforward (but tedious)" fit into that hierarchy? :wink:
 
  • #19
according to my own research, "straightforward but tedious" means a PhD with 15 years experience teaching the material cannot get it right in 10 pages of calculation, without using an $800 program like mathematica.

that's why in my algebra book, this phrase never appears.
 
  • #20
Same as "exercise for the reader".

As I'd hoped mathwonk would indicate, I naturally read "orthogonal" (when you didn't, also naturally) into projection onto the x coordinate. I am now wondering whether the following is fixed terminology or just another unspoken assumption, but as far as I'm concerned projections satisfy the property that P^2 = P, you see, so if we use your A example as a general projection we would require that

[tex]a_{11}^2 = a_{11}[/tex] and [tex]a_{12}a_{11}=a_{12}[/tex]

and it is orthogonal when a_11 = 1 and a_12 = 0. However, I will stand by the thought that "orthogonal" is what we should assume if you don't refer explicitly to *a* projection.
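
Those conditions are easy to check numerically; here is a sketch (my addition, using numpy) of a matrix that passes the idempotency test but is not an orthogonal projection:

[code]
import numpy as np

# a11 = 1, a12 = 1 satisfies a11^2 = a11 and a12*a11 = a12, so P^2 = P
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(P @ P, P))  # True: P is a projection (idempotent)
print(np.allclose(P, P.T))    # False: P is not symmetric, hence not orthogonal
[/code]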
 
  • #21
example: no kidding: oops, i cannot make a matrix without some technical expertise which I am entirely lacking. well then, here is a trivial example: a 2 by 1 matrix A with one column, consisting of [ 1 -2] (but written vertically). then put the column of unknowns next to it, [ b1 b2], also written vertically.


now reduce to echelon form, getting [ 1 0] and [ b1 (b2+2b1)].


now the constraint equation is the equation given by looking at the zero rows at the bottom of the echelon form (there is one of these), and setting the corresponding expression in the b's equal to zero.

I.e. C(A) is spanned by the column vector [1 -2], and the orthogonal complement is spanned by the coefficients of the equation 2b1 + b2 = 0, i.e. by the column vector
[2 1]. notice this is perpendicular to the original column vector.

then we want also a set spanning the null space N(A). this of course comes from taking b1 = b2 = 0 and looking at the nonzero rows of the echelon form, which force x = 0. i.e. this is the only solution of AX=0.

to get a more interesting example i need at least a 2 by 2, of rank one.

let me take as example the matrix A whose ROWS are [ 1 -2] and [2 -4].

then putting b1 and b2 in a column on the right and reducing gives the matrix with rows [ 1 -2 b1] and [ 0 0 (b2 - 2b1)].

then we can solve AX = b if and only if b satisfies b2 -2b1 = 0. and then

the solution is x1 = b1 +2t, and x2 = t, where t is anything.

in particular the space N(A) of solutions of AX=0, is all vectors of form (2t, t).

The perp of the column space is spanned by the coefficient vector of the equation -2b1 + b2 = 0: i.e. by the vector (-2,1).
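
a machine check of that example, as a sketch using sympy (the code is my addition, not part of the post):

[code]
from sympy import Matrix

A = Matrix([[1, -2],
            [2, -4]])

# type 1) description of N(A): a spanning set for the solutions of AX = 0
print(A.nullspace())     # [Matrix([[2], [1]])] -- all solutions are (2t, t)

# type 1) description of C(A): the pivot columns of A
print(A.columnspace())   # [Matrix([[1], [2]])]

# type 2) description of C(A): rows orthogonal to the columns of A,
# i.e. the nullspace of A transpose -- the constraint -2*b1 + b2 = 0
print(A.T.nullspace())   # [Matrix([[-2], [1]])]
[/code]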
 
  • #22
I'll try to keep that in mind. Meanwhile, I'm tangled up in trying to teach myself perspective projection and homogeneous coordinates (from a computer graphics point of view), and yes, most of the matrices I've been looking at are built around identity matrices, but with "nuances", if that's the right word. And somewhere along the line I'm going to have to figure out how to use SVD to solve some problems (but not yet). It's all pretty overwhelming.

I'll check back later to see the rest of mathwonk's example.


Edit: Oops, there it is.
 
  • #23
technical assistance:

inside the tex labels use the standard latex

\left( \begin{array}{c..c} row 1 entry & entry ..& entry \\ row2 entry & ... \end{array}\right)

with as many c's as columns and so on.
 
  • #24
examples of honesty in mathematical writing are sometimes found. A common instance of "straightforward but tedious" is in the section where an author discusses the computation of the discriminant of an equation of degree > 2 (the higher degree version of b^2 - 4ac).

in hungerford's algebra book, page 272 he calls this "a gruesome computation" for d = 3, and in artin's algebra book, page 548, he even says, the result for d = 3 is too complicated to remember and that he does not know the formula for d > 3.

(he is a world famous professor at MIT, and of course one great benefit of going to MIT might be to take algebra from him. but you can learn a lot from his book, which i recommend highly.)

then in dummit and foote's algebra book, also excellent, they actually give the calculations for discriminants for d = 3 and d = 4, and say things like "these are easily done using ..." and maybe they even explain it enough to make it look doable. they tend to on other matters i have read there.
 
  • #25
matt, sadly you underestimate my lack of technical savvy. i am the last person on Earth who wouldn't know a TEX document from a handwritten mortgage application.

actually in one of my browsers latex never shows up, so in my internet explorer i cannot see any of the latex displays above.

It just says "Latex will reload" but it never does.
 
  • #26
That example seems pretty clear. Thanks.



inside the tex labels use the standard latex...
'smatter, you don't like my {lr} design?
 
  • #27
mathwonk said:
actually in one of my browsers latex never shows up, so in my internet explorer i cannot see any of the latex displays above.

It just says "Latex will reload" but it never does.

Take a look at your Tools/Internet Options/Advanced menu, scroll down to the Multimedia section & see if you have "show pictures" unchecked.
 

1. What is a vector space?

A vector space is a mathematical structure consisting of a set of vectors together with two operations, addition and scalar multiplication, that satisfy certain axioms. (Extra operations such as a dot product or cross product are additional structure, not part of the vector space definition itself.) It is a fundamental concept in linear algebra and is used to model real-world phenomena in physics, engineering, and other fields.

2. What is a subspace?

A subspace is a subset of a vector space that also satisfies the properties of a vector space. This means that it is closed under addition and scalar multiplication and contains the zero vector. Subspaces are important because they allow us to simplify and generalize problems in a vector space.

3. What are column and row spaces?

Column and row spaces are subspaces associated with a matrix. The column space is the span of the columns of the matrix, and the row space is the span of its rows. These spaces are important because they carry information about the linear independence of the columns and rows and about the rank of the matrix, which is useful in solving systems of equations and other problems.

4. How are column and row spaces related to each other?

The column and row spaces are related through the rank of a matrix. The rank is the maximum number of linearly independent rows or columns in a matrix, and it equals both the dimension of the column space and the dimension of the row space. However, the two spaces are generally different: for an m x n matrix the column space lives in R^m while the row space lives in R^n, so they share a dimension (the rank) but not, in general, a basis.

5. How can we determine the basis for a vector, column, or row space?

To determine a basis for the column or row space of a matrix, we can use its reduced row echelon form. The positions of the pivots identify the pivot columns: the corresponding columns of the original matrix form a basis for the column space, while the nonzero rows of the reduced form give a basis for the row space.
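
For example, a sketch using sympy (the matrix here is only an illustration):

[code]
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])

R, pivots = A.rref()                 # reduced row echelon form + pivot column indices
print(pivots)                        # (0, 2)
print([A.col(j) for j in pivots])    # pivot columns of the ORIGINAL A: basis for C(A)
print(R[:len(pivots), :])            # nonzero rows of R: basis for the row space
[/code]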
