
Vector, Sub, Column, and Row Spaces

  1. Mar 24, 2005 #1
    There are so many concepts going on in my linear algebra class. Could someone help me understand what they mean? Particularly: vector space, subspace, column space, row space, dimension, basis, and rank. Thanks in advance!
  3. Mar 24, 2005 #2



    Worry about vector space first. If you don't know what that is, the rest don't matter. It's a structure that has an addition operation and a scalar multiplication operation satisfying certain properties.

    Your book most likely has a section where it gives the definition of a vector space. (It might be called an "abstract vector space".) It should have some exercises of the form "prove this is a vector space" and "prove this is not a vector space" -- you might want to do a few of those until you get the idea.
  4. Mar 24, 2005 #3



    as luck would have it i am teaching that course right now, and we just reached that section. please let me try my version of that stuff on you.

    there are two concepts,
    1) vector space.
    2) a linear map from one vector space to another.

    for example, R^2 is a vector space, and projection of R^2 onto the x axis is a linear map from R^2 to R^2.

    Also rotation of R^2, through 60 degrees counterclockwise, is a linear map from R^2 to R^2.

    (Any mapping of R^2 to R^2 that takes parallelograms to [maybe flattened] parallelograms is a linear map. e.g. projection onto a line through (0,0), rotation about (0,0), or reflection in a line through (0,0).)
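    here is a tiny numerical sketch of those two examples (python with numpy, and numbers i just made up), writing each map as multiplication by a matrix, which we will see below is always possible:

    [code]
    import numpy as np

    # orthogonal projection of R^2 onto the x axis
    P = np.array([[1.0, 0.0],
                  [0.0, 0.0]])

    # counterclockwise rotation of R^2 through 60 degrees
    t = np.radians(60)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])

    u = np.array([2.0, 3.0])
    v = np.array([-1.0, 4.0])

    for L in (P, R):
        assert np.allclose(L @ (u + v), L @ u + L @ v)  # parallelograms go to parallelograms
        assert np.allclose(L @ (5 * u), 5 * (L @ u))    # lines through 0 scale consistently
    [/code]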

    given a linear map L:R^2-->R^2, we want to know two things:

    1) which vectors in the target are actually hit by the map?

    I.e. for which vectors b can we solve the equation L(X) = b?

    2) If b is a vector for which L(X) = b has solutions, what are all those solutions X?

    for example, if L:R^2-->R^2 is projection on the x axis, then L(X)=b can be solved only for those vectors b which lie on the x axis. and given such a vector b on the x axis, the solutions X of L(X) = b consist of exactly the X's on the vertical line through b, perpendicular to the x axis.

    The set of solutions X of the equation L(X) = 0, is called the null space of L. it is interesting because of the general principle, that the general solution to the equation L(X) = b, consists of any particular solution u such that L(u) = b, plus a general solution v of the equation L(X) = 0.

    I.e. if L(u) = b and L(v) = 0, then L(u+v) = b also. thus if we know all solutions of L(X) = 0, to find all solutions of L(X) = b, we only need to know one solution.
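    a tiny sketch of that principle in numpy (my own numbers), using the projection example above:

    [code]
    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 0.0]])       # projection of R^2 onto the x axis

    b = np.array([3.0, 0.0])         # a vector on the x axis
    u = np.array([3.0, 1.0])         # one particular solution of L(X) = b
    v = np.array([0.0, 7.0])         # any solution of L(X) = 0

    assert np.allclose(A @ u, b)
    assert np.allclose(A @ v, 0)
    assert np.allclose(A @ (u + v), b)   # particular + homogeneous solves L(X) = b again
    [/code]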

    Thus given any linear map L:R^2-->R^2, there are two important subspaces:

    1) Im(L) = "image of L" = those b such that L(X) = b has a solution X.

    2) N(L) = "nullspace of L" = those vectors v such that L(v) = 0.

    But since we can always form the space perpendicular to any given space, these two important subspaces give rise also to two more spaces, the spaces perpendicular to the two given ones: N(L)perp and Im(L)perp.

    Now a linear map L:R^2-->R^2 is always given by multiplying by some unique matrix, say A. I.e. given a linear map L, there is a matrix A such that L(X) = AX for all X.

    Then notice that L(X) = 0 means simply that AX = 0, and if you know about dot products and matrix multiplication, this means that X is perpendicular to the rows of the matrix A. So in fact, N(L) = the space perpendicular to the row space of A,

    i.e. N(L) = R(A)perp.
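    you can check this numerically; here is a sketch with a matrix i made up:

    [code]
    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [0.0, 1.0, 4.0]])

    # one solution of AX = 0, found by back-substitution with x3 = 1
    x = np.array([5.0, -4.0, 1.0])
    assert np.allclose(A @ x, 0)

    # AX stacks the dot products (row . X), so X in N(L) means
    # X is perpendicular to every row of A, i.e. N(L) = R(A)perp:
    for row in A:
        assert abs(row @ x) < 1e-12
    [/code]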

    We also know that the product AX is simply the linear combination of the columns of A with coefficients from X, so only vectors b which are linear combinations of the columns can satisfy AX = b.

    Thus Im(L) = the column space C(A) of A.
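    again a quick sketch (my own numbers) of the fact that AX is a combination of the columns:

    [code]
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
    X = np.array([10.0, -1.0])

    # A @ X is the linear combination of the columns of A with coefficients from X,
    combo = X[0] * A[:, 0] + X[1] * A[:, 1]
    assert np.allclose(A @ X, combo)      # so Im(L) = column space C(A)
    [/code]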

    Moreover, the equations which vanish on Im(L) are therefore the space C(A)perp.

    Hence we have for each map L:R^n-->R^m, a matrix A such that L(X) = AX for all X.

    Then R(A)perp = N(L) = the solutions of L(X) = 0.

    C(A) = Im(L) = those b such that L(X) = b has a solution X.

    C(A)perp = the equations that tell you which b can be solved for in L(X) = b.
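    here is a numerical sketch of this whole summary for a little rank 1 matrix. (i am letting numpy's SVD, which we have not discussed, produce the bases; take that part on faith.)

    [code]
    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])             # rank 1
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))

    null_A  = Vt[rank:].T                  # basis of N(L) = R(A)perp
    col_A   = U[:, :rank]                  # basis of Im(L) = C(A)
    perp_CA = U[:, rank:]                  # basis of C(A)perp

    assert np.allclose(A @ null_A, 0)          # N(L) really solves L(X) = 0
    assert np.allclose(perp_CA.T @ col_A, 0)   # C(A)perp really is perpendicular to C(A)

    # C(A)perp supplies the test: L(X) = b is solvable iff perp_CA.T @ b = 0.
    b = A @ np.array([1.0, 1.0])               # certainly in the image
    assert np.allclose(perp_CA.T @ b, 0)
    [/code]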

    how's that?
  5. Mar 24, 2005 #4
    I followed everything up until this. By the way, try to use latex if you could; it's a lot easier on the eyes. I'm not very familiar with your terminology. By linear map I assume you're referring to bases (which have to be linearly independent and span a certain vector space).

    "the equations that tell you which b can be solved for in L(X) = b." I don't have any idea what you mean here.

    Everything else is pretty good though! :biggrin:
  6. Mar 24, 2005 #5
    A linear map is what you might have heard referred to as a linear transformation. Essentially, a linear map [itex]T: V \longrightarrow W[/itex], where [itex]V[/itex] and [itex]W[/itex] are vector spaces (over [itex]\mathbb{R}[/itex], say), is a function with two special properties:

    [tex] \alpha \in \mathbb{R}, \ v \in V \Longrightarrow T(\alpha v) = \alpha T(v) [/tex]


    [tex] v, w \in V \Longrightarrow T(v+w) = T(v) + T(w)[/tex]

    Geometrically this means, as mathwonk said, that the function [itex]T[/itex] sends parallelograms in [itex]V[/itex] to (possibly flattened) parallelograms in [itex]W[/itex].
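    If you want to see these two properties verified symbolically, here is a small sketch (my own example [itex]T(x,y) = (x+2y, \ 3y)[/itex], checked with sympy):

    [code]
    import sympy as sp

    a, x1, y1, x2, y2 = sp.symbols('a x1 y1 x2 y2', real=True)

    def T(vec):
        x, y = vec
        return sp.Matrix([x + 2*y, 3*y])   # a concrete linear map R^2 -> R^2

    v = sp.Matrix([x1, y1])
    w = sp.Matrix([x2, y2])

    assert (T(a*v) - a*T(v)).expand() == sp.zeros(2, 1)           # T(av) = aT(v)
    assert (T(v + w) - (T(v) + T(w))).expand() == sp.zeros(2, 1)  # T(v+w) = T(v)+T(w)
    [/code]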
  7. Mar 24, 2005 #6
    Now, the main question that mathwonk is addressing is: assuming [itex]T: V \longrightarrow W[/itex] is a linear transformation (where [itex]V, W[/itex] are (finite-dimensional) vector spaces), for which vectors [itex]b \in W[/itex] can we find some [itex]v \in V[/itex] such that [itex]T(v) = b[/itex], i.e. when can we solve the equation

    [tex]T(v) = b[/tex]


    Hopefully this will help you better follow the parts you didn't understand~
  8. Mar 25, 2005 #7

    As luck would have it, I'm trying to fill in some of the many gaps in my knowledge of linear algebra (among other things :biggrin: ) so this thread is particularly welcome. If you feel inclined to try out your lecture notes for upcoming topics in a tutorial thread here, I'm sure others would appreciate it as well.

    But I was confused by this:
    Why are all the X's on a vertical line? Can't any point on the x,y plane be mapped to points on the x axis by letting
    [tex]A = \left [ \begin{array}{lr} a_{11} &a_{12}\\ 0 &0 \end{array} \right ] [/tex]

    so L(x) = b becomes
    [tex]\left [ \begin{array}{lr} a_{11} &a_{12}\\ 0 &0 \end{array} \right ] \times \left [ \begin{array}{cc}x \\ y \end{array} \right ] = \left [ \begin{array}{cc} b_1 \\ 0 \end{array} \right ][/tex]

    Or are we talking about two different things?


    Also, am I correct in assuming the R in "N(L) = R(A)perp" represents "Row(A)", and has nothing to do with the R's everywhere else in your post?
  9. Mar 25, 2005 #8

    matt grime


    L is specifically projection onto the first component: L_{11}=1, L_{ij}=0 otherwise.
  10. Mar 25, 2005 #9
    That went right over my head. :confused:

    Aren't we talking about a linear transformation such that
    [tex]L(\bold{x}) = \bold{b} \leftrightarrow A\bold{x} = \bold{b}[/tex]

    for some matrix A?
  11. Mar 25, 2005 #10

    matt grime


    No, mathwonk said L was projection into the first coordinate; I don't see why that goes over your head. L is specified uniquely as the matrix

    [tex] \left( \begin{array}{cc} 1 &0 \\ 0 &0 \end{array} \right)[/tex]

    L(x,y) = (x,0)

    Ker(L) is obviously [tex]\{ (0,y) | y \in \mathbb{R} \}[/tex]
  12. Mar 25, 2005 #11
    The reason, Matt, is that I and the originator of the thread are coming at this from the perspective of beginners in elementary linear algebra.

    Apparently, with the benefit of your vastly superior knowledge, mathwonk's statement "if L:R^2-->R^2 is projection on the x axis" makes it obvious to you that L is exactly the matrix you wrote down. I was thinking that he was using L(x) to refer to linear transformations in general, just as Data was using T(x).

    Here's another example of something that's obvious to you, but meaningless to me: that Ker(L) is "obviously" [tex]\{ (0,y) | y \in \mathbb{R} \}[/tex].

    If everything that's obvious to you was obvious to me, I wouldn't be here asking questions.

    Forgive me.
  13. Mar 25, 2005 #12



    i should have said "orthogonal" projection on the x axis. i.e. along the line perpendicular to the x axis. that's what matt knew i meant because "we went to high school together" [in the sense used by robert de niro in "ronin".]

    the equations telling you which b allow solutions for AX=b form another matrix C, such that AX=b has a solution if and only if Cb = 0. thus the rows of C are perpendicular to those vectors b which are linear combinations of the columns. hence it suffices for the rows of C to be perpendicular to the columns of A.

    hence the coefficients of the "constraint" equations which must be satisfied by b if AX=b is to be consistent, i.e. the rows of C, span the space C(A)perp.
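    here is a sketch of such a C for a matrix i made up, letting sympy find the rows (they span the null space of A transpose, i.e. C(A)perp):

    [code]
    import sympy as sp

    A = sp.Matrix([[1, 2],
                   [2, 4],
                   [3, 6]])                 # rank 1; its columns span a line in R^3

    # rows of C: a basis of the space perpendicular to the columns of A
    C = sp.Matrix([v.T for v in A.T.nullspace()])

    b_good = A * sp.Matrix([1, 1])          # a combination of the columns
    b_bad  = sp.Matrix([1, 0, 0])           # not a combination of the columns

    assert C * b_good == sp.zeros(C.rows, 1)   # constraints hold: AX = b_good is consistent
    assert C * b_bad  != sp.zeros(C.rows, 1)   # constraints fail: AX = b_bad has no solution
    [/code]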

    i did not write up lectures for this course, but might write one tonight. i just make up these lectures spontaneously for you guys here.
  14. Mar 25, 2005 #13

    matt grime


    I think it just goes to show how conditioned you become to the "abuses of notation", i.e. why would anyone ever mean anything other than orthogonal projection when they say projection onto the x-coord. Sorry, Gnome, for presuming everyone reads things with that same underlying idea of an "error correcting code". The fact that L is as given though could be deduced from the other things written in that paragraph; indeed, Mathwonk describes its kernel, I just put it in more abstract symbols.

    Incidentally, the mathematical use of "obvious" means, approximately, that it should become clear what the proof is after thinking about it for a while; often people post questions without accepting that the mathematical time scale isn't a short one when it comes to solving things. "Trivial" means "oh, the proof is..." pops instantly into your head, and "is left as an exercise for the reader" means that the writer is too lazy to write out something that has now become "obvious" but takes a lot of effort to type out.

    (Ronin is on TV here tonight, as it happens).
  15. Mar 25, 2005 #14
    It appears that you are talking about concepts related to orthogonality, which is covered in a chapter that I haven't gotten to yet. I get the gist of what you're saying; understanding the implications will have to wait until I read further.

  16. Mar 25, 2005 #15

    matt grime


    Oh, and one thing that I perhaps assumed you also knew (again, apologies) is what a projection matrix is. They are something very common, you see, coming under the aegis of idempotents, matrices satisfying P^2 = P, and they look like

    [tex]\left( \begin{array}{cc} \text{I} & 0\\ 0&0 \end{array}\right) [/tex]

    where I is some diagonal matrix with 1s on the diagonal.

    They arise as follows: suppose V < W. Let v_1,...,v_r be a basis of V, and complete it to a basis of W with elements w_{r+1},...,w_n; then let P be the map defined by P(v_i) = v_i, P(w_j) = 0. P has the above representation with respect to this basis.
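    A quick numerical sketch of that construction, with a basis I just made up (V = span of v1, v2 inside R^3, completed by w3):

    [code]
    import numpy as np

    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 1.0])
    w3 = np.array([0.0, 0.0, 1.0])      # completes {v1, v2} to a basis of R^3

    B = np.column_stack([v1, v2, w3])   # change of basis matrix
    D = np.diag([1.0, 1.0, 0.0])        # P in the basis (v1, v2, w3): diag(I, 0)
    P = B @ D @ np.linalg.inv(B)        # P in the standard basis

    assert np.allclose(P @ P, P)        # idempotent: P^2 = P
    assert np.allclose(P @ v1, v1) and np.allclose(P @ v2, v2)   # P fixes V
    assert np.allclose(P @ w3, 0)       # P kills the complementary basis vector
    [/code]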
  17. Mar 25, 2005 #16

    "The fact that L is as given though could be deduced from the other things written in that paragraph" is true, but much easier to see from your perspective than from mine. It seemed to me that he started off talking about mappings in general, and used L in that context before mentioning projection.

    To my (and I don't intend this as an argument but rather as a clarification of my understanding - or lack thereof) unsophisticated way of thinking, the example I gave:
    [tex]\left [ \begin{array}{lr} a_{11} &a_{12}\\ 0 &0 \end{array} \right ] \times \left [ \begin{array}{cc}x \\ y \end{array} \right ] = \left [ \begin{array}{cc} b_1 \\ 0 \end{array} \right ][/tex]

    looks like projection to me -- any input vector is being mapped to a vector along the x axis. So your mathematician's definition of obvious doesn't apply here; no amount of "thinking about it" would lead me to the realization that your projection explicitly means multiplication by
    [tex] \left( \begin{array}{cc} 1 &0 \\ 0 &0 \end{array} \right)[/tex]

    What's obvious is that I have much to learn.

    Anyway, peace.
  18. Mar 25, 2005 #17



    ok here is an (ughhh!) example.

    First another comment: a subspace V in R^n can be described in one of two complementary ways:

    1) we give a set S = {v1,v2,...,vr} of vectors in V that "span V", i.e. every vector in V can be written as a linear combination of these vectors, i.e. as a1v1+...+arvr, for appropriate numbers a1,...,ar.

    2) we give a set of equations which are satisfied precisely by those vectors w belonging to V. This is done in shorthand as a matrix A, such that Aw = 0 if and only if w belongs to V. (a small sketch contrasting the two descriptions follows this list.)
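    for instance (a sketch with a plane i made up): the plane V in R^3 spanned by (1,0,1) and (0,1,1) is also cut out by the single equation x + y - z = 0.

    [code]
    import numpy as np

    # type 1): V = span{(1,0,1), (0,1,1)}
    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 1.0])

    # type 2): V = {w : Aw = 0}, where A encodes the equation x + y - z = 0
    A = np.array([[1.0, 1.0, -1.0]])

    # the spanning vectors satisfy the equation (so any combination of them does too)
    assert np.allclose(A @ v1, 0) and np.allclose(A @ v2, 0)
    [/code]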

    Then the basic problem of all mathematics is this:
    if someone gives V in one of those ways, try to give V in the other way.

    For example, if A is a matrix, look at the equation AX=b.

    Then we "know", just from thinking hard about the meaning of matrix multiplication, that the only b's for which this equation can be solved are those b's which are linear combinations of the columns of A. I.e. the space of good b's is already given as the span of the columns of A, i.e. the good b's are the elements of the column space C(A). so we already have a description of type 1) for C(A).

    But this is not much use to us: i.e. if someone walks up and hands us a b, it is hard to tell if it is or is not a linear combination of those columns. so we need to describe the column space C(A) the other way, by equations. I.e. we want equations, i.e. some matrix M, such that b is in C(A) if and only if Mb = 0. I.e. we want a description of type 2) for C(A).

    Then this matrix M would have rows which are orthogonal to all the good b's, or equivalently, to the columns of A. Thus describing a space the "other way" means finding the SAME description of the "other space" i.e. of the space perpendicular to the given space.

    I.e. a matrix M giving equations for C(A), i.e. giving a type 2) description of C(A), would actually have rows which give a type 1) description of the space perpendicular to C(A). thus giving a type 2) description for any space means giving a type 1) description of its "orthogonal complement".

    now consider the solution space of AX=0, i.e. N(A) = nullspace of A. we already have equations for this space, namely the matrix of equations is A. I.e. we have a type 2) description of N(A), [because we have a type 1) description of its complement R(A), the row space].

    But this does not help us FIND a solution of AX=0. That requires a type 1 description of N(A). so we solve by gaussian elimination.
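    a sketch of that step for a small A of my choosing (sympy's nullspace() carries out the gaussian elimination and back-substitution for us):

    [code]
    import sympy as sp

    A = sp.Matrix([[1, 2, 3],
                   [2, 4, 6]])        # really just one independent equation

    basis = A.nullspace()             # a type 1) description of N(A): spanning vectors
    for v in basis:
        assert A * v == sp.zeros(2, 1)   # each spanning vector solves AX = 0
    [/code]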


    a matrix has two spaces associated to it with type 1) descriptions: the column space and the row space.

    this gives us a type 1) description of the space C(A) of good b's, and a type 2 description of the null space N(A) of solutions of AX=0.

    We want the opposite: a type 2) description of C(A), and a type 1) description of N(A).

    both of these may be obtained simultaneously by gaussian elimination.

    as follows, in an actual example coming right up (in the next post).
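    in the meantime, here is a small sympy preview with numbers of my own choosing: one elimination gives the type 1) description of N(A), and the same elimination on [A | b] exposes the constraint equations on b, i.e. the rows of the matrix C.

    [code]
    import sympy as sp

    b1, b2, b3 = sp.symbols('b1 b2 b3')
    A = sp.Matrix([[1, 2],
                   [2, 4],
                   [3, 6]])

    # type 1) description of N(A), from gaussian elimination:
    print(A.nullspace())              # [Matrix([[-2], [1]])]

    # the same elimination on the augmented matrix [A | b]:
    aug = A.row_join(sp.Matrix([b1, b2, b3]))
    aug[1, :] = aug[1, :] - 2 * aug[0, :]
    aug[2, :] = aug[2, :] - 3 * aug[0, :]
    print(aug)
    # rows 2 and 3 read [0, 0, b2 - 2*b1] and [0, 0, b3 - 3*b1]:
    # AX = b is consistent exactly when b2 - 2*b1 = 0 and b3 - 3*b1 = 0,
    # i.e. when Cb = 0 for C = [[-2, 1, 0], [-3, 0, 1]].
    [/code]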
  19. Mar 25, 2005 #18
    PS: where does "it is straightforward (but tedious)" fit into that hierarchy? :wink:
  20. Mar 25, 2005 #19



    according to my own research, "straightforward but tedious" means a PhD with 15 years experience teaching the material cannot get it right in 10 pages of calculation, without using an $800 program like mathematica.

    that's why in my algebra book, this phrase never appears.
  21. Mar 25, 2005 #20

    matt grime


    Same as "exercise for the reader".

    As I hoped mathwonk had indicated, I naturally read "orthogonal" (when you didn't, also naturally) into projection onto the x coordinate. I am now wondering whether the following is fixed terminology or just another unspoken assumption, but as far as I'm concerned projections satisfy the property that P^2=P, you see; so if we use your A example as a general projection we would require that

    [tex]a_{11}^2 = a_{11}[/tex] and [tex]a_{12}a_{11}=a_{12}[/tex]

    and it is the orthogonal one when a_11 = 1 and a_12 = 0; however, I will stand by the thought that "orthogonal" is what we should assume if you don't refer explicitly to *a* projection.
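    A quick symbolic check of those two conditions, sketched with sympy (my own bit of code, with gnome's matrix A):

    [code]
    import sympy as sp

    a11, a12 = sp.symbols('a11 a12')
    A = sp.Matrix([[a11, a12],
                   [0,   0  ]])

    print((A*A - A).expand())
    # Matrix([[a11**2 - a11, a11*a12 - a12], [0, 0]])
    # so A^2 = A exactly when a11**2 = a11 and a11*a12 = a12,
    # and A is the orthogonal projection when a11 = 1 and a12 = 0.
    [/code]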