
Linear Algebra Problem #1

  1. Jun 20, 2008 #1
    I am trying to teach myself Linear Algebra and it is really slow going. As much as I hate to admit a weakness, I really suck at abstract thinking. So some really basic ideas are tripping me up. Here is a question from the first exercise in the book I am using.


    1. The problem statement, all variables and given/known data

    Which of the following sets (with natural addition and multiplication by a
    scalar) are vector spaces. Justify your answer.
    a) The set of all continuous functions on the interval [0, 1];
    b) The set of all non-negative functions on the interval [0, 1];
    c) The set of all polynomials of degree exactly n;
    d) The set of all symmetric n × n matrices, i.e. the set of matrices [tex]A=\{a_{j,k}\}_{j,k=1}^{n}[/tex] such that [itex]A^T=A[/itex]





    2. Relevant equations
    Definition of a vector space

    A vector space V is a collection of objects, called vectors (denoted in this
    book by lowercase bold letters, like v), along with two operations, addition
    of vectors and multiplication by a number (scalar), such that the following
    8 properties (the so-called axioms of a vector space) hold:
    The first 4 properties deal with the addition:
    1. Commutativity: v + w = w + v for all v, w ∈ V ;

    2. Associativity: (u + v) + w = u + (v + w) for all u, v, w ∈ V ;

    3. Zero vector: there exists a special vector, denoted by 0 such that
    v + 0 = v for all v ∈ V ;

    4. Additive inverse: For every vector v ∈ V there exists a vector w ∈ V
    such that v + w = 0. Such additive inverse is usually denoted as −v;

    5. Multiplicative identity: 1v = v for all v ∈ V ;

    6. Multiplicative associativity: (αβ)v = α(βv) for all v ∈ V and all
    scalars α, β;

    7. α(u + v) = αu + αv for all u, v ∈ V and all scalars α;

    8. (α + β)v = αv + βv for all v ∈ V and all scalars α, β.


    3. The attempt at a solution

    Let's just start with (a) the set of all continuous functions on the interval [0, 1]

    This is probably really easy, but I am having trouble figuring out how to answer this one.

    I guess I start by seeing if all continuous functions adhere to the eight criteria above, right?

    Well, it appears that 1 and 2 hold, as continuous functions add commutatively and associatively, right?

    3 (the existence of a zero vector such that v+0=v) seems true enough
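
    To pin down what the zero vector actually is here (my own attempt at making this concrete): it should be the function that is identically zero on [0, 1], which is certainly continuous, and adding it changes nothing:

    [tex]z(x)=0 \ \text{for all}\ x\in[0,1], \qquad (f+z)(x)=f(x)+0=f(x)[/tex]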

    4 and 5 should hold (out of curiosity, when does 1*v not equal v?)

    6,7,8 also seem obvious enough, but I don't know how to prove any of this.

    So I am concluding that the set of all continuous functions on the interval [0,1] IS a vector space.


    What is the proper approach to these kinds of problems? And why did they choose the interval [0,1] ? Why not all reals?


    Sorry for so many questions! Any input towards ANY of them is greatly appreciated!
     
  3. Jun 20, 2008 #2

    Dick (Science Advisor, Homework Helper)

    1*v=v is basically just a definition of 1. The point of 5) is really just to say 1 exists. 6,7 and 8 hardly even need proving. When you multiply functions by constants and add them then you are really just multiplying and adding real numbers. So 6,7 and 8 aren't very mysterious. The thing to prove is that a constant times a continuous function is continuous and that the sum of two continuous functions is continuous. If you want to get down into the dirt and prove them, then use epsilons and deltas. But I think you've probably proved that before, right? Otherwise, they are just properties of real numbers.
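
    To make that concrete (just a sketch, writing the operations pointwise): for each fixed x in [0, 1],

    [tex](f+g)(x)=f(x)+g(x)=g(x)+f(x)=(g+f)(x), \qquad \big(\alpha(f+g)\big)(x)=\alpha f(x)+\alpha g(x)=(\alpha f+\alpha g)(x)[/tex]

    so axioms 1 and 7 (and likewise 2, 6 and 8) are inherited directly from commutativity and distributivity of the real numbers.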
     
    Last edited: Jun 20, 2008
  4. Jun 21, 2008 #3
    Actually, somehow I got like the one Calculus professor who did not find it necessary to do epsilons and deltas (with limits). I am afraid of them :redface:
     
  5. Jun 21, 2008 #4

    Dick

    I didn't say you HAD to prove them. If you know they are true, just cite the relevant theorem. The real content here is just that if you apply vector space type operations to continuous functions, they remain continuous.
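
    In symbols, the closure facts being cited (standard calculus theorems; C[0,1] is my shorthand for the set of continuous functions on [0,1], not notation from the book):

    [tex]f,g\in C[0,1],\ \alpha\in\mathbb{R}\ \Longrightarrow\ f+g\in C[0,1]\ \text{and}\ \alpha f\in C[0,1][/tex]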
     
  6. Jun 21, 2008 #5

    Dick

    Before you ask, the fact it's on [0,1] instead of (-infinity,infinity) doesn't matter at all.
     
  7. Jun 21, 2008 #6
    Okay then! Let's move on to part (b) :smile: Now it seems similar to part (a), except that now it includes just non-negative functions, and they are not necessarily continuous.

    What is the difference here? I am sorry, I am losing focus here... which of these properties (1-8) would not hold for noncontinuous functions?
     
  8. Jun 21, 2008 #7

    Dick

    That's easy. If V is the set of nonnegative functions and v is in V and I do a vector space type operation like (-1)*v, is the result necessarily a nonnegative function? These aren't so hard, are they?
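
    A concrete instance (my own example, not from the book): take the constant function v(x) = 1, which is certainly non-negative on [0, 1]. Then

    [tex]\big((-1)\cdot v\big)(x)=-1<0\quad\text{for every } x\in[0,1],[/tex]

    so (-1)·v is not in the set, and closure under multiplication by a scalar fails.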
     
  9. Jun 21, 2008 #8

    Dick

    They all hold for functions that aren't necessarily continuous as well.
     
    Last edited: Jun 21, 2008
  10. Jun 21, 2008 #9
    Am I stupid? Okay, don't answer that.... I dropped out of high school, so I did not have the luxury of high-school maths. So when I started out at Community college, I started with "Intermediate Algebra" and now that I am transferring out of that school, I have completed many math courses. However, I feel like I missed MANY of the BASICS!

    Crap like, the definition of a set and stuff like that...

    I know that the terms aren't difficult, but I have to stop and think about them instead of just having them in there (my head!) and knowing them well...
     
  11. Jun 21, 2008 #10

    Dick

    I was NOT implying you were stupid. It takes practice to focus on what's important in a list of 8 axioms. I was trying to encourage you.
     
    Last edited: Jun 21, 2008
  12. Jun 21, 2008 #11
    Okay. I think I might be with you now. The approach to these is something like this:

    I have some collection (set) of objects (elements); now I should ask myself: if I apply a vector space type operation (addition or multiplication by a scalar, the operations in 1-8) to these elements, do I, as a result, get one of those elements?

    If the answer is yes, it IS a vector space. If the answer is NO, it is not.
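
    In symbols (my attempt at writing the check, so correct me if the notation is off): for every u, v in V and every scalar α, I need

    [tex]u+v\in V \quad\text{and}\quad \alpha v\in V.[/tex]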

    Is this right?
     
  13. Jun 21, 2008 #12
    I didn't mean you were! I was just sort of "talking out loud"... sorry, I do it a lot! Sometimes if I type out what I am thinking, it helps me to sort out the nonsense going on inside my head. :rofl:

    I had originally planned on asking what class most people started learning stuff like that in.... but I got lost somewhere along the line!
     
  14. Jun 21, 2008 #13

    Dick

    Yes, that's REALLY right. What about c) and d)?
     
  15. Jun 21, 2008 #14
    Sweet-Jesus!:smile:
     
  16. Jun 21, 2008 #15

    Dick

    Likely, they learned it in the same class you are taking. Sorry, NOT TAKING.
     
    Last edited: Jun 21, 2008
  17. Jun 21, 2008 #16
    So for part (c) I would say that the set of all polynomials of degree exactly n is NOT a vector space, since

    the natural operations can take me out of the set: adding two polynomials of degree exactly n can cancel the leading terms and leave something of lower degree, and multiplying by the scalar 0 gives the zero polynomial, which does not have degree exactly n (example below).
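
    For instance (my own example with n = 2): both of these have degree exactly 2, but their sum does not:

    [tex](x^2+x)+(-x^2+1)=x+1,[/tex]

    which only has degree 1. And 0 · p(x) is the zero polynomial, which does not have degree n at all.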
     
  18. Jun 21, 2008 #17

    Dick

    Agreed. Now finish it with d) and agree with me that it's not that hard and you aren't stupid.
     
    Last edited: Jun 21, 2008
  19. Jun 21, 2008 #18

    Dick

    Umm. They are just symmetric matrices. I'm trying not to let a bit of index oddness in what you wrote throw me off. a_{j,k} = a_{k,j}, right?
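
    A small 2 × 2 illustration (mine, just to show the entries don't all have to be the same):

    [tex]A=\begin{pmatrix}1 & 2\\ 2 & 3\end{pmatrix}, \qquad A^T=\begin{pmatrix}1 & 2\\ 2 & 3\end{pmatrix}=A.[/tex]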
     
  20. Jun 21, 2008 #19
    :rofl: Okee-dokee!

    This will be the harder of the four since I have to really think about what it is saying.

    I am a little confused by the definition of symmetric matrices:

    The set of all symmetric n × n matrices, i.e. the set of matrices [tex]A=\{a_{j,k}\}_{j,k=1}^{n}[/tex] such that [itex]A^T=A[/itex]

    How can a matrix be equal to its transpose unless all of the entries of that matrix are the SAME entry? Say you have some 2 x 2 matrix called A. You take the first row of A and make it the first column of some other matrix B, and then you take the 2nd row of A and make it the 2nd column of B. B is now the transpose of A. Isn't the only way that A = B if all the entries in A were the same entry?
     
  21. Jun 21, 2008 #20
    I thought I copied their definition right.... but yes, it said symmetric matrices.:smile:


    EDIT: Here's a screenshot of the text

    [attached screenshot of the textbook's definition of the set of symmetric matrices]
     