
Find a basis for R2 that includes the vector (1,2)

  1. Nov 12, 2017 #1
    • Member warned that the homework template is not optional
    My answer was [ (1,2), (0,-1) ]

    I just wanted to make sure with you guys that there are essentially infinitely many valid answers.

    For example:

    [ (1,2), (0,1) ]

    [ (1,2), (0, -5) ]

    etc. would all suffice, right?
     
  2. Nov 12, 2017 #2

    Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member)

    As long as you do not put further requirements on the basis (such as orthogonality), yes.
     
  3. Nov 12, 2017 #3
    Thank you for the quick reply.
     
  4. Nov 25, 2017 #4

    scottdave (Homework Helper, Gold Member)

    As long as the other vector is nonzero and not a scalar multiple of (1,2) (scalar multiples such as (-1,-2) or (3,6) would not work), you will have two linearly independent vectors, which can be used to describe any point in R2.
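
    Not part of the original post, but here is a minimal sketch of that check in Python, assuming numpy is available (the helper name is_basis_r2 is just for illustration): the pair is a basis exactly when the 2x2 determinant is non-zero.

    Code:
    import numpy as np

    def is_basis_r2(v1, v2):
        # Two vectors form a basis of R^2 iff the matrix with them as rows
        # has non-zero determinant, i.e. they are linearly independent.
        return not np.isclose(np.linalg.det(np.array([v1, v2], dtype=float)), 0.0)

    print(is_basis_r2((1, 2), (0, -1)))  # True: a valid basis
    print(is_basis_r2((1, 2), (3, 6)))   # False: (3, 6) = 3 * (1, 2)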
     
  5. Nov 25, 2017 #5

    WWGD (Science Advisor, Gold Member)

    Actually, EDIT: any nonzero vector orthogonal to a vector v is linearly independent of it.
     
    Last edited: Nov 25, 2017
  6. Nov 25, 2017 #6

    Mark44 (Staff: Mentor)

    You might want to restrict "any vector" a bit. The zero vector is orthogonal to every other vector in whatever space is under consideration, but the zero vector can't be among a set of linearly independent vectors.
     
  7. Nov 25, 2017 #7

    Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member)

    The OP clearly had the right idea from the beginning and marked the thread resolved two weeks ago ...

    While true after the edit, nothing in the OP's question requires orthogonality. It is also slightly misleading, as it can be read to imply that orthogonality is a requirement for linear independence, which it is not.
     
  8. Nov 25, 2017 #8

    WWGD (Science Advisor, Gold Member)

    But orthogonality guarantees linear independence, and it generalizes to higher dimensions, where ad hoc guessing may not: can you extend, e.g., ## \{ (1,0,1,0), (0,2,3,1) \} ## into a basis for ##\mathbb R^4 ## without using orthogonality? Besides, I stated orthogonality as a sufficient condition, and I cannot be held responsible if someone takes it as a necessary one. EDIT: For a more formal treatment of the result: https://en.wikipedia.org/wiki/Fundamental_theorem_of_linear_algebra
     
  9. Nov 26, 2017 #9

    Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member)

    Yes, I can do that essentially without thinking or computing anything. The appropriate check is that the determinant of the matrix whose kth row holds the components of the kth vector is non-zero.
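
    For concreteness, a sketch of that check in ##\mathbb R^n## (not from the original post; numpy assumed available, and the helper name is_basis is illustrative):

    Code:
    import numpy as np

    def is_basis(vectors):
        # n vectors form a basis of R^n iff the n x n matrix whose k-th row
        # holds the components of the k-th vector has non-zero determinant.
        m = np.array(vectors, dtype=float)
        return m.shape[0] == m.shape[1] and not np.isclose(np.linalg.det(m), 0.0)

    print(is_basis([(1, 0, 1, 0), (0, 2, 3, 1), (0, 0, 1, 0), (0, 0, 0, 1)]))  # True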
     
  10. Nov 26, 2017 #10

    WWGD (Science Advisor, Gold Member)

    EDIT: And how do you decide whether the determinant is non-zero without thinking or computing?
    Is that a significant improvement (computationally or otherwise) over making sure you choose a pair of vectors in ##\{(1,0,1,0), (0,2,3,1) \} ^{\perp} ##?
    I just don't see the reason for opposing my suggested approach.
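
    As an illustration of this route (a sketch, not from the original post; scipy assumed available): ## \{(1,0,1,0), (0,2,3,1) \}^{\perp} ## is the null space of the matrix having the given vectors as rows, and any basis of it completes the pair to a basis of ##\mathbb R^4##.

    Code:
    import numpy as np
    from scipy.linalg import null_space

    given = np.array([[1, 0, 1, 0],
                      [0, 2, 3, 1]], dtype=float)

    # Columns of perp form an orthonormal basis of the orthogonal complement
    # (the null space of the matrix whose rows are the given vectors).
    perp = null_space(given)

    # The two given vectors plus the two complement vectors form a basis of R^4.
    print(np.linalg.det(np.vstack([given, perp.T])))  # non-zero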
     
  11. Nov 26, 2017 #11

    Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member)

    Well, apart from the fact that the OP had already solved this problem two weeks ago and has not been online for a week, it is cumbersome and inefficient if all you are looking for is a basis. You can simply take two linearly independent vectors spanning a subspace chosen so that the projections of your given vectors onto its orthogonal complement form a basis of that complement. In the case of your example, (0,0,1,0) and (0,0,0,1) will do fine. These vectors span the subspace of vectors of the form (0,0,z,w), and the projections of your vectors onto the orthogonal complement of that subspace, namely (1,0,0,0) and (0,2,0,0), clearly form a basis of it. Done.

    If you would want to check by computing the determinant, it would be 2.
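
    Verifying that value is a one-liner (a sketch; the matrix is upper triangular, so the determinant is the product of the diagonal entries):

    Code:
    import numpy as np

    m = np.array([[1, 0, 1, 0],
                  [0, 2, 3, 1],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    print(np.linalg.det(m))  # 2.0 (up to floating point), as claimed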
     
  12. Nov 26, 2017 #12

    WWGD (Science Advisor, Gold Member)

    Well, first of all, these posts are also intended to be a repository of knowledge, so that anyone reading may learn something, not just immediate answers for the OP. Still, if I understood correctly, you are ultimately using orthogonal projections, so what is the difference then? And the computation of the determinant may be obvious and clear to you, but not necessarily so for everyone. Ditto for the "obviously orthogonal". Ultimately, my method guarantees existence and gives a specific procedure, not just heuristics. Essentially, the obviousness you believe intrinsic to your approach may not be so for every user. It is fine to state and believe your approach is better, but to claim there is some objectivity to it would require, IMHO, stronger arguments/support.

    EDIT: At any rate, you can have the last word if you wish; I am not interested in pursuing this any further.
     
  13. Nov 26, 2017 #13

    Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member)

    I cannot see that you have given a method, merely a statement that you want the additional vectors to be orthogonal. That, if anything, is heuristic. Besides, there is no guarantee that your method works in a general linear vector space where the existence of a metric is not guaranteed (take the space of polynomials of degree three and lower as an example). Simply selecting vectors outside the subspace spanned by the given vectors works regardless of the vector space. I just do not see any reason to bring up looking for an orthogonal basis here. I find it much simpler to just take the vectors (1,0,0,0), (0,1,0,0), (0,0,1,0), and (0,0,0,1), then include them from the left (or right) one at a time, discarding any vector that is linearly dependent on the original vectors and the ones you have already included.
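
    A sketch of that greedy procedure (not from the original post; numpy assumed, with matrix rank used to detect linear dependence, and the helper name extend_to_basis illustrative):

    Code:
    import numpy as np

    def extend_to_basis(given, n):
        # Append standard basis vectors e_1, ..., e_n one at a time,
        # keeping only those that raise the rank, i.e. are not linearly
        # dependent on the original vectors and those already kept.
        basis = [np.asarray(v, dtype=float) for v in given]
        for k in range(n):
            e_k = np.zeros(n)
            e_k[k] = 1.0
            if np.linalg.matrix_rank(np.vstack(basis + [e_k])) > len(basis):
                basis.append(e_k)
        return basis

    # Extends the pair to a basis of R^4 by adding e_1 and e_2.
    print(extend_to_basis([(1, 0, 1, 0), (0, 2, 3, 1)], 4))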
     
  14. Nov 26, 2017 #14

    WWGD (Science Advisor, Gold Member)

    And you want the vectors to not be in the span, and then you make reference to orthogonal projection as a means of verifying the correctness of your choice, while decrying the lack of generality of my use of metrics and while making reference to orthogonality as a means of generating said basis. What gives then? Do you choose to use orthogonality or not? EDIT: Yes, I did assume we would be working within ##\mathbb R^n##, and, yes, my method does assume an inner product, which does restrict the generality.
    The value of the determinant being 2 is not obvious to me. Maybe I am missing something obvious, not sure. And good luck figuring out when the determinant of a collection of abstract objects (polynomials, etc.) is non-zero.
     
  15. Nov 26, 2017 #15

    Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member)

    The orthogonal complement in my approach is not a necessity. It is sufficient to pick a basis for any subspace in which no non-zero vector lies in the span of the given vectors.
     
  16. Nov 26, 2017 #16

    WWGD (Science Advisor, Gold Member)

    But how can you tell the vector is not in the span? Of course, if a vector is not in the span, it is linearly independent of the basis vectors, but how do you choose such a vector in a more-or-less algorithmic way, e.g., with polynomials or other non-numerical objects, EDIT: or even within ##\mathbb R^n ## for n large enough, other than by finding the determinant, which again may be difficult to use with non-numerical "vectors", or may become unwieldy for a large value of ##n##? EDIT 2: I don't see how to extend your idea of using ##e_i## and removing linearly dependent vectors to general abstract spaces.
     
  17. Nov 26, 2017 #17

    Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member)

    You check whether or not it can be written as a linear combination. This is generally not a difficult task, and it can typically be accomplished by showing that no non-trivial linear combination of all basis vectors can be zero. This will generally boil down to the determinant condition given some other basis. Consider the polynomials ##p_1 = x^2 + 2x - 1## and ##p_2 = x^2 + x + 3##. Now take the subspace spanned by the monomial ##x^2##. To write ##x^2## as a linear combination ##a p_1 + b p_2## would require ##a + b = 1## together with
    $$
    2a + b = 0, \quad -a + 3b = 0,
    $$
    which is impossible to satisfy, and you are done. Had you not been done, you could have continued with the monomial ##x## and ultimately with the monomial ##x^0##.
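
    The same inconsistency can be confirmed mechanically (a sketch, not part of the original post; sympy assumed available): matching the coefficients of ##x^2##, ##x##, and ##x^0## gives a linear system with no solution.

    Code:
    from sympy import symbols, linsolve

    a, b = symbols('a b')
    # Coefficients of x^2, x, and the constant term in a*p1 + b*p2 = x^2:
    print(linsolve([a + b - 1, 2*a + b, -a + 3*b], a, b))  # EmptySet: no solution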
     