Find a basis for R2 that includes the vector (1,2)

  • Thread starter: Rijad Hadzic
  • Tags: Basis, Vector
AI Thread Summary
A basis for R² that includes the vector (1,2) can be formed with any non-zero vector that is not a scalar multiple of (1,2), such as (0,-1) or (0,1). The discussion emphasizes that while orthogonality guarantees linear independence, it is not a necessary condition for forming a basis. Participants clarify that the zero vector cannot be included in a set of linearly independent vectors. The conversation also touches on extending the concept of bases to higher dimensions, noting that the determinant can be used to check for linear independence. Ultimately, the thread confirms the flexibility in choosing vectors for a basis in R².
Rijad Hadzic
Member warned that the homework template is not optional
My answer was [ (1,2), (0,-1) ]

I just wanted to make sure with you guys that there are essentially infinitely many valid answers.

For example:

[ (1,2), (0,1) ]

[ (1,2), (0, -5) ]

etc would all suffice, right?
 
As long as you do not put further requirements on the basis (such as orthogonality), yes.
 
Orodruin said:
As long as you do not put further requirements on the basis (such as orthogonality), yes.

Thank you for the quick reply.
 
As long as the other vector is nonzero and not a scalar multiple of (1,2) (scalar multiples such as (-1,-2) or (3,6) are the ones to avoid), you will have two linearly independent vectors, which can be used to describe any point in ##\mathbb R^2##.
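A minimal sketch of this check, assuming numpy is available; the determinant of the 2x2 matrix whose rows are the two vectors is non-zero exactly when neither vector is a scalar multiple of the other:

```python
import numpy as np

# Rows: the given vector (1,2) and one possible second vector, here the OP's (0,-1).
candidate = np.array([[1, 2],
                      [0, -1]])

# Two vectors in R^2 form a basis exactly when this determinant is non-zero,
# i.e. when neither is a scalar multiple of the other.
print(np.linalg.det(candidate))  # -1.0, so {(1,2), (0,-1)} is a basis of R^2
```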
 
Actually, EDIT: any non-zero vector orthogonal to a vector v is linearly independent of it.
 
WWGD said:
Actually, any vector orthogonal to a vector v is linearly independent of it.
You might want to restrict "any vector" a bit. The zero vector is orthogonal to every other vector in whatever space is of interest, but the zero vector can't be among a set of linearly independent vectors.
 
The OP clearly had the right idea from the beginning and marked the thread resolved two weeks ago ...

WWGD said:
Actually, EDIT: any non-zero vector orthogonal to a vector v is linearly independent of it.

While true after the edit, nothing in the OP’s question requires orthogonality. It is also slightly misleading, as it can be read to imply that orthogonality is a requirement for linear independence - it is not. The OP had the right idea from the beginning.
 
Orodruin said:
The OP clearly had the right idea from the beginning and marked the thread resolved two weeks ago ...
While true after the edit, nothing in the OP’s question requires orthogonality. It is also slightly misleading, as it can be read to imply that orthogonality is a requirement for linear independence - it is not. The OP had the right idea from the beginning.

But orthogonality guarantees linear independence, and this works in higher dimensions, i.e., it generalizes, which may not happen when you simply guess in a higher number of dimensions: can you extend, e.g., ## \{ (1,0,1,0), (0,2,3,1) \} ## into a basis for ##\mathbb R^4 ## without using orthogonality? Besides, I stated orthogonality as a sufficient condition and I cannot be held responsible if someone takes it as a necessary one. EDIT: For a more formal treatment of the result: https://en.wikipedia.org/wiki/Fundamental_theorem_of_linear_algebra
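For reference, the short argument behind the claim that pairwise-orthogonal non-zero vectors ##v_1, \dots, v_k## are linearly independent: if ##\sum_i c_i v_i = 0##, then taking the inner product with any ##v_j## gives
$$
0 = \Big\langle v_j, \sum_i c_i v_i \Big\rangle = c_j \|v_j\|^2 \quad\Rightarrow\quad c_j = 0,
$$
so every coefficient vanishes.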
 
WWGD said:
Can you extend, e.g. ## \{ (1,0,1,0), (0,2,3,1) \} ## into a basis for ##\mathbb R^4 ## without using orthogonality?
Yes, I can do that essentially without thinking or computing anything. The appropriate check is that the determinant of the matrix with the components of vector k in row k is non-zero.
 
  • Like
Likes scottdave
  • #10
Orodruin said:
Yes, I can do that essentially without thinking or computing anything. The appropriate check is that the determinant of the matrix with the components of vector k in row k is non-zero.
EDIT: And how do you decide whether the determinant is non-zero without thinking or computing?
Is that a significant improvement (computationally or otherwise) over making sure you choose a pair of vectors in ##\{(1,0,1,0), (0,2,3,1) \} ^{\perp} ##?
I just don't see the reason for opposing my suggested approach.
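A minimal sketch of this orthogonal-complement route, assuming numpy and scipy are available; scipy.linalg.null_space returns an orthonormal basis of the null space, i.e. of ##\{(1,0,1,0), (0,2,3,1)\}^{\perp}##:

```python
import numpy as np
from scipy.linalg import null_space

# Rows are the given vectors; their orthogonal complement is the null space of A.
A = np.array([[1, 0, 1, 0],
              [0, 2, 3, 1]], dtype=float)

complement = null_space(A)             # 4x2 array, columns span the complement
full = np.hstack([A.T, complement])    # columns: given vectors + complement basis
print(np.linalg.det(full))             # non-zero, so the four columns form a basis of R^4
```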
 
  • #11
WWGD said:
EDIT: And how do you decide whether the determinant is non-zero without thinking or computing?
Is that a significant improvement (computationally or otherwise) over making sure you choose a pair of vectors in ##\{(1,0,1,0), (0,2,3,1) \} ^{\perp} ##?
I just don't see the reason for opposing my suggested approach.
Well, apart from the fact that the OP had already solved this problem two weeks ago and has not been online for a week, it is cumbersome and ineffective if all you are looking for is a basis. You can simply take two linearly independent vectors that are obviously orthogonal to a subspace in which the projections of your given vectors form a basis. In the case of your example, (0,0,1,0) and (0,0,0,1) will do fine. These vectors span the subspace of vectors of the form (0,0,z,w), and the projections of your vectors onto the orthogonal complement of that subspace clearly form a basis of it. Done.

If you would want to check by computing the determinant, it would be 2.
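A quick numerical check of that value, assuming numpy; the matrix is upper triangular, so the determinant is the product of the diagonal entries, ##1 \cdot 2 \cdot 1 \cdot 1 = 2##:

```python
import numpy as np

# Rows: the two given vectors followed by the choices (0,0,1,0) and (0,0,0,1).
M = np.array([[1, 0, 1, 0],
              [0, 2, 3, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

print(np.linalg.det(M))  # 2.0, confirming the four rows form a basis of R^4
```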
 
  • #12
Orodruin said:
Well, apart from the fact that the OP had already solved this problem two weeks ago and has not been online for a week, it is cumbersome and ineffective if all you are looking for is a basis. You can simply take two linearly independent vectors that are obviously orthogonal to a subspace in which the projections of your given vectors form a basis. In the case of your example, (0,0,1,0) and (0,0,0,1) will do fine. These vectors span the subspace of vectors of the form (0,0,z,w), and the projections of your vectors onto the orthogonal complement of that subspace clearly form a basis of it. Done.

If you would want to check by computing the determinant, it would be 2.
Well, first of all, these posts are also intended to be a repository of knowledge so that anyone reading may learn something, and not just immediate answers for the OP. Still, if I understood correctly, you are ultimately using orthogonal projections, so what is the difference then? And the computation of the determinant may be obvious and clear to you, but not necessarily so for anyone/everyone. Ditto for the "obviously orthogonal". Ultimately, my method guarantees the existence and a specific method, and not just heuristics. Essentially, the obviousness you believe intrinsic in your approach may not be so for any/every user. It is fine to state and believe your approach is better, but to claim there is some objectivity to it would require, IMHO, stronger arguments/support.
EDIT: At any rate, you can have the last word if you wish to; I am not interested in pursuing this any further.
 
  • #13
WWGD said:
Ultimately, my method guarantees the existence and a specific method, and not just heuristics.
I cannot see that you have given a method, merely a statement that you want the additional vectors to be orthogonal. That, if anything, is heuristic. Besides, there is no guarantee that your method works in a general vector space where the existence of a metric is not guaranteed (take the space of polynomials of degree three and lower as an example). Simply selecting vectors outside the subspace spanned by the given vectors works regardless of the vector space. I just do not see any reason to bring up looking for an orthogonal basis here. I find it much simpler to just take the vectors (1,0,0,0), (0,1,0,0), (0,0,1,0), and (0,0,0,1) - then you include them from the left (or right) while discarding any that is linearly dependent on the ones you have already included and the original ones.
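A rough sketch of that sweep, assuming numpy; the helper name extend_to_basis is just for illustration:

```python
import numpy as np

def extend_to_basis(vectors, dim):
    """Sweep through the standard basis e_1, ..., e_dim, keeping each vector that is
    not linearly dependent on the given vectors and the ones already kept."""
    basis = [np.asarray(v, dtype=float) for v in vectors]
    for k in range(dim):
        candidate = np.eye(dim)[k]
        trial = np.vstack(basis + [candidate])
        if np.linalg.matrix_rank(trial) == len(basis) + 1:  # candidate adds a new direction
            basis.append(candidate)
        if len(basis) == dim:
            break
    return basis

# Extends {(1,0,1,0), (0,2,3,1)} by (1,0,0,0) and (0,1,0,0).
print(extend_to_basis([(1, 0, 1, 0), (0, 2, 3, 1)], 4))
```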
 
  • #14
Orodruin said:
I cannot see that you have given a method, merely a statement that you want the additional vectors to be orthogonal. That, if anything, is heuristic.
And you want the vectors to not be in the span, and then you make reference to orthogonal projection as a means of verifying the correctness of your choice, while decrying the lack of generality of my use of metrics and while making reference to orthogonality as a means of generating said basis. What gives then? Do you choose to use orthogonality or not? EDIT: Yes, I did assume we would be working within ##\mathbb R^n##, and, yes, my method does assume an inner product, which does restrict the generality.
Orodruin said:
<Snip>. You can simply take two linearly independent vectors that are obviously orthogonal to a subspace in which the projections of your given vectors form a basis. In the case of your example, (0,0,1,0) and (0,0,0,1) will do fine. These vectors span the subspace of vectors of the form (0,0,z,w), and the projections of your vectors onto the orthogonal complement of that subspace clearly form a basis of it. Done.

EDIT: If you would want to check by computing the determinant, it would be 2.
The value of the determinant being 2 is not obvious to me. Maybe I am missing something obvious, not sure. And good luck figuring out when the determinant of a collection of abstract objects (polynomials, etc.) is non-zero.
 
  • #15
WWGD said:
And you want the vectors to not be in the span, and then you make reference to orthogonal projection as a means of verifying the correctness of your choice, while decrying the lack of generality of my use of metrics and while making reference to orthogonality as a means of generating said basis. What gives then? Do you choose to use orthogonality or not?
The orthogonal complement in my approach is not a necessity. It is sufficient to pick a basis for any subspace such that no non-zero vector in it can be spanned by the given vectors.
 
  • #16
Orodruin said:
The orthogonal complement in my approach is not a necessity. It is sufficient to pick a basis for any subspace such that no non-zero vector in it can be spanned by the given vectors.
But how can you tell the vector is not in the span? Of course, if a vector is not in the span, it is linearly independent from the basis vectors, but how do you choose such a vector in a more-or-less algorithmic way, e.g., with polynomials or other non-numerical objects, EDIT: or even within ##\mathbb R^n## for ##n## large enough, other than by finding the determinant, which again may be difficult to use with non-numerical "vectors", or may become unwieldy for a large value of ##n##? EDIT 2: I don't see how to extend your idea of using the ##e_i## and removing linearly dependent vectors to general abstract spaces.
 
  • #17
WWGD said:
But how can you tell the vector is not in the span?
You check whether or not it can be written as a linear combination. This is generally not a difficult task, and it typically amounts to showing that no non-trivial linear combination of all basis vectors can be zero. Given some other basis, this will generally boil down to the determinant condition. Consider the polynomials ##p_1 = x^2 + 2x - 1## and ##p_2 = x^2 + x + 3##. Now take the subspace spanned by the monomial ##x^2##. Writing ##x^2## as a linear combination ##a p_1 + b p_2## would require ##a + b = 1## for the ##x^2## coefficient, while the ##x## and constant coefficients require
$$
2a + b = 0, \quad -a + 3b = 0,
$$
which force ##a = b = 0##. The system is therefore impossible to satisfy and you are done. Had you not been done, you could have continued with the monomial ##x## and ultimately with the monomial ##x^0##.
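The same conclusion can be checked numerically by writing the polynomials as coefficient vectors in the monomial basis and looking for an exact solution of the coefficient system; a minimal sketch, assuming numpy:

```python
import numpy as np

# Columns: coefficients of p1 = x^2 + 2x - 1 and p2 = x^2 + x + 3
# in the monomial basis (x^2, x, 1); the target is the monomial x^2.
P = np.array([[1, 1],
              [2, 1],
              [-1, 3]], dtype=float)
target = np.array([1, 0, 0], dtype=float)

coeffs, residual, rank, _ = np.linalg.lstsq(P, target, rcond=None)
print(residual)  # non-zero residual: no exact solution, so x^2 is not in span{p1, p2}
```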
 
