Linear independence and orthogonality of vectors

Discussion Overview

The discussion revolves around the concepts of linear independence and orthogonality of vectors in linear algebra. Participants explore the implications of a theorem stating that an orthogonal set of nonzero vectors is linearly independent, while also questioning the conditions under which a set of linearly independent vectors may or may not be orthogonal. The conversation includes theoretical considerations and examples from different dimensions.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants note that while an orthogonal set of vectors is linearly independent, the converse is not necessarily true.
  • One participant suggests that in R^n, a set of 2n linearly independent vectors would imply orthogonality, which is challenged by others who clarify the limitations on the number of linearly independent vectors in that space.
  • A counterexample is provided involving two vectors in 2D that are linearly independent but not orthogonal, illustrating the distinction between the two concepts.
  • Participants discuss the implications of having a basis in a vector space, noting that while orthogonal bases exist, they are not a requirement for linear independence.
  • There is a proposal that a set of 2n vectors in R^n, under certain conditions, could be orthogonal, though this claim is met with skepticism regarding its validity in higher dimensions.
  • One participant emphasizes that any set of more than n non-zero vectors in R^n must be linearly dependent, reinforcing the dimensional constraints of vector spaces.
  • Questions arise about the generalization of these concepts to abstract vectors beyond physical representations.

Areas of Agreement / Disagreement

Participants generally agree on the foundational definitions of linear independence and orthogonality, but multiple competing views remain regarding the implications of having 2n vectors in R^n and the conditions under which orthogonality can be inferred from linear independence. The discussion remains unresolved on some of these points.

Contextual Notes

Limitations include the dependence on dimensionality and the specific definitions of linear independence and orthogonality. Some mathematical steps and assumptions are not fully resolved, particularly regarding the implications of having 2n vectors in R^n.

Who May Find This Useful

This discussion may be useful for students and practitioners of linear algebra, particularly those interested in the relationships between vector properties in different dimensions.

larusi
Hi, I'm reading up on linear algebra and I'm wondering if the remark after a theorem I'm reading here is complete. The theorem states:
"If ##\{V_1, V_2, \dots, V_k\}## is an orthogonal set of nonzero vectors, then these vectors are linearly independent."
The remark after that simply states that if a set of vectors is linearly independent, the vectors are not necessarily orthogonal.

If you're working in ##\mathbb{R}^n##, I find that if you have a set of ##2n## linearly independent vectors in that space, then they are necessarily orthogonal. Am I thinking about this the wrong way?
 
larusi said:
Hi, I'm reading up on linear algebra and I'm wondering if the remark after a theorem I'm reading here is complete. The theorem states:
"If ##\{V_1, V_2, \dots, V_k\}## is an orthogonal set of nonzero vectors, then these vectors are linearly independent."
The remark after that simply states that if a set of vectors is linearly independent, the vectors are not necessarily orthogonal.
Do you have a question about this? What it says is that if you have an orthogonal set of nonzero vectors, then they are linearly independent. The converse doesn't have to be true. In other words, if you have a set of linearly independent vectors, they don't have to be orthogonal.
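
In case it helps, here is the standard textbook argument for why the theorem holds (just a sketch, using nothing beyond the dot product). Suppose
$$c_1 V_1 + c_2 V_2 + \dots + c_k V_k = 0.$$
Take the dot product of both sides with ##V_i##. Orthogonality kills every term except the ##i##-th:
$$0 = V_i \cdot (c_1 V_1 + \dots + c_k V_k) = c_i (V_i \cdot V_i) = c_i \|V_i\|^2.$$
Since ##V_i \neq 0##, we have ##\|V_i\|^2 > 0##, so ##c_i = 0## for every ##i##, which is exactly linear independence. (This is also where the "nonzero" hypothesis gets used.)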
larusi said:
If the dimension you're working with is R^n I find that if you have a set of 2*n linearly independent vectors in that dimension then they are necessarily orthogonal. Am I thinking about this the wrong way?
In ##\mathbb{R}^n##, you can have at most ##n## (not ##2n##) linearly independent vectors. If you have ##n + 1## vectors in ##\mathbb{R}^n##, one of them must be a linear combination of the others.
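
If you want to check this kind of thing numerically, here is a minimal sketch using numpy (assuming you have numpy available; the three example vectors are mine, not from your book):

```python
import numpy as np

# Three vectors in R^2: more vectors than dim = 2, so they must be dependent.
vectors = np.array([[1.0, 0.0],
                    [1.0, 1.0],
                    [0.0, 1.0]])

# matrix_rank counts the linearly independent rows; it can never exceed
# the dimension of the space the vectors live in.
print(np.linalg.matrix_rank(vectors))  # prints 2, not 3

# The Gram matrix of pairwise dot products: zero off-diagonal entries
# would mean mutually orthogonal vectors. Here several are nonzero.
print(vectors @ vectors.T)
```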
 
Here's a simple counterexample to help your intuition. Imagine two vectors in 2D: one along the x axis, and one along the y=x line (so at a 45 degree angle). They are linearly independent, but not orthogonal.
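
Writing those two vectors out explicitly, say ##v_1 = \langle 1, 0 \rangle## and ##v_2 = \langle 1, 1 \rangle##, the check is one line:
$$v_1 \cdot v_2 = (1)(1) + (0)(1) = 1 \neq 0,$$
so they are not orthogonal, yet neither is a scalar multiple of the other, so they are linearly independent.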

Does that help?
 
Ah, I failed to note that the number of linearly independent vectors you could have was limited. Thanks Mark44, that clears things up.

chogg, I understand the theorem - I was wondering under what circumstances the converse was true.
 
For a vector space of dimension n, any basis will have exactly n vectors in it. If the vectors happen to be orthogonal, or even orthonormal (i.e., mutually perpendicular and of length 1), then great, but there is no requirement for the vectors in a basis to be orthogonal.

For ##\mathbb{R}^2##, A = {<1, 0>, <0, 1>} and B = {<1, 1>, <1, 2>} are both bases, but only in A are the vectors orthogonal.
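
To spell out the check for B: ##\langle 1, 1 \rangle \cdot \langle 1, 2 \rangle = 1 + 2 = 3 \neq 0##, so the two vectors are not orthogonal, but the determinant
$$\begin{vmatrix} 1 & 1 \\ 1 & 2 \end{vmatrix} = 2 - 1 = 1 \neq 0$$
shows they are linearly independent, and two linearly independent vectors in ##\mathbb{R}^2## form a basis.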

Edit: I might have misunderstood your question. I think you are asking for a situation where you have a set of orthogonal vectors that are linearly dependent.

C = {<1, 0>, <0, 1>, <0, 0>}
These vectors are mutually orthogonal, but are linearly dependent: any set containing the zero vector is linearly dependent, since ##1 \cdot \langle 0, 0 \rangle = 0## is a nontrivial combination. This is why the theorem requires nonzero vectors.
 
Mark44 said:
Edit: I might have misunderstood your question. I think you are asking for a situation where you have a set of orthogonal vectors that are linearly dependent.

C = {<1, 0>, <0, 1>, <0, 0>}
These vectors are mutually orthogonal, but are linearly dependent.


Not exactly. My reasoning was that if I had, e.g., C = {<1, 0>, <0, 1>, <-1, 0>, <0, -1>} and the premise was that they were linearly independent, I could state that they were orthogonal. What I failed to notice was that these are of course linearly dependent vectors. This, however, would hold:

In a set of 2n vectors in R^n where none of the vectors in the set is the null vector nor can be written as a sum of the other vectors each multiplied by a positive scalar, the vectors are orthogonal.
 
larusi said:
Not exactly. My reasoning was that if I had, e.g., C = {<1, 0>, <0, 1>, <-1, 0>, <0, -1>} and the premise was that they were linearly independent, I could state that they were orthogonal. What I failed to notice was that these are of course linearly dependent vectors. This, however, would hold:

In a set of 2n vectors in R^n where none of the vectors in the set is the null vector nor can be written as a sum of the other vectors each multiplied by a positive scalar, the vectors are orthogonal.
I doubt that this is true for larger values of n, say 3 and up. In two dimensions, it's easy to determine whether two vectors are linearly dependent: one of them will be a scalar multiple of the other. In three or more dimensions it's harder, as you can have a set of vectors where no vector is a multiple of any other, yet one of them is a linear combination of the others.
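
For example, in ##\mathbb{R}^3##, take ##\{\langle 1, 0, 0 \rangle, \langle 0, 1, 0 \rangle, \langle 1, 1, 0 \rangle\}##: no vector is a scalar multiple of another, yet ##\langle 1, 1, 0 \rangle = \langle 1, 0, 0 \rangle + \langle 0, 1, 0 \rangle##, so the set is linearly dependent.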

I'm not sure why it's important to say something about a set of ##2n## vectors in ##\mathbb{R}^n##, though, whether they're orthogonal or not, but what do I know?
 
larusi said:
This, however, would hold:

In a set of 2n vectors in R^n where none of the vectors in the set is the null vector nor can be written as a sum of the other vectors each multiplied by a positive scalar, the vectors are orthogonal.

It only "holds" in the sense that "no such set of vectors exists, therefore no proposition about the members of the set is false."

Any set of ##k > n## non-zero vectors in ##\mathbb{R}^n## is linearly dependent.

Proof: any set of ##k## linearly independent vectors is a basis for a subspace of ##\mathbb{R}^n## of dimension ##k##, and a subspace of ##\mathbb{R}^n## has dimension at most ##n##. Therefore ##k \le n##.
 
Mark44 said:
I doubt that this is true for larger values of n, say 3 and up. In two dimensions, it's easy to determine whether two vectors are linearly dependent: one of them will be a scalar multiple of the other. In three or more dimensions it's harder, as you can have a set of vectors where no vector is a multiple of any other, yet one of them is a linear combination of the others.

I'm not sure why it's important to say something about a set of ##2n## vectors in ##\mathbb{R}^n##, though, whether they're orthogonal or not, but what do I know?

You're right, it's useless for anything other than a thought experiment to help me learn linear algebra :approve:

AlephZero said:
It only "holds" in the sense that "no such set of vectors exists, therefore no proposition about the members of the set is false."

Any set of ##k > n## non-zero vectors in ##\mathbb{R}^n## is linearly dependent.

Proof: any set of ##k## linearly independent vectors is a basis for a subspace of ##\mathbb{R}^n## of dimension ##k##, and a subspace of ##\mathbb{R}^n## has dimension at most ##n##. Therefore ##k \le n##.

I didn't state that they were linearly independent; the positive scalar condition is the key. This set is an example:
C = {<1, 0>, <0, 1>, <-1, 0>,<0, -1>}
Four vectors in ##\mathbb{R}^2## where none is the null vector nor can be written as a sum of the others each multiplied by a positive scalar; therefore they are orthogonal (according to my broken proposition).

Thanks for the insight guys.
 
The example with (1, 0) and (1, 1) works in Cartesian coordinates... in general, how can we prove that?
Linearly independent vectors need not all be orthogonal... is that true for abstract vectors as well? I mean vectors that need not be arrows in space...
I am just asking...
 
