Affine independence in terms of linear independence


Homework Help Overview

This discussion revolves around the concept of affine independence in relation to linear independence, specifically focusing on the relationship between families of vectors and their representations. The original poster seeks to understand how the linear independence of difference vectors from an arbitrary origin vector implies the linear independence of a larger family of vectors.

Discussion Character

  • Conceptual clarification, Assumption checking

Approaches and Questions Raised

  • The original poster attempts to connect the linear independence of difference vectors to the linear independence of a family of vectors in a higher-dimensional space. Some participants question the notation used and the implications of representing vectors in different dimensions.

Discussion Status

Participants are exploring the original poster's notation and seeking clarification on the definitions involved. There is a suggestion to visualize the problem with concrete examples, indicating a productive direction in the discussion.

Contextual Notes

The original poster acknowledges a potential misunderstanding in their notation, clarifying that they meant to refer to vectors in \(\mathbb{R}^{n}\) rather than \(\mathbb{R}\). There is an ongoing exploration of the implications of this clarification on the problem at hand.

Wiseguy
This question mostly pertains to how to look at affine independence entirely in terms of linear independence between different families of vectors. I understand there are quite a few questions already online pertaining to the affine/linear independence relationship, but I'm not quite able to find something that helps my particular problem, nor am I able to make the connection on my own.

I want to try and understand how the linear independence of a family of ##n## difference vectors from any arbitrary 'origin' vector, say ##(\overrightarrow{a_i a_0}, \ldots, \overrightarrow{a_i a_j}, \ldots, \overrightarrow{a_i a_n})## where ##a_i, a_j \in \mathbb{R}^{n}## and ##j \neq i## for any arbitrary 'origin' ##i \in I##, implies the linear independence of the whole family of ##(n+1)## vectors ##(\hat{a_0}, \ldots, \hat{a_n})##, where ##\hat{a_j} = (1, a_j)##.

I am able to understand this from the perspective of using families of points, but I am unable to visualize how I would construct this only using families of vectors. I've tried looking at the vectors as position vectors, but I think that way of thinking would not necessarily be correct.
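As a numerical sanity check, the relationship I'm after can be verified with a small script. Here is a minimal sketch, assuming NumPy and using matrix rank as the test for linear independence; the randomly generated points are just placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder points a_0, ..., a_n in R^n (here n = 3).
n = 3
points = rng.standard_normal((n + 1, n))

# Difference vectors from an arbitrary 'origin' point (take i = 0).
diffs = points[1:] - points[0]                     # n vectors in R^n

# Lifted vectors (1, a_j) in R^{n+1}.
lifted = np.hstack([np.ones((n + 1, 1)), points])  # n+1 vectors in R^{n+1}

# Subtracting the row (1, a_0) from every other row of 'lifted' leaves
# rows (0, a_j - a_0), so the two ranks always differ by exactly one:
# the differences are independent iff the lifted family is independent.
assert np.linalg.matrix_rank(lifted) == np.linalg.matrix_rank(diffs) + 1
```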
 
Hello and welcome to physicsforums.

I'm afraid your notation is quite unusual.

What does ##\overrightarrow{a_i a_0}## represent? Given that you've said ##a_i\in\mathbb{R}##, that would suggest that ##\overrightarrow{a_i a_0}=a_0-a_i\in\mathbb{R}##, which is a scalar. You can think of that as a vector if you like, but ##\mathbb{R}## as a vector space has only one dimension, so you can't have more than one linearly independent vector in it.

What does the right hand side of ##\hat{a_j} = (1, a_j)## represent?
 
andrewkirk said:
Hello and welcome to physicsforums.

I'm afraid your notation is quite unusual.

What does ##\overrightarrow{a_i a_0}## represent? Given that you've said ##a_i\in\mathbb{R}##, that would suggest that ##\overrightarrow{a_i a_0}=a_0-a_i\in\mathbb{R}##, which is a scalar. You can think of that as a vector if you like, but ##\mathbb{R}## as a vector space has only one dimension, so you can't have more than one linearly independent vector in it.

What does the right hand side of ##\hat{a_j} = (1, a_j)## represent?

Thank you for your welcome.

I apologize. I meant to write ##\mathbb{R}^{n}##, not ##\mathbb{R}##. Yes, the notation ##\overrightarrow{a_i a_0}## is just used to represent ##(a_0 - a_i) \in \mathbb{R}^{n}##. We can keep it in the latter form if that makes more sense.

And ##\hat{a_j}## is just that: a vector in ##\mathbb{R}^{n+1}## consisting of ##(1, a_j)##. I would like to know the intuition as to why the linear independence of these two forms is equivalent.
 
Is it a proof, or a visualization, that you are missing? If it's a visualization, why not take a small concrete example?
The easiest that still has vector structure is ##n=2##. Take for instance ##a_0=(1,1)##, ##a_1=(2,2)##, ##a_2=(1,2)##. Draw a picture of these in ##\mathbb{R}^2## and then another of what you get with the move into ##\mathbb{R}^3##.
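For the record, the same example can also be checked numerically rather than drawn. A minimal sketch, assuming NumPy and using matrix rank as the independence test:

```python
import numpy as np

# The points in R^2 from the example above.
a0 = np.array([1, 1])
a1 = np.array([2, 2])
a2 = np.array([1, 2])

# Difference vectors from the 'origin' point a0.
diffs = np.array([a1 - a0, a2 - a0])               # (1,1) and (0,1)

# Lifted vectors (1, a_j) in R^3.
lifted = np.array([[1, *a0], [1, *a1], [1, *a2]])

# Linear independence <=> full rank.
print(np.linalg.matrix_rank(diffs))   # 2: the two differences are independent
print(np.linalg.matrix_rank(lifted))  # 3: the three lifted vectors are independent
```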
 
