Genericity of Linear Independence

  • Thread starter jakemf1986
  • #1

Main Question or Discussion Point

Restrict attention to vectors in ℝ^m where m is a natural number.

Let σ be the vector of ones. Let V be the set of vectors whose largest entry is 1 and whose smallest entry is 0.

When is it (generically) the case that the set of vectors {σ, v_1, v_2, ..., v_n} is linearly independent, where v_i are vectors randomly drawn from V?

I am guessing that this is so whenever m > n + 1, but am not sure how to prove it.

Any help is much appreciated, thank you!

JF
 

Answers and Replies

  • #2
Stephen Tashi
Science Advisor
Are you really posing a question involving random variables? Or are you trying to express a quantifier like "for each" by saying "for a randomly drawn"?
 
  • #3
Hm I suppose what I really want is:

If you give me any v_1, v_2, ..., v_n ∈ V,
I can conclude with probability 1 that the following set of vectors is LI:
{σ, v_1, v_2, ..., v_n}.

Thank you!
 
  • #4
Well, sort of.

You need some sort of measure on the space of vectors, which is essentially (R^m)^n. Unfortunately, Lebesgue measure won't exactly work because it's not finite.

So, as stated, what you can do is prove that the set of linearly dependent guys has measure 0.
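
Sketch of why, at least when the entries are drawn with a density and n + 1 ≤ m (for n + 1 > m, any n + 1 vectors are automatically dependent): stack σ and the v_i as the rows of an (n+1) × m matrix A. The tuple is linearly dependent exactly when every (n+1) × (n+1) minor of A vanishes, so the dependent set sits inside the zero set of any single maximal minor,

[tex] \{(v_1, \dots, v_n) : \{\sigma, v_1, \dots, v_n\} \text{ dependent}\} \subseteq \{p = 0\}, [/tex]

where p is one maximal minor, viewed as a polynomial in the entries. Pick one that is not identically zero (for n + 1 ≤ m such a minor exists); the zero set of a nonzero polynomial has Lebesgue measure 0.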
 
  • #5
Stephen Tashi
Science Advisor
If you give me any v_1, v_2, ..., v_n ∈ V,
Am I allowed to make up v_1, v_2, ..., v_n? Or must I obtain them by using realizations of random variables?

You need some sort of measure on the space of vectors, which is essentially (R^m)^n. Unfortunately, Lebesgue measure won't exactly work because it's not finite.
By one interpretation of the problem, we need a probability measure for realizing a finite number of finite dimensional vectors, each of whose components is a realization of a scalar random variable uniformly distributed on [0,1]. Would we have any problem finding a finite measure?
 
  • #6
By one interpretation of the problem, we need a probability measure for realizing a finite number of finite dimensional vectors, each of whose components is a realization of a scalar random variable uniformly distributed on [0,1]. Would we have any problem finding a finite measure?
I guess you are saying, then, that we should restrict the components of the vectors to have values in [0,1]?

Maybe that's unfair to all the other vectors. The trouble is, there's no such thing as a uniform distribution on the whole line.

You could do the same trick with Lebesgue measure and just look at vectors of norm less than some number.

I was trying to come up with some kind of uniform-distribution statement, but then I thought maybe measure is just as good as probability. I was concerned that we ought to treat all vectors equally to be truly random, with no vector preferred over any other. But there's no way to do that, because any translation-invariant measure is basically a multiple of Lebesgue measure, which isn't finite. So I said, let's just say measure, rather than probability measure.
 
  • #7
Stephen Tashi
Science Advisor
I guess you are saying, then, that we should restrict the components of the vectors to have values in [0,1]?

Maybe that's unfair to all the other vectors.
It is unfair. But that's what the original post wants. (It may even want the components to be either 0 or 1 and nothing in between. We need clarification.)
 
  • #8
It is unfair. But that's what the original post wants. (It may even want the components to be either 0 or 1 and nothing in between. We need clarification.)
True. I didn't read it carefully enough, perhaps because I have thought about this question myself a bit before, so I focused on my version of it. It's a pretty natural question to ask if you are thinking about transversality in differential topology and stuff like that. Sort of the baby case of Sard's theorem and all that junk that goes with it. It does look like he's allowing for any value between 0 and 1.
 
  • #9
Thanks all for the replies. Let me clarify:

V is the set of vectors whose largest entry *must* be 1 and whose smallest entry *must* be 0.
So if for example m = 2, so that we're restricting attention to 2-dimensional vectors, then V would consist of exactly two vectors, namely (0,1) and (1,0).
If instead m=3, then V would have uncountably many vectors, e.g. (1,0,0.34), (0,0.5,1), (0,1,0.8), etc.
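
In fact, if I'm describing it right, V is just

[tex] V = \bigcup_{k \neq j} \{ v \in [0,1]^m : v_k = 1, \; v_j = 0 \}, [/tex]

i.e. you pick the position of the forced 1 and the position of the forced 0, and the remaining m - 2 entries are free in [0,1]. That's why V is finite for m = 2 but uncountable for m ≥ 3.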



Now: Fix m. Randomly pick any n vectors from V and call them v_1, v_2, ..., v_n. Let σ = (1,1, ..., 1) i.e. σ is a vector of ones.

So then my question is: When is it generically the case that {σ, v_1, v_2, ..., v_n} is Linearly Independent?

My guess was that if m > n + 1 then we can say something like "with probability 1" they are Linearly Independent.


PS: Please forgive me if I'm not being 100% rigorous as I am no trained mathematician!
 
  • #10
Stephen Tashi
Science Advisor
My guess was that if m > n + 1 then we can say something like "with probability 1" they are Linearly Independent.
You have to be precise about how the randomness is to be implemented. If it is implemented by drawing random samples from a continuous probability distribution, then this becomes a rather technical problem in measure theory, and I don't claim to know the answer.

Measure theory deals with how probabilities are assigned to sets of events and, using typical continuous probability distributions, it has some counterintuitive conclusions. For example, if x is a random sample taken from the uniform distribution on the interval [0,1], then the probability that x is in the set of rational numbers is zero, since this set has "measure zero". As another example, let the point (x,y) be chosen by picking x and y independently, each from a uniform distribution on the interval [0,1]. What is the probability that x + y = 1? If we look at the set of (x,y) satisfying x + y = 1, it is a line segment, so it has area zero. Measure theory says the probability that (x,y) falls in that set is zero.

I can't tell whether you are interested in such a theoretical view of your question or whether you have some practical application in mind. For example, numbers in computer languages are rational numbers. So if you simulate drawing a random number from a uniform distribution on [0,1], you get a rational number with probability 1, not probability zero. Likewise, measurements of chemical concentrations or stock market prices have a finite number of decimals, so they are rational numbers.
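
As a quick numerical illustration of the x + y = 1 example, here is a rough simulation sketch (Python with NumPy; the 10^-3 tolerance is just an arbitrary choice of mine):

[code]
import numpy as np

rng = np.random.default_rng()

# Draw a million (x, y) pairs, with x and y independent and uniform on [0, 1].
x = rng.uniform(size=1_000_000)
y = rng.uniform(size=1_000_000)

# Exact hits of x + y = 1 in floating point: virtually always 0.
print(np.count_nonzero(x + y == 1.0))

# Near-misses are common: roughly 2000 pairs land within 10^-3 of the line.
print(np.count_nonzero(np.abs(x + y - 1.0) < 1e-3))
[/code]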

For the theoretical question, let me suggest a specific algorithm for implementing your randomness.

Let m be a given integer > 2.
Let v = (v_1, v_2, ..., v_m) be the random m-dimensional vector which is to be constructed.

1) Pick an integer k from the uniform discrete distribution on the set of integers S = {1, 2, ..., m}. Set v_k = 1.
2) Pick an integer j from the uniform discrete distribution on the set S - {k}. Set v_j = 0.
3) For each integer i in the set S - {k} - {j}, draw a number x from the continuous uniform distribution on [0,1] and set v_i = x.

To pick a random set of n vectors of the type you describe:

Let n be an integer > 0.

Let V = {V_0, V_1, ..., V_n} be the set of vectors which is to be randomly constructed.

1) Set V_0 equal to σ, the vector all of whose components are 1.
2) Construct each of the vectors V_1, V_2, ..., V_n by the previous algorithm.

So your question boils down to: for a given n and a given m, what is the probability that a set of vectors randomly constructed by the above algorithms is linearly independent?
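
For anyone who wants to experiment, here is a rough NumPy sketch of the two algorithms together with a rank test; the helper names and the use of matrix_rank as the independence check are just one way to set it up:

[code]
import numpy as np

rng = np.random.default_rng()

def random_v(m):
    # Steps 1-3 above: fill with uniform draws from [0, 1], then
    # overwrite two distinct random positions with 1 and 0.
    v = rng.uniform(size=m)
    k, j = rng.choice(m, size=2, replace=False)
    v[k] = 1.0
    v[j] = 0.0
    return v

def independent(m, n):
    # Stack sigma and n random vectors from V as rows; the set is
    # linearly independent iff the matrix has full row rank n + 1.
    A = np.vstack([np.ones(m)] + [random_v(m) for _ in range(n)])
    return np.linalg.matrix_rank(A) == n + 1

# Estimate the probability for, say, m = 5 and n = 3.
m, n = 5, 3
trials = 10_000
print(sum(independent(m, n) for _ in range(trials)) / trials)  # comes out 1.0 in practice
[/code]

Of course, as noted above, floating point only ever gives rational numbers, so this estimates the answer rather than proving anything.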
 
  • #11
mathwonk
Science Advisor
Homework Helper
why don't you try it in R^2? then the set seems to be the two unit intervals on the two axes, and the only vector that fails is (0,0).
 
  • #12
Deveno
Science Advisor
m = 2 is a trivial case (the base case).

since σ = (1,1) and v_1 = (1,0) or (0,1) (these are the only vectors allowable by the problem),

both possible sets are LI.

m = 3 is the first non-trivial case:

σ = (1,1,1), and wlog we may assume v_1 = (0,1,x), v_2 = (0,1,y), with x, y in [0,1] (honestly it makes no difference if x, y are in (0,1)).

(if v_1, v_2 have 0's in different places, we obviously have LI).

suppose aσ + bv_1 + cv_2 = 0.

clearly a = 0, so we are reduced to asking when {v_1, v_2} is LI.

since (with a = 0) the second coordinate is b + c, which is 0 only if c = -b, we are really asking:

with what probability does (x,y) lie on the diagonal of I×I? again, given a suitable measure on I×I, we can answer, "0". unfortunately, no computer we have is sufficiently powerful to truly pick an element of [0,1] at random, so if one were to test this numerically, one would get a certain (very small) probability (possibly 0, depending on how "rounding" was handled).
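
to make it concrete, the dependence here is a determinant condition:

[tex] \det \begin{pmatrix} 1&1&1 \\ 0&1&x \\ 0&1&y \end{pmatrix} = y - x, [/tex]

which vanishes precisely on the diagonal x = y.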
 
  • #13
Stephen Tashi
Science Advisor
7,026
1,249
(if v_1, v_2 have 0's in different places, we obviously have LI).
For [tex] \begin{pmatrix} 1&1&1 \\1&0&0.5 \\ 0&1&0.5 \end{pmatrix} [/tex]

the first row is the sum of the other two rows.
 
  • #14
Deveno
Science Advisor
906
6
oops! it appears the same trick will work for (1,0,a), (0,1,1-a) as well. silly me.
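
explicitly:

[tex] (1,0,a) + (0,1,1-a) = (1,1,1) = \sigma, [/tex]

so {σ, (1,0,a), (0,1,1-a)} is dependent for every a, though that family is still a measure-zero set of choices.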
 
