Genericity of Linear Independence

In summary, the conversation discusses when the set of vectors {σ, v_1, v_2, ..., v_n} is linearly independent, where the v_i are drawn at random from the set of vectors whose largest entry is 1 and whose smallest entry is 0. It also delves into the technical question of how probabilities are assigned to sets of events in this scenario, using measure theory. The conclusion is that a precise definition of how the randomness is to be implemented is needed in order to answer the question.
  • #1
jakemf1986
Restrict attention to vectors in ℝ^m where m is a natural number.

Let σ be the vector of ones. Let V be the set of vectors whose largest entry is 1 and whose smallest entry is 0.

When is it (generically) the case that the set of vectors {σ, v_1, v_2, ..., v_n} is linearly independent, where v_i are vectors randomly drawn from V?

I am guessing that this is so whenever m > n + 1, but am not sure how to prove it.

Any help is much appreciated, thank you!

JF
 
  • #2
Are you really posing a question involving random variables? Or are you trying to express a quantifier like "for each" by saying "for a randomly drawn"?
 
  • #3
Hm I suppose what I really want is:

If you give me any v_1, v_2, ..., v_n ∈ V,
I can conclude with probability 1 that the following set of vectors is LI:
{σ, v_1, v_2, ..., v_n}.

Thank you!
 
  • #4
Well, sort of.

You need some sort of measure on the space of vectors, which is essentially (R^m)^n. Unfortunately, Lebesgue measure won't exactly work because it's not finite.

So, as stated, what you can do is prove that the set of linearly dependent guys has measure 0.
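
One way to see it (a sketch, assuming n + 1 ≤ m): linear dependence is a polynomial condition,

[tex] \{\sigma, v_1, \dots, v_n\} \text{ dependent} \iff \text{all } (n+1)\times(n+1) \text{ minors of } \begin{pmatrix} \sigma \\ v_1 \\ \vdots \\ v_n \end{pmatrix} \text{ vanish}, [/tex]

and at least one of those minors is not identically zero on (R^m)^n when n + 1 ≤ m (take v_i = e_i, for instance). So the dependent tuples lie inside the zero set of a nonzero polynomial, which has Lebesgue measure zero.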
 
  • #5
jakemf1986 said:
If you give me any v_1, v_2, ..., v_n ∈ V,

Am I allowed to make up v_1, v_2, ..., v_n? Or must I obtain them as realizations of random variables?

You need some sort of measure on the space of vectors, which is essentially (R^m)^n. Unfortunately, Lebesgue measure won't exactly work because it's not finite.

By one interpretation of the problem, we need a probability measure for realizing a finite number of finite-dimensional vectors, each of whose components is a realization of a scalar random variable uniformly distributed on [0,1]. Would we have any problem finding a finite measure?
 
  • #6
By one interpretation of the problem, we need a probability measure for realizing a finite number of finite-dimensional vectors, each of whose components is a realization of a scalar random variable uniformly distributed on [0,1]. Would we have any problem finding a finite measure?

I guess you are saying, then, that we should restrict the components of the vectors to have values in [0,1]?

Maybe that's unfair to all the other vectors. The trouble is, there's no such thing as a uniform distribution on the whole line.

You could do the same trick with Lebesgue measure and just look at vectors of norm less than some number.

I was trying to come up with some kind of uniform distribution statement, but then I thought maybe measure is just as good as probability. I was concerned that we ought to treat all vectors equally to be truly random, with no vector preferred over any other. But there's no way to do that, because any translation-invariant measure is basically Lebesgue measure (up to scaling). So I said, let's just say measure, rather than probability measure.
 
  • #7
I guess you are saying, then, that we should restrict the components of the vectors to have values in [0,1]?

Maybe that's unfair to all the other vectors.

It is unfair. But that's what the original post wants. (It may even want the components to be either 0 or 1 and nothing in between. We need clarification.)
 
  • #8
It is unfair. But that's what the original post wants. (It may even want the components to be either 0 or 1 and nothing in between. We need clarification.)

True. I didn't read it carefully enough, perhaps because I have thought about this question myself a bit before, so I focused on my version of it. It's a pretty natural question to ask if you are thinking about transversality in differential topology and stuff like that. Sort of the baby case of Sard's theorem and all that junk that goes with it. It does look like he's allowing for any value between 0 and 1.
 
  • #9
Thanks all for the replies. Let me clarify:

V is the set of vectors whose largest entry *must* be 1 and whose smallest entry *must* be 0.
So if for example m = 2, so that we're restricting attention to 2-dimensional vectors, then V would consist of exactly two vectors, namely (0,1) and (1,0).
If instead m=3, then V would have uncountably many vectors, e.g. (1,0,0.34), (0,0.5,1), (0,1,0.8), etc.



Now: Fix m. Randomly draw n vectors from V and call them v_1, v_2, ..., v_n. Let σ = (1, 1, ..., 1), i.e. σ is a vector of ones.

So then my question is: When is it generically the case that {σ, v_1, v_2, ..., v_n} is Linearly Independent?

My guess was that if m > n + 1 then we can say something like "with probability 1" they are Linearly Independent.


PS: Please forgive me if I'm not being 100% rigorous as I am no trained mathematician!
 
  • #10
jakemf1986 said:
My guess was that if m > n + 1 then we can say something like "with probability 1" they are Linearly Independent.

You have to be precise about how the randomness is to be implemented. If it is implemented by drawing random samples from continuous probability distributions, then this problem becomes a very technical problem in "measure theory" and I don't claim to know the answer.

Measure theory deals with how probabilities are assigned to sets of events and, using typical continuous probability distributions, it has some counterintuitive conclusions. For example, if x is a random sample taken from the uniform distribution on the interval [0,1], then the probability that x is in the set of rational numbers is zero, since this set has "measure zero". As another example, let the point (x,y) be chosen by picking x and y independently, each from a uniform distribution on the interval [0,1]. What is the probability that x + y = 1? The set of (x,y) satisfying x + y = 1 is a line segment, so it has area zero. Measure theory says the probability that (x,y) falls in that set is zero.
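
For the second example, a quick numerical illustration (a Python sketch, assuming NumPy; exact floating-point equality stands in for the measure-zero event):

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(size=n)
y = rng.uniform(size=n)

# {x + y = 1} is a line segment of area zero in the unit square: an exact
# hit is possible in floating point but essentially never observed.
print(np.mean(x + y == 1.0))  # ~0.0
# A set of positive area occurs with frequency close to its area:
print(np.mean(x + y < 1.0))   # ~0.5
[/code]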

I can't tell whether you are interested in such a theoretical view of your question or whether you have some practical application in mind. For example, numbers in computer languages are rational numbers, so if you simulate drawing a random number from a uniform distribution on [0,1], you get a rational number with probability 1, not probability zero. Likewise, measurements of chemical concentrations or stock market prices have a finite number of decimals, so they are rational numbers.

For the theoretical question, let me suggest a specific algorithm for implementing your randomness.

Let m be a given integer > 2.
Let v = (v_1, v_2, ..., v_m) be the random m-dimensional vector which is to be constructed.

1) Pick an integer k from the uniform discrete distribution on the set of integers S = {1, 2, ..., m}. Set v_k = 1.
2) Pick an integer j from the uniform discrete distribution on the set S - {k}. Set v_j = 0.
3) For each integer i in the set S - {k} - {j}, draw a number x from the continuous uniform distribution on [0,1] and set v_i = x.

To pick a random set of n vectors of the type you describe:

Let n be an integer > 0.

Let V = {V_0, V_1, ..., V_n} be the set of vectors which is to be randomly constructed.

1) Set V_0 equal to the vector all of whose components are 1.
2) Construct each of the vectors V_1, V_2, ..., V_n by the previous algorithm.

So your question boils down to: for a given n and a given m, what is the probability that a set of vectors randomly constructed by the above algorithms will be linearly independent?
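
If it helps, here is a minimal Monte Carlo sketch of these two algorithms (a Python sketch, assuming NumPy; the function names are just illustrative):

[code]
import numpy as np

rng = np.random.default_rng(0)

def random_v(m):
    """Draw one vector from V: a uniformly chosen entry is set to 1,
    a different uniformly chosen entry is set to 0, and the remaining
    entries are independent uniform draws from [0, 1]."""
    v = rng.uniform(0.0, 1.0, size=m)
    k, j = rng.choice(m, size=2, replace=False)
    v[k], v[j] = 1.0, 0.0
    return v

def is_independent(m, n):
    """True if {sigma, v_1, ..., v_n} has full rank n + 1."""
    rows = [np.ones(m)] + [random_v(m) for _ in range(n)]
    return np.linalg.matrix_rank(np.vstack(rows)) == n + 1

# Estimate the probability of linear independence for a few (m, n) pairs.
for m, n in [(3, 2), (4, 3), (5, 3)]:
    trials = 10_000
    freq = sum(is_independent(m, n) for _ in range(trials)) / trials
    print(f"m={m}, n={n}: {freq:.4f}")
[/code]

In floating point this should report a frequency of 1.0 whenever m ≥ n + 1, consistent with the dependent tuples having measure zero; as noted above, a simulation can only approximate the continuous model.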
 
  • #11
why don't you try it in R^2? then the set seems to be the two unit intervals on the two axes, and the only vector that fails is (0,0).
 
  • #12
m = 2 is a trivial case (the base case).

since σ = (1,1), and v1 = (1,0) or (0,1) (these are the only vectors the problem allows).

both possible sets are LI.

m = 3 is the first non-trival case:

σ = (1,1,1), and wlog we may assume v1 = (0,1,x), v2 = (0,1,y), x,y in [0,1] (honestly it makes no difference if x,y are in (0,1)).

(if v1,v2 have 0's in different places, we obviously have LI).

suppose aσ + bv1 + cv2 = 0

clearly a = 0, so we are reduced to asking when {v1, v2} is LI.

since the second coordinate will only be 0 if b = -c, we are really asking:

with what probability does (x,y) lie on the diagonal of I×I? again, given a suitable measure on I×I, we can answer: "0". unfortunately, no computer we have is sufficiently powerful to truly pick an element of [0,1] at random, so if one were to test this numerically, one would get a certain (very small) frequency (possibly 0, depending on how "rounding" was handled).
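
spelled out, the system above (with v1 = (0,1,x), v2 = (0,1,y)) is:

[tex] a\sigma + bv_1 + cv_2 = 0 \iff \begin{cases} a = 0 \\ a + b + c = 0 \\ a + bx + cy = 0 \end{cases} \iff a = 0,\; c = -b,\; b(x-y) = 0, [/tex]

so a nontrivial relation exists exactly when x = y, i.e. when (x,y) lies on the diagonal of I×I.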
 
  • #13
(if v1,v2 have 0's in different places, we obviously have LI).

For [tex] \begin{pmatrix} 1&1&1 \\1&0&0.5 \\ 0&1&0.5 \end{pmatrix} [/tex]

the first row is the sum of the other two rows.
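
A quick numerical confirmation of this counterexample (a sketch, assuming NumPy):

[code]
import numpy as np

# sigma = (1,1,1), v1 = (1,0,0.5), v2 = (0,1,0.5): the zeros are in
# different places, yet sigma = v1 + v2, so the rank drops to 2.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5]])
print(np.linalg.matrix_rank(A))  # prints 2
[/code]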
 
  • #14
oops! it appears the same trick will work for (1,0,a), (0,1,1-a) as well. silly me.
 

What is the definition of linear independence?

Linear independence is a concept in linear algebra describing a property of a set of vectors: the set is linearly independent if none of its vectors can be written as a linear combination of the others; equivalently, the only linear combination of them equal to the zero vector is the one with all coefficients zero.

Why is linear independence important?

Linear independence is important because a basis of a vector space is exactly a linearly independent spanning set, so independence is half of what it takes for a set of vectors to form a basis. It also underpins the notion of dimension.

What is the genericity of linear independence?

The genericity of linear independence refers to the idea that linear independence holds for almost all sets of vectors in a given vector space (provided the number of vectors does not exceed the dimension): the exceptional, linearly dependent configurations form a measure-zero, nowhere dense subset. In other words, linear independence is a generic property.

How is the genericity of linear independence related to the concept of randomness?

The genericity of linear independence is closely related to the concept of randomness because vectors drawn at random from a continuous distribution are linearly independent with probability 1, again as long as there are no more vectors than dimensions. This is important in many applications, such as data analysis and machine learning, where random matrices are typically full rank.

Can linearly dependent vectors become linearly independent?

No, a linearly dependent set of vectors cannot become linearly independent. Linear dependence is a fixed property of the set: if at least one vector can be expressed as a linear combination of the others, that remains true. (Removing vectors can, of course, leave a linearly independent subset.)
