MHB Linear Dependency: Max Size of Subset of Vectors

  • Thread starter: Yankel
  • Tags: Linear

Summary:
The set of vectors (1,1), (-2,-2), (0,0), and (3,-2) is linearly dependent, both because it contains the zero vector (0,0) and because (1,1) and (-2,-2) are scalar multiples of each other. The maximum size of a linearly independent subset of this set is 2, obtained by choosing (3,-2) together with either (1,1) or (-2,-2). For a pair of vectors, checking linear independence reduces to checking whether one is a scalar multiple of the other. Since the vectors live in a two-dimensional space, at most two of them can be linearly independent, which confirms that the maximum size of a linearly independent subset is 2.
Yankel
Hello all,

I have this set of vectors:

(1,1) , (-2,-2) , (0,0) , (3,-2)

I need to say whether it is linearly dependent, and I need to find the maximum size of a linearly independent subset of this set.

What I think is that as long as (0,0) is there, the set must be dependent. In addition, (1,1) and (-2,-2) are also dependent. Thus, if I had to guess, I would say the maximum size is 2 vectors: (3,-2) and either (1,1) or (-2,-2). My question is, am I right, or am I missing something?

Thanks !
 
Yankel said:
I have this set of vectors: (1,1), (-2,-2), (0,0), (3,-2) ... am I right, or am I missing something?

Hi Yankel, :)

You are correct. $(0,\,0)$ is indeed linearly dependent on any of the remaining vectors, since it can be obtained by multiplying any vector by $0$; for example, $(0,\,0)=0(1,\,1)$. Also, $(1,\,1)$ and $(-2,\,-2)$ are linearly dependent, since $(-2,\,-2)=-2(1,\,1)$. It can then be shown that the sets $\{(1,\,1),\,(3,\,-2)\}$ and $\{(-2,\,-2),\,(3,\,-2)\}$ are linearly independent. In the case of $\{(1,\,1),\,(3,\,-2)\}$, let

\[\alpha(1,\,1)+\beta(3,\,-2)=(0,\,0)\]

and show that both $\alpha$ and $\beta$ should be equal to zero.
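
Written out coordinate-wise, this gives the system

\[\alpha + 3\beta = 0,\qquad \alpha - 2\beta = 0.\]

Subtracting the second equation from the first yields $5\beta = 0$, so $\beta = 0$, and then $\alpha = 0$; hence the set is linearly independent.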
 
In a sense, linear dependency is a measure of "spanning redundancy". For example, adding the 0-vector never gives us any new vectors in the span, and we can already realize the 0-vector as a linear combination of any set $\{v_1,\dots,v_k\}$ as:

$0v_1 + 0v_2 + \cdots + 0v_k$.

In the same way, if:

$v_2 = cv_1$ for some non-zero $c$, then we can replace $v_2$ with $cv_1$ in any linear combination containing it.

For example, the linear combination:

$a_1v_1 + a_2v_2 + a_3v_3 + \cdots +a_kv_k$

is equal to:

$(a_1 + ca_2)v_1 + a_3v_3 + \cdots + a_kv_k$

We might just as well have eliminated $v_1$ in this case, replacing it with $\dfrac{1}{c}v_2$ in any linear combination.

This situation is a bit more complicated if we have something like:

$v_3 = b_1v_1 + b_2v_2$

as we might decide to keep $\{v_1,v_2\},\{v_1,v_3\}$ or $\{v_2,v_3\}$.
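
For a concrete example (my own, not from the thread): take $v_1=(1,\,0)$, $v_2=(0,\,1)$, $v_3=(1,\,1)$, so that $v_3 = v_1 + v_2$. Any two of these three vectors are linearly independent, so each of the three choices above is equally valid.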

Generally speaking, the more dimensions we have in our space, the more chances we have of the linear dependency relations being "complicated". For $\text{dim}(V) > 4$ I wouldn't trust "elimination by inspection" but would instead form a matrix from the vector set and compute its rank.
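
For instance, here is a minimal sketch in Python with numpy (assuming numpy is available); the rank of the matrix equals the size of the largest linearly independent subset of its rows:

```python
import numpy as np

# Rows are the vectors from this thread.
A = np.array([[1, 1],
              [-2, -2],
              [0, 0],
              [3, -2]])

# matrix_rank counts the number of linearly independent rows,
# i.e. the size of the largest linearly independent subset.
print(np.linalg.matrix_rank(A))  # prints 2
```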

***********

In this particular problem, you have some information to work with right off the bat:

The dimension of your vector space is two (each vector has only two coordinates, so everything lives in $\mathbb{R}^2$).

So at most, two of your vectors can be linearly independent.

Since (0,0) ALWAYS makes any set you add it to linearly dependent, get rid of it.

Pick any one of the 3 remaining non-zero vectors. Now we have a linearly independent set of one vector (which spans a one-dimensional subspace of our two-dimensional vector space).

Now pick a 2nd vector: is it a scalar multiple of the first vector (that is, does it lie in the subspace generated by the first vector)? If so, get rid of it; you don't need it.

Otherwise, it is linearly independent from the first vector and you are done.

Repeat this procedure until you have exhausted the set, or obtained two linearly independent vectors (which is the maximum possible).

(If we had MORE dimensions, we would have to check for a third vector, and we would have to check that our 3rd choice was not in the subspace spanned by our first two choices.)
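
As a minimal sketch of this procedure in Python with numpy (the helper name `greedy_independent_subset` is mine, not from the thread): a vector is kept exactly when it increases the rank of the set kept so far.

```python
import numpy as np

def greedy_independent_subset(vectors):
    """Greedily keep each vector that is not a linear combination
    of the vectors already kept (i.e. it increases the matrix rank)."""
    kept = []
    for v in vectors:
        # The kept vectors are independent, so their rank is len(kept);
        # the rank grows exactly when v lies outside their span.
        if np.linalg.matrix_rank(np.array(kept + [v])) > len(kept):
            kept.append(v)
    return kept

print(greedy_independent_subset([[1, 1], [-2, -2], [0, 0], [3, -2]]))
# [[1, 1], [3, -2]]
```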
 
