Linear Independence: What It Is & When to Use

Summary
Linear independence refers to a set of vectors or functions in which no member can be expressed as a linear combination of the others; equivalently, no linear combination of them equals zero unless all coefficients are zero. A set is linearly independent exactly when removing any vector reduces the span of the set. Nonzero orthogonal vectors are always linearly independent, but the reverse is not necessarily true. The concept applies to functions as well, since functions can be added together and multiplied by scalars just like vectors. Understanding these principles is crucial for analyzing vector spaces and their dimensions.
sncum
Can anyone please tell me about the term linear independence? And when do we say that a set of functions is linearly independent?
 
You need to supply context. Usually linear independence refers to a set of vectors or a set of functions. These are called linearly dependent if some nontrivial linear combination of them (not all coefficients zero) equals zero. If no such combination exists, they are linearly independent.
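To make that concrete, here is a minimal numerical sketch (the vectors v1, v2, v3 and the coefficients are made up for illustration): since v3 = v1 + v2, the nontrivial combination 1·v1 + 1·v2 − 1·v3 is the zero vector, which is exactly what "linearly dependent" means.

```python
import numpy as np

# Three illustrative vectors in R^3; v3 is deliberately the sum of v1 and v2.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2  # = [1, 1, 3]

# A nontrivial linear combination (coefficients 1, 1, -1) gives the zero vector,
# so {v1, v2, v3} is linearly dependent.
combo = 1.0 * v1 + 1.0 * v2 - 1.0 * v3
print(combo)  # [0. 0. 0.]

# {v1, v2} on its own is linearly independent: the only solution of
# a1*v1 + a2*v2 = 0 is a1 = a2 = 0, which shows up as full column rank.
A = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(A) == 2)  # True
```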
 
##\vec A = c_1 x\,\hat i + c_2 y\,\hat j + c_3 z\,\hat k##
When do we say this is linearly independent?
Also, my friend argues that orthonormality implies linear independence, but I was not convinced. Please help :smile:
 
sncum said:
##\vec A = c_1 x\,\hat i + c_2 y\,\hat j + c_3 z\,\hat k##
When do we say this is linearly independent?
Also, my friend argues that orthonormality implies linear independence, but I was not convinced. Please help :smile:

I don't understand your first line.

However, your friend is correct: if two vectors are orthogonal (and neither of them is 0), they are linearly independent. Note that the converse is not true.
 
sncum said:
my friend argues that orthonormality implies linear independence, but I was not convinced. Please help :smile:
A set E is said to be linearly independent if for all finite subsets ##\{x_1,\dots,x_n\}\subset E## and all ##a_1,\dots,a_n\in\mathbb C##,
$$\sum_{k=1}^n a_k x_k=0\quad \Rightarrow\quad a_1=\dots=a_n=0.$$
Suppose that E is orthonormal. Let ##\{e_k\}_{k=1}^n## be an arbitrary finite subset of E, and suppose that ##\sum_{k=1}^n a_k e_k=0##. Then for all ##i\in\{1,\dots,n\}##,
$$0=\langle e_i,0\rangle=\langle e_i,\sum_k a_k e_k\rangle=\sum_k a_k\langle e_i,e_k\rangle=\sum_k a_k\delta_{ik}=a_i.$$
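A quick numerical check of the same argument (the orthonormal set and the coefficients below are made up for illustration): taking the inner product of ##\sum_k a_k e_k## with each ##e_i## returns exactly ##a_i##, so the sum can only be the zero vector if every coefficient is zero.

```python
import numpy as np

# The standard basis of R^3 is an orthonormal set (illustrative example).
e1, e2, e3 = np.eye(3)

# Arbitrary coefficients for illustration.
a = np.array([2.0, -1.5, 0.5])
v = a[0] * e1 + a[1] * e2 + a[2] * e3

# <e_i, v> recovers a_i, mirroring the delta_{ik} step in the proof above.
recovered = np.array([np.dot(e1, v), np.dot(e2, v), np.dot(e3, v)])
print(np.allclose(recovered, a))  # True: v = 0 would force every a_i = 0
```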
 
Geometrically, linear independence means each vector contributes something to the span of the vectors.

So, if you have one nonzero vector, it spans a 1-dimensional subspace. If you add another vector to the set, you get a linearly dependent set if the span doesn't get any bigger; two vectors are linearly dependent exactly when one is a multiple of the other. If instead you add a vector pointing in a genuinely different direction (not along the same line, in either sense), the two together span a 2-dimensional subspace, and they are said to be linearly independent.

So, in general, if you have n vectors, they are linearly independent if throwing out any one of them makes them span a smaller subspace. If you can throw one out without changing the span, they are linearly dependent.
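The "does the span get bigger?" picture can be checked numerically by comparing ranks (the vectors below are made up for illustration): the rank of the matrix whose columns are the vectors equals the dimension of their span, so dropping a vector from an independent set lowers the rank, while dropping a redundant vector does not.

```python
import numpy as np

def span_dim(vecs):
    """Dimension of the span = rank of the matrix with these vectors as columns."""
    return np.linalg.matrix_rank(np.column_stack(vecs))

# Two independent directions plus a redundant third (illustrative example).
u = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0])
r = 2.0 * u + 3.0 * w       # lies in the plane spanned by u and w

print(span_dim([u, w]))     # 2: each of u, w adds a new direction -> independent
print(span_dim([u]))        # 1: dropping w shrinks the span, as independence requires
print(span_dim([u, w, r]))  # 2: adding r does not enlarge the span
print(span_dim([u, w]))     # 2: r can be dropped without shrinking the span -> dependent set
```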

This picture is less accurate, but still helpful, for more general vector spaces, in which the "vectors" aren't exactly "arrows" pointing in space anymore.

Functions are really a kind of vector because vectors are things that you can add together and multiply by scalars. So, it's the same idea for functions.
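One rough way to see this numerically for functions (a heuristic sketch, not a proof; the functions and sample points are made up for illustration): sample each function at several points and treat the samples as column vectors. Here ##\sin^2 x + \cos^2 x = 1##, so the set ##\{\sin^2, \cos^2, 1\}## is linearly dependent, while ##\{1, x, x^2\}## is independent.

```python
import numpy as np

# Treat functions as vectors by sampling them at a handful of points
# (a numerical heuristic, not a proof).
x = np.linspace(0.0, 1.0, 7)

f1 = np.sin(x) ** 2
f2 = np.cos(x) ** 2
f3 = np.ones_like(x)        # f3 = f1 + f2, so the set is dependent

print(np.linalg.matrix_rank(np.column_stack([f1, f2, f3])))  # 2: dependent

# {1, x, x^2}, by contrast, is linearly independent.
print(np.linalg.matrix_rank(np.column_stack([np.ones_like(x), x, x ** 2])))  # 3
```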
 
Thanks to all of you
 
