- #1

sncum


In summary, a set of vectors or functions is linearly independent when no member can be written as a linear combination of the others. An orthonormal set is automatically linearly independent. Equivalently, each vector in an independent set contributes something to the span: if any vector is removed, the span of the remaining vectors becomes smaller.


- #2

mathman

Science Advisor


- #3

sncum


when we say it is linearly independent

Also, my friend argues that orthonormality implies linear independence, but I was not satisfied. Please help.

- #4

mathman

Science Advisor


sncum said:

when we say it is linearly independent

Also, my friend argues that orthonormality implies linear independence, but I was not satisfied. Please help.

I don't understand your first line.

However, your friend is correct: if two vectors are orthogonal (unless one of them is 0), they are linearly independent. Note that the converse is not true.

- #5

Fredrik

Staff Emeritus

Science Advisor

Gold Member


sncum said:

my friend argue with me that orthonormality implies linear independence but i was not satisfied please help

A set E is said to be linearly independent if for all finite subsets ##\{x_1,\dots,x_n\}\subset E## and all ##a_1,\dots,a_n\in\mathbb C##,

$$\sum_{k=1}^n a_k x_k=0\quad \Rightarrow\quad a_1=\dots=a_n=0.$$

Suppose that E is orthonormal. Let ##\{e_k\}_{k=1}^n## be an arbitrary finite subset of E, and suppose that ##\sum_{k=1}^n a_k e_k=0##. Then for all ##i\in\{1,\dots,n\}##,

$$0=\langle e_i,0\rangle=\langle e_i,\sum_k a_k e_k\rangle=\sum_k a_k\langle e_i,e_k\rangle=\sum_k a_k\delta_{ik}=a_i.$$

Hence every coefficient is 0, so E is linearly independent.
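As a numeric sanity check of the argument above (not from the thread; the vectors and coefficients are just illustrative), note that for an orthonormal set, taking the inner product of a linear combination with ##e_i## picks out exactly the coefficient ##a_i##:

```python
def dot(u, v):
    """Standard real inner product."""
    return sum(x * y for x, y in zip(u, v))

# An orthonormal set in R^3 (here, the standard basis vectors).
e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

a = [2.0, -3.0, 0.5]  # arbitrary coefficients
# Form the linear combination sum_k a_k e_k, component by component.
combo = tuple(sum(a[k] * e[k][i] for k in range(3)) for i in range(3))

# <e_i, sum_k a_k e_k> = a_i, so the combination can only be the zero
# vector when every coefficient is zero.
recovered = [dot(e[i], combo) for i in range(3)]
print(recovered)  # [2.0, -3.0, 0.5]
```

The combination equals the zero vector only when every recovered coefficient is zero, which is exactly the proof's conclusion.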

- #6

homeomorphic


So, if you have one vector, it spans a 1-dimensional subspace. If you add another vector to the set, you get a linearly dependent set if the span doesn't get any bigger. So, two vectors are linearly dependent if one is a multiple of the other. So, if you added a vector that was pointing in a different direction (and not the opposite direction), together they span a 2-dimensional subspace. In that case, they are said to be linearly independent.

So, in general, if you have n vectors, they are linearly independent if throwing any one of them out makes them span a smaller subspace. If you can throw one out without changing the span, they are linearly dependent.

This picture is less accurate, but still helpful, for more general vector spaces, in which the "vectors" aren't exactly "arrows" pointing in space anymore.

Functions are really a kind of vector because vectors are things that you can add together and multiply by scalars. So, it's the same idea for functions.
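The "span gets bigger or it doesn't" picture can be probed numerically with a rank computation, and it applies to functions just as well once you treat sampled values as vector components. Here is a small pure-Python sketch (the Gaussian-elimination `rank` helper and the sample functions are illustrative, not from the thread; independence of the sampled rows implies independence of the functions, though sampling too few points could miss a dependence in general):

```python
def rank(rows, tol=1e-9):
    """Rank of a list of row vectors, via Gaussian elimination."""
    rows = [list(r) for r in rows]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        # Find a pivot row for this column among the unused rows.
        pivot = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > tol), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # Eliminate this column from every other row.
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > tol:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

# Treat functions as vectors by sampling them at a few points.
xs = [0.0, 1.0, 2.0, 3.0]
f1 = [1.0 for x in xs]        # the constant function 1
f2 = [x for x in xs]          # x
f3 = [x * x for x in xs]      # x^2
f4 = [2 * x + 3 for x in xs]  # 2x + 3 = 2*f2 + 3*f1, a combination

print(rank([f1, f2, f3]))  # 3: independent, each function enlarges the span
print(rank([f1, f2, f4]))  # 2: adding f4 did not enlarge the span
```

The rank counts the dimension of the span, so an independent set of n vectors has rank n, while a dependent set has smaller rank.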

- #7

sncum


Thanks to all of you.

Linear independence refers to a set of vectors in a vector space in which no vector can be written as a linear combination of the others.

Linear independence is important because it tells us how large a subspace a set of vectors spans. A linearly independent set of n vectors spans an n-dimensional subspace; if n equals the dimension of the space, the set is a basis and spans the entire space. This is useful in many areas of mathematics and science, such as solving systems of equations and determining a basis of a vector space.

A set of vectors is linearly independent if the only solution to the equation c1v1 + c2v2 + ... + cnvn = 0, where c1, c2, ..., cn are coefficients and v1, v2, ..., vn are the vectors in the set, is when all the coefficients are equal to 0. This means that none of the vectors in the set can be written as a linear combination of the others.
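To make the definition concrete (a hypothetical example, not from the thread): for a dependent set, some nonzero choice of coefficients sums to the zero vector.

```python
v1 = (1.0, 0.0)
v2 = (0.0, 1.0)
v3 = (1.0, 1.0)  # v3 = v1 + v2, so {v1, v2, v3} is linearly dependent

c = (1.0, 1.0, -1.0)  # nonzero coefficients witnessing the dependence
# Form c1*v1 + c2*v2 + c3*v3 component by component.
combo = tuple(c[0] * a + c[1] * b + c[2] * d for a, b, d in zip(v1, v2, v3))
print(combo)  # (0.0, 0.0): the zero vector with nonzero coefficients
```

For an independent set, the only coefficients producing the zero vector would be c1 = c2 = ... = cn = 0.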

Linear independence is used in various mathematical and scientific applications, such as in determining the basis of a vector space, solving systems of equations, and finding solutions to differential equations. It is also useful in linear algebra, statistics, and physics.

Some real-world examples of linear independence include the three primary colors (red, green, and blue) in the RGB color model and the three coordinate directions (length, width, and height) in three-dimensional space.
