# Linear dependence of functions

In summary: functions ##f## and ##g## are linearly dependent over ##ℝ## if there is a non-trivial linear combination of them that equals the zero function.

#### Dank2

Functions ##f, g## from ##ℝ## to ##ℝ##:

##f(x) = x\cos x##, ##g(x) = \cos x##

For ##x = 0##, we get ##f(x) = 0##, ##g(x) = 1##,
so for a scalar ##t## in ##ℝ##:

##t \cdot f(x) + 0 \cdot g(x) = 0## ==> ##f(x)## and ##g(x)## are linearly independent.

Is that right? If so, with functions do we search for an ##x## that makes them dependent?

What does it mean for two vectors ##v,w## to be linearly dependent over ##ℝ##?

What does it mean for two vectors ##v,w## to be linearly dependent over ##ℝ##?
It means that only the trivial combination gives the zero vector.

What does it mean for two vectors ##v,w## to be linearly dependent over ##ℝ##?
Thing is, functions need something to be plugged into them, whereas vectors are just scalars.

Thing is, functions need something to be plugged into them, whereas vectors are just scalars.
Thing is, no. The definition stays the same. Or have you ever read something like: "A vector space is ... but not for functions!"? And I asked for linear dependency, not independency. One of my next questions would have been "Over ##ℝ## or over ##ℝ[x]## or over ##ℝ(x)##, in case you insist on fields?" This makes a significant difference here. So again: when are ##v,w## linearly dependent?

Thing is, no. The definition stays the same. Or have you ever read something like: "A vector space is ... but not for functions!"? And I asked for linear dependency, not independency. One of my next questions would have been "Over ##ℝ## or over ##ℝ[x]## or over ##ℝ(x)##, in case you insist on fields?" This makes a significant difference here. So again: when are ##v,w## linearly dependent?

That I know: when there is a non-trivial solution giving the zero vector.

That I know: when there is a non-trivial solution giving the zero vector.
Or, in the case of two vectors, if ##v = λ \cdot w##. Now here is the point: where is ##λ## supposed to be from? Depending on that, you may get different answers to your question.

Or, in the case of two vectors, if ##v = λ \cdot w##. Now here is the point: where is ##λ## supposed to be from? Depending on that, you may get different answers to your question.
In the field, I know that. Still, it's a bit different to me with functions, so in order to verify they are dependent, I need to check for which ##x## they are dependent. Isn't that true?

Or, in the case of two vectors, if ##v = λ \cdot w##. Now here is the point: where is ##λ## supposed to be from? Depending on that, you may get different answers to your question.
Sorry, I corrected my answer above; I had a typo. I meant linearly independent in post 1.

Because we have a non-trivial solution.

In the field, I know that. Still, it's a bit different to me with functions, so in order to verify they are dependent, I need to check for which ##x## they are dependent. Isn't that true?
No. It has nothing to do with ##x##. You can simply put ##v=f(x)## and ##w=g(x)##. But ##ℝ## and ##ℝ(x)## are fields and ##ℝ[x]## is a ring. Each of them can be where the scalar ##λ## is from. So it's crucial here to say which is the domain for the scalars.

#### Dank2
No. It has nothing to do with ##x##. You can simply put ##v=f(x)## and ##w=g(x)##. But ##ℝ## and ##ℝ(x)## are fields and ##ℝ[x]## is a ring. Each of them can be where the scalar ##λ## is from. So it's crucial here to say which is the domain for the scalars.
OK, please show me how the two functions above are linearly dependent without giving ##x## any value.

##x \cdot g(x) = x \cdot \cos x##??

OK, please show me how the two functions above are linearly dependent without giving ##x## any value.
They are linearly independent over ##ℝ## because there are no scalars ##λ, μ \in ℝ##, not both zero, which fulfill ##0 = λ \cdot x\cos x + μ \cdot \cos x##. However, if your scalar domain is ##ℝ[x]##, the polynomial ring over ##ℝ##, or its quotient field ##ℝ(x)##, then ##f(x) = λ \cdot g(x)## with ##λ = x \in ℝ(x) \setminus \{0\}##, so ##f## and ##g## are linearly dependent. Therefore it is important to be precise about where the scalars are assumed to be from.

#### Dank2
They are linearly independent over ##ℝ## because there are no scalars ##λ, μ \in ℝ##, not both zero, which fulfill ##0 = λ \cdot x\cos x + μ \cdot \cos x##. However, if your scalar domain is ##ℝ[x]##, the polynomial ring over ##ℝ##, or its quotient field ##ℝ(x)##, then ##f(x) = λ \cdot g(x)## with ##λ = x \in ℝ(x) \setminus \{0\}##, so ##f## and ##g## are linearly dependent. Therefore it is important to be precise about where the scalars are assumed to be from.
Well, in my question the functions are not in a vector space; it says they are just functions from ##ℝ## to ##ℝ##.

Does that mean they are over ##ℝ##? But over ##ℝ## means it's a vector space?

And also, if the functions are only over ##ℝ##, why can't I use ##u = x##, since all the ##x## are real anyway?

Well, in my question the functions are not in a vector space; it says they are just functions from ##ℝ## to ##ℝ##.

Does that mean they are over ##ℝ##? But over ##ℝ## means it's a vector space?
If you have just functions, then they are nothing else. But then the term "linearly (in)dependent" has no meaning.

Linearity means you can add things and multiply them by some factor. Two linearly independent vectors essentially point in different directions, so neither is a multiple of the other.
Functions can be seen as vectors. We can add them: ##(f+g)(x) = f(x) + g(x)## and we can multiply them: ##(λ \cdot f)(x) = λf(x)##.

Your example shows that this multiplication, which essentially means stretching or compressing a vector (and thus not changing its direction), can decide whether they are linearly independent or not.
If ##x## is part of the values which are available for stretching (as in ##ℝ[x]##), then your functions are linearly dependent. If not, like in ##ℝ##, then there is no way, no real number, to stretch or compress ##x \cos x## into ##\cos x##.
The domain of the scalars is crucial in the definition of linearity.
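The pointwise operations described above can be sketched in a few lines of code (an editor's illustration, not part of the thread; the helper names `add` and `scale` are made up here):

```python
import math

# Functions as vectors: pointwise addition and scalar multiplication,
# (f+g)(x) = f(x) + g(x) and (lam*f)(x) = lam * f(x).
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(lam, f):
    return lambda x: lam * f(x)

f = lambda x: x * math.cos(x)   # the f from the thread
g = lambda x: math.cos(x)       # the g from the thread

# Build the function 2*x*cos(x) - cos(x) as a linear combination
# and check it pointwise:
h = add(scale(2.0, f), scale(-1.0, g))
print(abs(h(1.5) - (2 * 1.5 * math.cos(1.5) - math.cos(1.5))) < 1e-12)  # True
```

The scalars here are real numbers; the whole discussion below is about what changes when the scalar ##λ## is allowed to be a polynomial like ##x##.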

Sorry, I didn't get the whole picture yet.
##f(x) = x\cos x##, ##g(x) = \cos x##, ##h(x) = \sin x##

Is it right to say that they need to be linearly independent for all ##x##? By that I mean:

##a_1 f(x) + a_2 g(x) + a_3 h(x) = 0## ==> ##a_1 = a_2 = a_3 = 0##, for all ##x## in ##ℝ##.

If so, then my problem was taking only the particular ##x=0##.
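One standard way to make the "for all ##x##" condition concrete (a sketch added here, assuming real scalars): sample the identity at a few points and check that the resulting linear system has only the trivial solution. A nonzero determinant for some choice of sample points already forces ##a_1 = a_2 = a_3 = 0##, hence independence over ##ℝ##:

```python
import math

f = lambda x: x * math.cos(x)
g = lambda x: math.cos(x)
h = lambda x: math.sin(x)

# Sample a1*f(x) + a2*g(x) + a3*h(x) = 0 at three points.
# If the 3x3 matrix of sampled values is invertible, only a1 = a2 = a3 = 0 works.
xs = [1.0, 2.0, 3.0]
m = [[f(x), g(x), h(x)] for x in xs]
det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
       - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
       + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
print(abs(det) > 1e-9)  # True: only the trivial solution, so independent over R
```

Note the direction of the argument: a nonzero determinant proves independence, but a zero determinant at particular sample points would prove nothing by itself.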

OK, now regarding functions and linear dependency: do I even need to think about the values of ##x##?
No.

I mean, are they linearly independent for all ##x## in ##ℝ##?
One has nothing to do with the other.

E.g. ##x^2## and ##5x^2## are linearly dependent over ##ℝ## or ##ℚ## because you can stretch one by ##5## to get the other. But ##\sin x## and ##\cos x## are linearly independent because you can't stretch one to get the other. And these two functions have almost the same graph, as you know; they are just shifted, not stretched. They even have many points in common, where ##\cos x = \sin x##.
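A quick numeric way to see this stretching picture (an added sketch, not from the thread): for a dependent pair, the ratio of function values is the same scalar at every point; for an independent pair, the ratio varies.

```python
import math

xs = [0.5, 1.0, 2.0, 3.0]   # sample points avoiding zeros of the denominators

# Dependent pair: 5x^2 / x^2 is the same scalar (5) at every sample point.
ratios_dep = [5 * x**2 / x**2 for x in xs]
print(len({round(r, 9) for r in ratios_dep}) == 1)   # True: one scalar fits all x

# Independent pair: cos x / sin x changes with x, so no single scalar works.
ratios_ind = [math.cos(x) / math.sin(x) for x in xs]
print(len({round(r, 9) for r in ratios_ind}) == 1)   # False
```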

#### Dank2
No.

One has nothing to do with the other.

E.g. ##x^2## and ##5x^2## are linearly dependent over ##ℝ## or ##ℚ## because you can stretch one by ##5## to get the other. But ##\sin x## and ##\cos x## are linearly independent because you can't stretch one to get the other. And these two functions have almost the same graph, as you know; they are just shifted, not stretched. They even have many points in common, where ##\cos x = \sin x##.
I just remember a problem somewhere where we needed to check for dependency of functions, where we paid attention to the values and even said: if ##x =## something, then they are dependent. Or could I have forgotten?

So for dependencies of trig functions, all I need to work with is trig identities?

You may always build a vector from the origin ##(0,0)## to some fixed points ##(x_0,f(x_0))## and ##(x_0,g(x_0))## and ask whether these two specific vectors are linearly independent, or for which ##x_0## they are linearly independent and for which they are not. However, this is not the linear dependency of the functions as a whole. The only point where "for all ##x##" comes in is when we define equality for functions and equations of functions: ##f=g ⇔ f(x) = g(x) \; ∀ x##.

And you can also define "dependent" in another way. E.g. sometimes it is said that ##x## is the dependent variable or ##f## depends on ##x##.

Linearity, however, has always the form ##(af + bg)(x) = af(x) + bg(x)##, and yes, for all ##x## simultaneously.
You may of course vary where the ##a,b## are from, from ##\mathbb{R}## or ##\mathbb{R}[x]## or even where the ##x## may be taken from, e.g. from ##\mathbb{R}## or from ##[0,1]##. But linearity of functions refers to the addition ##f+g## and the "stretching" ##af##. "stretching" is called scalar multiplication and ##a## the scalar. (You scale the function values.)

Please don't confuse this with linear functions ##φ##. They are a special type of function where linearity holds for the argument, the variable: ##φ(ax_1 + bx_2) = aφ(x_1) + bφ(x_2)##.
But the cosine function in your example is not linear.
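This distinction can be checked directly (an added sketch): scaling the cosine function is a perfectly fine vector-space operation, but cosine itself is not a linear map, since ##\cos(x_1+x_2) \ne \cos x_1 + \cos x_2## in general.

```python
import math

x1, x2 = 1.0, 2.0

# cos is not a linear map: cos(x1 + x2) differs from cos(x1) + cos(x2).
lhs = math.cos(x1 + x2)
rhs = math.cos(x1) + math.cos(x2)
print(abs(lhs - rhs) > 1e-9)  # True: cosine is not linear in its argument
```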

#### Dank2
If not, like in ##ℝ##, then there is no way, no real number, to stretch or compress ##x \cos x## into ##\cos x##.

It still bothers me that if I plug any real number into ##x##, then I can find a scalar ##a_1 \ne 0## in ##ℝ## that satisfies:
##a_1 x\cos x + a_2 \cos x = 0##

Let's say ##x = 3##:

##a_1(3\cos 3) + a_2(\cos 3) = 0## ==> ##a_1 = 1, a_2 = -3##,

and we get a non-trivial combination for the zero vector.

You may always build a vector from the origin ##(0,0)## to some fixed points ##(x_0,f(x_0))## and ##(x_0,g(x_0))## and ask whether these two specific vectors are linearly independent, or for which ##x_0## they are linearly independent and for which they are not. However, this is not the linear dependency of the functions as a whole. The only point where "for all ##x##" comes in is when we define equality for functions and equations of functions: ##f=g ⇔ f(x) = g(x) \; ∀ x##.

And you can also define "dependent" in another way. E.g. sometimes it is said that ##x## is the dependent variable or ##f## depends on ##x##.

Linearity, however, has always the form ##(af + bg)(x) = af(x) + bg(x)##, and yes, for all ##x## simultaneously.
You may of course vary where the ##a,b## are from, from ##\mathbb{R}## or ##\mathbb{R}[x]## or even where the ##x## may be taken from, e.g. from ##\mathbb{R}## or from ##[0,1]##. But linearity of functions refers to the addition ##f+g## and the "stretching" ##af##. "stretching" is called scalar multiplication and ##a## the scalar. (You scale the function values.)

Please don't confuse this with linear functions ##φ##. They are a special type of function where linearity holds for the argument, the variable: ##φ(ax_1 + bx_2) = aφ(x_1) + bφ(x_2)##.
But the cosine function in your example is not linear.

On that thread they do plug different values into ##x## to show linear independence.

It still bothers me that if I plug any real number into ##x##, then I can find a scalar ##a_1 \ne 0## in ##ℝ## that satisfies:
##a_1 x\cos x + a_2 \cos x = 0##

Let's say ##x = 3##:

##a_1(3\cos 3) + a_2(\cos 3) = 0## ==> ##a_1 = 1, a_2 = -3##,

and we get a non-trivial combination for the zero vector.
Yes. And if you take ##x = 4## you will have to find another pair ##a_1, a_2##, and other pairs for the many other values of ##x##. But to be linearly dependent as functions, there has to be a single pair ##λ, μ##, not both zero, for which ##λ \cdot x \cos x + μ \cdot \cos x = 0##. You cannot find such a pair of real numbers. The equality has to hold for the functions themselves, so for all ##x## simultaneously.

You could find scalars if you allowed ##\mathbb{R}[x]## to be your scalar domain. Then ##1 \cdot (x \cos x) + (-x) \cdot \cos x = f(x) + (-x) g(x) = 0##.

So ##f## and ##g## are linearly independent functions over ##\mathbb{R}## (as scalar domain) and linearly dependent functions over ##\mathbb{R}[x]## (as scalar domain).

It is important to distinguish between a function and its values. The function is ##x → f(x)##, essentially a set of pairs; the function value is only ##f(x)##, a single value.
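The difference between pointwise and functionwise cancellation can be sketched numerically (an editor's illustration using the thread's ##f## and ##g##): at each fixed ##x_0##, the pair ##(a_1, a_2) = (1, -x_0)## kills the value at that one point, but every such pair fails elsewhere.

```python
import math

f = lambda x: x * math.cos(x)
g = lambda x: math.cos(x)

for x0 in [3.0, 4.0, 5.0]:
    a1, a2 = 1.0, -x0
    # The pair (1, -x0) makes the combination vanish at x0 itself...
    assert abs(a1 * f(x0) + a2 * g(x0)) < 1e-12
    # ...but the same pair already fails one step away,
    # where a1*f + a2*g evaluates to cos(x0 + 1) != 0:
    assert abs(a1 * f(x0 + 1) + a2 * g(x0 + 1)) > 1e-6

print("no single real pair (a1, a2) != (0, 0) works for every x")
```

Over ##\mathbb{R}[x]##, by contrast, the single "pair" ##(1, -x)## works for all ##x## at once, which is exactly the dependence over ##\mathbb{R}[x]## described above.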
