# Understanding Linear Dependence in ODEs: Exploring the Not All Zero Condition

• Noesis
In summary: We define the nth derivative as a linear operator: if u and v are continuous functions and a and b are constants, then d^n(au + bv)/dx^n = a(d^n u/dx^n) + b(d^n v/dx^n). An nth order linear homogeneous differential equation is one of the form d^n u/dx^n + a_(n-1) d^(n-1) u/dx^(n-1) + ... + a_0 u = 0. The fundamental theorem is that the set of all solutions to that differential equation forms an n-dimensional vector space.
Noesis
Now I am reading over a theorem, which is very easy to understand, except for a small caveat.

Basically:

A set of functions is said to be linearly dependent on an interval I if there exist constants c1, c2, ..., cn, not all zero, such that

c1f1(x) + c2f2(x) + ... + cnfn(x) = 0

Well, the constant part is easy enough to understand: if for every x on the interval a certain choice of constants makes the combination vanish, then the functions are obviously increasing/decreasing at proportional rates, and are thus linearly dependent.

Now, the not all zero part seriously bugs me.

Since this means some of the constants can be 0...but not all of them.

What if I had three functions:

y = 2x, y = 3x, and y = x^99

Going by the theorem:

(-3/2)*(2x) + (1)(3x) = 0

For the first two functions it works fine...and if I put in the third function:

(-3/2)*(2x) + (1)(3x) + (0)(x^99) = 0

It still equates to zero. And I sure as hell know that x^99 is not linearly dependent with the other functions.
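As a quick sanity check (a plain-Python sketch; the helper name `combo` is mine, not from the thread), the combination with the zero coefficient really does vanish at every sample point:

```python
# Evaluate the linear combination c1*f1(x) + c2*f2(x) + c3*f3(x)
# for the three functions above, with the third coefficient set to 0.

def combo(x, coeffs, funcs):
    """Evaluate the sum of c * f(x) over paired coefficients and functions."""
    return sum(c * f(x) for c, f in zip(coeffs, funcs))

funcs = [lambda x: 2 * x, lambda x: 3 * x, lambda x: x ** 99]
coeffs = [-1.5, 1.0, 0.0]  # not all zero, but the last one IS zero

# The combination vanishes for every sampled x, so by the definition
# above the three functions form a linearly dependent set.
assert all(combo(x, coeffs, funcs) == 0 for x in range(-5, 6))
```

Only the first two coefficients do any work here; the zero in front of x^99 is exactly the loophole the "not all zero" wording permits.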

So what in the world is going on? It seems the theorem should require that no constants be zero for it to have any validity.

Please somebody shed light on this.

Thanks guys.

Ok, now I have a second question which is sure to expose the mistake in my thinking.

One of the problems has three functions:

y = x, y = x^2, and y = 4x - 3x^2

They are apparently dependent...which makes sense because:

(-4)(x) + (3)(x^2) + (1)(4x - 3x^2) = 0

But I don't understand it.

y = x is a straight line, a totally linear equation. y = x^2 is a parabola, an equation that is definitely not linear.

HOW can they be linearly dependent? I imagine somehow the third function is changing things up, but I don't understand how.

You're right that y = x^99 is linearly independent from the other two. So you can choose, say, y = 2x and y = x^99 to form a basis of the space of functions, which means the other functions in the space can be written as a linear combination of the basis functions; in this case, 3x = (3/2)*2x + (0)*x^99.

The functions don't have to be linear for them to be linearly independent. In your second example, you would have to choose x and x^2 as the basis, because the dimension of that space is 2, and x and x^2 are linearly independent. Then the function 4x - 3x^2 can be written in terms of the basis functions, which you have already done.

Noesis said:
Ok, now I have a second question which is sure to expose the mistake in my thinking.

One of the problems has three functions:

y = x, y = x^2, and y = 4x - 3x^2

They are apparently dependent...which makes sense because:

(-4)(x) + (3)(x^2) + (1)(4x - 3x^2) = 0

But I don't understand it.

y = x is a straight line, a totally linear equation. y = x^2 is a parabola, an equation that is definitely not linear.

HOW can they be linearly dependent? I imagine somehow the third function is changing things up, but I don't understand how.

You should look at the problem from a more formal point of view. That should make it clear.

So, you have the set of functions {x, x^2, 4x-3x^2} and you want to know if the set (i.e. the functions) is linearly dependent or not. Well, you can try to test the independence of the functions first. If they are not independent, then you'll know they are dependent.

So, write down the equation:
c1x + c2x^2 + c3(4x - 3x^2) = 0, which must hold for every real number x, and ask whether it forces all the constants to be zero. By rearranging, you get: x^2(c2 - 3c3) + x(c1 + 4c3) = 0. Since this is the zero polynomial, its coefficients must vanish, so you obtain a system of equations:

c2 -3c3 = 0
c1 + 4c3 = 0.

Now it is clear that this system has a whole parametric family of solutions (not only the trivial solution c1 = c2 = c3 = 0, which would be the only one if the set were linearly independent), and so you may conclude that the set of functions {x, x^2, 4x-3x^2} is linearly dependent.
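To make the parametric solution concrete, here is a small check (plain Python; the helper name `coefficients` is mine): every choice of the free parameter t yields a coefficient triple that makes the combination vanish identically.

```python
from fractions import Fraction

# Parametric solution of the system  c2 - 3*c3 = 0,  c1 + 4*c3 = 0:
# pick c3 = t freely, then c2 = 3*t and c1 = -4*t.
def coefficients(t):
    return (-4 * t, 3 * t, t)

f1 = lambda x: x
f2 = lambda x: x ** 2
f3 = lambda x: 4 * x - 3 * x ** 2

for t in (1, 2, Fraction(1, 3)):
    c1, c2, c3 = coefficients(t)
    # A nontrivial combination that vanishes at every sampled point.
    assert all(c1 * f1(x) + c2 * f2(x) + c3 * f3(x) == 0
               for x in range(-10, 11))
```

Setting t = 1 recovers the triple (-4, 3, 1) used in the equation above.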

Thanks for the help guys.

I understand the formalism in it, I just don't understand why this is so.

For example, how can y = x, and y = x^2 be linearly dependent just because that third function was thrown in?

I guess I must be missing what it truly means to be linearly dependent.

I also don't understand the caveat of having one of the constants equal zero.

The formalism helps to see it more clearly, but those two questions still bug me.

Thanks again for the help guys.

Noesis said:
Thanks for the help guys.

I understand the formalism in it, I just don't understand why this is so.

For example, how can y = x, and y = x^2 be linearly dependent just because that third function was thrown in?

I guess I must be missing what it truly means to be linearly dependent.

I also don't understand the caveat of having one of the constants equal zero.

The formalism helps to see it more clearly, but those two questions still bug me.

Thanks again for the help guys.

I may not sound very helpful right now, but it's best for you to go through some basic definitions and to work your way through some examples. Actually, the more examples you go through, the better you'll understand.

It's a really good idea to study Linear Algebra before Differential Equations, since most of a first course in Differential Equations is about linear differential equations, and the whole theory behind them is Linear Algebra!

In linear algebra, we say that a set of vectors, {v1, v2, ..., vn}, is independent if and only if the only set of scalars a1, a2, ..., an such that a1v1 + a2v2 + ... + anvn = 0 is a1 = a2 = ... = an = 0. A set of vectors, then, is dependent if that is not true.
The fundamental theorem for linear differential equations is that the set of all solutions of an nth order linear homogeneous differential equation forms an n-dimensional vector space: so all solutions can be written as linear combinations of n independent solutions.

Notice that if any subset of a set of functions (or vectors) is dependent, then the whole set is dependent.
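That fundamental theorem can be illustrated numerically (a plain-Python sketch; the equation y'' + y = 0 is my assumed example, not one from the thread): every linear combination of the two independent solutions cos(x) and sin(x) again satisfies the equation, which is what it means for the solution set to be a 2-dimensional vector space.

```python
import math

# Any combination y = a*cos(x) + b*sin(x) should satisfy y'' + y = 0.
def y(x, a=2.0, b=-3.0):
    return a * math.cos(x) + b * math.sin(x)

h = 1e-4  # step for a central-difference second derivative
for x in (0.0, 0.7, 1.9, 3.1):
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2  # approximates y''(x)
    assert abs(ypp + y(x)) < 1e-5  # y'' + y is (numerically) zero
```

The particular coefficients a = 2, b = -3 are arbitrary; any pair works, which is the point.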

Ok...I think I understand, but if it works this way it's kind of strange in my opinion.

Three functions once again,

y = ax, y = bx, and y = x^n (n is not equal to 1)

Intuitively I can see that y = ax and y = bx are linearly dependent, and of course y = x^n is linearly independent from both y = ax, and y = bx.

This is correct right?

I imagine this is verified by seeing that the only solution to these equations:

(c1)(ax) + (c2)(x^n) = 0

Would be for the two constants to be zero. Likewise of course for y = bx.

BUT, and here is my BIG PROBLEM, when I group them all together, they are supposedly linearly dependent!

(c1)(ax) + (c2)(bx) + (c3)(x^n) = 0

This is because c3 can be chosen to be zero, thus in essence cancelling it out of the equation altogether.

So when you say that these functions are linearly dependent...and please, here is where you must correct me if I'm wrong...it means that at least one function is linearly dependent on ALL OF THE OTHERS?

Because it obviously can't be that they are all individually linearly dependent, since I just showed y = bx and y = x^n to be linearly independent...so we must take all of the functions into consideration.

Almost as if all of the little functions meld into one giant function?

Then of course I can input ANYTHING in there,

y = sin(cos(tan(e^01298392103)))x^-43

and just putting a zero in front of it, and adding it into my calculation, I can say that the functions are linearly dependent.

Is this way of thinking true? This has been my big issue.

Thanks guys...and I apologize for really sticking this problem so much.

Please go through my example...but let me try to summarize what I meant in a more cohesive way:

When I calculate linear dependence of a set via this method, I am not really checking the linear dependency of each element in this set against each other, but rather the linear dependency of each individual element against the entire set (excluding itself of course).

Is this correct?

Well I've pretty much gone over it many times and it finally clicked, as Radou said it would, hah.

My problem was that it seemed so pointless...but now I understand that in the DE world, this is a very important problem. By doing this we can make sure that we indeed have genuinely different solutions.

It should be given another name in my opinion, Linear Independence of a Set or something to indicate that it has to do with the entire set and not individual elements.

Thanks for the help guys in steering me in the right direction.

Actually, it is. We talk about the independence or dependence of a set of functions. By the way, as I pointed out before, this is not just in the "D.E. world"- this is basically Linear Algebra. Linear Algebra should always be a pre-requisite for Differential Equations.

I wish I would've known.

Someone should inform my university, hah.

## 1. What is linear dependence in ODEs?

Linear dependence in ODEs refers to a relationship among a set of functions, typically solutions of a system of ordinary differential equations (ODEs). It occurs when one or more of the functions or equations in the system can be expressed as a linear combination of the others.

## 2. Why is linear dependence important in ODEs?

Linear dependence is important in ODEs because it affects the uniqueness and stability of solutions. If a system of ODEs is linearly dependent, it may have infinitely many solutions or may not have a unique solution at all. This can make it difficult to accurately model and predict the behavior of the system.

## 3. How can I determine if a system of ODEs is linearly dependent?

A system of ODEs is linearly dependent if any of the equations can be written as a linear combination of the other equations. This can be determined by using techniques such as Gaussian elimination or calculating the determinant of the coefficient matrix; for sets of solution functions, the analogous tool is the Wronskian determinant.
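As a sketch of the determinant test (plain Python; the sample points and the helper `det3` are my own choices), one can sample each function at as many points as there are functions and check the resulting square matrix:

```python
from fractions import Fraction

# Sample {x, x^2, 4x - 3x^2} at three points. If the columns of the
# resulting 3x3 matrix were independent, the determinant would be nonzero.
funcs = [lambda x: x, lambda x: x ** 2, lambda x: 4 * x - 3 * x ** 2]
xs = [Fraction(1), Fraction(2), Fraction(3)]

M = [[f(x) for f in funcs] for x in xs]

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

assert det3(M) == 0  # consistent with {x, x^2, 4x - 3x^2} being dependent
```

Note that a zero determinant at a few sample points is only evidence of dependence, not proof; a nonzero value, however, would prove independence.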

## 4. What are the consequences of linear dependence in ODEs?

The consequences of linear dependence in ODEs include a lack of uniqueness and stability in solutions, making it difficult to predict the behavior of the system. It can also lead to difficulties in solving the system and obtaining accurate results.

## 5. How can linear dependence be avoided in ODEs?

To avoid linear dependence in ODEs, one must ensure that the equations in the system are independent and cannot be expressed as a linear combination of each other. This can be achieved by carefully choosing the variables and equations in the system and by using techniques such as non-dimensionalization.
