# Linear Dependence in ODEs

1. Nov 2, 2006

### Noesis

Now I am reading over a theorem, which is very easy to understand, except for a small caveat.

Basically:

A set of functions is said to be linearly dependent on an interval I if there exist constants c1, c2, ..., cn, not all zero, such that

c1f1(x) + c2f2(x) + ... + cnfn(x) = 0 for every x in I.

Well, the constant part is easy enough to understand: if for every x on the interval certain constants make the combination equal zero, then the functions are obviously increasing/decreasing at proportional rates, and are thus linearly dependent.

Now, the not all zero part seriously bugs me.

Since this means some of the constants can be 0...but not all of them.

What if I had three functions:

y = 2x, y = 3x, and y = x^99

Going by the theorem:

(-3/2)*(2x) + (1)(3x) = 0

For the first two functions it works fine...and if I put in the third function:

(-3/2)*(2x) + (1)(3x) + (0)(x^99) = 0

It still equates to zero. And I sure as hell know that x^99 is not linearly dependent with the other functions.

So what in the world is going on? It seems the theorem should require that no constants be zero for it to have any validity.

Please somebody shed light on this.

Thanks guys.
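As a quick numerical sanity check of the example above, here is a sketch in Python with NumPy (my choice of tools, since the thread fixes no language): sample the functions at a few points and compare the rank of the resulting matrix with the number of functions. Sampling at finitely many points can only suggest, not prove, dependence over a whole interval.

```python
import numpy as np

# Sample each function at several points in [0, 1]; the sampled
# functions are linearly dependent exactly when the columns of this
# matrix are, i.e. when the rank is less than the number of columns.
xs = np.linspace(0.0, 1.0, 10)
A = np.column_stack([2 * xs, 3 * xs, xs ** 99])

rank = np.linalg.matrix_rank(A)
print(rank)  # 2, which is < 3 functions, so {2x, 3x, x^99} is dependent
```

The rank comes out 2, not 3: the pair {2x, 3x} already forces dependence, with c3 = 0 handling x^99. That is exactly the "not all zero" situation the definition allows.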

2. Nov 2, 2006

### Noesis

Ok, now I have a second question which is sure to expose the mistake in my thinking.

One of the problems has three functions:

y = x, y = x^2, and y = 4x - 3x^2

They are apparently dependent...which makes sense because:

(-4)(x) + (3)(x^2) + (1)(4x - 3x^2) = 0

But I don't understand it.

y = x is a straight line, a totally linear equation. y = x^2 is a parabola, an equation that is definitely not linear.

HOW can they be linearly dependent? I imagine somehow the third function is changing things up, but I don't understand how.

3. Nov 3, 2006

### Tomsk

You're right that y = x^99 is linearly independent from the other two. So you can choose, say, y = 2x and y = x^99 to form a basis of the space spanned by the functions, which means the other functions in the space can be written as linear combinations of the basis functions; in this case, 3x = (3/2)*(2x) + (0)*(x^99).

The functions don't have to be linear for them to be linearly independent. In your second example, you could choose x and x^2 as the basis, because the dimension of that space is 2 and x and x^2 are linearly independent. Then the function 4x - 3x^2 can be written in terms of the basis functions, which you have already done.

4. Nov 3, 2006

You should look at the problem from a more formal point of view. That should make it clear.

So, you have the set of functions {x, x^2, 4x-3x^2} and you want to know if the set (i.e. the functions) is linearly dependent or not. Well, you can try to test the independence of the functions first. If they are not independent, then you'll know they are dependent.

So, test whether there are constants, not all zero, with
c1x + c2x^2 + c3(4x - 3x^2) = 0 for every real x. By rearranging, you get x^2(c2 - 3c3) + x(c1 + 4c3) = 0, which must hold for every real number x. This is therefore the zero polynomial, so its coefficients must vanish, and you obtain a system of equations:

c2 -3c3 = 0
c1 + 4c3 = 0.

Now it is clear that this system has a one-parameter family of solutions (i.e. not just the unique solution c1 = c2 = c3 = 0, which would be required for the set to be linearly independent), and so you may conclude that the set of functions {x, x^2, 4x - 3x^2} is linearly dependent.
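The same computation can be sketched symbolically with SymPy (again my own choice of tool, assuming it is available):

```python
import sympy as sp

c1, c2, c3, x = sp.symbols('c1 c2 c3 x')

# c1*x + c2*x^2 + c3*(4x - 3x^2) must vanish identically, so every
# coefficient of the expanded polynomial in x has to be zero.
expr = sp.expand(c1*x + c2*x**2 + c3*(4*x - 3*x**2))
system = [expr.coeff(x, 1), expr.coeff(x, 2)]  # c1 + 4*c3 and c2 - 3*c3

# Solving for c1 and c2 leaves c3 free: a one-parameter family of
# nontrivial solutions, hence the set is linearly dependent.
print(sp.solve(system, [c1, c2]))  # c1 = -4*c3, c2 = 3*c3, with c3 free
```

Picking, say, c3 = 1 recovers the combination (-4)(x) + (3)(x^2) + (1)(4x - 3x^2) = 0 from the earlier post.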

5. Nov 3, 2006

### Noesis

Thanks for the help guys.

I understand the formalism in it, I just don't understand why this is so.

For example, how can y = x, and y = x^2 be linearly dependent just because that third function was thrown in?

I guess I must be missing what it truly means to be linearly dependent.

I also don't understand the caveat of having one of the constants equal zero.

The formalism helps to see it more clearly, but those two questions still bug me.

Thanks again for the help guys.

6. Nov 3, 2006

### radou

I may not sound very helpful right now, but it's best for you to go through some basic definitions, and then work your way through some examples. Actually, the more examples you go through, the better you'll understand.

7. Nov 3, 2006

### HallsofIvy

Staff Emeritus
It's a really good idea to study Linear Algebra before Differential Equations, since most of a first course in Differential Equations is "linear differential equations" and the whole theory behind them is Linear Algebra!

In linear algebra, we say that a set of vectors, {v1, v2, ..., vn} is independent if and only if the only set of scalars a1, a2, ..., an, such that a1v1+ a2v2+ ...+ anvn= 0 is a1= a2= ...= an= 0. A set of vectors, then, is dependent if that is not true.
The fundamental theorem for linear differential equations is that the set of all solutions of an nth order linear homogeneous differential equation forms an n-dimensional vector space: so all solutions can be written as linear combinations of n independent solutions.

Notice that if any subset of a set of functions (or vectors) is dependent, then the whole set is dependent.
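That last point, a dependent subset forcing the whole set to be dependent, can be illustrated numerically. This is a rough sketch in Python with NumPy (my own choice, not anything from the thread), and sampling at finitely many points only suggests what a proof would establish:

```python
import numpy as np

def is_dependent(*funcs):
    # Sample each function at a few points; the sampled set is linearly
    # dependent exactly when the rank of the sample matrix is smaller
    # than the number of functions.
    xs = np.linspace(0.0, 1.0, 8)
    A = np.column_stack([f(xs) for f in funcs])
    return bool(np.linalg.matrix_rank(A) < A.shape[1])

f = lambda t: t          # x
g = lambda t: t ** 2     # x^2
h = lambda t: 2 * t      # 2x, a multiple of f

print(is_dependent(f, g))     # False: {x, x^2} is independent
print(is_dependent(f, h))     # True:  {x, 2x} is a dependent subset
print(is_dependent(f, h, g))  # True:  the whole set inherits the dependence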

8. Nov 3, 2006

### Noesis

Ok...I think I understand, but if it works this way it's kind of strange in my opinion.

Three functions once again,

y = ax, y = bx, and y = x^n (n is not equal to 1)

Intuitively I can see that y = ax and y = bx are linearly dependent, and of course y = x^n is linearly independent from both y = ax, and y = bx.

This is correct right?

I imagine this is verified by seeing that the only solution to these equations:

(c1)(ax) + (c2)(x^n) = 0

Would be for the two constants to be zero. Likewise of course for y = bx.

BUT, and here is my BIG PROBLEM, when I group them all together, they are supposedly linearly dependent!

(c1)(ax) + (c2)(bx) + (c3)(x^n) = 0

This is because c3 can be chosen to be zero, in essence canceling it out of the equation altogether.

So when you say that these functions are linearly dependent...and please here is where you must correct me if I'm wrong...it means that any one function is linearly dependent on ALL OF THE OTHERS?

Because it obviously can't be that they are all individually linearly dependent, since I just showed y = bx and y = x^n to be linearly independent...so we must take all of the functions into consideration.

Almost as if all of the little functions meld into one giant function?

Then of course I can input ANYTHING in there,

y = sin(cos(tan(e^01298392103)))x^-43

and just putting a zero in front of it, and adding it into my calculation, I can say that the functions are linearly dependent.

Is this way of thinking true? This has been my big issue.

Thanks guys... and I apologize for sticking on this problem so much.
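The grouping effect described in this post checks out numerically. Here a = 2, b = 3, n = 5 are hypothetical choices for illustration, and Python with NumPy is my own assumption, since the thread uses no particular language:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 8)
f1, f2, f3 = 2 * xs, 3 * xs, xs ** 5   # ax, bx, x^n with a=2, b=3, n=5

def rank(*cols):
    # Rank of the matrix whose columns are the sampled functions.
    return np.linalg.matrix_rank(np.column_stack(cols))

print(rank(f1, f3), rank(f2, f3))  # 2 2: each pair {ax, x^n}, {bx, x^n} is independent
print(rank(f1, f2, f3))            # 2: rank 2 < 3 functions, so the full set is dependent
```

So yes: dependence is a property of the whole set, and a pairwise-independent function like x^n can sit inside a dependent set because the dependence among 2x and 3x already supplies a nontrivial combination, with the coefficient on x^n set to zero.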

9. Nov 3, 2006

### Noesis

Please go through my example...but let me try to summarize what I meant in a bit more cohesive word:

When I calculate linear dependence of a set via this method, I am not really checking the linear dependency of each element in this set against each other, but rather the linear dependency of each individual element against the entire set (excluding itself of course).

Is this correct?

10. Nov 5, 2006

### Noesis

Well I've pretty much gone over it many times and it finally clicked, as Radou said it would, hah.

My problem was that it seemed so pointless... but now I understand that in the DE world this is a very important question. By checking this we can make sure that we indeed have genuinely different solutions.

It should be given another name in my opinion, Linear Independence of a Set or something to indicate that it has to do with the entire set and not individual elements.

Thanks for the help guys in steering me in the right direction.

11. Nov 6, 2006

### HallsofIvy

Staff Emeritus
Actually, it is. We talk about the independence or dependence of a set of functions. By the way, as I pointed out before, this is not just in the "D.E. world"- this is basically Linear Algebra. Linear Algebra should always be a pre-requisite for Differential Equations.

12. Nov 7, 2006

### Noesis

I wish I would've known.

Someone should inform my university, hah.