Now I am reading over a theorem which is very easy to understand, except for one small caveat. Basically: a set of functions is said to be linearly dependent on an interval I if there exist constants c1, c2, ..., cn, not all zero, such that

c1 f1(x) + c2 f2(x) + ... + cn fn(x) = 0

for every x in I.

The constants part is easy enough to understand: if certain constants make the combination zero for every x on the interval, then the functions are obviously increasing/decreasing at proportional rates, and are thus linearly dependent.

It's the "not all zero" part that seriously bugs me, since it means some of the constants can be 0, but not all of them. What if I had three functions: y = 2x, y = 3x, and y = x^99? Going by the theorem:

(-3/2)(2x) + (1)(3x) = 0

For the first two functions it works fine, and if I put in the third function:

(-3/2)(2x) + (1)(3x) + (0)(x^99) = 0

it still equals zero. And I sure as hell know that x^99 is not linearly dependent with the other functions. So what in the world is going on? It seems like the theorem should require that no constant be zero for it to have any validity.

Please, somebody shed some light on this. Thanks, guys.
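To show what I mean, here's a quick numerical spot-check I put together (just a sketch; the coefficients and functions are the ones from my example) confirming that the combination with the zero coefficient really does vanish at several sample points:

```python
# Spot-check that (-3/2)*(2x) + (1)*(3x) + (0)*(x**99)
# evaluates to (essentially) zero at a handful of sample points.
def combo(x):
    c = [-3/2, 1, 0]           # coefficients from my example; note the 0
    f = [2*x, 3*x, x**99]      # f1(x), f2(x), f3(x)
    return sum(ci * fi for ci, fi in zip(c, f))

samples = [-2.0, -0.5, 0.0, 0.3, 1.7]
print(all(abs(combo(x)) < 1e-9 for x in samples))  # True
```

So numerically the identity holds everywhere I check, which is exactly why the "not all zero" wording confuses me.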