# Applications of Integral Calculus to Root Solving

1. Mar 31, 2007

### lapo3399

As a Grade 12 student who is often required to find the roots of quadratics for math, physics, and chemistry problems, I wondered whether there are any methods for solving these problems other than the quadratic formula. I was pondering the implications of calculus in algebra and, although this may seem much more complicated than the quadratic formula itself, I have determined something interesting regarding roots and integrals.

If the area under a curve is found for a function f(x)

$$A = \int_{a}^{b}\ f(x) dx$$

which may also be represented as

$$A(x) = \int_{a}^{a+c}\ f(x) dx$$

where c is the difference between a and b, then

$$A(x) = F(a+c) - F(a)$$

As A(x) will be maximized when the total area between the two roots is found (assuming no improper integrals or infinite areas), we have

$$a(x) = f(a+c) - f(a)$$
$$0 = f(a+c) - f(a)$$
$$f(a) = f(a+c)$$

It is rather obvious that this means the function has equal values (namely 0, as the two x-values lie on the x-axis) at f(a), the first root, and f(a+c), the second root. But I must ask something that has been puzzling me about this rather unremarkable conclusion: if the maximization of the area function forces f(a) to equal f(a+c), and if, for example, a quadratic has infinitely many solutions to f(a) = f(a+c) that are not restricted to its roots, why should the maximization condition be necessary to produce the equation f(a) = f(a+c)?

The best explanation that I have is that, assuming c remains constant, the rate of change of area on the left will be the negation of the rate of change on the right, and so there is no maximum for the area function as I have defined it. That is, if I were to take a quadratic and decrease a, the area concerned would change by a certain amount, but the change at a+c would compensate for this with an equal area change on the right, keeping the area constant. The only way I see of defining this better is to treat c as non-constant, as it will obviously change depending on the function.
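To make my puzzle concrete, here is a small numerical sketch. The quadratic f(x) = x² − 1 (roots at ±1) and the helper names are just my own choices for illustration:

```python
# Numerical sketch: for f(x) = x^2 - 1 (roots at -1 and 1), slide a window
# of fixed width c along the x-axis and watch where the windowed area
# A(a) = integral from a to a+c of f(x) dx is stationary.

def f(x):
    return x * x - 1

def F(x):
    # An antiderivative of f
    return x ** 3 / 3 - x

def A(a, c):
    # Area under f from a to a+c, via the Fundamental Theorem of Calculus
    return F(a + c) - F(a)

def dA_da(a, c, h=1e-6):
    # Numerical derivative of A with respect to a, with c held fixed;
    # analytically this equals f(a+c) - f(a)
    return (A(a + h, c) - A(a - h, c)) / (2 * h)

# With c = 2 (exactly the gap between the roots), A is stationary at
# a = -1, where f(a) = f(a+c) = 0: the window spans the roots.
print(abs(dA_da(-1.0, 2.0)))   # ~0

# But with c = 1, A is stationary at a = -0.5 (the window symmetric
# about the vertex), where f(a) = f(a+c) yet neither endpoint is a root.
print(abs(dA_da(-0.5, 1.0)))   # ~0
print(f(-0.5), f(0.5))         # equal, but not zero
```

So the stationarity condition only picks out the roots when c happens to equal the gap between them, which is exactly what puzzles me.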

Please provide any insight that you have!

Thanks,
lapo3399

2. Mar 31, 2007

### Moo Of Doom

A does not depend on x, but rather on a and c. I'll assume a is constant and c is what you're after.

Lost me there. I'll assume a(x) is meant to be the derivative of A with respect to c (I'm still not sure where x comes in). But then F(a) is a constant, and so we actually have

$$a(c) = f(a+c)$$
$$0 = f(a+c)$$

which is practically a tautology.

3. Mar 31, 2007

### uart

Hi Lapo. One problem of course is that your method is fundamentally a circular argument (integrating the function and then setting the derivative to zero). Had you made no errors in the process, your "solution" could only have arrived back at the starting point, that is, at the original equation f(x) = 0.

There's a problem right there, as this is really a function of both "a" and "c". If you wanted to proceed like this, you should have written

$$A(x_1,x_2) = F(x_2) - F(x_1)$$

You then would have taken the two partial derivatives and set them to zero, each one yielding nothing but the original problem: $f(x_1) = 0$, $f(x_2) = 0$.

Easier would have been to simply write $$A(x) = F(x) + \text{const}$$ and then differentiate that to give f(x) = 0.
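The circularity can even be checked symbolically; here's a sketch (SymPy is my choice of tool here, and the example quadratic is arbitrary):

```python
import sympy as sp

x, x1, x2 = sp.symbols('x x1 x2')

# An arbitrary example quadratic and its antiderivative
f = x**2 - 1
F = sp.integrate(f, x)

# Treat the area between two free endpoints as a function of both
A = F.subs(x, x2) - F.subs(x, x1)

# Setting the partial derivatives to zero just reproduces f = 0 at each
# endpoint, i.e. the original root-finding problem: dA/dx1 = -f(x1) and
# dA/dx2 = f(x2).
print(sp.simplify(sp.diff(A, x1) + f.subs(x, x1)))  # 0
print(sp.simplify(sp.diff(A, x2) - f.subs(x, x2)))  # 0
```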

Last edited: Mar 31, 2007