Applications of Integral Calculus to Root Solving

lapo3399
As a Grade 12 student who is often required to find the roots of quadratics for math, physics, and chemistry problems, I wondered whether there are any methods for solving these problems other than the quadratic formula. I was pondering the implications of calculus in algebra and, although this may seem much more complicated than the quadratic formula itself, I have determined something interesting regarding roots and integrals.

If the area under a curve is found for a function f(x)

A = \int_{a}^{b}\ f(x) dx

which may also be represented as

A(x) = \int_{a}^{a+c}\ f(x) dx

where c is the difference between a and b, then

A(x) = F(a+c) - F(a)

As A(x) will be maximized when the total area between the two roots is found (assuming no improper integrals or infinite areas), then

a(x) = f(a+c) - f(a)
0 = f(a+c) - f(a)
f(a) = f(a+c)

It is rather obvious that this means that the function has equal values (0, as the two x values lie on the x-axis) at f(a), the first root, and f(a+c), the second root. But I must ask something that has been puzzling me concerning this rather meaningless conclusion: if the fact that the area function is maximized causes f(a) to equal f(a+c), and considering that, for example, a quadratic has an infinite number of solutions to f(a) = f(a+c) that are not restricted to the roots of the equation, why should the maximization condition be necessary to produce the equation f(a) = f(a+c)?
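
To make that puzzle concrete, here is an example of my own (the specific quadratic is only for illustration): take f(x) = x^2 - 1, whose roots are x = -1 and x = 1. Then f(a) = f(a+c) becomes

a^2 - 1 = (a+c)^2 - 1
0 = c(2a + c)

so for any fixed c the choice a = -c/2 satisfies f(a) = f(a+c), even though a and a+c coincide with the actual roots only when c = 2 and a = -1.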

The best explanation that I have for this is that, assuming c remains constant, the rate of change in area on the left will be the negation of the rate of change on the right, and so there is no maximum for the area function as I have defined it. That is, if I were to take a quadratic and decrease a, the area concerned would change by a certain amount, but the change at a+c would compensate for this with an equal area change on the right, thus keeping the area constant. The only way I see of defining this better is to treat c as non-constant, since it obviously changes depending on the function.
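
To sketch that claim more precisely (still holding c constant and treating a as the variable; this is my own working, not something taken from a text):

\frac{dA}{da} = \frac{d}{da}\left[ F(a+c) - F(a) \right] = f(a+c) - f(a)

so the gain at the right edge cancels the loss at the left edge exactly when f(a) = f(a+c); for the example f(x) = x^2 - 1 above, that happens whenever the interval [a, a+c] is centred on the vertex at x = 0.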

Please provide any insight that you have!

Thanks,
lapo3399
 
lapo3399 said:
If the area under a curve is found for a function f(x)

A = \int_{a}^{b}\ f(x) dx

which may also be represented as

A(x) = \int_{a}^{a+c}\ f(x) dx

where c is the difference between a and b, then

A(x) = F(a+c) - F(a)

A does not depend on x, but rather on a and c. I'll assume a is constant and c is what you're after.

lapo3399 said:
As A(x) will be maximized when the total area between the two roots is found (assuming no improper integrals or infinite areas), then

a(x) = f(a+c) - f(a)
0 = f(a+c) - f(a)
f(a) = f(a+c)

Lost me there. I'll assume a is the derivative of A with respect to c (still not sure where x comes in here). But then F(a) is a constant, and so we actually have

a(c) = f(a+c)
0 = f(a+c)

which is practically a tautology.
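
To spell that out with a made-up example quadratic, say f(x) = x^2 - 1 with a held fixed:

\frac{dA}{dc} = f(a+c) = (a+c)^2 - 1

and setting this to zero just asks for a + c = \pm 1, i.e. it hands back the original root-finding problem.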
 
Hi Lapo. One problem of course is that your method is fundamentally a circular argument (integrating the function and then setting the derivative to zero). Had you made no errors in the process, then your "solution" could only have been to arrive back at the starting point, that is, with the original equation f(x) = 0.

A(x) = F(a+c) - F(a)
There's a problem right there, as this is really a function of both "a" and "c". If you wanted to proceed like this you should have written,

A(x1,x2) = F(x2) - F(x1).

You then would have taken the two partial derivatives and set them to zero, each one yielding nothing but the original problem: f(x1) = 0, f(x2) = 0.
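
Written out as a sketch (with x1 and x2 as the two endpoints):

\frac{\partial A}{\partial x_1} = -f(x_1) = 0, \qquad \frac{\partial A}{\partial x_2} = f(x_2) = 0

so each stationarity condition is nothing more than the original equation evaluated at one endpoint.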

Easier would have been to simply write A(x) = F(x) + const and then differentiate that to give f(x)=0.
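
In the same spirit, that one-variable version just gives

\frac{d}{dx}\left[ F(x) + \text{const} \right] = f(x) = 0

which is again the original equation.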
 