Undergrad How can you prove the integral without knowing the derivative?

Summary
The discussion revolves around the challenge of proving the integral of a function, specifically x-cubed, without relying on the relationship between differentiation and integration. The original poster acknowledges the connection between the two concepts but seeks to understand how to derive integrals independently. Participants highlight that it is possible to compute definite integrals using Riemann sums without invoking the fundamental theorem of calculus. An example is provided, demonstrating how to calculate the definite integral of x-squared from first principles. The conversation emphasizes the distinction between indefinite integrals and the process of deriving them without prior knowledge of derivatives.
Trying2Learn
TL;DR: How can you prove the integral without knowing the derivative?
Hello

(A continued best wishes to all, in these challenging times and a repeated 'thank you' for this site.)

OK, I have read that Newton figured out that differentiation and integration are opposites of each other.
(This is not the core of my question, so if that is wrong, please let it go.)

I can work out the derivative of, say, f(x) = x-squared.

However, I am having trouble proving the integral of, say, x-cubed.

If it is true that it was 'discovered' that differentiation and integration are opposites of each other (that integration is anti-differentiation)...

If that is true, and people really were studying differentiation and integration SEPARATELY, then I should be able to derive the integral of, say, x-cubed on its own.

But I cannot do it without the first fundamental theorem of calculus (which already assumes that the two are opposites).

First, is this a ridiculous question?

If not, could someone work out one example? Something simple, like the indefinite integral of, say, x-cubed divided by 3

Thank you
 
You can look at the methods themselves, for example. A derivative is the quotient ##\dfrac{f(p+h)-f(p)}{h} \longrightarrow \dfrac{df}{dx}## where ##h \longrightarrow dx## gets smaller and smaller and ##f(p+h)-f(p) \longrightarrow df##, which is the opposite of a Riemann sum ##\displaystyle{\sum f(x_h) \cdot h}##, where the interval lengths ##h \longrightarrow dx## again. One is a quotient and the other a product.

You can also have a look at easy examples like ##f(x)=c,\; f(x)=x## or ##f(x)=x^2##, observe the connection between the two, and then think about a generalisation; a small numerical sketch of that idea follows below.
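Here is a minimal numerical sketch of that suggestion (my own illustration, not part of the reply): it evaluates the right-endpoint Riemann sum of each easy example on ##[0,x]## and compares it with ##cx##, ##x^2/2## and ##x^3/3##. The function name and the choice ##x=2## are arbitrary.

```python
# Minimal sketch, plain Python only.
# Right-endpoint Riemann sum of f on [0, x] with N subintervals.
def riemann_sum(f, x, N=100_000):
    h = x / N                      # width of each subinterval
    return sum(f(k * h) for k in range(1, N + 1)) * h

x = 2.0
examples = {
    "f(x) = 3 (constant)": (lambda t: 3.0,   3.0 * x),
    "f(x) = x":            (lambda t: t,     x**2 / 2),
    "f(x) = x^2":          (lambda t: t**2,  x**3 / 3),
}

for name, (f, expected) in examples.items():
    approx = riemann_sum(f, x)
    print(f"{name}: Riemann sum ~ {approx:.6f}, expected {expected:.6f}")
```

As ##N## grows the sums approach ##cx##, ##x^2/2## and ##x^3/3##, which is the pattern the generalisation is meant to capture, and no derivative is used anywhere.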
 
Trying2Learn said:
Summary:
If not, could someone work out one example? Something simple, like the indefinite integral of, say, x-cubed divided by 3

What do you mean by "indefinite integral" if not just anti-derivative?

What you can do without derivatives, though, is compute the definite integral ##\int_a^b x^3 \,dx=\frac{1}{4}(b^4-a^4)## from the Riemann sum definition of an integral; a sketch of that computation is below.
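As an illustration (not part of the original reply), here is how that computation could go on ##[0,b]##, using the standard sum-of-cubes formula ##\sum_{k=1}^N k^3 = \left(\tfrac{N(N+1)}{2}\right)^2##; the general interval then follows from ##\int_a^b = \int_0^b - \int_0^a##.
$$ \begin{align*} \int_0^b x^3\,dx &= \lim_{N\rightarrow\infty}\frac{b}{N}\sum_{k=1}^N\Big(k\frac{b}{N}\Big)^3 = b^4\lim_{N\rightarrow\infty}\frac{1}{N^4}\Big(\frac{N(N+1)}{2}\Big)^2 \\
&= b^4\lim_{N\rightarrow\infty}\Big(\frac{1}{4} + \frac{1}{2N} + \frac{1}{4N^2}\Big) = \frac{1}{4}b^4, \end{align*}$$
so ##\int_a^b x^3\,dx = \frac{1}{4}(b^4-a^4)##, with no derivative used anywhere.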
 
I understand what both are saying.
(And this is where it may be that the question is silly.)

But I can take the difference of the function at two points, divided by the difference in the points, and get an approximation to the derivative.

I can actually show that the derivative of f(x) = x-squared is 2x.

But can I show the reverse WITHOUT knowing the anti-derivative, WITHOUT knowing that the derivative is the opposite of the Riemann sum?

So why can't I formulate the integral WITHOUT knowing about the derivative?

Both answers above do that.

I am confused.
 
You can calculate ##\displaystyle{\int_a^b} x^2\,dx## without knowing the derivative of ##f(x)=x^2##, simply by Riemann sums, i.e. without knowing that ##\dfrac{d}{dx}\dfrac{1}{3}x^3 =x^2.##
 
Here is an example of how you would go about computing a definite integral of ##f(x) = x^2## from "first principles", without using the fundamental theorem of calculus. I will only show the calculation for the right-endpoint Riemann sum, as the calculation for the left-endpoint Riemann sum is essentially identical.
$$ \begin{align*} A(x) &= \int_0^xf(s)\,ds \\
&:= \lim_{N\rightarrow\infty}\frac{x-0}{N}\sum_{k=1}^Nf\Big(k\frac{x-0}{N}\Big) \\
&= \lim_{N\rightarrow\infty}\frac{x}{N}\sum_{k=1}^N\Big(k\frac{x}{N}\Big)^2 \\
&= x^3\lim_{N\rightarrow\infty}\frac{1}{N^3}\sum_{k=1}^Nk^2 \\
&= x^3\lim_{N\rightarrow\infty}\frac{1}{N^3}\frac{N(N+1)(2N+1)}{6} \\
&= x^3\lim_{N\rightarrow\infty}\Big(\frac{1}{3} + \frac{1}{2}\frac{1}{N} + \frac{1}{6}\frac{1}{N^2}\Big) \\
&= \frac{1}{3}x^3.\end{align*}$$
where the third-to-last equality holds by Faulhaber's formula <LINK>, in particular the formula
$$\sum_{k=1}^Nk^2 = \frac{N(N+1)(2N+1)}{6}.$$
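A quick numerical check of this derivation (my own illustration, not part of the post): the code below evaluates the right-endpoint Riemann sum of ##x^2## on ##[0,x]## for increasing ##N## and watches it converge to ##x^3/3##, again without any use of derivatives. The function name and the choice ##x = 2## are arbitrary.

```python
# Sketch: right-endpoint Riemann sum of x^2 on [0, x], converging to x^3 / 3.
def right_riemann_sum_x_squared(x, N):
    h = x / N                                  # width of each subinterval
    return sum((k * h) ** 2 for k in range(1, N + 1)) * h

x = 2.0
for N in (10, 100, 1_000, 10_000, 100_000):
    approx = right_riemann_sum_x_squared(x, N)
    print(f"N = {N:>6}: {approx:.6f}   (x^3 / 3 = {x**3 / 3:.6f})")
```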
 
Thank you very much, everyone!
 
