# Homework Help: A simple question on the integral as an area.

1. Aug 21, 2006

### MathematicalPhysicist

Let f be a monotone function on the interval [a,b], with
f(a)=$$\alpha$$ and f(b)=$$\beta$$.
Let g be the inverse function of f, i.e. $$f^{-1}$$.
Prove that
$$\int_{\alpha}^{\beta}g(y)dy=b\beta-a\alpha-\int_{a}^{b}f(x)dx$$ when you consider the definition of the integral as an area.

Here's what I did:
Sn=$$\sum_{i=0}^n g(\zeta_i)(y_i-y_{i-1})$$
where $$y=f(x)$$, so $$f^{-1}(y)=x=g(y)$$, and
$$y_i-y_{i-1}=f(x_i)-f(x_{i-1})$$
S'n=$$\sum_{i=0}^n \zeta_i(x_i-x_{i-1})$$
$$S'_n+\sum_{i=0}^n x_{i-1}\zeta_i=\sum_{i=0}^n x_i\zeta_i=\sum_{i=0}^n g(\zeta_i)y_{i-1}$$
As n approaches infinity, S'_n approaches the second integral, with limits a and b, and the remaining sum (g(zeta_i) times y_{i-1} minus x_{i-1} times zeta_i) converges to b*beta - a*alpha. Is this correct?
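As a quick sanity check on the identity itself (not on the proof), here is a small Python sketch using the hypothetical monotone example f(x) = x^2 on [a,b] = [1,2], so alpha = 1, beta = 4 and g(y) = sqrt(y); both sides should come out to 14/3:

```python
import math

# Numerical check (not a proof) of
#   ∫_alpha^beta g(y) dy  =  b*beta - a*alpha - ∫_a^b f(x) dx
# for the example f(x) = x^2 on [1, 2], whose inverse is g(y) = sqrt(y).

def trapezoid(func, lo, hi, n=100000):
    """Composite trapezoid-rule approximation of the integral of func on [lo, hi]."""
    h = (hi - lo) / n
    s = 0.5 * (func(lo) + func(hi))
    s += sum(func(lo + i * h) for i in range(1, n))
    return s * h

a, b = 1.0, 2.0
f = lambda x: x * x
g = math.sqrt                 # inverse of f on [1, 2]
alpha, beta = f(a), f(b)      # alpha = 1, beta = 4

lhs = trapezoid(g, alpha, beta)
rhs = b * beta - a * alpha - trapezoid(f, a, b)
print(lhs, rhs)               # both approximately 14/3
```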

2. Aug 21, 2006

### benorin

Recall that the inverse of a function is (graphically) the reflection of the original function about the line y=x, and that $$b\beta$$ and $$a\alpha$$ are the areas of two rectangles so $$b\beta-a\alpha$$ is the area of a 'box-shaped' L region. Plot it with say $$f(x)=x^2$$.

3. Aug 21, 2006

### quasar987

I think the specification "when you consider the definition of the integral as area" means that you can use the geometric arguments benorin is hinting about. So spoil yourself while you can! :P

4. Aug 21, 2006

### 0rthodontist

I'm confused by this. For example, why is the last equality true (mainly, I am confused as to why the subscript on the y is i-1 and not i)? And what does "the first sum of y_i times g(zeta_i) minus zeta_i times x_{i-1} converges to b*beta - a*alpha" mean? Also, I'm going to use just x_i and y_i instead of the zetas. (By the way, your limits should probably go from 1 to n; not that it matters much, but otherwise you'd have a negative subscript on the i-1.)

$$S_n = \sum_{i=1}^n g(y_{i-1})(y_i-y_{i-1})$$ (evaluating g on the left of each interval)
$$S_n' = \sum_{i=1}^n f(x_i)(x_i-x_{i-1})$$ (evaluating f on the right of each interval)
where the x_i are chosen so that g(y_i) = x_i, and x_0 = a, x_n = b, y_0 = alpha, y_n = beta.

I chose to evaluate the function on the left of each interval for g and on the right of each interval for f because I drew a picture which suggested it:
http://img152.imageshack.us/img152/7889/evalqs3.png

$$S_n = \sum_{i=1}^n x_{i-1}(y_i-y_{i-1})$$
$$= \sum_{i=1}^n x_{i-1} y_i - \sum_{i=1}^n x_{i-1} y_{i-1}$$
$$S_n' = \sum_{i=1}^n y_i(x_i-x_{i-1})$$
$$= \sum_{i=1}^n x_i y_i - \sum_{i=1}^n x_{i-1} y_i$$

Now, note that $$b \beta - a \alpha$$ can be written as
$$\sum_{i=1}^n \left( x_i y_i - x_{i-1} y_{i-1} \right)$$
because the interior terms cancel (the sum telescopes) and you're left with just $$x_n y_n - x_0 y_0 = b\beta - a\alpha$$. Then you can see that $$S_n + S_n' = b\beta - a\alpha$$.
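The telescoping identity above is exact for any finite n, no limits needed. A minimal Python sketch (using the hypothetical example f(x) = x^2 on [1, 2] with a uniform x-partition) checks it:

```python
# Finite-n check of the telescoping identity:
# with x_i a partition of [a, b] and y_i = f(x_i),
#   S_n  = sum x_{i-1} (y_i - y_{i-1})   (g evaluated at the left endpoint)
#   S_n' = sum y_i (x_i - x_{i-1})       (f evaluated at the right endpoint)
# and S_n + S_n' = x_n y_n - x_0 y_0 = b*beta - a*alpha exactly.

a, b, n = 1.0, 2.0, 7
f = lambda x: x * x
xs = [a + (b - a) * i / n for i in range(n + 1)]
ys = [f(x) for x in xs]

S = sum(xs[i - 1] * (ys[i] - ys[i - 1]) for i in range(1, n + 1))
Sp = sum(ys[i] * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
print(S + Sp, b * f(b) - a * f(a))   # equal (up to rounding) even for small n
```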

Last edited by a moderator: May 2, 2017
5. Aug 21, 2006

### StatusX

Strictly speaking, you would need to prove that as $$\max(y_i-y_{i-1})\rightarrow 0$$, so does $$\max(x_i-x_{i-1})$$: that is the condition for the Riemann sum to converge to the integral. This can be done in a line using uniform continuity, which in turn follows from the continuity of the function on a closed interval by the Heine-Cantor theorem.
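A numerical illustration of this point (again using the hypothetical example f(x) = x^2 on [1, 2], so g = sqrt on [alpha, beta] = [1, 4]): partition [1, 4] uniformly in y and watch the induced x-partition's mesh shrink along with the y-mesh:

```python
import math

# Partition [alpha, beta] = [1, 4] uniformly in y; the induced x-partition
# x_i = g(y_i) is nonuniform, but its mesh max(x_i - x_{i-1}) still goes to 0
# as the y-mesh does (uniform continuity of g on a closed interval).

g = math.sqrt  # inverse of f(x) = x^2 on [1, 2]

meshes = []
for n in (10, 100, 1000):
    ys = [1 + 3 * i / n for i in range(n + 1)]
    xs = [g(y) for y in ys]
    x_mesh = max(xs[i] - xs[i - 1] for i in range(1, n + 1))
    meshes.append(x_mesh)
    print(n, 3 / n, x_mesh)   # both meshes shrink together
```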

6. Aug 21, 2006

### WigneRacah

Try the substitution:

$$y=f(x)$$.

EDIT: I suppose $f$ is a "regular" function.

Last edited: Aug 21, 2006
7. Aug 23, 2006

### MathematicalPhysicist

0rthodontist, I didn't quite understand what your illustration conveys here.
I used zetas because, according to the definition, the points we choose for f(x) should be inside the cell, i.e. between x_{i-1} and x_i, and not necessarily one of the endpoints of the divided cell.

8. Aug 23, 2006

### 0rthodontist

I have shown examples of S_3 and S_3', with S_3 written sideways along the y axis and S_3' written normally. The sum of these can be seen in the diagram as the area b beta - a alpha: the sums interlock, so you don't have to go to the limit to get the sum right.
StatusX's point is taken if you want to do it this way: you would have to prove that the norm of the partition for S_n' converges to 0.

You can choose any points you like in the intervals, and it can be one of the end points of the cell. But that doesn't have anything to do with how you denote your choice. zetas would work as well as x's.

Last edited: Aug 23, 2006
9. Aug 24, 2006

### MathematicalPhysicist

I have another question:
Prove that if f(x) is continuous and $$f(x)=\int_{0}^{x}f(t)dt$$ then f(x) is zero.

Here's what I did so far:
For every e there exists d such that whenever |x-x_1|<d, |f(x)-f(x_1)|<e, i.e.
$$|\int_{0}^{x}f(t)dt-\int_{0}^{x_1}f(t)dt|=|\int_{x_1}^{x}f(t)dt|<e$$.
For every e there exists d such that whenever |x-x_2|<d, |f(x)-f(x_2)|<e.
We get that if |x_1-x_2|<2d then |f(x_1)-f(x_2)|<2e; if x_2=0, then for some x_1 different from zero we get $$|\int_{0}^{x_1}f(t)dt|<2e$$ and thus f(x_1)=0.

I don't feel this proof is correct, but I understand intuitively why f(x) needs to equal zero.
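One way to see the intuition numerically (a sketch, not a proof): the map T(f)(x) = ∫_0^x f(t) dt is a contraction in the sup norm on [0, 1/2], since |Tf(x)| ≤ x·sup|f| ≤ sup|f|/2 there, so its only fixed point is f = 0. Iterating T on an arbitrary continuous starting function (cos is used here purely as an example) shrinks the sup norm toward 0:

```python
import math

# Numerical sketch: iterate T(f)(x) = ∫_0^x f(t) dt on a grid over [0, 1/2]
# and watch sup|f| shrink by at least (roughly) half per step, consistent
# with f = 0 being the only solution of f = T(f) there.

N = 1000
xs = [0.5 * i / N for i in range(N + 1)]   # grid on [0, 1/2]

def T(vals):
    """Cumulative trapezoidal approximation of ∫_0^x vals on the grid xs."""
    out = [0.0]
    for i in range(1, len(vals)):
        h = xs[i] - xs[i - 1]
        out.append(out[-1] + 0.5 * h * (vals[i] + vals[i - 1]))
    return out

f = [math.cos(x) for x in xs]              # arbitrary starting function
norms = [max(abs(v) for v in f)]
for _ in range(5):
    f = T(f)
    norms.append(max(abs(v) for v in f))
print(norms)                                # decreasing toward 0
```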

10. Aug 24, 2006

### benorin

If we add the additional requirement that $$f$$ is differentiable, then we may employ the FTC to get

$$f^{\prime}(x)=\frac{d}{dx}\int_{0}^{x}f(t)dt = f(x)$$

which is a differential equation whose only solution is $$f(x)=Ce^x$$ but

$$f(x)=\int_{0}^{x}f(t)dt= C\int_{0}^{x}e^t \, dt = C(e^x-1)$$

so we must have $$Ce^x = C(e^x-1)$$ so $$C=0$$ and so $$f(x)=0$$.

I know it's not the problem at hand, but perhaps it could be modified (or maybe you can prove that f(x) must be differentiable)?

Last edited: Aug 24, 2006
11. Aug 24, 2006

### MathematicalPhysicist

$$\forall\epsilon>0, \exists\delta>0, |x-x_0|=|h|<\delta, |f(x)-f(x_0)|=|f(x_0+h)-f(x_0)|=|\int_{x_0}^{x_0+h}f(t)dt|<\epsilon$$
and I need to show that lim [f(x_0+h)-f(x_0)]/h as h approaches 0 exists. If we define
g(h)=[f(x_0+h)-f(x_0)]/h, then g(h) is continuous for h different from 0, but I need to show that it's continuous at h=0, which is of course the hard part. Perhaps divide the first inequality (the one with the epsilon) by h?

12. Aug 24, 2006

### benorin

Isn't f(x) the integral of something?

13. Aug 24, 2006

### MathematicalPhysicist

Yes, as I wrote: $$f(x)=\int_{0}^{x}f(t)dt$$

14. Aug 24, 2006

### benorin

So f(x) is a primitive (an antiderivative of something) and thus differentiable, right? Now prove it.

15. Aug 24, 2006

### benorin

From $$f(x)=\int_{0}^{x}f(t)dt$$ clearly f(0)=0 (I'll use this later). On to differentiation:

$$f(x_0 +h)-f(x_0) = \int_{0}^{x_0+h}f(t)dt-\int_{0}^{x_0}f(t)dt = \int_{0}^{h}f(t)dt+\int_{h}^{x_0+h}f(t)dt-\int_{0}^{x_0}f(t)dt$$
$$=f(h)+\int_{0}^{x_0}f(t+h)dt-\int_{0}^{x_0}f(t)dt=f(h)+ \int_{0}^{x_0}[f(t+h)-f(t)]dt,$$

so we have

$$f^{\prime}(x_0)=\lim_{h\rightarrow 0}\frac{f(x_0+h)-f(x_0)}{h}= \lim_{h\rightarrow 0}\frac{f(h)+ \int_{0}^{x_0}[f(t+h)-f(t)]dt}{h}$$
$$=\lim_{h\rightarrow 0}\frac{f(h)}{h}+ \lim_{h\rightarrow 0}\frac{1}{h}\int_{0}^{x_0}[f(t+h)-f(t)]dt = \lim_{h\rightarrow 0}\frac{f(0+h)-f(0)}{h}+ \lim_{h\rightarrow 0}\int_{0}^{x_0}\frac{f(t+h)-f(t)}{h}dt$$
$$=f^{\prime}(0)+ \lim_{h\rightarrow 0}\int_{0}^{x_0}\frac{f(t+h)-f(t)}{h}dt$$

I want to pass the limit through to the integrand to get $$\int_{0}^{x_0}f^{\prime}(t)dt$$, and I think the continuity of f(x) justifies it; if so, then we have

$$f^{\prime}(x_0)=f^{\prime}(0)+\int_{0}^{x_0}f^{\prime}(t)dt = f^{\prime}(0)+f(x_0)-f(0)=f^{\prime}(0)+f(x_0)$$

which is another differential equation for $$f$$ whose solution is $$f(x)=Ce^{x}-f^{\prime}(0)$$.

CRAP! I realized that I assumed that f is differentiable after I passed the limit through to the integrand... hope some of this is useful. --Ben

16. Aug 24, 2006

### quasar987

I don't have my books with me, but I think there is a thm that says that if f(x) is continuous, then the function F(x) defined by

$$F(x)=\int_0^x f(t)dt$$

is continuous, differentiable, and F'(x)=f(x).

In your problem, you are told that F(x)=f(x). So, f '(x)=f(x). The general solution of this is f(x)=Aexp(x). But f(0)=0, so A=0.

Last edited: Aug 24, 2006
17. Aug 24, 2006

### 0rthodontist

just A exp(x)

There must be a way to do this without depending on e^x. I'm wondering, how is it proved that the derivative of e^x is itself and that no other function has that property?

Last edited: Aug 24, 2006
18. Aug 24, 2006

### quasar987

I've made the appropriate correction to the post, thx 0rthodontist.

The most general way* of defining the exponential function is through its power expansion. I.e. let's define the function exp(x) by

$$\exp(x)=\sum_{n=0}^{\infty}\frac{x^n}{n!}$$

After showing that the series' radius of convergence is $\infty$, we can differentiate it term by term to find $\frac{d}{dx}\exp(x)$ and see that it is exp(x) itself.

To prove exp(x) is the only function whose derivative is itself, "try" a series solution to the diff. equ. y'=y. It will come out that the only solution is the series with coefficients $a_n=1/n!$, which is our definition of exp(x).

*I say most general way because if we define exp(x) by say, the number e to the xth power (where e has been defined by say, the limit of $(1+1/n)^n$), then what is $e^{ix+y}$? What is a real number raised to a non-real power? But with the series definition, it is obvious what exp(ix+y) is.

Last edited: Aug 24, 2006
19. Aug 24, 2006

### 0rthodontist

Then it depends on the whole machinery of infinite series.

Last edited: Aug 24, 2006
20. Aug 24, 2006

### benorin

This ODE has been derived and solved in posts #10 and #15; what is at hand is proving that f is differentiable based on its continuity and the integral definition (I tried to do it the easy way; apparently an epsilon-delta type proof is preferred).