Areas And Distances (Intro. to Definite Integral)

Summary
The discussion focuses on understanding the notation used in the definite integral, particularly the summation formula for approximating areas under curves. The confusion arises around the indices and the choice of points within subintervals, specifically whether to start the index at 0 or 1. It is clarified that in the summation each ##x_i## represents a point in the i-th subinterval, and that ##x_0## does not exist in this context. The example provided illustrates that for a specific function and interval, the correct summation should start at i = 1, confirming that ##x_1## corresponds to the left endpoint of the first subinterval. Overall, the conversation emphasizes the importance of correctly interpreting the notation for future calculus applications.
in the rye
Hey everyone,

Today in my Calculus 1 lecture we covered Areas and Distances, which serves as a prequel to the definite integral in my book. I am confused about some of the notation the book uses, and I cannot seem to find a clear explanation anywhere I look.

$$\sum_{i=1}^{n} f(x_i)\,\Delta x \approx A$$

First, let me explain how I understand it; then correct me where I am wrong.

I understand that this is a shorthand way of saying that this sum approximates the area under your curve, and I know what the summation means.

My confusion is over the ##x_i## and the start/end points. I know that ##f(x_i)## defines the height of your rectangle based on the x value you chose in that subinterval. However, I'm confused about how i = 1 and n relate to this point. Say we have ##f(x) = x^2##. If we used this formula with left rectangles, one of our points would have to be at 0. Does this mean we alter the formula to one of the following?

$$\sum_{i=1}^{n} f(x_{i-1})\,\Delta x \approx A$$

or

$$\sum_{i=0}^{n} f(x_i)\,\Delta x \approx A$$

For some reason this is just confusing the hell out of me. My book really doesn't clarify this enough, and I know this will be important for Calc 2, so I want to get a handle on it now. A tutor told me you would have to change it to one of these formulas, but to me, that doesn't make any sense. Why wouldn't the formula just remain the same, but have ##f(x_1) = 0##?

It makes no sense to me why you would write it either of the two ways the tutor suggested, because that would mean you're creating an interval that doesn't exist. Interval 0 doesn't exist; in my mind, interval 1 would have ##f(0) = 0##, giving your subinterval area as ##\Delta x \cdot 0^2##.
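
To make what I mean by "left rectangles" concrete, here is a quick sketch in Python (the specific function, interval, and number of rectangles are just values I picked, not from the book). It computes the same left-endpoint sum with both index conventions the tutor mentioned:

Code:
# Left-endpoint Riemann sum for f(x) = x^2 on [a, b] with n rectangles.
# The interval and n below are arbitrary example values.

def f(x):
    return x ** 2

a, b, n = 0.0, 6.0, 4
dx = (b - a) / n                      # width of each subinterval

# Convention 1: sum_{i=1}^{n} f(x_{i-1}) * dx, with x_i = a + i*dx
area1 = sum(f(a + (i - 1) * dx) * dx for i in range(1, n + 1))

# Convention 2: sum_{i=0}^{n-1} f(x_i) * dx  (note the upper limit is n-1,
# not n, so there are still exactly n rectangles)
area2 = sum(f(a + i * dx) * dx for i in range(0, n))

print(area1, area2)                   # both print 47.25

Both conventions give the same number, which is part of why the tutor's two rewrites feel redundant to me.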
 
in the rye said:
Today in my Calculus 1 lecture we covered Areas and Distances, which serves as a prequel to the definite integral in my book. I am confused about some of the notation the book uses.

$$\sum_{i=1}^{n} f(x_i)\,\Delta x \approx A$$

My confusion is over the ##x_i## and the start/end points. Say we have ##f(x) = x^2##. If we used this formula with left rectangles, one of our points would have to be at 0. [...] Why wouldn't the formula just remain the same, but have ##f(x_1) = 0##?
In the first summation you show, it is implied that some interval [a, b] is divided up into n subintervals. ##x_1## is some point in the first subinterval, ##x_2## is some point in the second subinterval, and so on, with one ##x_i## in each subinterval.
 
Mark44 said:
In the first summation you show, it is implied that some interval [a, b] is divided up into n subintervals. ##x_1## is some point in the first subinterval, ##x_2## is some point in the second subinterval, and so on, with one ##x_i## in each subinterval.

So I am correct in thinking that ##x_0## doesn't exist in the summation formula? I edited the end of my post, which may expand on my confusion.
 
in the rye said:
So I am correct in thinking that ##x_0## doesn't exist in the summation formula? I edited the end of my post, which may expand on my confusion.
Let me correct your first summation:
##\sum_{i = 1}^n f(x_i)\Delta x##
Here ##x_i## is some point in the i-th subinterval.
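
As a concrete illustration (the numbers here are chosen for this example, not taken from the thread): split [0, 4] into n = 2 subintervals, [0, 2] and [2, 4]. Choosing left endpoints gives ##x_1 = 0## and ##x_2 = 2##; right endpoints give ##x_1 = 2## and ##x_2 = 4##; midpoints give ##x_1 = 1## and ##x_2 = 3##. In every case the index ##i## runs from 1 to ##n##, and ##x_0## never appears in the sum.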
 
Mark44 said:
Let me correct your first summation:
##\sum_{i = 1}^n f(x_i)\Delta x##
Here ##x_i## is some point in the i-th subinterval.

Right, for some reason I have difficulty applying this, though. So for simplicity's sake, let's say we use ##y = x^2## over the interval [0, 6] with only 2 left rectangles. I would have ##A = 3[f(0) + f(3)]##. This would mean that ##x_1 = 0## and ##x_2 = 3## in the summation formula, correct? Giving: ##\sum_{i = 1}^2 f(x_i)\Delta x##

NOT

##\sum_{i = 0}^1 f(x_i)\Delta x##

where ##\Delta x = 3##.
 
in the rye said:
So for simplicity's sake, let's say we use ##y = x^2## over the interval [0, 6] with only 2 left rectangles. I would have ##A = 3[f(0) + f(3)]##. This would mean that ##x_1 = 0## and ##x_2 = 3## in the summation formula, correct? Giving ##\sum_{i = 1}^2 f(x_i)\Delta x##, NOT ##\sum_{i = 0}^1 f(x_i)\Delta x##, where ##\Delta x = 3##.
Yes.
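
Spelling out the arithmetic for that example (added here as a check on the numbers, not part of the original exchange):
$$A \approx \Delta x\,[f(0) + f(3)] = 3\,(0 + 9) = 27,$$
which underestimates the exact area ##\int_0^6 x^2\,dx = 72##, as expected for left rectangles under an increasing function.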
 