Proving Rolle's Theorem: Zeros of f(x) in [0,1]

  • Thread starter: nlews
  • Tags: Theorem
SUMMARY

This discussion focuses on using Rolle's Theorem to prove that the function f(x) = x^3 - (3/2)x^2 + λ never has two distinct zeros in the interval [0,1]. The user initially calculated f(0) = λ and f(1) = λ - 1/2, leading to confusion about how Rolle's Theorem applies when the endpoint values differ. Assuming for contradiction two distinct zeros c1 and c2, the correct hypothesis is f(c1) = f(c2) = 0 (not f'(c1) = f'(c2) = 0, as the user first wrote), which by Rolle's Theorem implies the existence of a point c in (c1, c2) where f'(c) = 0. Showing that no such c can exist in (0,1) completes the contradiction.

PREREQUISITES
  • Understanding of Rolle's Theorem and its conditions
  • Basic knowledge of calculus, specifically derivatives
  • Familiarity with polynomial functions and their properties
  • Ability to analyze functions over a closed interval
NEXT STEPS
  • Study the implications of Rolle's Theorem in various contexts
  • Explore the properties of polynomial functions, particularly cubic functions
  • Learn about the Mean Value Theorem and its relationship to Rolle's Theorem
  • Practice proving the existence of roots using the Intermediate Value Theorem
USEFUL FOR

Students studying calculus, mathematics educators, and anyone interested in the application of Rolle's Theorem in real analysis.

nlews
This is the last part of a problem that I'm working through. The problem is on Rolle's theorem.

Using Rolle's Theorem, prove that for any real number λ, the function f(x) = x^3 - (3/2)x^2 + λ never has two distinct zeros in [0,1].

So I was thinking about ways I could do this, but when I calculated the endpoints I got f(0) = λ, whereas f(1) = λ - 1/2. Doesn't Rolle's Theorem require f(a) = f(b) on the interval [a,b]? This has got me a little confused.
Anyway I put that aside to try another way:
I thought perhaps I should assume, for contradiction, that there are two distinct zeros at c1 and c2. So f'(c1) = f'(c2) = 0. But then, using Rolle's Theorem, there is a root of f between c1 and c2. But now I don't know where to go next! Please help!
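The endpoint values computed above can be double-checked with a quick script (my own sketch, not part of the original post; the choice of λ is arbitrary since the identities hold for every real λ):

```python
# Verify f(0) = lam and f(1) = lam - 1/2 for f(x) = x^3 - (3/2)x^2 + lam,
# using exact rational arithmetic to avoid floating-point noise.
from fractions import Fraction

def f(x, lam):
    x = Fraction(x)
    return x**3 - Fraction(3, 2) * x**2 + lam

lam = Fraction(7, 10)                      # any real lambda would do
assert f(0, lam) == lam                    # f(0) = lam
assert f(1, lam) == lam - Fraction(1, 2)   # f(1) = lam - 1/2
```

So the endpoint values really do differ (by exactly 1/2), confirming that Rolle's Theorem cannot be applied directly to the interval [0,1] itself.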
 
Rolle's theorem states that if a function is continuous on a closed interval, differentiable on its interior, and takes equal values at the two endpoints, then there is a point between them where the derivative vanishes.
So after making the assumption, you have f(c1) = f(c2) = 0 rather than f'(c1) = f'(c2) = 0 (as you wrote), which means there is a point c where f'(c) = 0, with 0 < c < 1. Is this possible?
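The hint can be checked numerically with a short sketch of my own (not from the thread): the λ term vanishes under differentiation, and f'(x) = 3x^2 - 3x = 3x(x - 1) is strictly negative on the open interval (0,1), so no c with f'(c) = 0 can exist there.

```python
# f(x) = x^3 - (3/2)x^2 + lam, so f'(x) = 3x^2 - 3x = 3x(x - 1);
# lambda drops out, so the derivative is the same for every lam.
def f_prime(x):
    return 3 * x * x - 3 * x

# On (0, 1) the factor 3x is positive and (x - 1) is negative,
# so f'(x) < 0 everywhere strictly inside the interval.
samples = [i / 100 for i in range(1, 100)]   # points strictly inside (0, 1)
assert all(f_prime(x) < 0 for x in samples)

# The derivative vanishes only at the endpoints x = 0 and x = 1.
assert f_prime(0) == 0 and f_prime(1) == 0
```

Since f' never vanishes strictly inside [0,1], the point c promised by Rolle's Theorem cannot exist, contradicting the assumption of two distinct zeros in [0,1].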
 
Moderator's note:

Thread moved from "Calculus & Analysis".

Homework assignments or any textbook-style exercises are to be posted in the appropriate forum in our https://www.physicsforums.com/forumdisplay.php?f=152 area. This should be done whether the problem is part of one's assigned coursework or just independent study.
 
