Existence of a Critical Point for Differentiable Functions on Closed Intervals

  • Thread starter: Samuelb88
  • Tags: Proof
SUMMARY

The discussion centers on the existence of a critical point for differentiable functions on closed intervals, specifically proving that if f and g are continuous on [a,b] and differentiable on (a,b), then there exists a point x in (a,b) such that [f(b)-f(a)]g'(x) = [g(b)-g(a)]f'(x). The proof involves defining a function γ(t) and demonstrating that it attains an extreme value at some point x in (a,b), leading to γ'(x) = 0. The reasoning confirms that x cannot equal the endpoints a or b, ensuring the critical point lies strictly within the interval.

PREREQUISITES
  • Understanding of continuous and differentiable functions on closed intervals
  • Familiarity with the Mean Value Theorem
  • Knowledge of compactness in real analysis
  • Basic concepts of critical points and derivatives
NEXT STEPS
  • Study the Mean Value Theorem and its applications in real analysis
  • Explore properties of compact sets in the context of differentiable functions
  • Learn about the implications of critical points in optimization problems
  • Investigate the relationship between inverse functions and the chain rule in calculus
USEFUL FOR

Mathematics students, particularly those studying real analysis, calculus, or optimization, as well as educators seeking to deepen their understanding of critical points in differentiable functions.

Samuelb88

Homework Statement


If f and g are continuous real functions on [a,b] which are differentiable on (a,b), then there exists a point x \in (a,b) such that [f(b)-f(a)]g'(x) = [g(b)-g(a)]f'(x).
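
As a quick numerical sanity check of the statement (an illustration only, not part of any proof), one can take sample choices such as f(t) = t^2 and g(t) = t^3 on [a,b] = [1,2] and locate the promised point x by bisection; the functions, the interval, and the root-finding method below are chosen purely for demonstration.

Code:
# Illustration only: for f(t) = t^2, g(t) = t^3 on [a, b] = [1, 2], find an
# x in (a, b) with [f(b)-f(a)] g'(x) = [g(b)-g(a)] f'(x) by bisection.
def f(t):  return t**2
def fp(t): return 2*t           # f'
def g(t):  return t**3
def gp(t): return 3*t**2        # g'

a, b = 1.0, 2.0

def h(x):
    # A root of h is the desired point x.
    return (f(b) - f(a)) * gp(x) - (g(b) - g(a)) * fp(x)

lo, hi = a + 1e-9, b - 1e-9     # h changes sign on (a, b) for this choice
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(0.5 * (lo + hi))          # roughly 1.5556

For this choice, h(x) = 9x^2 - 14x, whose root in (1,2) is x = 14/9 \approx 1.5556, which is what the bisection converges to.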

The Attempt at a Solution


Not sure if my reasoning is correct here... I can assume that closed intervals are compact, and that if a real function f defined on an interval [a,b] attains a maximum value at a point x with a < x < b, and f'(x) exists, then f'(x) = 0.
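
For completeness, here is the usual interior-extremum argument behind that lemma, in case it helps: if f attains a maximum at x with a < x < b and f'(x) exists, then for small h > 0 we have \frac{f(x+h)-f(x)}{h} \le 0, while for small h < 0 we have \frac{f(x+h)-f(x)}{h} \ge 0. Letting h \to 0^+ and h \to 0^- gives f'(x) \le 0 and f'(x) \ge 0 respectively, hence f'(x) = 0.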

Proof: Define \gamma : [a,b] \rightarrow \mathbb{R} by the rule \gamma(t) = [f(b)-f(a)]g(t) - [g(b) - g(a)]f(t). We want to show there exists a point x \in (a,b) such that \gamma'(x) = 0. Observe that \gamma(a) = \gamma(b). Suppose first that there is some t \in (a,b) with \gamma(a) = \gamma(b) < \gamma(t). Since [a,b] is compact and \gamma is continuous, \gamma attains a maximum value on [a,b]; call a point at which this maximum is attained x. Then x \neq a, since \gamma(a) < \gamma(t) \le \gamma(x), and similar reasoning shows that x \neq b. Hence \gamma attains a maximum at the interior point x, and it follows that \gamma'(x) = 0. If instead there is some t \in (a,b) with \gamma(t) < \gamma(a) = \gamma(b), the same conclusion holds by applying the same reasoning to the minimum. Finally, if neither case occurs, then \gamma is constant on [a,b] and \gamma'(x) = 0 for every x \in (a,b). This completes the proof.
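
To spell out the step that \gamma(a) = \gamma(b): expanding the definition gives \gamma(a) = [f(b)-f(a)]g(a) - [g(b)-g(a)]f(a) = f(b)g(a) - g(b)f(a), and \gamma(b) = [f(b)-f(a)]g(b) - [g(b)-g(a)]f(b) = g(a)f(b) - f(a)g(b), so the two values agree.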

I'd like to know whether my reasoning that x can equal neither a nor b is correct (and whether this is sufficiently rigorous by your standards).
 
I think it looks good.
 
Wait, I have a separate question for you. I admit to being a beginner, but I just want to ask: could you treat this as a form of the chain rule? Could you use an inverse function to prove that such a point exists as well, something like f^{-1}(b) - f^{-1}(a) = g'(x)? Sorry, I didn't write the symbols out correctly, but when I see this, I automatically recognize some inverse relations applied with the chain rule. I could be wrong, though. Would you care to elaborate?
 
