Inequality is exactly the one Rudin uses

  • Context: Graduate
  • Thread starter: mynameisfunk
  • Tags: Inequality
SUMMARY

The discussion centers on proving that a continuous function g:[0,1] → ℝ satisfying g(0)=g(1)=0, and such that for every c ∈ (0,1) there is some k > 0 with 0 < c-k < c+k < 1 and g(c)=\frac{1}{2}(g(c+k)+g(c-k)), must be identically zero on [0,1]. Participants reference the inequality Rudin uses to show the derivative vanishes at a local maximum, and express confusion about the hint involving the supremum of the set where the function attains its maximum. The conversation highlights the difficulty of applying the hint and a worry about the function oscillating about the x-axis.

PREREQUISITES
  • Understanding of continuous functions and their properties
  • Familiarity with the concept of local maxima and derivatives
  • Knowledge of supremum and infimum in real analysis
  • Proficiency in using LaTeX for mathematical expressions
NEXT STEPS
  • Study the implications of the Mean Value Theorem in relation to continuous functions
  • Learn about the properties of continuous functions on closed intervals
  • Explore the concept of oscillation in functions and its impact on continuity
  • Review proofs involving supremum and maximum values in real analysis
USEFUL FOR

Mathematics students, particularly those studying real analysis, educators teaching calculus concepts, and anyone interested in the properties of continuous functions and their proofs.

mynameisfunk
Suppose that g:[0,1] \rightarrow \mathbb{R} is continuous, g(0)=g(1)=0, and for every c \in (0,1) there is a k > 0 such that 0 < c-k < c < c+k < 1 and g(c)=\frac{1}{2}(g(c+k)+g(c-k)).
Prove that g(x) = 0 for all x \in [0,1]. Hint: Consider \sup\{x \in [0,1] : g(x)=M\}, where M is the maximum of g on [0,1].




I see that c=\frac{1}{2}((c+k)+(c-k)). I also see that the inequality is exactly the one Rudin uses to prove that the derivative at a local maximum is 0. But I don't really understand the hint — the problem doesn't mention a supremum, right?
Here's what I tried and couldn't make work: take \delta > 0, pick x_0, x_1 such that d(x_0,0)=d(x_1,1)<\delta, and let k=d(x_0,0)=d(x_1,1), so that g(c)=0 when c=\frac{1}{2}. I was then going to show that g(c+k) and g(c-k) would always have to equal 0. But what if the function oscillates and crosses the x-axis at 0, 1/2, and 1, so that g(c-k)=-g(c+k)? That case seems consistent with my argument, so the argument can't be complete — and I also didn't use the hint. HELP! This problem seems easy, but I can't wrap my head around it.
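To make the oscillation worry concrete, here is a small numeric sketch of my own (the test function g(x) = sin(2πx) is my choice, not from the problem). It shows two things: at c = 1/2 the midpoint condition g(c) = ½(g(c+k)+g(c−k)) holds for every admissible k, exactly the situation described above — yet at the maximum point c = 1/4 the condition fails for every k > 0, so this oscillating function does not actually satisfy the hypothesis at every c. That failure at the maximum is presumably where the sup-of-maximizers hint is meant to bite.

```python
import math

def g(x):
    # Hypothetical test function (my choice, not from the thread):
    # continuous on [0,1] with g(0) = g(1) = 0, oscillating
    # and crossing the x-axis at 0, 1/2, and 1.
    return math.sin(2 * math.pi * x)

def midpoint_condition(c, k, tol=1e-9):
    # Does g(c) equal the average of g(c+k) and g(c-k)?
    return abs(g(c) - 0.5 * (g(c + k) + g(c - k))) < tol

# At c = 1/2 the condition holds for every k tested, because
# g is antisymmetric about x = 1/2: g(1/2 - k) = -g(1/2 + k).
holds_at_half = all(midpoint_condition(0.5, k) for k in [0.01, 0.1, 0.25, 0.4])

# At the maximum c = 1/4 the condition fails for every k > 0:
# it would require 1 = cos(2*pi*k), impossible for 0 < k < 1/4.
fails_at_max = not any(midpoint_condition(0.25, k) for k in [0.01, 0.1, 0.2])

print(holds_at_half, fails_at_max)  # True True
```

So an oscillating counterexample is ruled out not at the zero crossings but at the extremum, which suggests looking at where g attains its maximum, as the hint says.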
 


Your LaTeX is sort of screwed up — take a look. It's not clear what you're asking.
 
