Suppose that [itex]g:[0,1] \rightarrow \mathbb{R}[/itex] is continuous, [itex]g(0)=g(1)=0[/itex], and for every [itex]c \in (0,1)[/itex] there is a [itex]k > 0[/itex] such that [tex]0 < c-k < c < c+k < 1[/tex] and [tex]g(c)=\frac{1}{2}\left(g(c+k)+g(c-k)\right).[/tex]

Prove that [itex]g(x) = 0[/itex] for all [itex]x \in [0,1][/itex]. Hint: Consider [itex]\sup\{x \in [0,1] \mid g(x)=M\}[/itex], where [itex]M[/itex] is the maximum of [itex]g[/itex] on [itex][0,1][/itex].

I see that [itex]c=\frac{1}{2}((c+k)+(c-k))[/itex]. I also see that the condition is exactly the one Rudin uses to prove that the derivative at a local maximum is 0. I don't really understand what the hint means. There is no supremum, right?

What I tried to do, and decided I couldn't make work, was to take [itex]\delta > 0[/itex], choose [itex]x_0, x_1[/itex] such that [itex]d(x_1,1)=d(x_0,0)<\delta[/itex], and let [itex]k=d(x_1,1)=d(x_0,0)[/itex], so that [itex]g(c)=0[/itex] when [itex]c=\frac{1}{2}[/itex]. I was then going to show that [itex]g(c+k)[/itex] and [itex]g(c-k)[/itex] would always have to equal 0. But then I thought: what if the function oscillated and intersected the x-axis at 0, 1/2, and 1, so that [itex]g(c-k)=-g(c+k)[/itex]? It seems like such a function would still pass my argument, and I also didn't use the hint. HELP!! This problem seems easy, but I can't seem to wrap my head around it.
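To sanity-check my oscillation worry, I tried a quick numerical sketch. Here I take [itex]g(x)=\sin(2\pi x)[/itex] as the oscillating candidate (my own choice, not from the problem). At [itex]c=1/4[/itex] the residual [itex]g(c)-\frac{1}{2}(g(c+k)+g(c-k)) = \sin(2\pi c)(1-\cos(2\pi k))[/itex] is strictly positive for every admissible [itex]k[/itex]:

```python
import math

def residual(g, c, k):
    """Deviation of g from the midpoint-average condition at c with radius k."""
    return g(c) - 0.5 * (g(c + k) + g(c - k))

# Hypothetical oscillating candidate: vanishes at 0, 1/2, and 1.
g = lambda x: math.sin(2 * math.pi * x)

c = 0.25  # here g(c) = 1, not 0
# Admissible radii must keep c - k and c + k inside (0, 1): 0 < k < 0.25.
ks = [0.25 * i / 1000 for i in range(1, 1000)]
residuals = [residual(g, c, k) for k in ks]

# Every sampled residual is positive, so no sampled k satisfies the
# averaging condition: this function fails the hypothesis at c = 1/4.
print(all(r > 0 for r in residuals))
```

So this particular oscillating function doesn't actually satisfy the averaging hypothesis away from its zeros, which suggests my worry may not produce a genuine counterexample.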
