# Show that there exists a C^2 (twice differentiable) function that satisfies some conditions

## Homework Statement

Show that there exists a unique C^2 function y(x), defined in some neighborhood of 0, such that y(0) = 0 and sin(y(x)) + (x^2)*e^(y(x)) = 0.

What are y'(0) and y''(0)?

## Homework Equations

I know how to show that there exists a C^1 function y(x) solving an equation f(x, y(x)) = 0: the partial derivative of f with respect to y can't be zero at the point in question.

But I actually have no idea how to generalize this to show that there exists a C^2 function satisfying the conditions. My textbook provides no example of this.

## The Attempt at a Solution

I differentiated f(x, y(x)) = 0 implicitly with respect to x and got:

cos(y(x))*y'(x) + 2x*e^(y(x)) + (x^2)*y'(x)*e^(y(x)) = 0

After this, I'm not sure how to proceed.
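As a sanity check (my own addition, not part of the original post), SymPy's `idiff` can compute the implicit derivatives directly; note that differentiating with respect to x also produces a 2x*e^(y) term from the x^2 factor:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(y) + x**2 * sp.exp(y)  # the defining equation f(x, y) = 0

# First and second implicit derivatives dy/dx and d^2y/dx^2,
# with y treated as a function of x.
dydx = sp.idiff(f, y, x)
d2ydx2 = sp.idiff(f, y, x, 2)

# Evaluate at the point (x, y) = (0, 0), where y(0) = 0.
print(dydx.subs({x: 0, y: 0}))                 # -> 0
print(sp.simplify(d2ydx2.subs({x: 0, y: 0})))  # -> -2
```

This agrees with the hand computation y'(0) = 0 and y''(0) = -2.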

Hi Srumix!

How did you show the existence of a C1-function? Just by applying the implicit function theorem? What exactly does your implicit function theorem state?

Yes, I applied the implicit function theorem. The version I applied states (if I have understood it correctly) that if U is some open set in R^2, f is a C^1 function mapping points of U to R, (a,b) is a point of U with f(a,b) = c, and D2f(a,b) is not zero, then there exists an implicit function y = g(x) which is C^1 in some interval containing a and such that g(a) = b (Serge Lang's text).

Is that correct?

Ok, very good!

Now, you've calculated (I didn't check it, though) that the derivative of y satisfies cos(y(x))*y'(x) + 2x*e^(y(x)) + (x^2)*y'(x)*e^(y(x)) = 0.

What if you applied the implicit function theorem to that equation, where you consider y(x) a known function and y'(x) the unknown function? This would give you a function that coincides with y' and that is C^1. So the original function y is C^2.
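To spell this step out (my own addition, using the full implicit x-derivative, which also contains a 2x*e^(y) term from the x^2 factor): the relation for y' is linear in y'(x), so it can be solved explicitly,

$$\cos(y(x))\,y'(x) + 2x\,e^{y(x)} + x^2\,y'(x)\,e^{y(x)} = 0 \quad\Rightarrow\quad y'(x) = -\frac{2x\,e^{y(x)}}{\cos(y(x)) + x^2\,e^{y(x)}}.$$

Since y is C^1 and the denominator equals 1 at x = 0, the right-hand side is a C^1 function of x near 0, so y' is C^1 and y is C^2. In particular y'(0) = 0, and differentiating the relation once more and evaluating at x = 0 (using y(0) = 0 and y'(0) = 0) gives y''(0) = -2.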

Well, in my attempted solution I just differentiated f(x, y(x)) implicitly with respect to x.

Aah! I see what you're getting at: I could just use the implicit function theorem again?
But I'm not quite sure how to practically do this. Should I just take the derivative of the expression cos(y(x))*y'(x) + 2x*e^(y(x)) + (x^2)*y'(x)*e^(y(x)) = 0 with respect to y'(x)?

Thanks again!

Yes, that's what I mean. Just apply the implicit function theorem again. Thus you must differentiate your expression with respect to y'(x)!

Thank you so much for your help micromass!

Also, I would just like to throw a quick off-topic question at you:

If you are given a function f(x,y) and you are asked to show that this function is bounded on R^2, is this equivalent to asking whether the function is differentiable? Or do I have to somehow show that the function does not diverge when x or y is made arbitrarily large or small?

Thanks once again for your help!

Hmmm, I don't quite get why you think boundedness is equivalent to differentiability. The function f(x,y)=x is differentiable, but not bounded...

In fact, showing that a function is bounded can be quite tricky. You will indeed have to check whether the function diverges when x or y is made large or small; I'm not aware of a general method. For example, f(x,y)=sin(x+y) is bounded because the sine function is bounded.
On the other hand, $$f(x,y)=\frac{xy^2}{x+y}$$ is not bounded. For example, when x=1, the limit

$$\lim_{y\rightarrow +\infty}{f(1,y)}=\lim_{y\rightarrow +\infty}{\frac{y^2}{1+y}}=+\infty$$

thus the function is not bounded...
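A quick numeric illustration of this (my own addition, not part of the reply):

```python
# Along the line x = 1, f(x, y) = x*y**2 / (x + y) reduces to
# f(1, y) = y**2 / (1 + y), which grows roughly like y for large y.
def f(x, y):
    return x * y**2 / (x + y)

for yv in (10, 100, 1000, 10000):
    print(yv, f(1, yv))  # the values keep growing, so no bound exists
```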

Thank you very much for your detailed explanation.

With the differentiability remark, I meant that if you can show that a function has a minimum and a maximum value, then the function is bounded. But that's just something I've heard, and it wouldn't surprise me if it's incorrect to assume that.

For example:

If I want to show that the function f(x,y) = (x^2 - y^2)*e^(-x^2 - y^2) is bounded, is it enough to investigate the two derivatives df/dx and df/dy? Or do I have to show that the function does not diverge no matter what values of x and y you pick, with some kind of "limit argument" as you did, i.e. choose x = 1 and investigate how the function behaves for small/large values of y, and vice versa?
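For what it's worth (my own sketch, not from the thread), this particular function can be bounded without any limit arguments, using |x^2 - y^2| <= x^2 + y^2 together with the elementary bound t*e^(-t) <= 1/e for t >= 0:

$$|f(x,y)| = |x^2 - y^2|\,e^{-x^2 - y^2} \le (x^2 + y^2)\,e^{-(x^2 + y^2)} \le \frac{1}{e},$$

since $t\,e^{-t}$ attains its maximum 1/e at t = 1. Derivatives alone would only locate local extrema; an inequality like this also controls the behavior at infinity.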

Ah, I see what you mean. But I fear that it won't work. Sure, you can find local minima and maxima, but you will not be able to show that these are global extrema with derivatives. The problem is that a derivative cannot investigate the situation at infinity. This won't even work for a function $$f:\mathbb{R}\rightarrow \mathbb{R}$$...

Ah yeah, I was afraid of that.

But to investigate the situation at infinity, is it OK to choose for example y = 1 and investigate x -> inf and x-> 0 and then do the same for y?

I'll assume you meant x -> -inf and not x -> 0... But no, this is not enough. I'm afraid I can't give you a standard method for checking boundedness; I fear it will depend on your intuition and creativity.

My two examples were just standard methods of showing these things, but they might not always work...
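To see why checking only the lines x = c and y = c is not enough, here is a hypothetical counterexample (my own invented g, not from the thread): it stays bounded along every horizontal and vertical line, yet blows up along the diagonal x = y.

```python
# g(x, y) tends to 0 along every fixed line x = c or y = c
# (the quadratic denominator wins), but on the diagonal x = y
# the denominator is 1 and g(x, x) = x**2, which is unbounded.
def g(x, y):
    return x * y / (1 + (x - y)**2)

print(g(10**6, 1))    # tiny: along y = 1 the values tend to 0
print(g(1000, 1000))  # -> 1000000, i.e. 1000**2 on the diagonal
```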

Oh alright, I suppose I'll have to read up a bit on that then :-).

Thank you very much for your help micromass!