# Maxima and Minima of z=f(x,y) theorems!

Hi all,

In my Calculus III course we are using Stewart's book, so as you might know there is not much rigor in there.
Likewise, when it came to the section on maximum and minimum values of a function of two variables z = f(x,y), they omitted a lot of material.

Hence, I tried to use ideas similar to those in single-variable calculus to derive some of the theorems, and I also tried to prove them.

Specifically, I started by first trying to define what it means for a function of two variables to be concave up or down, then moved on to establishing a theorem (the analogue of the single-variable one) that says if the first derivative is > 0 or < 0, then the function is increasing or decreasing, respectively. In this case, I used the idea of directional derivatives.

I then moved on to constructing another theorem that says if the second directional derivative of f is > 0, then the function is concave up (again extrapolating from single-variable calculus).

The statements of these theorems and their proofs are attached.

Bear in mind, I have never seen these theorems or their proofs anywhere; I just tried to extrapolate from single-variable calculus. So my question is: do they make any sense? Are they mathematically correct?

All the best!

#### Attachments

• Directional_derivatives1.jpg
• Directional_Derivative2.jpg
• Dierectional_3.gif

Here are the next two pages!

#### Attachments

• directional_derivative4.jpg
• directional_derivative_5.jpg
In my edition of the Stewart text (second edition), 11.7 covers max and min values for two variables and 11.8 is Lagrange's method. Also, there is a proof of the second derivative test from chapter 11.7 in Appendix E of my book.

Thank you for your input. But if you read what I wrote, and if you look at the attachments, you will see that I am not talking about the proof of the Second Derivative Test. The second derivative test theorem (in Stewart's book) already assumes as known the things that I am trying to prove, and that I have attached here. I know that the proof of the second derivative test is in the appendix, and I already know how to prove it, but the fact that if

$$D_u^2f(x,y)>0, \forall (x,y) \in B_{\delta}$$

then f is concave up, is assumed as known in this theorem, and this is one part that I am trying to prove (actually, that I have proved and attached in my previous posts).
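As a numeric sanity check of that claim (a toy example of my own, not from the attachments): for f(x,y) = x² + y², the second directional derivative is $$D_u^2 f = 2h^2 + 2k^2 = 2 > 0$$ for any unit vector u = &lt;h,k&gt;, and the surface should then lie above every one of its tangent planes, which is the "concave up" condition in question.

```python
import random

def f(x, y):
    # convex paraboloid: D_u^2 f = 2 > 0 for every unit direction u
    return x**2 + y**2

def tangent_plane(x0, y0, x, y):
    # f(x0,y0) + f_x*(x - x0) + f_y*(y - y0), with f_x = 2*x0, f_y = 2*y0
    return f(x0, y0) + 2*x0*(x - x0) + 2*y0*(y - y0)

def lies_above_tangent_planes(trials=1000):
    # sample random tangent points and random test points;
    # the surface should never dip below any tangent plane
    random.seed(0)
    for _ in range(trials):
        x0, y0 = random.uniform(-5, 5), random.uniform(-5, 5)
        x, y = random.uniform(-5, 5), random.uniform(-5, 5)
        if f(x, y) < tangent_plane(x0, y0, x, y) - 1e-12:
            return False
    return True
```

Here the check passes because f(x,y) minus the tangent plane equals (x-x0)² + (y-y0)², which is never negative.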

All the best!

Any thoughts?

In your "proof" for an increasing function, what is the vector u = <h,k>? What direction is it in? A differentiable function f will have $$D_uf(a,b)$$ ranging from -M to M depending on the direction of u, where M is the magnitude of the gradient of f at (a,b). So to say $$D_uf(a,b) > 0$$ is false if we allow ANY direction u for a differentiable function f whose gradient is not 0.

Also, the notion of "increasing" for f(x,y) is unclear.
For example, is the function f(x,y) = x + y increasing?

I have been a little bit ambiguous, and I appreciate that you pointed those things out.

Of course, since

$$D_uf(x,y)=\nabla f \cdot u=|\nabla f||u|\cos(\theta)=|\nabla f|\cos(\theta)$$

(taking u to be a unit vector), where theta is the angle between the gradient vector and the vector u.

So, since we need $$D_uf(x,y)>0$$, I guess we have to choose the angle in such a manner that this will hold. But here I am not sure whether this will endanger losing the generality of the proof. (?)

So, i would choose the angle to be:

$$-\frac{\pi}{2}< \theta <\frac{\pi}{2}$$

or only from 0 up to pi/2.
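That angle condition can be illustrated numerically (my own hypothetical test function, with u a unit vector): a central finite difference along u should match $$|\nabla f|\cos(\theta)$$, and it is positive exactly when theta lies in (-pi/2, pi/2).

```python
import math

def f(x, y):
    return x**2 + 3*y            # hypothetical test function

def grad_f(x, y):
    return (2*x, 3.0)            # its exact gradient

def directional_derivative_fd(x, y, h, k, eps=1e-6):
    # central finite difference of f along the unit vector u = <h, k>
    return (f(x + eps*h, y + eps*k) - f(x - eps*h, y - eps*k)) / (2*eps)

def check_angle(x, y, theta):
    # build u at angle theta away from the gradient direction
    gx, gy = grad_f(x, y)
    base = math.atan2(gy, gx)
    u = (math.cos(base + theta), math.sin(base + theta))
    fd = directional_derivative_fd(x, y, *u)
    analytic = math.hypot(gx, gy) * math.cos(theta)   # |grad f| * cos(theta)
    return fd, analytic
```

For theta = 1.0 rad (inside (-pi/2, pi/2)) both values come out positive; for theta = 2.5 rad both go negative, matching the formula above.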

Regarding the notion ('my notion') of increasing: yeah, I believe that, the way I defined a function f(x,y) to be increasing, the function z = f(x,y) = x + y would be increasing as well.

Because, let $$(x_1,y_1)<(x_2,y_2)$$, i.e. $$x_1<x_2 \text{ or } y_1<y_2$$. From here it follows that

$$x_1+y_1<x_2+y_2 \implies f(x_1,y_1)<f(x_2,y_2)$$

so f is increasing.

P.S. My professor didn't even mention these things, so I was just curious to give it a shot on my own!

There is still a problem with “increasing” for multivariable functions. Try this out:

Can you find a non-horizontal plane z = Ax + By that you would consider to be “increasing”?

Also, about that function being increasing...

f(x,y)=x+y

Using this definition “x1 < x2 OR y1 < y2” => f(x1,y1) < f(x2,y2)

But look at

(3, 5) and (4, -20)

Then, “3<4 OR 5<-20” is true but f(3,5)=8 > f(4,-20)=-16
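The numbers in this counterexample are easy to check mechanically (a trivial sketch, just restating the values above):

```python
def f(x, y):
    return x + y

# Under the proposed "OR" ordering, (3,5) < (4,-20) because 3 < 4,
# yet the function value actually drops:
a = f(3, 5)       # 8
b = f(4, -20)     # -16
```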

Well, then I guess it is better to say: if x1 < x2 AND y1 < y2, then (x1,y1) < (x2,y2).

This would take care of the contradiction raised above.

Or, so as not to get stuck at this point for long: how is 'increasing/decreasing' actually defined for functions of several variables? I tried to Google it but didn't come across anything useful.

There is no good definition; they all seem to fail. You can talk about increasing along one direction only. Just think about the simple case of planes, f(x,y) = Ax + By. If there is some concept of increasing on a disk, then it should work for a plane. Think about the disk centered at the origin (0,0).
Then,
f(x,y) < f(x+h,y+k) only for <h,k> where $$\langle A,B\rangle \cdot \langle h,k\rangle > 0$$.

That is, where Ah + Bk > 0.

So you would want to define “increasing” based upon <A,B>. That is,

f is increasing on the disk if for all points (x,y), f(x,y) < f(x+h,y+k) whenever Ah+Bk > 0.

BUT, this definition will only apply to this particular plane!
Just look at another plane slightly rotated about the origin, g(x,y) = Cx + Dy, where <C,D> is not parallel to <A,B>, but close to parallel. The “increasing” definition fails for this new plane. There are directions along this new plane that satisfy our definition's requirement that Ah + Bk > 0, but for which it is NOT true that g(x,y) < g(x+h,y+k).
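This failure can be checked numerically. In the sketch below (my own choice of numbers), the original plane has <A,B> = <1,0>, the rotated plane has <C,D> = <cos φ, sin φ> for a small φ, and the direction u = <h,k> is chosen just past the line perpendicular to <C,D>, so that Ah + Bk > 0 while Ch + Dk < 0:

```python
import math

A, B = 1.0, 0.0                        # f(x,y) = A*x + B*y
phi = 0.1                              # small rotation angle
C, D = math.cos(phi), math.sin(phi)    # g(x,y) = C*x + D*y, nearly parallel to f

# direction just past the line perpendicular to <C,D>
h, k = math.sin(phi/2), -math.cos(phi/2)

f_step = A*h + B*k   # change in f along u: sin(phi/2) > 0
g_step = C*h + D*k   # change in g along u: sin(-phi/2) < 0
```

So f increases along u but g decreases along the very same u, which is why an "increasing on a disk" definition tied to one plane's <A,B> cannot carry over to the nearby plane.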

So how would one prove that if $$D_uf(x,y)>0$$, where u = <h,k>, then f(x,y) is an increasing function, at least in the direction of u?

I tried to do it, but I see that there actually is another 'flaw' in my 'proof', and I couldn't fix it.

I posted the whole thing here, but PF today seems crappy; it just wiped it out... and I'd been typing for 30 minutes or more... lol...

All the best!

> $$D_u^2f(x,y)>0, \forall (x,y) \in B_{\delta}$$ then f is concave up, is assumed as known in this theorem, and this is one part that i am trying to prove(actually that i have proved and attached in my previous posts).

As far as I can tell "concave up" was only defined for 1-dimensional functions. Without a definition for concave up, there is nothing to prove.

Of course, a natural definition for concave up in 2 dimensions would be that all points on the function are greater than or equal to the tangent plane at that point. This definition is equivalent to saying that the class of 1-dimensional functions generated by taking a slice out of the two dimensional function are all concave up. Since a proof for one dimensional functions was already given, there would be nothing to re-derive, because it already generalizes to 2 dimensions given this intuitive definition for concave up.

The concept of monotonically increasing or decreasing is inherently one-dimensional, and has no multi-dimensional analog.

If you're more interested in the geometry of surfaces, I suggest you take a course in differential geometry. The concepts of umbilical points, the shape operator, Gaussian curvature, etc., are all related.

> As far as I can tell "concave up" was only defined for 1-dimensional functions. Without a definition for concave up, there is nothing to prove.
>
> Of course, a natural definition for concave up in 2 dimensions would be that all points on the function are greater than or equal to the tangent plane at that point. This definition is equivalent to saying that the class of 1-dimensional functions generated by taking a slice out of the two dimensional function are all concave up. Since a proof for one dimensional functions was already given, there would be nothing to re-derive, because it already generalizes to 2 dimensions given this intuitive definition for concave up.

This is exactly how I 'defined' concave up in my first post (the scanned pages).

When you say "Since a proof for one dimensional functions was already given, there would be nothing to re-derive, because it already generalizes to 2 dimensions given this intuitive definition for concave up," how come there is nothing to derive?

It looks to me that more than simply an intuitive understanding is needed. I.e., I think that a rigorous proof needs to be derived. However, the problem that I see here is that in order to derive a rigorous proof for concavity of two-variable functions, we need a handle on the monotonicity of two-variable functions. And this is exactly what I am failing to overcome.

It looks rather challenging even to prove that a function of two variables is monotonically increasing in a particular direction given by a vector u = <h,k>, at least using approaches similar to those for single-variable functions. This part also looks very simple to understand intuitively, but I am failing to prove it rigorously.

> It looks to me that more than simply an intuitive understanding is needed. I.e., I think that a rigorous proof needs to be derived. However, the problem that I see here is that in order to derive a rigorous proof for concavity of two-variable functions, we need a handle on the monotonicity of two-variable functions. And this is exactly what I am failing to overcome.

Ok, let me break it down then:

1) You agreed with me on the definition for concave up of a 2-dimensional function; namely, that the function is greater at all points than the tangent plane.
2) For any point on the 2-dimensional function, there is a corresponding 1-dimensional slice that yields a 1-dimensional function passing through that point.
3) For all 1-dimensional slices, the existing definition can be used to prove that the one-dimensional function is concave up.
4) If all 1-dimensional slices are concave up, and all points on the two-dimensional function are contained within one of the slices, then this is sufficient to define the 2-dimensional case for concave up.

Also, you actually can generalize the notion of "increasing" (from the first derivative) to 2 dimensions (it's just not particularly useful). An increasing function in one dimension is one that has a positive derivative (in the direction of x). An increasing function in 2 dimensions could be taken as one that has a positive directional derivative in any direction that is in the first quadrant.
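For the plane f(x,y) = x + y, this first-quadrant notion does hold: the gradient is <1,1>, so $$D_uf = h + k > 0$$ for every unit direction u = <h,k> strictly inside the first quadrant. A quick check (my own sketch):

```python
import math

def directional_derivative(h, k):
    # f(x,y) = x + y has grad f = (1, 1), so D_u f = h + k at every point
    return h + k

def first_quadrant_directions(n=100):
    # unit vectors at angles strictly between 0 and pi/2
    return [(math.cos(t), math.sin(t))
            for t in (i * (math.pi / 2) / (n + 1) for i in range(1, n + 1))]

all_positive = all(directional_derivative(h, k) > 0
                   for h, k in first_quadrant_directions())
```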

I completely understand the 'how it should be' part. That is, I understand that if this, this, and this, then f is concave up or concave down. In other words, I am not having any trouble grasping the issue intuitively. The problem is how to rigorously construct a proof of it.

The idea is that when dealing with directional derivatives, we are only looking at a single slice of the surface, cut by a plane that is in the direction of the vector u and parallel to the z-axis. So all we can talk about in this case are the points that lie on the line whose direction vector is u. Any point off this line, and we (I, in this case) are not sure how to go about it (when trying to prove things).

In my attempts at a proof, this was my main problem: anytime I tried to construct one, somewhere along the lines I always ended up throwing in a set of points which did not lie on this line... and hence I was not sure whether this would affect the end result or not.

How is what I just said not a rigorous proof?

(1) Definition we want to prove
(2) Perhaps you'd be more convinced with it written out mathematically?

Given a function f(x,y) and any point (u,v), there is a 1-dimensional function g(t) that lies on f and passes through (u,v), given by g(t) = f(t*u, t*v) (so that g(1) = f(u,v))

(3) Just invoking the previous proof
(4) By deduction on (2) and (3)

You may also be interested in the definition of the tangent plane:

Note: a curve on the surface is simply a mapping from an interval of real numbers $$I = [a,b] \subseteq \mathbf{R}$$ to the surface $$M$$, $$\alpha : I \mapsto M$$.

Formally, say that a vector $$v_p$$ is tangent to M at p if $$v_p$$ is the velocity vector of some curve on M. That is, there is some $$\alpha : I \mapsto M$$ with $$\alpha(0) = p$$ and $$\alpha'(0) = v_p$$. Then the tangent plane of M at p is defined to be $$T_p(M) = \{ v_p \mid v_p$$ is tangent to M at p $$\}$$.

Further, $$v_p \in T_p(M)$$ if and only if $$v = \lambda_1 x_u + \lambda_2 x_v$$, where $$x_u, x_v$$ are the velocity vectors in the u- and v-directions evaluated at $$(u_0, v_0)$$.

> An increasing function in 2-dimensions could be taken as one that has a positive directional derivative in any direction that is in the first quadrant.

Maybe I was a little ambiguous in my previous post.

When I said I was looking for a rigorous proof, I was actually referring to this particular sentence in your previous post.

In other words, I'm initially trying to prove that if z = f(x,y) is a differentiable function on some disc B and

$$D_uf(x,y)>0$$ where u = <h,k> and (x,y) is in B, then f is an increasing function, at least in the direction of u.

Could you show a step-by-step proof of this statement, or at least refer me to some other sources where I could find a similar proof?

Regards!

Increasing Test
If $$D_v f(x,y) > 0$$ along a line segment, for a constant unit vector v, then f is increasing along that segment in the direction of v.
-----------
Define $$g(t)$$ to be the value of $$f(x,y)$$ starting at $$(0,0)$$ and moving in the direction of $$u = (h,k)$$, so $$g(t) = f(ht, kt)$$. By the chain rule, $$g'(t) = f_x(ht,kt)h + f_y(ht,kt)k = D_u f(ht,kt)$$, the directional derivative evaluated along the line. From the 1-dimensional increasing test, if $$g'(t) > 0$$ on an interval then $$g$$ is an increasing function on that interval, which is what we originally wanted to prove.

Hi junglebeast,

I was trying similar approaches; that is, I also tried to define a function g in terms of f in a similar fashion, but as I previously admitted, somehow I couldn't manage to take into account only the points (x,y) that lie on the line containing the direction vector u = <h,k>. Now I see where I was going wrong.

Just to make sure that I am not missing or misinterpreting something, I will try to restate (with a few changes) all the steps of the proof, so I hope you are not bored.

Proof:

Let:

$$D_uf(x,y)=f_x(x,y)h+f_y(x,y)k>0$$ where u=<h,k>, for (x,y) in some disc B.

Now, let $$(x_0,y_0)$$ be the coordinates of the tail of the vector u.

Now let (x,y) be any other point on the curve C, which is obtained when we cut the surface S with a plane that contains u and is parallel to the z-axis.

This way, if we let $$P(x,y)$$ and $$P_0(x_0,y_0)$$, we get

$$\vec{P_0P}=t\,u=<th,tk>$$

From here, we can see that:

$$x(t)=x_0+th \quad \text{and} \quad y(t)=y_0+tk$$

Now, since we are interested only in what happens to f(x,y) when (x,y) lies on the line that contains the vector u, from the above we get f(x(t),y(t)). Hence we can define a function g(t) by

g(t)=f(x(t),y(t)), so by the chain rule

$$g'(t)=f_x(x(t),y(t))h+f_y(x(t),y(t))k = D_uf(x(t),y(t)) > 0$$

Hence, from the theory of functions of a single variable, we conclude that since g'(t) > 0, g(t) = f(x(t),y(t)) is an increasing function in the direction of u.
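As a final sanity check of this proof (my own hypothetical example, not from the thread): take f(x,y) = e^x + y with unit direction u = <1/√2, 1/√2> and (x0,y0) = (0,0). Then the chain rule gives g'(t) = f_x h + f_y k = (e^{x(t)} + 1)/√2 > 0 for all t, so g(t) = f(x(t),y(t)) should be strictly increasing along the line:

```python
import math

h = k = 1 / math.sqrt(2)        # unit direction u = <h, k>
x0 = y0 = 0.0                   # tail of u

def f(x, y):
    return math.exp(x) + y      # hypothetical f, differentiable everywhere

def g(t):
    # restriction of f to the line through (x0, y0) with direction u
    return f(x0 + t*h, y0 + t*k)

def g_prime(t):
    # chain rule: g'(t) = f_x*h + f_y*k, with f_x = e^x and f_y = 1
    return math.exp(x0 + t*h) * h + 1.0 * k

ts = [i * 0.1 for i in range(51)]
is_increasing = all(g(s) < g(t) for s, t in zip(ts, ts[1:]))
```

Comparing g_prime against a finite difference of g also confirms that the chain-rule expression really is the derivative of the restricted function.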

Thanks again, you have been helpful!