# Meaning of a zero gradient vector with an existent directional derivative

I'm supposed to find the gradient vector of the function below at (0,0), and then use the dot product with the unit vector to find the directional derivative. Then find the directional derivative using the limit definition of a directional derivative, and explain why I get two different answers.

##f(x,y) = \dfrac{x^2 y}{x^2 + y^2}## for ##(x,y) \neq (0,0)##,
and ##f(x,y) = 0## for ##(x,y) = (0,0)##

unit vector ##\mathbf{u} = \left\langle -\sqrt{3}/2,\ 1/2 \right\rangle##

When I find the partial derivatives of the function at (0,0), they both equal zero, so I conclude that the gradient vector at (0,0) is ##\langle 0, 0 \rangle##. Obviously any dot product with this vector will be zero, and yet when we used the limit definition of the directional derivative in class, we got that the directional derivative of f at (0,0) equals 3/8.
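To make the discrepancy concrete, here is a small numeric sketch (in Python, assuming the function is ##x^2y/(x^2+y^2)## away from the origin, as above): the gradient dot product gives 0, while the limit-definition quotient approaches 3/8.

```python
import math

def f(x, y):
    # the piecewise function from the problem: x^2*y/(x^2+y^2), with f(0,0) = 0
    if (x, y) == (0.0, 0.0):
        return 0.0
    return x**2 * y / (x**2 + y**2)

u = (-math.sqrt(3) / 2, 1 / 2)  # the given unit vector

# gradient at the origin is <0, 0>, so the dot product with any u is 0
grad_dot_u = 0.0 * u[0] + 0.0 * u[1]

# limit definition: D_u f(0,0) = lim_{h->0} [f(h*u) - f(0,0)] / h
h = 1e-6
limit_value = (f(h * u[0], h * u[1]) - f(0.0, 0.0)) / h

print(grad_dot_u)   # 0.0
print(limit_value)  # approximately 3/8 = 0.375
```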

According to the other definition of the directional derivative, it should equal the dot product of the gradient vector and the unit vector.

I'm not entirely sure what it means for the gradient vector to be the zero vector, but I think it just means that the function increases or decreases at the same rate in whatever direction you move from that point? So it should be like a max or min?

So if a zero gradient vector says the function changes at the same rate in every direction from that point, does the definition of the directional derivative as the dot product of the gradient vector and the unit vector become meaningless? And is that because a zero vector has no direction (or an arbitrary direction), and so will be orthogonal to ANY unit vector in ANY direction at that point?

But given that, the function still has a rate of change at that point, so you have to conclude that the directional derivative still exists, and must be found using the definition involving limits?

Obviously I'm a little cloudy on the geometric meaning of this stuff too.

## Answers and Replies

Let ##f'_{\mathbf{v}}(\mathbf{a}) ## denote the directional derivative (by definition) of ##f## in the direction ##\mathbf{v}## at the point ##\mathbf{a}##, where ##\mathbf{v}## is a unit vector. Then ##f'_{\mathbf{v}}(\mathbf{a}) = \nabla f(\mathbf{a}) \cdot \mathbf{v}## holds only under a certain condition on ##f##, which? It should be a theorem in your book. What can you conclude from this?

There's no condition on the definition in the book, but another theorem says that f must be a differentiable function at (0,0), and because the limit of the function (when x=rcos(theta) and y=rsin(theta)) equals zero conclusively, the function is differentiable, so the directional derivative must exist.

Or is this supposed to be like the 3D version of a cusp or hole in 2D? Where the limit from either direction equals the same value, but there is no one tangent line you could draw at that point, but many? If so, how does that work in 3D? I'm thinking of a sort of hill shape, with a tangent plane that is completely horizontal - is that the wrong idea? If I know what all these means geometrically it would help me figure out why exactly it doesn't work at this point.

That ##f## has to be differentiable at ##\mathbf{a}## for ##f'_{\mathbf{v}}(\mathbf{a}) = \nabla f(\mathbf{a}) \cdot \mathbf{v}## to hold was indeed what I meant. In your exercise, you have shown that this equality does not hold, have you not? Does this not contradict your statement that your function is differentiable at the origin?

You wrote "...because the limit of the function (when x = rcos(theta) and y = rsin(theta)) equals zero conclusively, the function is differentiable...". Would you care to elaborate on this?

What does it mean for the partial derivatives to "exist" near (0,0)? I think they exist. Are they continuous at (0,0)...I guess not? The function itself is defined and continuous because the limit from either direction equals zero, and the function at (0,0) equals zero, but the partial derivatives are not subject to the same parameters as the original function....so they may approach the same limit, but are undefined at (0,0) because they would have zero in the denominator? What does all of this mean geometrically though? In terms of max, or min and saddle points? Is it geometrically "because" the gradient vector is a zero vector that is always orthogonal to direction vectors at that point?

So the original function is not differentiable? It has a limit, and it equals that limit at (0,0) - doesn't that make it differentiable?

That would make it continuous, not differentiable. Do you know the definition of a function being differentiable? If not, I suggest you look it up.

I would also like to mention something about limits for functions of several variables in general. Perhaps you already know this, but I will mention it just in case.

To say that a limit for a multi-variable function exists at a point, the limit must exist and be the same no matter how you approach the point. In particular, it is not enough that a limit exist and has the same value as you approach it along straight lines. One must also account for other ways that the limit can be approached. For example, you can imagine that the limit is approached along a spiral.
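To illustrate, a standard counterexample (note: this is a different function from the one in this thread) is ##g(x,y) = x^2y/(x^4+y^2)##: its limit at the origin is 0 along every straight line, yet along the parabola ##y = x^2## the value is identically 1/2. A quick numeric sketch:

```python
def g(x, y):
    # standard counterexample (NOT the f from this thread):
    # g(x,y) = x^2*y / (x^4 + y^2), with g(0,0) = 0
    if (x, y) == (0.0, 0.0):
        return 0.0
    return x**2 * y / (x**4 + y**2)

t = 1e-4
# approach along straight lines y = m*x: the values tend to 0 for every slope m
along_lines = [g(t, m * t) for m in (0.0, 1.0, -2.0, 10.0)]
# approach along the parabola y = x^2: the values are 1/2 at every point
along_parabola = g(t, t**2)

print(along_lines)     # all close to 0
print(along_parabola)  # 0.5
```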

Yes, that was the point of saying y = r sin(θ) and x = r cos(θ) - somehow that proves that the limit is the same along all paths/lines through that point. For some reason the limit under this substitution being equal to zero is conclusive, even though another substitution, say y = mx, is not.

The definition of differentiable in my book says that "if the partial derivatives of f exist near (a,b) and are continuous at (a,b), then the function f is differentiable at (a,b)," so then f is differentiable. When we took the derivative of a piecewise function in Calc I, we took the derivative of each part, which means the partial derivatives ARE defined at (0,0) still. I'm going to stick with the idea of having a zero vector, because I'm not convinced it's not differentiable - by the definition in my book. Thanks

That is not the definition of differentiable. That's a theorem.

You have stated that your book has a theorem that is something like:

"If ##f## is differentiable at ##\mathbf{a}## then ## f'_{\mathbf{v}}(\mathbf{a}) = \nabla f(\mathbf{a}) \cdot \mathbf{v}##."

You have also shown that the equality in this theorem does not hold for your choice of ##f##. This proves that your ##f## cannot be differentiable. The only way to dispute this would be to say that you do not believe the theorem is true. But it's a theorem, so your book should have a proof to convince you that it is true.

The limit of the partial derivatives does not exist, even though I think they're defined (continuous) at (0,0), so I guess that's why f is not differentiable.

By definition, ##f## is differentiable at ##\mathbf{a}## if the partial derivatives exist and
\begin{equation*}
\lim_{\mathbf{h} \rightarrow \mathbf{0}} \frac{f(\mathbf{a} + \mathbf{h}) - f(\mathbf{a}) - \sum_{i=1}^n \frac{\partial f}{\partial x_i}(\mathbf{a}) h_i}{|\mathbf{h}|} = 0.
\end{equation*}
In your case the partial derivatives are zero. Can you see why the limit isn't always zero?
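As a numeric sketch of that limit (assuming the thread's ##f## and the given unit vector), the quotient in the definition stays at 3/8 along the direction ##\mathbf{u}##, so the limit cannot be 0:

```python
import math

def f(x, y):
    # the thread's function: x^2*y/(x^2+y^2), with f(0,0) = 0
    if (x, y) == (0.0, 0.0):
        return 0.0
    return x**2 * y / (x**2 + y**2)

u = (-math.sqrt(3) / 2, 0.5)  # the unit vector from the problem

# differentiability quotient at the origin, with both partials equal to zero:
# [f(h) - f(0,0) - 0*h1 - 0*h2] / |h|, evaluated along h = t*u
for t in (1e-2, 1e-4, 1e-6):
    h = (t * u[0], t * u[1])
    q = f(*h) / math.hypot(*h)
    print(q)  # stays near 3/8 = 0.375, so the limit is not 0
```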

Edit: Typo in the definition

As an aside, you have noted that since ##f## is not differentiable, the partial derivatives cannot be continuous. You could verify directly by computing ##\frac{\partial f}{\partial x}##, for example, for a point ## (x,y) \neq (0,0)## and compute the limit as ##(x,y)## approaches the origin. Will the limit always be zero?
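For reference, the quotient rule gives ##\frac{\partial f}{\partial x} = \frac{2xy^3}{(x^2+y^2)^2}## away from the origin; along the line y = x this is identically 1/2, while the limit definition gives ##\frac{\partial f}{\partial x}(0,0) = 0##, so the partial derivative is not continuous at the origin. A quick check:

```python
def fx(x, y):
    # partial derivative of f = x^2*y/(x^2+y^2) with respect to x,
    # for (x,y) != (0,0), computed by the quotient rule:
    # f_x = 2*x*y^3 / (x^2 + y^2)^2
    return 2 * x * y**3 / (x**2 + y**2) ** 2

# f_x(0,0) = 0 from the limit definition, but approaching the origin along
# y = x the off-origin formula gives 1/2 at every point
for t in (1e-1, 1e-3, 1e-6):
    print(fx(t, t))  # 0.5 each time
```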

Unfortunately my Calc I teacher talked more about banking than calculus, so using the definition of derivatives is a little shaky - but I do see how the limit as h approaches zero of f(h,0)/h is 0/h = 0. And I do understand that you have to get the partial derivative that way, rather than just stealing 0 at (0,0) from the original piecewise definition. I'm going to have to go back and revisit definitions of derivatives. Thanks!