Problem about existence of partial derivatives at a point

Summary
The discussion revolves around the function f(x,y) defined piecewise, where the challenge is to find the partial derivatives at the origin (0,0). The calculated partial derivatives using the definition yield values of 3 and -1, yet direct computation of limits shows that these derivatives do not exist at that point due to their dependence on the path taken. This highlights that the existence of partial derivatives does not guarantee continuity or the existence of limits at that point. The case serves as an example of a "pathological" function, demonstrating that functions can behave nicely in some respects while being problematic in others. Ultimately, the discussion emphasizes the nuanced relationship between derivatives and continuity in real-valued functions.
Joker93

Homework Statement


I have the function:
##f(x,y)=x-y+\dfrac{2x^3}{x^2+y^2}## for ##(x,y)\neq(0,0)##, and ##f(0,0)=0##.
I need to find the partial derivatives at (0,0).
Using the definition of the partial derivative as a limit, I get ##f_x(0,0)=3## and ##f_y(0,0)=-1##. However, my problem is that if I instead compute the derivatives in the standard way and then take the limit as ##(x,y)\to(0,0)##, I find that those limits do not exist at the origin.
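Written out, the definition computation uses ##f(h,0)=h+2h=3h## and ##f(0,k)=-k##, which follow directly from the formula above:
$$f_x(0,0)=\lim_{h\to 0}\frac{f(h,0)-f(0,0)}{h}=\lim_{h\to 0}\frac{3h}{h}=3,$$
$$f_y(0,0)=\lim_{k\to 0}\frac{f(0,k)-f(0,0)}{k}=\lim_{k\to 0}\frac{-k}{k}=-1.$$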

Homework Equations


##\dfrac{\partial f}{\partial x}=1+\dfrac{2x^4+6x^2y^2}{(x^2+y^2)^2}##
##\dfrac{\partial f}{\partial y}=-1-\dfrac{4x^3y}{(x^2+y^2)^2}##

The Attempt at a Solution


When I take the limit ##(x,y)\to(0,0)## along the line ##y=mx##, I get:
##\dfrac{\partial f}{\partial x}\to 1+\dfrac{2+6m^2}{(1+m^2)^2}##
##\dfrac{\partial f}{\partial y}\to -1-\dfrac{4m}{(1+m^2)^2}##
Both limits clearly depend on the value of ##m##, so they do not exist. So why does the original method (using the definition of the derivative) yield the values 3 and -1 respectively?
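A quick numeric sketch (my own illustration, not part of the problem) confirms the path dependence of ##\partial f/\partial x## near the origin: along ##y=0## the quotient-rule formula stays at 3, while along ##y=2x## it stays at ##1+26/25=2.04##.

```python
# Path dependence of df/dx = 1 + (2x^4 + 6x^2 y^2) / (x^2 + y^2)^2
# near the origin: approach (0,0) along y = m*x for two slopes m.

def fx(x, y):
    """Partial derivative of f with respect to x, valid for (x, y) != (0, 0)."""
    return 1 + (2*x**4 + 6*x**2*y**2) / (x**2 + y**2)**2

for t in (1e-2, 1e-4, 1e-6):
    print(fx(t, 0.0), fx(t, 2*t))   # along y = 0 vs along y = 2x
# Along y = 0 the values stay at 3.0; along y = 2x they stay at
# 1 + 26/25 = 2.04, so the limit at the origin depends on the path taken.
```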
I also checked the answer to the problem and the answer gives 3 and -1 and quotes "by using the definition of the partial derivatives".

Thanks in advance.
 
The result is counterintuitive, but there is no contradiction. What we observe is that both partial derivatives ##D_xf## and ##D_yf## are functions that are discontinuous at (0,0). That is, they have no limit at that point, but they have a value there. There is no theorem saying that if the derivative of a real-valued function exists everywhere on the domain, that derivative must be continuous.
There is such a theorem for complex-valued functions on complex domains, given certain fairly minor constraints. But there is no equivalent for real-valued functions.

I'm pretty sure there is a theorem saying that, for a real-valued function, IF the limit exists at a point, it must be equal to the value. But that's not applicable here because the limit does not exist.

If you were set this problem in a course, I'd hazard a guess that the intent of the lecturer was to teach that the existence of partial derivatives at a point does not imply 'absolute niceness' of the function at that point. This is one of a number of ways in which that is the case.
 
andrewkirk said:
The result is counterintuitive, but there is no contradiction. [...]
Wow! I expected something like this but it's really counter-intuitive.
So, my question is: what is conceptually the difference between the two ways? I mean, what is the number obtained by the definition of the derivative and what is the limit that I am trying to find by taking the limit of the derivative function?
Also, if we call the partial derivative with respect to x g(x,y), then if I treat it like any other function, shouldn't its limit at (0,0) exist in order for it to have an actual value there? I can't understand how we can obtain a definite value for it at (0,0) through the definition if the second limit does not exist. The only thing I can think of is an analogy with the following 1-D case: https://www.google.com.cy/search?q=...hWHyRoKHf9oDRQQ_AUIBigB#imgrc=RgP0D0_uI8nfKM:
Here, the function has a discontinuous derivative at x=b. By the definition I find the value at x=b, but with the second limit I try to find its value by approaching along the two branches (##x\to b^-## and ##x\to b^+##), and I don't get the same result from each branch. Is this example an analogy of what's happening in my case?
 
Joker93 said:
Is this example an analogy of what's happening in my case?
There are certainly similarities. But it seems more reasonable for that linked function to behave badly at ##b##, because it is discontinuous there. For the case in the OP, the function is continuous at the point in question (the origin), yet it still behaves badly.

Knowing functions that behave nicely in some respects, but badly in others, is an important part of the mathematician's tool kit. They are called 'pathological' functions (as in 'diseased'!). They are very useful as test cases for propositions that you intuitively expect to be true but have not yet proven. If a theorem that sounds like it should be true fails in some cases, those cases are likely to be pathological functions.

In the OP, one feels that there 'ought to be' a theorem that says that if the derivative of a function exists at that point and everywhere in a neighbourhood around it, then the limit of the derivative at that point should be equal to the derivative at that point. But the example you were given is a case that demonstrates that that theorem is not true. It is probably true for analytic functions - functions that can be expressed as power series - which are in many ways the 'nicest' (best behaved) sort of function. But the function in the OP is not analytic.

Here is a 1D version I cooked up for you:
##f(x)=x^2\sin\frac1{x^3}## for ##x\neq 0##, and ##f(0)=0##.

This function is continuous and differentiable everywhere on ##\mathbb R##. Its derivative at zero is ##f'(0)=0##. But ##f'(x)## has no limit as ##x## approaches zero. This function, like the one in the OP, is not analytic.
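A small numeric check of this 1D example (my own sketch, not part of the thread): the difference quotient at zero is ##h\sin(1/h^3)##, bounded by ##|h|##, while the derivative formula away from zero contains an unbounded ##-3\cos(1/x^3)/x^2## term.

```python
import math

def f(x):
    """f(x) = x^2 sin(1/x^3) for x != 0, and f(0) = 0."""
    return x**2 * math.sin(1.0 / x**3) if x != 0 else 0.0

def fprime(x):
    """Derivative for x != 0, by the product and chain rules."""
    return 2*x*math.sin(1.0/x**3) - (3.0/x**2)*math.cos(1.0/x**3)

# Difference quotient at 0 is h*sin(1/h^3), bounded by |h|, so f'(0) = 0:
h = 1e-4
print(abs((f(h) - f(0)) / h))   # at most 1e-4

# But f'(x) is unbounded near 0: at x_n = (2*pi*n)^(-1/3) the cosine term
# equals 1, so |f'(x_n)| grows like 3*(2*pi*n)^(2/3).
x_n = (2 * math.pi * 1000) ** (-1.0/3.0)
print(fprime(x_n))              # large negative value, roughly -1021
```

So the derivative exists at every point, yet oscillates with unbounded amplitude as ##x\to 0##, exactly the behaviour described above.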
 
  • Like
Likes Joker93
