Partial Derivatives Explained: Real-Life Examples and Solutions

AI Thread Summary
A one-dimensional derivative and a partial derivative are related but distinct concepts, primarily differing in their application to higher dimensions. In one dimension, the derivative measures the rate of change along a single direction, while a partial derivative focuses on the rate of change with respect to one variable while holding others constant. The discussion emphasizes that in higher dimensions, the choice of direction significantly impacts the derivative's value, unlike in one dimension where the direction is less critical. Additionally, the continuity of partial derivatives across the entire domain is essential for establishing differentiability, which is more complex than proving continuity. The thread concludes with a query about evaluating limits for piecewise functions, illustrating the nuanced understanding required for differentiability in multivariable calculus.
omgitsroy326
To me a derivative and a partial derivative are the same thing. You just take it with respect to another variable... move some things around and solve...


Can someone give me an example explaining what's happening, and the difference between the two? I can solve these problems and just absorb the method, but is there any real-life example that can be thought of?
 
A one-dimensional derivative and a partial derivative are related, but they aren't the same. To calculate a derivative, you must first choose a direction along which you're going to find the rate of change. In one dimension, you have two choices, the positive direction and the negative direction, and it's really pretty irrelevant which one you choose, since the derivative of one is just the opposite of the other.

But when you get into higher dimensions, you have many choices of direction, and which path you pick is not irrelevant. Your derivative depends on the direction you take, and the distinction among the directions is not just a negative sign. A partial derivative is just a derivative choosing to go along one of the coordinate axes, i.e. along the vector (1,0) or (0,1) for the two dimensional Cartesian case. However, you could have picked a different direction, say going along the line y = x, i.e. (1,1).
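The point above can be checked numerically. Below is a minimal sketch (the surface f(x, y) = x²y is my own example, not from the thread) that estimates the derivative of f along several directions with a central difference, showing that the partials are just the directional derivatives along the coordinate axes, while the diagonal direction (1,1)/√2 gives a different value.

```python
import math

def f(x, y):
    # An arbitrary example surface (my choice, not from the thread): f(x, y) = x^2 * y
    return x**2 * y

def directional_derivative(f, x, y, ux, uy, h=1e-6):
    """Central-difference estimate of the derivative of f at (x, y)
    along the unit direction (ux, uy)."""
    return (f(x + h * ux, y + h * uy) - f(x - h * ux, y - h * uy)) / (2 * h)

x0, y0 = 1.0, 2.0

# Partial derivatives are directional derivatives along the coordinate axes.
df_dx = directional_derivative(f, x0, y0, 1, 0)  # analytically 2*x*y = 4 at (1, 2)
df_dy = directional_derivative(f, x0, y0, 0, 1)  # analytically x^2  = 1 at (1, 2)

# A different direction, along the line y = x, gives a different value.
s = 1 / math.sqrt(2)
df_diag = directional_derivative(f, x0, y0, s, s)  # analytically (4 + 1)/sqrt(2)

print(df_dx, df_dy, df_diag)
```

Note the diagonal value is not obtainable from either partial alone; it is the dot product of the gradient (4, 1) with the unit direction.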

However, in dimensions higher than one, it is the total derivative, rather than the partial derivative, that is most closely related to the one-dimensional derivative.

--J
 
Justin Lazear said:
A one-dimensional derivative and a partial derivative are related, but they aren't the same. To calculate a derivative, you must first choose a direction along which you're going to find the rate of change. In one dimension, you have two choices, the positive direction and the negative direction, and it's really pretty irrelevant which one you choose, since the derivative of one is just the opposite of the other.

I know I'm being picky, but there are gazillions of functions that are not C^{2} on I \subseteq \mathbb{R}.

Daniel.
 
Well, supposedly, if he understands a 1D derivative, he should already understand when it exists.

--J
 
Partial derivatives should not be confused with one-dimensional derivatives.

Rather, a one-dimensional analogue of the partial derivative can, for example, be the limit of the difference quotient along a particular sequence converging to zero.

For example, let x_{n}=\frac{1}{n}
Then, the partial derivative of a function f with respect to this sequence at x_{0}\in\mathbb{R} can be defined as:
\frac{\partial f}{\partial x_{n}}\Big|_{x=x_{0}}=\lim_{n\to\infty}\frac{f(x_{0}+x_{n})-f(x_{0})}{x_{n}}

With this informal definition of a one-dimensional partial derivative, we see that f is differentiable in the normal sense at x_{0} if the one-dimensional partial derivatives at x_{0} with respect to every such sequence exist and are all equal to each other.
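This sequence-wise definition can be illustrated numerically. The sketch below (f(x) = x² and x₀ = 1 are my own choices, not from the thread) computes the difference quotients along x_n = 1/n and along x_n = -1/n; both sequences of quotients approach the same value, f'(1) = 2, as ordinary differentiability requires.

```python
def f(x):
    return x * x  # smooth test function (my choice, not from the thread)

x0 = 1.0

def seq_quotients(f, x0, xs):
    """Difference quotients (f(x0 + x_n) - f(x0)) / x_n along a sequence x_n -> 0."""
    return [(f(x0 + xn) - f(x0)) / xn for xn in xs]

# Two different sequences converging to zero, one from each side.
right = seq_quotients(f, x0, [1 / n for n in range(1, 1000)])
left = seq_quotients(f, x0, [-1 / n for n in range(1, 1000)])

# Both sequence-wise "partial derivatives" approach the same limit, f'(1) = 2.
print(right[-1], left[-1])
```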

Note that in higher dimensions, it is not sufficient that all partial derivatives exist in order to guarantee differentiability (or even, continuity!) of the function at the point.
Good behavior along line segments (i.e., existing partial derivatives) is not enough.
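A standard textbook counterexample (my addition, not posted in the thread) makes this concrete: f(x, y) = xy/(x² + y²) with f(0, 0) = 0 has both partial derivatives at the origin, yet it is not even continuous there, since along the line y = x it approaches 1/2.

```python
def f(x, y):
    # Classic counterexample: both partials exist at the origin,
    # yet f is not even continuous there.
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x**2 + y**2)

h = 1e-6
# Partial derivatives at the origin: both difference quotients are exactly 0,
# because f vanishes identically along both coordinate axes.
fx0 = (f(h, 0) - f(0, 0)) / h
fy0 = (f(0, h) - f(0, 0)) / h
print(fx0, fy0)  # 0.0 0.0

# But along the line y = x the function is constantly 1/2, not f(0,0) = 0.
print(f(1e-9, 1e-9))  # 0.5
```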
 
Sorry to ask in this thread, but I would just like to know what is meant by C^2, and any other variations of it. I ask because in the lecture notes for a subject I'm taking (it has introductory multivariable calculus in it), the term C^1 is used, and I thought it was just some personally adopted notation by the person who prepared the notes. After reading dextercioby's post, C^1 and its variations (if there are any) seem to actually have a generally accepted meaning. If that is the case, can someone please explain to me what C^1, C^2, etc. mean? Just a brief description would be great.

Just one more thing: in my notes there is a little paragraph which says, in particular, "In practice it is much easier to establish that a function is C^1 than to show it is differentiable by definition. To show that a function is C^1 you only need to calculate the partial derivatives and to show that they are continuous, which often can be done by inspection."

I highlighted the points I am unsure of. When it says to show that the partial derivatives are continuous, does it mean the PDs are continuous on the entire domain of the function (not sure if this is the right word to use for functions of two variables), or just at a particular point? The other thing I'm not sure of is how you can show that a function is continuous "by inspection." Does that just mean the required equations that need to be set up can easily be deduced? From my understanding of continuity, the limit at a point needs to exist for a function to be continuous. But to show that the limit exists, ε-δ arguments would be needed, so wouldn't that be more difficult to show?
 
C^1 functions are functions which have continuous first-order partial derivatives, and C^2 functions have continuous second-order partial derivatives. For example, you can differentiate f(x) = \sin(x) as many times as you like and each derivative will remain a continuous function for all values of x. So we say that \sin(x) \in C^{\infty}, that is, \sin(x) is a member of the class of infinitely differentiable functions.

This kind of notation is similar to saying that \textbf{x} = (x,y) \in \mathbb{R}^2 which says that the vector \textbf{x} is a vector in a 2 dimensional vector space.

Proving that a function is differentiable is somewhat different from proving that it is continuous. All you need to do to prove that a function is continuous at some point is to show that

\lim_{x\rightarrow x_0} f(x)

exists and equals f(x_0). Often proving that a limit exists can be an easy task. However, if you tried to prove that a function is differentiable, then you'd have to prove that

\lim_{h\rightarrow 0}\frac{f(a+h)-f(a)}{h}

exists. Further, sometimes we already know that a function is continuous. When this happens we say that the function is continuous by inspection. This generally occurs when we want to prove that, say, f(x) = x is continuous: the continuity of this function is widely accepted and the proof may be omitted. However, you can't say that f(x) = \sin(1/x) is continuous by inspection over the interval [-1,1], because (1) it is not even defined at 0, let alone continuous there, and (2) the function isn't familiar enough to judge by inspection.
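To see why sin(1/x) cannot be extended continuously to 0, one can sample it ever closer to the origin; this small sketch (my addition, not from the thread) picks points x = 1/(π(n + 1/2)), where sin(1/x) alternates between -1 and 1, so no limit at 0 can exist.

```python
import math

# Sample sin(1/x) at x = 1/(pi*(n + 0.5)) for increasing n; these points
# approach 0, yet sin(1/x) = sin(pi*(n + 0.5)) alternates between -1 and 1.
xs = [1 / (math.pi * (n + 0.5)) for n in range(1, 6)]
vals = [math.sin(1 / x) for x in xs]
print(vals)  # alternates in sign, never settling toward a single limit
```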

When you say that the partial derivatives are continuous, it generally means for the entire domain of the function - you are right. For instance,

\frac{\partial}{\partial x}\sin x

is continuous on its domain. To be specific, its domain is the real line: -\infty < x < \infty or the interval (-\infty,\infty). But if a function has continuous partial derivatives, then it must be continuous at every point on its domain.

Sometimes you encounter weaker conditions, such as "continuous everywhere except at finitely many points". Such functions are called piecewise continuous. They are not continuous in general, but they are still well-behaved enough to be covered by results such as Fourier's convergence theorem, which applies to both continuous and piecewise continuous functions.
 
Thanks for the thorough explanation Oxymoron.

Can I ask just one more thing? With functions of two variables, if we want to show that they are differentiable, we apply the definition (there seem to be slight variations depending on the source of the definition) and take the limit. How would you, for example, take the limit for a piecewise function such as f(x,y) = 1 if x = y = 0 and zero otherwise, as (x,y) \to (0,0)? I mean, to prove that it's not differentiable at (x,y) = (0,0), all I need to do is show that the limit as (x,y) \to (0,0) is not equal to f(0,0) = 1. I can see why the limit won't be equal to f(0,0) = 1, but I'm unsure about how to set up the limit correctly.

\lim_{\substack{(x,y)\to(0,0)\\ y=x}} f(x,y) = 0 \neq f(0,0) = 1

To show that the limit is not equal to f(0,0), all I need to show is that the limit along a single line is not equal to f(0,0), right? So is what I just did OK? At first I thought about simply writing the limit (as above) but without the y = x condition.
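The approach along y = x can also be checked numerically. This minimal sketch (my addition) evaluates the piecewise function from the question at points (t, t) with t shrinking toward 0: every sampled value is 0, consistent with the limit along that line being 0, while f(0,0) = 1.

```python
def f(x, y):
    # The piecewise function from the question: 1 at the origin, 0 elsewhere.
    return 1.0 if x == 0 and y == 0 else 0.0

# Approach (0, 0) along the line y = x with t -> 0, t != 0.
values = [f(t, t) for t in (0.1, 0.01, 0.001, 1e-6)]
print(values)  # every value is 0.0, so the limit along y = x is 0
print(f(0, 0))  # 1.0, so the limit does not equal f(0,0): f is discontinuous there
```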
 