Partial derivative of a function of dependent variables

In summary: a partial derivative is defined for a function, and a function's arguments are always treated as independent. A relation like y = x does not change the function z(x, y) = x + y; instead it defines a new one-variable function w(x) = z(x, x) = 2x, whose ordinary derivative is 2.
  • #1
ato
I am having a hard time understanding the partial derivative for a function of dependent variables.

for example let's consider
$$z=x+y$$
so by the usual steps that are mentioned on e.g. Wikipedia etc.,

$$\frac{\partial}{\partial x}z=1$$

but what if it's also true that $$y=x$$ (or, in other words, why do the steps not take into account that the input variables might depend on each other)? Then

$$\frac{\partial}{\partial x}z=2$$

so the question is: how is the partial derivative defined for a function of dependent variables?
 
  • #2
On what do x and y depend?

If y = x, then they are the same variable, no?


I believe it is customary to have a function depending on an independent variable, e.g., a set of coordinates and/or time.
 
  • #3
Astronuc said:
I believe it is customary to have a function depending on an independent variable, e.g., a set of coordinates and/or time.

Astronuc said:
On what do x and y depend

How can both x and y be independent variables if both depend on some other variable? Would not the value of x (for example) imply the value of y?

If x and y are truly independent, would not

$$\frac{\partial f(x,y)}{\partial x}=\frac{df(x,y)}{dx}$$

Astronuc said:
If y = x, then they are the same variable, no?

Does it matter?

thank you
 
  • #4
Still waiting for a reply, guys!

So, I am going to bump the thread, just this once.
 
  • #5
ato said:
how can both x and y be independent variables if both depend on some other variable ? would not the value of x (for example) imply the value of y ?

if x and y are truly independent would not

$$\frac{\partial f(x,y)}{\partial x}=\frac{df(x,y)}{dx}$$



does it matter ?

thank you
Left-hand side: "The value at (x,y), of the partial derivative of f with respect to the first variable".

Right-hand side: "The value at x, of the derivative of the function ##t\mapsto f(t,y)##".

Yes, those two are the same number, regardless of the values of x and y.
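A quick way to see this concretely is to pick a sample f and compute both sides with a computer algebra system. A sketch using SymPy (the particular f below is just an arbitrary illustration, not anything from the thread):

Python:
import sympy as sp

x, y, t = sp.symbols('x y t')
f = x**2 * y + sp.sin(x)                   # an arbitrary sample function of two variables

lhs = sp.diff(f, x)                        # partial derivative w.r.t. the first variable, at (x, y)
rhs = sp.diff(f.subs(x, t), t).subs(t, x)  # derivative of t ↦ f(t, y), evaluated at x
print(sp.simplify(lhs - rhs))              # prints 0: the two sides agree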
 
  • #6
ato said:
I am having a hard time understanding the partial derivative for a function of dependent variables.

for example let's consider
$$z=x+y$$
so by the usual steps that are mentioned on e.g. Wikipedia etc.,

$$\frac{\partial}{\partial x}z=1$$

but what if it's also true that $$y=x$$ (or, in other words, why do the steps not take into account that the input variables might depend on each other)? Then

$$\frac{\partial}{\partial x}z=2$$

so the question is: how is the partial derivative defined for a function of dependent variables?
You don't take partial derivatives of variables. You take partial derivatives of functions. If you don't know what the function is, you can't compute the partial derivative. As you have already discovered, there's nothing in the notation
$$\frac{\partial }{\partial x}z$$ that informs us what function we are to take a partial derivative of. What you are describing is the convention to use this notation even though it's ambiguous, and to specify what function we're dealing with in a separate statement. To say that z=x+y is to suggest that the function is the f such that f(s,t)=s+t for all s,t, and that we're supposed to take the partial derivative with respect to the first variable. To supplement this by saying that y=x, is to suggest that the function is the g such that g(t)=f(t,t)=2t for all t.

I'm using the word "suggest" because these specifications of the functions are still ambiguous. For example, in the first case, the function could be the h defined by h(r,s,t)=s+t for all r,s,t. Then the partial derivative would be with respect to the second variable.
 
Last edited:
  • #7
Fredrik said:
[...] $$\frac{\partial f(x,y)}{\partial x}=\frac{df(x,y)}{dx}$$

Right-hand side: "The value at x, of the derivative of the function ##t\mapsto f(t,y)##".

[...]

Also, it's important to say that in this case x is the variable and y is a parameter. The y then defines a family of functions of one variable, x. For example:

f(x,y=2) = 2x.

f(x,y=4) = 4x.
 
  • #8
dextercioby said:
Also, it's important to say that in this case x is the variable and y is a parameter.
I did, by using the words "the function ##t\mapsto f(t,y)##" instead of e.g. "the function ##(s,t)\mapsto f(s,t)##" or "some function whose value at (t,y) is f(t,y)".

dextercioby said:
The y then defines a family of functions of one variable, x. For example:

f(x,y=2) = 2x.

f(x,y=4) = 4x.
I realize that this is really nitpicky, but I prefer to avoid saying things like "of one variable, x", because f is the same function no matter what symbols we use to represent members of its domain.

I'm also not sure what you're saying here. At first I thought you meant this: We can use the function ##f:\mathbb R^2\to\mathbb R## to define a function ##f_y:\mathbb R\to\mathbb R## for each ##y\in\mathbb R## by ##f_y(x)=f(x,y)## for all ##x\in\mathbb R##. For all ##x\in\mathbb R##, we have
\begin{align}
f_2(x)=x+2\\
f_4(x)=x+4.
\end{align} But you wrote 2x and 4x.
 
  • #9
The problem you're having here is that physics (and traditional calculus notation) tend to conflate variables and functions.

The function here is z. We write z(x, y) = x + y. It is a function of two parameters.

The nice thing about functions is that their parameters are *always* independent. We don't even need to use the word "independent". The choice of the parameters is always at the discretion of the person using the function.

Being a smooth function of two parameters, z has two partial derivatives: z_1(x, y) = 1 and z_2(x, y) = 1.

Notational aside: I'm using an alternate notation here. The Leibniz notation (the one you used, and the standard one in calculus) mentions the parameter name instead of its position. However, one valid "move" in math is renaming of bound variables. Even though I wrote z(x, y) = x + y, I could just as easily say z(p, q) = p + q. They denote the exact same function, however, once I write $$\frac{\partial}{\partial x}z$$, I have "locked myself into" a particular choice of variable, and that valid move of mine suddenly stops working. Leibniz notation is convenient for back-of-the-napkin calculations, but it's kinda gross.

Now your question is: what if ##x = y##? Well, that changes things a little bit. As I said above, function parameters are always independent. In order to capture their dependence on each other I need... another function!

Let w(x) = z(x, x).

Now we have a new function w. It is a function of only ONE parameter. Computing w(x) gives us z(x, y) under the restriction that x = y.

Since w is a function of one variable, it doesn't have partial derivatives. It just has the regular 1-variable derivative from Calc I.

My (somewhat controversial) advice is, whenever you can, try to formulate your problems in terms of functions instead of just equations. It might end up looking a bit different from what's in your textbook, and it might be a little bit longer to write out by hand, but it's the correct formalization. The standard notation becomes a useful shorthand later once you understand the underlying concept.
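To make the z-versus-w distinction above concrete, here is a small sketch with SymPy (assuming SymPy is available; the names z and w are just the ones used in this post):

Python:
import sympy as sp

x, y = sp.symbols('x y')

z = x + y                      # z(x, y) = x + y, two independent parameters
print(sp.diff(z, x))           # 1  -- the partial derivative treats y as independent of x

w = z.subs(y, x)               # w(x) = z(x, x) = 2x, with the restriction y = x baked in
print(sp.diff(w, x))           # 2  -- the ordinary derivative of the one-parameter function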
 
  • #10
Tac-Tics said:
My (somewhat controversial) advice is, whenever you can, try to formulate your problems in terms of functions instead of just equations.
Seconded.
 
  • #11
Tac-Tics said:
The problem you're having here is that physics (and traditional calculus notation) tend to conflate variables and functions.

The function here is z. We write z(x, y) = x + y. It is a function of two parameters.

The nice thing about functions is that their parameters are *always* independent. We don't even need to use the word "independent". The choice of the parameters is always at the discretion of the person using the function.

Being a smooth function of two parameters, z has two partial derivatives: z_1(x, y) = 1 and z_2(x, y) = 1.

Notational aside: I'm using an alternate notation here. The Leibniz notation (the one you used, and the standard one in calculus) mentions the parameter name instead of its position. However, one valid "move" in math is renaming of bound variables. Even though I wrote z(x, y) = x + y, I could just as easily say z(p, q) = p + q. They denote the exact same function, however, once I write $$\frac{\partial}{\partial x}z$$, I have "locked myself into" a particular choice of variable, and that valid move of mine suddenly stops working. Leibniz notation is convenient for back-of-the-napkin calculations, but it's kinda gross.

Now your question is: what if ##x = y##? Well, that changes things a little bit. As I said above, function parameters are always independent. In order to capture their dependence on each other I need... another function!

Let w(x) = z(x, x).

Now we have a new function w. It is a function of only ONE parameter. Computing w(x) gives us z(x, y) under the restriction that x = y.

Since w is a function of one variable, it doesn't have partial derivatives. It just has the regular 1-variable derivative from Calc I.

Let me add "parameter" to the list of confusing concepts.

Assuming that for a function all parameters are independent,
then you can't say
"z(x, y) = x + y. It is a function of two parameters."
because we don't know if x, y are independent of each other. In fact I added x = y, so it's false that x and y are parameters. Maybe x or y is a parameter, but both can't be. So for a general function f(x1,x2,x3...), before saying xi,xj,xk... are parameters of f, first it has to be proved that they are independent variables.

And if you are saying a function contains only those variables that are independent of each other, then z(x,y) is undefined for x=y.

So I don't understand why it's false that
$$\frac{\partial}{\partial t}f(x_{i},x_{j},x_{k},...) = \frac{\partial}{\partial t}f(x_{i})=\frac{d}{dt}f(x_{i})$$

For example, the total derivative does not assume any relation between x1,x2,x3...:

$$\frac{d}{dx}z(x,y)=\frac{d}{dx}(x+y)=1+\frac{dy}{dx}$$
so if x and y are independent, z' = 1, and if x and y are dependent, then z' != 1 and y' != 0

Tac-Tics said:
My (somewhat controversial) advice is, whenever you can, try to formulate your problems in terms of functions instead of just equations.

How do you mean?

Fredrik said:
You don't take partial derivatives of variables. You take partial derivatives of functions.

I don't see the difference.

As Tac-Tics suggested, I AM confused about functions and variables. First let me give all the relevant "I think" points, so you can tell me where I am wrong:

1. in an equation, if one side is a variable then the other side is also a variable. e.g.,
$$\left(z=x+y\; and\; z\: is\: variable\right)\Rightarrow x+y\: is\: variable$$
$$\left(x+y=f(x,y)\; and\; x+y\: is\: variable\right)\Rightarrow f(x,y)\: is\: variable$$

2. f(x1,x2,x3...) is a short notation which says f is equal to some expression involving x1,x2,x3... . e.g.,
$$x+y=f(x,y)=f(x)=f(y)$$
more importantly, f(x1,x2,x3...) does not assume no relation between x1,x2,x3... . more than one function notation can be formed out of a single expression.

thank you
 
  • #12
ato said:
let me add "parameter" to the list of confusing concepts.

assuming that for a function all parameters are independent,
then you can't say

because we don't know if x, y are independent of each other.

Sorry if I confused you! I will try to elaborate a little bit.

Again, suppose f(x, y) = x + y. And let's take the partial derivative with respect to x on both sides.

The ultimate problem is that "taking the derivative of an equation" (partial or otherwise) doesn't actually make any sense. Derivatives (again, partial or not) are something you do to a function, NOT to one or both sides of an equation.

This is stated clearly in the early chapters of every calculus textbook:

Let f be a function from R -> R.
Its derivative is defined as f'(x) = lim h→0 of (f(x + h) - f(x)) / h, or

Let f be a function from RxR -> R.
Its first partial derivative is defined as f_0(x, y) = lim h→0 of (f(x+h, y) - f(x, y)) / h, and
its second partial derivative is f_1(x, y) = lim h→0 of (f(x, y+h) - f(x, y)) / h.

Both of these definitions assume we start with a function and we get another function for free (and we call it its derivative).
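As a rough numerical illustration of those limit definitions (a sketch only; the step size and sample point below are arbitrary choices):

Python:
# Approximate the two partial derivatives of f(x, y) = x + y at (2, 3)
# with a small finite difference, mimicking the limit definitions above.
def f(x, y):
    return x + y

h = 1e-6
x0, y0 = 2.0, 3.0
d1 = (f(x0 + h, y0) - f(x0, y0)) / h   # ≈ partial derivative in the first slot
d2 = (f(x0, y0 + h) - f(x0, y0)) / h   # ≈ partial derivative in the second slot
print(d1, d2)                          # both come out ≈ 1.0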

What is a function? It's a rule for assigning an output to each input. In our f example, our output is x + y, and our input is a pair (x, y).

Standard mathematical notation for defining the function is as an equation f(x, y) = x + y. But what that means is "f applied to the arguments x and y equals x + y". It tells us what f(x, y) equals... but what does f itself equal?

Borrowing some notation, we might write f = (x, y) ↦ x + y. The meaning here is that f is a function that maps pairs of numbers to their sum. This notation makes the input and output of our function f explicit: everything to the left of the ↦ is input, while everything to the right is output.

So back to derivatives (including partial derivatives). We know in order to take a derivative, we need a function. Where is our function in the equation f(x, y) = x + y? What are we even taking the derivative of?

This is where textbooks and teachers will quickly start abusing notation. (And most of the time, they won't even realize they're doing it). As long as you are lucky, this abuse is harmless, and "taking the derivative of an equation" works out just the same.

However, when you start throwing in questions like "what if x = y?", you run into trouble.

The best bet is to always try to be as explicit as possible about what functions you are differentiating, and keep it in the back of your head that you cannot "take derivatives of both sides of an equation".






 
  • #13
ato said:
Fredrik said:
You don't take partial derivatives of variables. You take partial derivatives of functions.
I don't see the difference.
A variable is a symbol that's used to represent a member of some set. A constant is a variable that always represents the same thing. A function should be thought of as a "rule" that associates exactly one member of a set (called the codomain) with each member of a set (called the domain). (This is not a definition of "function". It's just an explanation of the concept. The actual definition is a bit tricky and not very relevant here. If you're curious, see this post. Definition 2 is probably the most useful one).

Most functions can be defined by specifying a relationship between variables. For example, the specification x-y=1 implicitly defines two functions ##f,g:\mathbb R\to\mathbb R## that can also be defined by f(t)=1+t for all ##t\in\mathbb R##, and g(t)=1-t for all ##t\in\mathbb R##. (Note that it never matters what variable is used in a "for all" statement).

These functions are differentiable. For all ##t\in\mathbb R##, we have f'(t)=1 and ##g'(t)=-1##. Note that this makes f' and g' constant functions, not to be confused with constants as defined above.

The notation dy/dx doesn't refer to a derivative of y. It can't, because y isn't a function. The y in the numerator and the x in the denominator lets us know that we're supposed to compute the derivative of the function that "takes x to y" and then plug x into the result (which is another function) to get the final result. If x-y=1, then the function that "takes x to y" is the one I called f.

There's no such thing as a partial derivative of a variable. Typically, a math book will say something like this: Let n be a positive integer. Let E be a subset of ##\mathbb R^n##. Let x be an interior point of ##E##. Let ##\{e_1,e_2,\dots,e_n\}## be the standard basis for ##\mathbb R^n##. Let ##k\in\{1,2,\dots,n\}## be arbitrary. If there's a number A such that the limit
$$\lim_{t\to 0}\frac{f(x+te_k)-f(x)}{t}$$
exists, then this number is called the kth partial derivative of f at x, or the "partial derivative of f at x, with respect to the kth variable". There are many different notations for it, for example ##D_kf(x)##, ##\partial_kf(x)##, ##\partial f(x)/\partial x_k##, ##f^{(k)}(x)## and ##f_{,\,k}(x)##.

Now, if n=2, we will usually write f(x,y) instead of f(x) (with ##x\in\mathbb R^2##) or ##f(x_1,x_2)##. Because x is traditionally put into the first variable "slot" of f, the notation ##\partial f(x,y)/\partial x## is an alternative to the five notations mentioned above (with k=1 and (x,y) replacing x).

ato said:
assuming that for a function all parameters are independent,
then you can't say

z(x,y)=x+y

because we don't know if x,y are independent of each other.
The thing you said that he can't say is "z(x, y) = x + y. It is a function of two parameters." Tac-tics was talking about the ##z:\mathbb R^2\to\mathbb R## defined by ##z(x,y)=x+y## for all ##x,y\in\mathbb R##. It never matters what variables are used in a "for all" statement, so we could have said "...defined by ##z(s,t)=s+t## for all ##s,t\in\mathbb R##." The definition of z doesn't in any way depend on some variables x and y, so there's nothing you can say about x and y that will change the function z.

On the other hand, if you say that x,y,z are variables representing real numbers, and then say that z=x+y, then this (i.e. the string of text "z=x+y") is a constraint that prevents us from assigning arbitrary values to all three variables, and also ensures that if we assign values to any two of them, the value of the third is fixed. This means that the constraint implicitly defines at least three functions, and we can compute the partial derivatives of those.

If you also specify that x=y, then this further reduces our ability to assign values to the three variables. Now if we assign a value to any one of them, the values of the other two are fixed. So the pair of constraints (z=x+y, x=y) defines at least six functions implicitly, and we can compute the derivatives of those.
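For instance, with the constraints from post #1 (z=x+y together with x=y) we have z=2x, so three of the implicitly defined functions are
$$x\mapsto 2x,\qquad z\mapsto \frac{z}{2},\qquad x\mapsto x,$$
(taking x to z, z to x, and x to y respectively), and the derivative of the first one is the 2 that appeared in post #1.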

ato said:
so I don't understand why it's false that
$$\frac{\partial}{\partial t}f(x_{i},x_{j},x_{k},...) = \frac{\partial}{\partial t}f(x_{i})=\frac{d}{dt}f(x_{i})$$
The partial derivative notation doesn't really make sense here. Since t isn't one of the variables in the "slots" of f, there's no way to interpret ##\partial f/\partial t## as ##D_kf## for some k.

ato said:
as Tac-Tics suggested, i AM confused about function and variable.
That's OK. A lot of people are. I think the reason is that math books and math teachers don't explain the stuff that I said at the start of this post.

ato said:
1. in an equation, if one side is a variable then the other side is also a variable.
I wouldn't say that. ##x^2=\pi## is an equation. Here x is a variable because it represents a real number, and ##\pi## is a constant because it represents a real number and it represents that same real number when it appears in other equations.

ato said:
2. f(x1,x2,x3...) is a short notation which says f is equal to some expression involving x1,x2,x3... . eg,
$$x+y=f(x,y)=f(x)=f(y)$$
more importantly, f(x1,x2,x3...) does not assume no relation between x1,x2,x3... . more than one function notation can be formed out of a single expression.
f is a function. (x1,x2,x3...) is a member of the domain of f. f(x1,x2,x3...) is a member of the codomain of f, called "the value of f at (x1,x2,x3...)".

For example, if ##f:\mathbb R^2\to\mathbb R## is defined by ##f(x,y)=5x+y^2## for all ##x,y\in\mathbb R##, then f is a function, x and y are real numbers, (x,y) is an ordered pair of real numbers, i.e. a member of ##\mathbb R^2##, f(x,y) is a real number (but we only know which one if we know the values of x and y), f(7,3) is a real number (and we know that it's 44).

It is never OK to write f(x,y)=f(x), because the left-hand side only makes sense if the domain of f is a subset of ##\mathbb R^2## and the right-hand side only makes sense if the domain of f is a subset of ##\mathbb R##.

Note that the definition of a function is always a "for all" statement, even if the words "for all" have been omitted. For example, the phrase "the function ##x^2##" is, strictly speaking, nonsense. The correct way to say it is "the function ##f:\mathbb R\to\mathbb R## defined by ##f(x)=x^2## for all ##x\in\mathbb R##". Even the phrase "the function ##f(x)=x^2##" is flawed in four(!) different ways: 1. Neither of the strings of text "f(x)=x^2" or "f(x)" represents a function. The function is denoted by f. 2. There's no specification of the domain. 3. There's no specification of the codomain. 4. The absence of the words "for all" hides the fact that x is a dummy variable (one that can be replaced by any other symbol without changing the meaning of the statement).
 
Last edited:
  • #14
ato said:
for example let's consider
$$z=x+y$$
so by the usual steps that are mentioned on e.g. Wikipedia etc.,

$$\frac{\partial}{\partial x}z=1$$

but what if it's also true that $$y=x$$ (or, in other words, why do the steps not take into account that the input variables might depend on each other)? Then

$$\frac{\partial}{\partial x}z=2$$

so the question is: how is the partial derivative defined for a function of dependent variables?

$$\frac{\partial z}{\partial x} = 1 + \frac{\partial y}{\partial x}$$

If y is independent of x then ##\frac{\partial y}{\partial x} = 0##.
 
  • #15
Fredrik said:
A variable is a symbol that's used to represent a member of some set. A constant is a variable that always represents the same thing. A function should be thought of as a "rule" that associates exactly one member of a set (called the codomain) with each member of a set (called the domain). (This is not a definition of "function". It's just an explanation of the concept. The actual definition is a bit tricky and not very relevant here. If you're curious, see this post. Definition 2 is probably the most useful one).

Most functions can be defined by specifying a relationship between variables. For example, the specification x-y=1 implicitly defines two functions ##f,g:\mathbb R\to\mathbb R## that can also be defined by f(t)=1+t for all ##t\in\mathbb R##, and g(t)=1-t for all ##t\in\mathbb R##. (Note that it never matters what variable is used in a "for all" statement).

These functions are differentiable. For all ##t\in\mathbb R##, we have f'(t)=1 and ##g'(t)=-1##. Note that this makes f' and g' constant functions, not to be confused with constants as defined above.

The notation dy/dx doesn't refer to a derivative of y. It can't, because y isn't a function. The y in the numerator and the x in the denominator lets us know that we're supposed to compute the derivative of the function that "takes x to y" and then plug x into the result (which is another function) to get the final result. If x-y=1, then the function that "takes x to y" is the one I called f.

Reading the definition (given in the link), I admit the following things:
1. a function is a set, where each element is an ordered pair composed of two elements taken from two sets. The vice versa may or may not be true. (f is still a variable though.)
Fredrik said:
ato said:
1. in an equation, if a side is variable then the other side is also a variable.
I wouldn't say that. ##x^2=\pi## is an equation. Here x is a variable because it represents a real number, and ##\pi## is a constant because it represents a real number and it represents that same real number when it appears in other equations.
but you also say
Fredrik said:
A constant is a variable that always represents the same thing
is not that contradictory?
Anyway, if it is assumed that a constant is not a variable, then my 1st point is wrong (only for constants, but still wrong). But I don't assume that. I don't have any reason to. Please mention if there are any.

The reason I mentioned it here is that, even though f is a set, why is it false that
2. ##f(x_{1},x_{2},...)## is a non-set variable (I don't know what else to call it, please tell me if there is a standard term).
When ##f(x_{1},x_{2},...)=y##, ##y## is a non-set variable, which implies that ##f(x_{1},x_{2},...)## is a non-set variable too.

3. It seems you and Tac-Tics are saying the correct total derivative notation is ##\frac{d}{dx}f##, not ##\frac{d}{dx}f(x)##. (First let me add it to the set of bizarre notations that I don't understand: ##\left\{ \frac{df}{dx},dx,df,...\right\}##.) Luckily I don't have to understand, because I know ##\frac{df}{dx}=\lim_{h\rightarrow0}\frac{f(x+h)-f(x)}{h}##, so whenever I need to deal with the lhs I just calculate the rhs.

Fredrik said:
There's no such thing as a partial derivative of a variable. Typically, a math book will say something like this: Let n be a positive integer. Let E be a subset of ##\mathbb R^n##. Let x be an interior point of ##E##. Let ##\{e_1,e_2,\dots,e_n\}## be the standard basis for ##\mathbb R^n##. Let ##k\in\{1,2,\dots,n\}## be arbitrary. If there's a number A such that the limit
$$\lim_{t\to 0}\frac{f(x+te_k)-f(x)}{t}$$
exists, then this number is called the kth partial derivative of f at x, or the "partial derivative of f at x, with respect to the kth variable". There are many different notations for it, for example ##D_kf(x)##, ##\partial_kf(x)##, ##\partial f(x)/\partial x_k##, ##f^{(k)}(x)## and ##f_{,\,k}(x)##.

Now, if n=2, we will usually write f(x,y) instead of f(x) (with ##x\in\mathbb R^2##) or ##f(x_1,x_2)##. Because x is traditionally put into the first variable "slot" of f, the notation ##\partial f(x,y)/\partial x## is an alternative to the five notations mentioned above (with k=1 and (x,y) replacing x).


Let's assume ##f:A\mapsto R##, ##A\subset R^{n}##, so it can be written that ##f(x_{1},x_{2},...,x_{n})=y##, but later it is found that ##A=\{(x_{1},x_{1}+1,x_{1}+2,...,x_{1}+n-1):x_{1}\in R\}##. So for each of the following implications, mention whether it is correct/incorrect and why:
##f(x_{1},x_{2},...,x_{n})=f(x_{1},x_{1}+1,...,x_{1}+n-1)##
##f(x_{1},x_{2},...,x_{n})=f(x_{1})##
##f(x_{1},x_{2},...,x_{n})=g(x_{1})##
##A=\{(x_{1},x_{1}+1,x_{1}+2,...,x_{1}+n-1):x_{1}\in R\}## is a contradictory statement

does ##f:A\mapsto R##, ##A\subset R^{n}## and ##f(x_{1},x_{2},...,x_{n})=y\Rightarrow(x_{1},x_{2},...,x_{n})## are independent.

My other question is how to make a function out of an expression? In other words, how to make a function (a graph, a table or a mapping, in your words) depicting the changes between the expression and the variables involved in that expression? Assuming all variables belong to the real set, will the domain be ##R^{n}## if n is the total number of variables involved? What if the constants are assumed to be variables? Is it necessary to express the same expression in terms of independent variables only? If not, would not an expression lead to more than one function mapping? Would not it give different partial derivatives of functions of the expression? Is it OK to give different results? If yes, why is it OK?

lavinia said:
$$\frac{\partial z}{\partial x} = 1 + \frac{\partial y}{\partial x}$$

If y is independent of x then ##\frac{\partial y}{\partial x} = 0##.

Could you generalize it for an arbitrary function of an arbitrary number of variables?

-----

It's really annoying when oversimplification leads to more doubts.

thank you
 
  • #16
ato said:
but you also say

is not that contradictory?
I don't see the contradiction. Those statements of mine that you quoted seem consistent to me.

I should probably have mentioned that those definitions of "constant" and "variable" are my own. I haven't seen those terms defined in textbooks.

ato said:
the reason I mentioned it here is that, even though f is a set, why is it false that
2. ##f(x_{1},x_{2},...)## is a non-set variable (I don't know what else to call it, please tell me if there is a standard term).
when ##f(x_{1},x_{2},...)=y##, ##y## is a non-set variable, which implies that ##f(x_{1},x_{2},...)## is a non-set variable too.
I'm not sure I understand what you're asking. If you're asking if e.g. the string of text "f(x,y)" represents a set, then the answer is yes, because we're working within the branch of mathematics defined by ZFC set theory, and in ZFC set theory, the members of sets are themselves sets. However, if e.g. S={1,2,3} then we prefer to call the members of S "numbers" instead of "sets". Similarly, if f:X→Y (i.e. if f is a function with domain X and codomain Y), and x is a member of X, then we prefer to describe f(x) as a member of Y instead of as a "set".

ato said:
3. It seems you and Tac-Tics are saying the correct total derivative notation is ##\frac{d}{dx}f##, not ##\frac{d}{dx}f(x)##.
The derivative of f is another function, which I would denote by f' or maybe Df (but not ##\frac{d}{dx}f##). The value of that function at x is then denoted by f'(x), Df(x) or ##\frac{d}{dx}f(x)##.

ato said:
(First let me add it to the set of bizarre notations that I don't understand: ##\left\{ \frac{df}{dx},dx,df,...\right\}##.) Luckily I don't have to understand, because I know ##\frac{df}{dx}=\lim_{h\rightarrow0}\frac{f(x+h)-f(x)}{h}##, so whenever I need to deal with the lhs I just calculate the rhs.
##\frac{df}{dx}## is just a sloppy way of writing ##\frac{df(x)}{dx}##, which means the same thing as ##\frac{d}{dx}f(x)##.

If ##f:\mathbb R\to\mathbb R##, then we can define a function ##df:\mathbb R^2\to\mathbb R## by ##df(x,h)=f'(x)h## for all ##x,h\in\mathbb R##. If we then use the notation "dx" for the number h, we have ##df(x,dx)=f'(x)dx##, and if we recklessly write the left-hand side as "df", we have df=f'(x)dx.

Note that this does not provide any justification for tricks like
$$\frac{dz}{dx}=\frac{dz}{dy}\frac{dy}{dx}.$$ This is the chain rule, which I prefer to write as ##(f\circ g)'(x)=f'(g(x))g'(x)##. It has a relatively complicated proof.
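If it helps, the chain rule in that function form is easy to spot-check with SymPy (a sketch; the outer sin and inner x² here are just sample choices, not anything from the thread):

Python:
import sympy as sp

x = sp.symbols('x')
g = x**2                         # a sample inner function
composed = sp.sin(g)             # f∘g with f = sin

lhs = sp.diff(composed, x)       # (f∘g)'(x)
rhs = sp.cos(g) * sp.diff(g, x)  # f'(g(x)) * g'(x)
print(sp.simplify(lhs - rhs))    # prints 0: both sides agree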

ato said:
let's assume ##f:A\mapsto R##, ##A\subset R^{n}## so it can be written that ##f(x_{1},x_{2},...,x_{n})=y## but later it is found that ##A=\{(x_{1},x_{1}+1,x_{1}+2,...,x_{1}+n-1):x_{1}\in R\}##
You have defined A as a straight line in ##\mathbb R^n##. Every point x on that line has the problem that no open ball* around x is a subset of A. This prevents us from making sense of the limits in the definitions of the partial derivatives. So this function doesn't have any partial derivatives.

*) The open ball of radius r around x is defined as the set ##B_r(x)=\{y\in\mathbb R^n:|y-x|<r\}##.

By the way, the \mapsto arrow is supposed to be used to specify what happens to the members of the domain. Example: The function ##f:\mathbb R\to\mathbb R## defined by ##f(x)=x^2## for all ##x\in\mathbb R## shouldn't be written as just ##x^2##, but it can be written as ##x\mapsto x^2##. Some people like to state the definition of this f like this: ##f:\mathbb R\to\mathbb R:x\mapsto x^2##.
ato said:
so for each of the following implications, mention whether it is correct/incorrect and why:
##f(x_{1},x_{2},...,x_{n})=f(x_{1},x_{1}+1,...,x_{1}+n-1)##
You haven't really defined ##x_2,\dots,x_n##, but if you define them by ##x_2=x_1+1## and so on, then this is correct.

ato said:
##f(x_{1},x_{2},...,x_{n})=f(x_{1})##
Again, you haven't defined ##x_2,\dots,x_n##. Assuming the same definition as before, then this is wrong. The right-hand side doesn't make sense. f is defined to take an n-tuple of real numbers as input, not a single real number.

ato said:
##A=\{(x_{1},x_{1}+1,x_{1}+2,...,x_{1}+n-1):x_{1}\in R\}## is a contradictory statement
Nothing contradictory here. All you had said before is that ##A\subset\mathbb R^n##, and here you define A as a set whose members are n-tuples of real numbers.

ato said:
does ##f:A\mapsto R##, ##A\subset R^{n}## and ##f(x_{1},x_{2},...,x_{n})=y\Rightarrow(x_{1},x_{2},...,x_{n})## are independent.
The statement after the "and" is stated in a strange way (and you should use the \to arrow instead of the \mapsto arrow), but if you mean what I think you mean, then the answer is "sort of yes". You never have to worry about a possible relationship between the variables when you're asked to compute a partial derivative of a given function. However, if you're asked to compute a partial derivative of a function that is defined implicitly by a relationship between variables, then things are of course very different.

ato said:
my other question is how to make a function out of an expression?
In the simple cases, the answer is almost obvious. For example, if the constraint is x-y=1, then you use the fact that this equality is equivalent to both x=1+y and y=x-1. The former tells you very clearly that for each value of y, there's exactly one possible value of x. So you define a function, conveniently denoted by x, by ##x:\mathbb R\to\mathbb R:t\mapsto 1+t##. Then for all ##t\in\mathbb R##, we have x(t)=1+t. We also have x'(t)=1 for all t, and x'(y)=1. This last equality is (somewhat sloppily) written as ##\frac{\partial x}{\partial y}=1##.

In the complicated cases, the answer is very non-trivial. You need a pretty hard theorem called the implicit function theorem.
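In the simple case above, a computer algebra system will do the solving step for you. A sketch with SymPy, just mirroring the x-y=1 example:

Python:
import sympy as sp

x, y = sp.symbols('x y')
constraint = sp.Eq(x - y, 1)

x_of_y = sp.solve(constraint, x)[0]   # the function "x in terms of y": y + 1
print(x_of_y, sp.diff(x_of_y, y))     # y + 1, and its derivative 1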

ato said:
assuming all variables belong to the real set, will the domain be ##R^{n}## if n is the total number of variables involved?
It's typically a subset of ##\mathbb R^{n-1}##, i.e. the dimension is one less than the total number of variables (e.g. if x+y+z=1, you can solve for any of these variables, and use that to define a function from ##\mathbb R^2## into ##\mathbb R##), but you will have to think this through for each problem you encounter. It can be tricky to figure out the number of variables that can be chosen freely if there are multiple constraints.

ato said:
could you generalize it for an arbitrary function of an arbitrary number of variables?
Lavinia just used a general rule that says that partial derivative operators (like ##\frac{\partial}{\partial x}##) are linear. This allows you to do stuff like
$$\frac{\partial}{\partial x}(x+2y+z^3)=\frac{\partial}{\partial x}x+2\frac{\partial}{\partial x}y+\frac{\partial}{\partial x}z^3$$ and then think about how to evaluate each term. (Note that this involves using the constraints you've been given to reinterpret y and z as functions instead of as variables representing real numbers).
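A SymPy sketch of that last remark, treating y and z first as independent symbols and then as unspecified functions of x (the function names here are just placeholders):

Python:
import sympy as sp

x, y, z = sp.symbols('x y z')
print(sp.diff(x + 2*y + z**3, x))   # 1: y and z treated as independent of x

Y = sp.Function('y')(x)             # now reinterpret y and z as functions of x
Z = sp.Function('z')(x)
print(sp.diff(x + 2*Y + Z**3, x))   # 1 + 2*Derivative(y(x), x) + 3*z(x)**2*Derivative(z(x), x)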
 
Last edited:
  • #17
Fredrik said:
I should probably have mentioned that those definitions of "constant" and "variable" are my own. I haven't seen those terms defined in textbooks.

The "official" definition for constant is probably a distraction from the topic at hand, but if anyone is interested:

Variables don't exist on their own. Every variable you use in an expression must first be declared. We call expressions that declare variables "binding forms". Here are some examples of binding forms:

f(x) = ... (a function definition declares x)

Σi=0 to 100 ... (a summation declares i)

∫ ... dt (integration declares t)

let x = 2 + 3 ... (a variable definition declares x... obviously :)

Each binding form also has a scope for the variable. The variable "comes into existence" in the scope, and ceases to exist entirely outside of the scope. The scopes above are given by the ... part (which can be any expression).

Looking at a particular expression and a variable x, we say x is bound if its binder appears in the expression. Otherwise, we say it is free.

Some examples:

let x = 1, x + x = 2. (x is bound by the "let")

f(r) = π * r^2 (r is bound by the function definition. π is also a variable, but it's free!)

Σi=1 to 5 of i = 15 (i is bound by the summation)

∫ ct^2 dt = c∫ t^2 dt (we have two variables: t is bound by the integral, c is free).

Notice that whether a variable is bound or free depends on the expression we're looking at! If we restrict our attention to some subexpression, a variable's freedom may change:

f(x) = x^2 (x is bound)

but

x^2 (x is free!)

Now, here's the surprise twist ending: Free variables are also called constants.

This explains why e and π are always called constants. We gave their definition a long time ago in high school, and no one bothers writing the definitions in their expressions.
 
  • #18
Tac-Tics said:
Now, here's the surprise twist ending: Free variables are also called constants.

This explains why e and π are always called constants. We gave their definition a long time ago in high school, and no one bothers writing the definitions in their expressions.
This sounds a bit odd actually. It's not that I have anything against calling free variables "constants". It's just that if I see the statement "There exists a real number x such that x²=e.", I'm not thinking that e is a free variable. I'm thinking that this statement is just a lazy way of writing this:

Let e=exp(1).
There exists a real number x such that x²=e.

And here e is not a free variable. I don't have any objections against using the lazy (and inaccurate) version of the statement, since everyone knows exactly what we're omitting.
 
Last edited:
  • #19
If F(x,y) is a function of the two variables, x and y, and it is true that y= g(x), then we can write f(x)= F(x, g(x)) and find the derivative of f with the chain rule:
$$\frac{df}{dx}= \frac{\partial F}{\partial x}+ \frac{\partial F}{\partial y}\frac{dg}{dx}$$
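Applied to the example from post #1, with F(x,y)=x+y and g(x)=x, this gives
$$\frac{df}{dx}=\frac{\partial F}{\partial x}+\frac{\partial F}{\partial y}\frac{dg}{dx}=1+1\cdot 1=2,$$
which is exactly the 2 the original poster expected once the constraint y=x is taken into account.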
 
  • #20
Fredrik said:
This sounds a bit odd actually. It's not that I have anything against calling free variables "constants". It's just that if I see the statement "There exists a real number x such that x²=e.", I'm not thinking that e is a free variable. I'm thinking that this statement is just a lazy way of writing this:

Let e=exp(1).
There exists a real number x such that x²=e.

And here e is not a free variable. I don't have any objections against using the lazy (and inaccurate) version of the statement, since everyone knows exactly what we're omitting.

The expression you're devoting your attention to is important. You can also use this to define the notion of constancy of one variable with respect to another. In the above case, e is constant with respect to x. The reasoning would be something like "restricting your attention to the scope of the binder for x, e is free".
 
  • #21
Let's consider a textbook question:
If ##f(x,y,z) = x^{2} + y^{2} + z^{2}##, ##x=t##, ##y=t^2##, ##z=2t##, find ##\frac{\partial f(x,y,z)}{\partial x}##.

here is my solution:
1. ##f(x,y,z) = x^{2} + y^{2} + z^{2}=x^{2}+x^{4}+4x^{2}=x^{4}+5x^{2}##
2. ##\Rightarrow f(x,y,z)=x^{4}+5x^{2}##
3. ##\Rightarrow\frac{\partial}{\partial x}f(x,y,z)=\frac{\partial}{\partial x}(x^{4}+5x^{2})##
4. ##\Rightarrow\frac{\partial}{\partial x}f(x,y,z)= 4x^{3} + 10x##

but the author says ##\frac{\partial}{\partial x}f(x,y,z)= 2x##

This might sound a little annoying, but I still don't know how I am wrong.

Fredrik said:
The derivative of f is another function, which I would denote by f' or maybe Df (but not ##\frac{d}{dx}f##). The value of that function at x is then denoted by f'(x), Df(x) or ##\frac{d}{dx}f(x)##.


##\frac{df}{dx}## is just a sloppy way of writing ##\frac{df(x)}{dx}##, which means the same thing as ##\frac{d}{dx}f(x)##.

"...sloppy...same thing as..." is a little bit vague. So what are you saying:
1. ##\frac{df}{dx}=\frac{d}{dx}f(x)##, and by extension ##f=f(x)##, or
2. ##\frac{df}{dx}\neq\frac{d}{dx}f(x)## ?


Fredrik said:
The statement after the "and" is stated in a strange way (and you should use the \to arrow instead of the \mapsto arrow), but if you mean what I think you mean, then the answer is "sort of yes". You never have to worry about a possible relationship between the variables when you're asked to compute a partial derivative of a given function. However, if you're asked to compute a partial derivative of a function that is defined implicitly by a relationship between variables, then things are of course very different.

Of course I have to worry about that. I have to make sure (or at least assume) whether the input variables are independent or not. Without that I can't move forward.

Fredrik said:
I'm not sure I understand what you're asking. If you're asking if e.g. the string of text "f(x,y)" represents a set, then the answer is yes, because we're working within the branch of mathematics defined by ZFC set theory, and in ZFC set theory, the members of sets are themselves sets. However, if e.g. S={1,2,3} then we prefer to call the members of S "numbers" instead of "sets". Similarly, if f:X→Y (i.e. if f is a function with domain X and codomain Y), and x is a member of X, then we prefer to describe f(x) as a member of Y instead of as a "set".

Now ##f(x)## is a set too? I thought it is not a set, i.e. a non-set. I don't know how to imagine ##-A## if ##A## is a set. For example,
what is ##-1##, since ##1## is a set?
what is ##-\{1,2,3\}##?

why make everything a set?

p.s. sorry for the late reply.

thank you
 
  • #22
ato said:
let's consider a textbook question [...] here is my solution:
1. ##f(x,y,z) = x^{2} + y^{2} + z^{2}=x^{2}+x^{4}+4x^{2}=x^{4}+5x^{2}##
2. ##\Rightarrow f(x,y,z)=x^{4}+5x^{2}##
3. ##\Rightarrow\frac{\partial}{\partial x}f(x,y,z)=\frac{\partial}{\partial x}(x^{4}+5x^{2})##
4. ##\Rightarrow\frac{\partial}{\partial x}f(x,y,z)= 4x^{3} + 10x##

but the author says ##\frac{\partial}{\partial x}f(x,y,z)= 2x##

this might sound a little annoying, but I still don't know how I am wrong.
You're supposed to find the partial derivative with respect to the first variable of f, evaluated at (x,y,z). f is defined by ##f(x,y,z)=x^2+y^2+z^2## for all ##x,y,z\in\mathbb R##. (It wouldn't make much sense to think of it as defined by ##f(x,y,z)=x^2+y^2+z^2## for all ##x,y,z\in\mathbb R## such that ##x=t, y=t^2, z=2t##, because then the domain of f is a single point, and f can't have any partial derivatives).
\begin{align}
\frac{\partial f(x,y,z)}{\partial x}&=D_1f(x,y,z)=(s\mapsto f(s,y,z))'(x)=\lim_{h\to 0}\frac{f(x+h,y,z)-f(x,y,z)}{h}\\
&=\lim_{h\to 0}\frac{(x+h)^2-x^2}{h}=\frac{d}{dx}x^2.\end{align} The specification ##x=t, y=t^2, z=2t## is a red herring. It's only relevant if you're asked to compute
$$\frac{d}{dt}f(x(t),y(t),z(t))=\frac{d}{dt}(t^4+5t^2).$$
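For what it's worth, a SymPy check of both computations (a sketch, using only the definitions from the textbook problem above):

Python:
import sympy as sp

t = sp.symbols('t')
x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2                   # the textbook f
curve = {x: t, y: t**2, z: 2*t}          # x = t, y = t^2, z = 2t

print(sp.diff(f, x).subs(curve))         # D1 f along the curve: 2*t, i.e. 2x
print(sp.diff(f.subs(curve), t))         # d/dt of the composition: 4*t**3 + 10*t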

ato said:
"...sloppy...same thing as..." is a little bit vague. So what are you saying:
1. ##\frac{df}{dx}=\frac{d}{dx}f(x)##, and by extension ##f=f(x)##, or
2. ##\frac{df}{dx}\neq\frac{d}{dx}f(x)## ?
Sloppy = bad, inaccurate.
Same thing as = is equal to.
##\frac{df}{dx}## is a bad notation for f'(x). I don't want to write things like f=f(x) since f is a function and f(x) an element of its range. But people sometimes do use these notations interchangeably.
ato said:
of course I have to worry about that. I have to make sure (or at least assume) whether the input variables are independent or not. without that I can't move forward.
I think you didn't understand the sentences you quoted there. I'm saying that if you know the function, no additional information can be relevant (since it's not part of the definition of the function), so you can move forward.
ato said:
now ##f(x)## is a set too? I thought it is not a set, i.e. a non-set. I don't know how to imagine ##-A## if ##A## is a set. for example,
what is ##-1##, since ##1## is a set?
what is ##-\{1,2,3\}##?

why make everything a set?

p.s. sorry for the late reply.

thank you
-{1,2,3} is not defined. 1 and -1 typically denote two members of a field (a set on which there's a multiplication operation and an addition operation). 1 denotes the multiplicative identity and -1 its additive inverse.

It's convenient to make everything sets, because then the theory only needs to leave two things undefined: what a set is, and what it means for a set to be a member of a set.
 
  • #23
Fredrik said:
You're supposed to find the partial derivative with respect to the first variable of f, evaluated at (x,y,z). f is defined by ##f(x,y,z)=x^2+y^2+z^2## for all ##x,y,z\in\mathbb R##. (It wouldn't make much sense to think of it as defined by ##f(x,y,z)=x^2+y^2+z^2## for all ##x,y,z\in\mathbb R## such that ##x=t, y=t^2, z=2t##, because then the domain of f is a single point, and f can't have any partial derivatives).
\begin{align}
\frac{\partial f(x,y,z)}{\partial x}&=D_1f(x,y,z)=(s\mapsto f(s,y,z))'(x)=\lim_{h\to 0}\frac{f(x+h,y,z)-f(x,y,z)}{h}\\
&=\lim_{h\to 0}\frac{(x+h)^2-x^2}{h}=\frac{d}{dx}x^2.\end{align} The specification ##x=t, y=t^2, z=2t## is a red herring. It's only relevant if you're asked to compute
$$\frac{d}{dt}f(x(t),y(t),z(t))=\frac{d}{dt}(t^4+5t^2).$$

yes, the question did ask for ##\frac{d}{dt}f(x(t),y(t),z(t))##. the theorem used to solve it is

##\frac{\partial f(x,y,z)}{\partial t}=\frac{\partial f(x,y,z)}{\partial x}\frac{\partial x}{\partial t}+\frac{\partial f(x,y,z)}{\partial y}\frac{\partial y}{\partial t}+\frac{\partial f(x,y,z)}{\partial z}\frac{\partial z}{\partial t}##
and i understood everything except ##\frac{\partial f(x,y,z)}{\partial x},\frac{\partial f(x,y,z)}{\partial y},\frac{\partial f(x,y,z)}{\partial z}##.

why should I not take the partial derivative from this equation: ##f(x,y,z)=x^4+5x^2##? what's wrong with that?

Fredrik said:
It's only relevant if you're asked to compute
$$\frac{d}{dt}f(x(t),y(t),z(t))=\frac{d}{dt}(t^4+5t^2).$$

so I am supposed to ignore ##x=t, y=t^2, z=2t##.

you can't ignore anything like that.


Fredrik said:
##\frac{df}{dx}## is a bad notation for f'(x). I don't want to write things like f=f(x) since f is a function and f(x) an element of its range.

so f(x) IS a variable. so what was the point of "we don't take the total derivative of a variable but of a function", as now you can easily define the total derivative (probably the whole of calculus) without bringing in the concept of a function:
let ##y=f(x)##, so ##\frac{df(x)}{dx}=\frac{dy}{dx}=\lim_{h\rightarrow0}\frac{\left.y\right|_{x=x}^{x=x+h}}{h}##

(what good does this (weird) definition of function do anyway?)

Fredrik said:
-{1,2,3} is not defined.

since 1, 2, 3, ... are all sets, "-1 is defined" means the - operator is defined for a set, so I wanted to know how it is defined for a general set.

Fredrik said:
1 and -1 typically denote two members of a field (a set on which there's a multiplication operation and an addition operation). 1 denotes the multiplicative identity and -1 its additive inverse.
no, I asked: could you give me something resembling '1 = {...}'?

Fredrik said:
It's convenient to make everything sets, because then the theory only needs to leave two things undefined: what a set is, and what it means for a set to be a member of a set.

could you give me something like a paradox or a problem that it solves?

thank you
 
Last edited:
  • #24
ato said:
yes the question did ask for ##\frac{d}{dt}f(x(t),y(t),z(t))## .
The part of it that you posted didn't. The simplest way to find ##\frac{d}{dt}f(x(t),y(t),z(t))## is
$$\frac{d}{dt}f(x(t),y(t),z(t))=\frac{d}{dt}(t^4+5t^2)=4t^3+10t.$$
ato said:
the theorem used to solve it is

##\frac{\partial f(x,y,z)}{\partial t}=\frac{\partial f(x,y,z)}{\partial x}\frac{\partial x}{\partial t}+\frac{\partial f(x,y,z)}{\partial y}\frac{\partial y}{\partial t}+\frac{\partial f(x,y,z)}{\partial z}\frac{\partial z}{\partial t}##
This is another way.

ato said:
why should I not take the partial derivative from this equation: ##f(x,y,z)=x^4+5x^2##? what's wrong with that?
The problem is that there's no open set ##E\subset\mathbb R^3## such that ##f(x,y,z)=x^4+5x^2## for all ##(x,y,z)\in E##. If there had been such a set (this is only possible with a different definition of f), and (x,y,z) is a point in that set, then we would have had ##D_1f(x,y,z)=4x^3+10x##.

ato said:
so I am supposed to ignore ##x=t, y=t^2, z=2t##.

you can't ignore anything like that.
Yes you can, because a definition of a function is always a "for all" statement, and you can always replace the variable in a "for all" statement without changing the meaning of the statement.
$$f(x,y,z)=x^2+y^2+z^2$$ really means
$$\forall x,y,z\in\mathbb R,\quad f(x,y,z)=x^2+y^2+z^2.$$ And this statement is equivalent to
$$\forall p,q,r\in\mathbb R,\quad f(p,q,r)=p^2+q^2+r^2.$$ If this is followed by a separate statement x=t, then this can't be the same x, since the first one was a dummy (replaceable) variable and this one isn't.

The statements
$$x=t, y=t^2, z=2t$$ are intended as definitions of three more functions x,y,z. So what they really mean is this:
\begin{align}
&\forall t\in\mathbb R,\quad x(t)=t\\
&\forall t\in\mathbb R,\quad y(t)=t^2\\
&\forall t\in\mathbb R,\quad z(t)=2t
\end{align} To find ##D_1f##, you only need the definition of f. (Just look at the definition. It doesn't involve any other functions). So these three functions are irrelevant when you compute ##D_1f##.
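(As a consistency check with the chain-rule route quoted earlier: ##D_1f(x,y,z)=2x##, ##D_2f(x,y,z)=2y##, ##D_3f(x,y,z)=2z##, so
$$\frac{d}{dt}f(x(t),y(t),z(t))=2t\cdot 1+2t^2\cdot 2t+2(2t)\cdot 2=4t^3+10t,$$
which agrees with differentiating ##t^4+5t^2## directly.)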

ato said:
so f(x) IS a variable.
I don't consider "f(x)" a variable, because it's a string of text that consists of four characters. I would only use the term "variable" about 1-character strings, not longer strings.

ato said:
so what was the point of "we don't take the total derivative of a variable but of a function", as now you can easily define the total derivative (probably the whole of calculus) without bringing in the concept of a function:
let ##y=f(x)##, so ##\frac{df(x)}{dx}=\frac{dy}{dx}=\lim_{h\rightarrow0}\frac{\left.y\right|_{x=x}^{x=x+h}}{h}##

(what good does this (weird) definition of function do anyway?)
The concept of "function" is so extremely important in all areas of mathematics that I wouldn't know where to begin to answer that. I would have had more sympathy for the complaint if you had found these calculations really easy to do without involving the concept of "function", but you are clearly still struggling, and I'm not the only one in this thread who has said that it's probably because you're thinking in terms of variables instead of in terms of functions.

ato said:
since 1, 2, 3, ... are all sets, "-1 is defined" means the - operator is defined for a set, so I wanted to know how it is defined for a general set.
It's not.

ato said:
no, I asked: could you give me something resembling '1 = {...}'?
Ah. OK, then we first have to define the natural numbers as sets. A standard way to do this is
0=∅
1={0}
2={0,1}
3={0,1,2}
...

I haven't seen a similar definition of the integers in a textbook, but it's not hard to think of one. Let S be any set with two members. Denote those members by p,n. Now consider the set
$$Z=\{(n,1),(n,2),\dots\}\cup \{0\}\cup\{(p,1),(p,2),\dots\}.$$
With an appropriate definition of addition, multiplication, etc, the subset
$$\{0\}\cup\{(p,1),(p,2),\dots\}$$ will have all the same properties (as far as the standard operations are concerned) as the set {0,1,2,...}. So it has just as much right to be called "the set of positive integers" as {0,1,2,...}. And now we can define the term "integer" as a member of Z. At this point, it would be convenient to simplify the notation like this:

Write k instead of (p,k) for all k in {1,2,...}.
Write -k instead of (n,k) for all k in {1,2,...}.

These things are very far from the things you need to understand to compute partial derivatives.

ato said:
could you give me something like a paradox or a problem that it solves?

thank you
The problem it solves is just that it's annoying and kind of dangerous to have more undefined concepts and more axioms than is absolutely necessary. Every new concept that requires its own set of assumptions increases the probability that we will screw up by making inconsistent assumptions.
 
Last edited:
  • #25
Fredrik said:
The problem is that there's no open set ##E\subset\mathbb R^3## such that ##f(x,y,z)=x^4+5x^2## for all ##(x,y,z)\in E##.

##E## does exist, ##\{(a,a^{2},2a)|a\in R\}\subset R^3##

Fredrik said:
It's not.


Ah. OK, then we first have to define the natural numbers as sets. A standard way to do this is
0=∅
1={0}
2={0,1}
3={0,1,2}
...

we know -2 exists, right? should not -{0,1} exist?

##3 - 2 = 1##

##\{ 0,1,2 \} - \{ 0,1 \} = \{ 2 \} \neq 1##
is not that a paradox/contradiction?

thank you
 
  • #26
ato said:
##E## does exist, ##\{(a,a^{2},2a)|a\in R\}\subset R^3##
That's a curve, not an open set.

ato said:
we know -2 exists, right? should not -{0,1} exist?
Right, but {0,1} isn't an arbitrary set. It's a member of a set on which there's a unary operation for which we have chosen the notation ##x\mapsto -x##.

ato said:
##3 - 2 = 1##

##\{ 0,1,2 \} - \{ 0,1 \} = \{ 2 \} \neq 1##
is not that a paradox/contradiction?
The minus sign here isn't the one defined by ##A-B=\{x\in A|x\notin B\}##. It's a binary operation on the set ##\mathbb N##. I don't have time to think about its definition right now. I suggest a book on set theory. Hrbacek and Jech is good for set theory in general. Goldrei is a good alternative if you're especially interested in the constructions of the number systems.
 

FAQ: Partial derivative of a function of dependent variables

1. What is a partial derivative?

A partial derivative is a mathematical concept that measures how a function changes when only one of its variables is varied, keeping all other variables constant. It is represented by the symbol ∂.

2. How is a partial derivative different from a regular derivative?

A partial derivative is different from an ordinary derivative in that it only considers one variable at a time: all other variables in the function are treated as constants. An ordinary derivative, by contrast, applies to a function of a single variable, and a total derivative accounts for how all of the variables change together.

3. Why are partial derivatives important in science?

Partial derivatives are important in science because they allow us to better understand how a system or process changes when only one variable is changed. This is useful in fields such as physics, engineering, and economics, where multiple variables interact and need to be analyzed separately.

4. How do you calculate a partial derivative?

To calculate a partial derivative, you first identify which variable is being held constant and which variable is being varied. Then, you take the derivative of the function with respect to the variable being varied, treating all other variables as constants. The resulting expression is the partial derivative.
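For example, for ##f(x,y)=x^2y##:
$$\frac{\partial f}{\partial x}=2xy \quad\text{(treating y as a constant)},\qquad \frac{\partial f}{\partial y}=x^2 \quad\text{(treating x as a constant)}.$$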

5. Can a function have more than one partial derivative?

Yes, a function can have multiple partial derivatives. The number of partial derivatives a function has is equal to the number of variables in the function. Each partial derivative represents the rate of change of the function with respect to a specific variable.
