# Why is f no longer a function of x and y?

What about the function: $$G(x_1, y_1, z_1) = \frac{\rho}{x_1^2 + y_1^2 + z_1^2}$$
How would you differentiate that with respect to ##y_2##?

It doesn't depend on ##y_2##, therefore the whole expression acts as a constant and the derivative is zero.
Oh My God!
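This can be checked symbolically. A minimal sketch using sympy, with the symbols named as in the example above:

```python
# Sketch of the example above: G depends only on x1, y1, z1, so its
# partial derivative with respect to the unrelated variable y2 is zero.
import sympy as sp

x1, y1, z1, y2, rho = sp.symbols("x1 y1 z1 y2 rho")
G = rho / (x1**2 + y1**2 + z1**2)

print(sp.diff(G, y2))  # 0: G does not depend on y2
print(sp.diff(G, y1))  # nonzero: G does depend on y1
```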

PeroK
What I have understood from your post is that ##x## is an arbitrary point on the ##\phi## axis and ##x'## is also an arbitrary point on the ##\phi## axis.
The ##x##-axis or (##\varphi##-axis, call it whatever) is but a way of representing ##\mathbb{R}##.
In ##n##-dimensional problems, we naturally think of ##n## mutually orthogonal axes. This same concept is described mathematically by the vector space of dimension ##n##, ##\mathbb{R}^n##. A vector space has a set of vectors (not necessarily unique), called a basis of that vector space, whose linear combinations ##\alpha_1\vec e_1+\dots+\alpha_n\vec e_n## give you every vector in the space, and which are linearly independent: no combination with coefficients that are not all zero gives the zero vector. In your case, i.e. a 2D problem, we use the vector space ##\mathbb{R}^2## and the basis consisting of the vectors ##(1,0)## and ##(0,1)##.
When you learn linear algebra, I think you'll see these a bit differently.
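A tiny numerical sketch of that basis idea: any vector in ##\mathbb{R}^2## is a combination of ##(1,0)## and ##(0,1)## (the coefficients 3 and -2 here are arbitrary):

```python
# Any vector (a, b) in R^2 is a linear combination of the standard basis
# e1 = (1, 0) and e2 = (0, 1): (a, b) = a*e1 + b*e2.
import numpy as np

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

a, b = 3.0, -2.0
v = a * e1 + b * e2
print(v)  # [ 3. -2.]
```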

Sir, please explain what it means to differentiate something with respect to ##x##.
You let ##x## vary by an amount that tends to ##0##, i.e. ##x+h## as ##h\to0##, and use the usual definition of the derivative. We think of it as a tiny segment on a line, but that is just how we geometrically picture the set of real numbers.
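Numerically, that definition can be sketched with a difference quotient; here ##f(x)=x^2## at ##x=3## is an arbitrary illustrative choice, whose derivative is ##6##:

```python
# The difference quotient (f(x+h) - f(x)) / h approaches the derivative
# as h -> 0. For f(x) = x**2 at x = 3, the derivative is 6.
def f(x):
    return x**2

x = 3.0
for h in (1e-1, 1e-3, 1e-6):
    print(h, (f(x + h) - f(x)) / h)  # values approach 6 as h shrinks
```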
Keep in mind that I am a student as you are and prone to error. I'm just sharing the way I see some things and gladly would welcome a correction if I am at some point mistaken.

So, what's the difference between differentiating with respect to ##x## and ##x'##?

PeroK
> So, what's the difference between differentiating with respect to ##x## and ##x'##?

The same difference as differentiating with respect to ##y_1## and ##y_2## in the above examples.


Adesh
> The same difference as differentiating with respect to ##y_1## and ##y_2## in the above examples.
That solves my whole problem. Thank you so much.

> So, what's the difference between differentiating with respect to ##x## and ##x'##?
In that response I was trying to illustrate to you the way we think of axes, but:
Define ##\vec a=f(x)\vec e_1=(f(x), 0)## and ##\vec c=g(x')\vec e_1=(g(x'),0)##. You know how to differentiate vectors, as you are reading Griffiths' book, so what do you think?
> That solves my whole problem. Thank you so much.
I think the main point you need to remember is to think of the axes as the set of real numbers. The function you have of both ##x## and ##x'##, which lie on the same "axis", can be seen as ##\vec a = f(x, x')\vec e##.
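The point that ##x## and ##x'## are independent arguments, even though both are real numbers on the same axis, can be sketched symbolically (the function ##f = x^2 x' + \sin x'## is an arbitrary illustrative choice, with `xprime` standing in for ##x'##):

```python
# x and x' live on the same axis (both are real numbers) but are
# independent arguments, so the partials with respect to each differ.
import sympy as sp

x, xp = sp.symbols("x xprime")  # xp stands in for x'
f = x**2 * xp + sp.sin(xp)

print(sp.diff(f, x))   # 2*x*xprime
print(sp.diff(f, xp))  # x**2 + cos(xprime)
```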

Adesh
Regarding post #4:
$$\nabla\times \vec J=\left[\frac{\partial J_z(x',y',z')}{\partial y}-\frac{\partial J_y(x',y',z')}{\partial z}\right]\hat x + \dots$$
is zero, because ##\vec J## depends only on the primed coordinates while the derivatives are taken with respect to the unprimed ones.
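This vanishing can be checked symbolically; a sketch using sympy, where `Jy` and `Jz` are undefined functions standing in for the components of ##\vec J## and `xp, yp, zp` stand in for the primed coordinates:

```python
import sympy as sp

# Unprimed field-point coordinates and primed source-point coordinates
# (xp, yp, zp stand in for x', y', z').
x, y, z = sp.symbols("x y z")
xp, yp, zp = sp.symbols("xp yp zp")

# J is an arbitrary (undefined) function of the primed coordinates only.
Jy = sp.Function("J_y")(xp, yp, zp)
Jz = sp.Function("J_z")(xp, yp, zp)

# x-component of curl J, taken with respect to the UNprimed coordinates:
# every term differentiates a primed-only function, so the result is 0.
curl_x = sp.diff(Jz, y) - sp.diff(Jy, z)
print(curl_x)  # 0
```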

PeroK