In partial differentiation, why do we have to use the Jacobian?

amaresh92
In partial differentiation, why do we have to use the Jacobian? What does it signify? How does it differ from a normal partial derivative?
thanks


A Jacobian is a factor that appears when you change variables in a double integral. Much like in single-variable calculus you perform u-substitutions (a change of variable) for integrals like

\int_a^bf(x)dx

and you set

x=g(u)
dx=g'(u)du
\int_a^bf(x)dx=\int^{u(b)}_{u(a)}f(g(u))g'(u)du

so you have an extra factor g'(u) in the integrand caused by the change of variable.
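For instance, take \int_0^2\cos x\,dx with the substitution x=g(u)=2u (a toy example of my own choosing):

x=2u,\qquad dx=2\,du

\int_0^2\cos x\,dx=\int_0^1\cos(2u)\cdot 2\,du=\Big[\sin(2u)\Big]_0^1=\sin 2

The extra factor 2=g'(u) in the integrand plays exactly the role the Jacobian will play in two variables.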

When you change variable in double integrals, you end up with a more complex factor defined as the Jacobian:

\frac{\partial (x,y)}{\partial(u,v)}=\frac{\partial x}{\partial u}\frac{\partial y}{\partial v}-\frac{\partial y}{\partial u}\frac{\partial x}{\partial v}



\iint_R f(x,y)\,dx\,dy=\iint_S f(g(u,v),h(u,v))\left|\frac{\partial (x,y)}{\partial(u,v)}\right|\,du\,dv

This factor appears when you convert a double integral to polar coordinates: dx\,dy has to be replaced by r\,dr\,d\theta, and the r is the Jacobian for that conversion.
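Spelling that out: with x=r\cos\theta and y=r\sin\theta, the determinant defined above works out to r:

\frac{\partial (x,y)}{\partial(r,\theta)}=\frac{\partial x}{\partial r}\frac{\partial y}{\partial \theta}-\frac{\partial y}{\partial r}\frac{\partial x}{\partial \theta}=\cos\theta\,(r\cos\theta)-\sin\theta\,(-r\sin\theta)=r(\cos^2\theta+\sin^2\theta)=r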
 


Did you mean the Jacobian determinant? Then, as AdkinsJr said, it's useful mainly for a change of variables in a multiple integral.

Did you mean the Jacobian matrix? As you know, it's defined as the matrix of partial derivatives of the component functions. For example, if f:\mathbb R^n\rightarrow\mathbb R^m, then you can write f(x)=(f^1(x),\dots,f^m(x)), where f^i:\mathbb R^n\rightarrow\mathbb R for i=1,\dots,m. The Jacobian matrix of f at x is the matrix J_f(x) defined by

J_f(x)^i_j=f^i_{,j}(x)

(The notation means that the element on the ith row, jth column, is the partial derivative of the ith component function with respect to the jth variable). This matrix shows up in the chain rule, which I like to remember in the following forms:

(f\circ g)'(x)=f'(g(x))g'(x)

(f\circ g)'(x)=f_{,i}(g(x))g^i'(x)

(f\circ g)_{,i}(x)=f_{,j}(g(x))g^j_{,i}(x)

(f\circ g)^i_{,j}(x)=(f^i\circ g)_{,j}(x)=f^i_{,k}(g(x))g^k_{,j}(x)

The first equality on the last line just rewrites the expression in a form that makes it obvious that we can apply the version of the chain rule on the line above. Indices that appear twice in the same expression are summed over (that would be the i in the second line, the j in the third, and the k in the fourth). It's conventional not to write any summation sigmas here (Einstein's summation convention). Note the appearance of (the components of) a Jacobian matrix before the last g in each line. Also note that all of the earlier versions are special cases of the last one.
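If you want to see that last identity numerically, here is a minimal sketch (assuming SymPy is available; the particular maps f and g are made-up examples) checking that the Jacobian matrix of f\circ g equals the product J_f(g(x))\,J_g(x):

import sympy as sp

# Symbols for the intermediate variables (u, v) and the inputs (x, y)
u, v, x, y = sp.symbols('u v x y')

# Arbitrary illustrative maps: g : R^2 -> R^2 and f : R^2 -> R^2
g = sp.Matrix([x**2 + y, sp.sin(y)])
f = sp.Matrix([u*v, u + v**3])

Jg = g.jacobian([x, y])          # Jacobian matrix of g
Jf = f.jacobian([u, v])          # Jacobian matrix of f

# The composition f(g(x, y)) and its Jacobian computed directly
fog = f.subs({u: g[0], v: g[1]})
J_fog = fog.jacobian([x, y])

# Chain rule: J_f evaluated at g(x, y), multiplied by J_g
Jf_at_g = Jf.subs({u: g[0], v: g[1]})
print(sp.simplify(J_fog - Jf_at_g * Jg))   # prints the zero matrix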
 


amaresh92 said:
In partial differentiation, why do we have to use the Jacobian? What does it signify? How does it differ from a normal partial derivative?
thanks

Physically, the Jacobian relates to a kind of measure, like length, area, volume, hyper-volume, and so on.

When we populate the matrix with partial derivatives, we are in fact computing something that relates to the change of such a measure. So when you find the Jacobian (determinant), you are finding how that measure "contracts" or "expands" under the change of variables.
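To make that concrete, here is a small numerical sketch (assuming NumPy; the numbers and names are purely illustrative). It maps a tiny rectangle in (r, \theta) through the polar-coordinate map and checks that the area of its image is about |Jacobian| \cdot dr\,d\theta = r\,dr\,d\theta:

import numpy as np

def polar_to_cartesian(r, theta):
    # The change of variables x = r cos(theta), y = r sin(theta)
    return np.array([r * np.cos(theta), r * np.sin(theta)])

r0, theta0 = 2.0, 0.5        # corner of a small rectangle in the (r, theta) plane
dr, dtheta = 1e-3, 1e-3      # its side lengths

# Images of the rectangle's four corners, taken counterclockwise
corners = [polar_to_cartesian(r0 + a, theta0 + b)
           for a, b in [(0, 0), (dr, 0), (dr, dtheta), (0, dtheta)]]

# Shoelace formula for the area of the image patch
xs = np.array([p[0] for p in corners])
ys = np.array([p[1] for p in corners])
image_area = 0.5 * abs(np.dot(xs, np.roll(ys, -1)) - np.dot(ys, np.roll(xs, -1)))

print(image_area)            # roughly 2.0e-6
print(r0 * dr * dtheta)      # r * dr * dtheta = 2.0e-6, the Jacobian prediction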

(PS If my definition of measure is wrong or misleading, please correct me)
 
