A (challenging?) question about the Jacobian matrix

Thread starter: Aleph-0
Here is the problem:

Suppose that g is a diffeomorphism of R^n. Then its Jacobian matrix is everywhere invertible.
Let us define the following matrix valued function on R^n
H_{i,j}(x) = \int_0^1 \partial_i g^j(tx)\,dt
where g^j are the components of g.
Question: Is the matrix (H_{i,j}(x))_{i,j} (which can be read as an average of the Jacobian matrix of g along the segment from 0 to x) invertible for every x?

My guess is that the answer is negative, but I can find no counterexample.
Any help?
 
I believe you may have to move into three dimensions to obtain a counterexample. It is possible that the statement is in fact true.

Edit:
Actually, I believe that a counterexample is provided by
\mathbf{g}((x,y,z)) = (x \cos(2\pi z) - y \sin(2\pi z),\; x \sin(2\pi z) + y \cos(2\pi z),\; z)
(a twist that rotates the (x, y)-plane by the angle 2\pi z), with
H((0,0,1)) being singular.

Can anyone else check this?
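For what it's worth, here is a quick numerical sanity check (my own sketch, not from the thread; the function names and the midpoint-rule discretization are mine). It approximates H(x) = \int_0^1 \partial_i g^j(tx)\,dt using a finite-difference Jacobian:

```python
import numpy as np

# Numerical check of the proposed counterexample: g rotates the
# (x, y)-plane by the angle 2*pi*z and leaves z fixed.
def g(p):
    x, y, z = p
    c, s = np.cos(2 * np.pi * z), np.sin(2 * np.pi * z)
    return np.array([x * c - y * s, x * s + y * c, z])

def jac(p, h=1e-6):
    """Jacobian J[j, i] = d g^j / d x_i by central differences."""
    p = np.asarray(p, dtype=float)
    J = np.zeros((3, 3))
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        J[:, i] = (g(p + e) - g(p - e)) / (2 * h)
    return J

def H(x, n=1000):
    """Midpoint-rule approximation of H(x) = int_0^1 J(t x) dt."""
    ts = (np.arange(n) + 0.5) / n
    return sum(jac(t * np.asarray(x, dtype=float)) for t in ts) / n

print(np.linalg.det(H((0.0, 0.0, 1.0))))  # numerically ~0: H is singular here
print(np.linalg.det(H((0.0, 0.0, 0.5))))  # ~0.41 here, so H is not always singular
```

The thread's H is the transpose of the Jacobian as usually written (H_{i,j} = \int \partial_i g^j), but the determinant is unaffected by that convention.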
 
Indeed, we even have H(x, y, 1) singular for any x, y!
Thank you!
I wonder if the statement is true in dimension 2...
(it is trivially true in dimension 1).
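A short calculation (my own, following the thread's definition of H) shows why the whole z = 1 plane works:

```latex
% Along the ray t(x, y, 1) the third coordinate is just t, so the first
% row of H integrates one full period of the rotation:
H_{1,1} = \int_0^1 \cos(2\pi t)\,dt = 0, \qquad
H_{1,2} = \int_0^1 \sin(2\pi t)\,dt = 0, \qquad
H_{1,3} = \int_0^1 \partial_1 g^3\,dt = 0.
% The first row of H(x, y, 1) vanishes identically, hence det H = 0
% for every (x, y).
```

The one-dimensional case is equally direct: g' is continuous and never zero, hence of constant sign, so H(x) = \int_0^1 g'(tx)\,dt is nonzero for every x.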
 
Interesting problem. I thought about this for a while and couldn't come up with a counterexample in D = 2. Can anyone prove this statement? It's not clear to me why it should be true, and there seems to be something interesting (maybe topological) lurking behind it.
 
Haelfix said:
Interesting problem. I thought about this for a while and couldn't come up with a counterexample in D = 2. Can anyone prove this statement? It's not clear to me why it should be true, and there seems to be something interesting (maybe topological) lurking behind it.
I believe it is true because of the nature of coordinate lines and their tangents in 2D.

First, assume that we can find an H in 2D that is singular. Then the columns of H, which we'll denote \mathbf{h}_i, are linearly dependent, so there exist \alpha_i, not all zero, such that

\sum_i \alpha_i \mathbf{h}_i = \mathbf{0}

Now, by definition,

\mathbf{h}_i = \int_0^1 \frac{\partial \mathbf{g}}{\partial x_i}(t \mathbf{x}) dt
Therefore

\sum_i \alpha_i \mathbf{h}_i = \int_0^1 \sum_i \alpha_i \frac{\partial \mathbf{g}}{\partial x_i}(t \mathbf{x})\,dt = \mathbf{0}

Now look at the integrand along this ray; without loss of generality, take \mathbf{x} = (1,0) (precompose g with an invertible linear map if necessary; note that H(0) = Dg(0) is automatically invertible, so \mathbf{x} \neq 0). We let
\mathbf{v}(t) = \sum_i \alpha_i \frac{\partial \mathbf{g}}{\partial x_i}(t(1,0)) = \alpha_1 \frac{\partial \mathbf{g}}{\partial x_1}(t(1,0)) + \alpha_2 \frac{\partial \mathbf{g}}{\partial x_2}(t(1,0))
And so we have
\int_0^1 \mathbf{v}(t) dt = \mathbf{0}
We cannot have \mathbf{v}(t) = \mathbf{0} for any t, because the Jacobian has full rank everywhere, so no nontrivial linear combination of its columns can vanish.

So \mathbf{v}(t) is nonzero everywhere, yet its integral is zero. If we define \mathbf{z}(t) so that \mathbf{v}(t) = \frac{d\mathbf{z}}{dt}(t), we see that

\int_0^1 \mathbf{v}(t) dt = \int_0^1 \frac{d \mathbf{z}}{dt}(t) dt = \mathbf{z}(1)-\mathbf{z}(0) = \mathbf{0}

which means that \mathbf{v} is the tangent vector of a closed curve \mathbf{z}(t).

We now make a change of variables by letting
\mathbf{x}(\mathbf{u}) = \left( \begin{array}{cc} \alpha_1 & -\alpha_2 \\ \alpha_2 & \alpha_1 \end{array} \right) \mathbf{u}

Then, by the chain rule, \frac{\partial \mathbf{g}}{\partial u_1} = \alpha_1 \frac{\partial \mathbf{g}}{\partial x_1} + \alpha_2 \frac{\partial \mathbf{g}}{\partial x_2}, so \mathbf{v}(t) = \frac{\partial \mathbf{g}}{\partial u_1}(t(1,0)), i.e. \mathbf{v} is the tangent vector to the u_2 = 0 coordinate curve. But this means that this coordinate curve is closed. Since the map from \mathbf{u} to \mathbf{x} is a diffeomorphism (the matrix above is invertible whenever (\alpha_1, \alpha_2) \neq (0,0)) and g is a diffeomorphism, the composite map from \mathbf{u} to \mathbf{g} is a diffeomorphism of R^2. Being injective, it cannot have any closed coordinate curves. So we have a contradiction, and H must be nonsingular.

I hope this is OK. I'm not being especially rigorous here.

In higher dimensions, some of the \alpha_i can be zero, and the corresponding change-of-variables matrix need not be invertible, so the map from \mathbf{u} to \mathbf{x} may fail to be a diffeomorphism. Essentially, you get some wiggle room once you move into higher dimensions.
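As a numerical spot-check of the 2D claim (my own sketch, not a proof; the example diffeomorphism and all names are mine): take g to be a composition of two nonlinear shears, which is a diffeomorphism of R^2, and scan det H over a grid of points.

```python
import numpy as np

# Spot-check of the 2D claim on one example diffeomorphism,
# built as a composition of two shears:
#   (x, y) -> (x, y + x^3) -> (x + sin(y + x^3), y + x^3)
def g(p):
    x, y = p
    w = y + x**3
    return np.array([x + np.sin(w), w])

def jac(p, h=1e-6):
    """Jacobian J[j, i] = d g^j / d x_i by central differences."""
    p = np.asarray(p, dtype=float)
    J = np.zeros((2, 2))
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        J[:, i] = (g(p + e) - g(p - e)) / (2 * h)
    return J

def H(x, n=500):
    """Midpoint-rule approximation of int_0^1 J(t x) dt."""
    ts = (np.arange(n) + 0.5) / n
    return sum(jac(t * np.asarray(x, dtype=float)) for t in ts) / n

# scan a grid and print the smallest |det H| found
dets = [np.linalg.det(H((x, y)))
        for x in np.linspace(-2, 2, 5) for y in np.linspace(-2, 2, 5)]
print(min(abs(d) for d in dets))
```

This is only evidence, of course: a grid scan cannot distinguish "never zero" from "zero somewhere between grid points."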
 