psholtz
I'm reading a text on PDEs.
I'm trying to follow some of the argument the author is presenting, but I'm having a bit of difficulty.
We start with a collection of p functions in n variables (with p <= n). That is to say, we have:
[tex]u_1, u_2, ..., u_p[/tex]
where
[tex]u_i : \mathbb{R}^n \rightarrow \mathbb{R} [/tex]
for all i, where [tex]1 \leq i \leq p[/tex].
We now take another function [tex]\Phi[/tex] in p variables, and we write:
[tex]\Phi(u_1,u_2,...,u_p) = 0[/tex]
Since [tex]\Phi(u_1,\ldots,u_p)[/tex] is ultimately a function of the n variables [tex]x_i[/tex], we can generate a system of [tex]p \leq n[/tex] partial differential equations by differentiating [tex]\Phi[/tex] with respect to p of the [tex]x_i[/tex] (say [tex]x_1, \ldots, x_p[/tex]). That is, by the chain rule we can write:
[tex]\frac{\partial \Phi}{\partial u_1} \cdot \frac{\partial u_1}{\partial x_1} + ... + \frac{\partial \Phi}{\partial u_p} \cdot \frac{\partial u_p}{\partial x_1} = 0[/tex]
[tex]...[/tex]
[tex]\frac{\partial \Phi}{\partial u_1} \cdot \frac{\partial u_1}{\partial x_p} + ... + \frac{\partial \Phi}{\partial u_p} \cdot \frac{\partial u_p}{\partial x_p} = 0[/tex]
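To make the chain-rule system concrete, here is a numerical sanity check with a hypothetical example of my own (not from the text): take p = n = 2 with functionally dependent u's, so that some [tex]\Phi[/tex] vanishes identically and each equation of the system evaluates to zero.

```python
# Hypothetical example: u1(x1, x2) = x1 + x2 and u2(x1, x2) = (x1 + x2)**2
# are functionally dependent, so Phi(u1, u2) = u1**2 - u2 is identically zero.
# The chain-rule equations dPhi/dx1 = 0 and dPhi/dx2 = 0 should then hold.

def grad_u(x1, x2):
    """Partial derivatives of u1 and u2 at (x1, x2), computed by hand."""
    du1 = (1.0, 1.0)                  # (du1/dx1, du1/dx2)
    du2 = (2*(x1 + x2), 2*(x1 + x2))  # (du2/dx1, du2/dx2)
    return du1, du2

def grad_phi(u1, u2):
    """Partials of Phi(u1, u2) = u1**2 - u2 with respect to u1 and u2."""
    return (2*u1, -1.0)               # (dPhi/du1, dPhi/du2)

x1, x2 = 0.7, -0.3
u1, u2 = x1 + x2, (x1 + x2)**2
p1, p2 = grad_phi(u1, u2)
du1, du2 = grad_u(x1, x2)

# Each row of the system: dPhi/du1 * du1/dx_k + dPhi/du2 * du2/dx_k
eq1 = p1*du1[0] + p2*du2[0]   # derivative of Phi with respect to x1
eq2 = p1*du1[1] + p2*du2[1]   # derivative of Phi with respect to x2
print(eq1, eq2)               # both should be 0.0
```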
So far, so good. That much I follow.
Next, the author simply states that from this we can conclude that the Jacobian determinant vanishes, that is, we can conclude that:
[tex]\left| \begin{array}{ccc}
\frac{\partial u_1}{\partial x_1} & ... & \frac{\partial u_p}{\partial x_1} \\
& ... & \\
\frac{\partial u_1}{\partial x_p} & ... & \frac{\partial u_p}{\partial x_p} \end{array} \right| = 0[/tex]
That's the part I don't get. How does he arrive at that result?
I can understand how, from the system of p PDEs given above, we can arrive at the linear system:
[tex]\left( \begin{array}{ccc}
\frac{\partial u_1}{\partial x_1} & ... & \frac{\partial u_p}{\partial x_1} \\
& ... & \\
\frac{\partial u_1}{\partial x_p} & ... & \frac{\partial u_p}{\partial x_p}
\end{array}
\right) \cdot
\left(\begin{array}{c} \frac{\partial \Phi}{\partial u_1} \\ ... \\ \frac{\partial \Phi}{\partial u_p} \end{array} \right) = \left(\begin{array}{c} 0 \\ ... \\ 0 \end{array} \right)[/tex]
But how do we get from there to the conclusion that the determinant of the coefficient matrix is zero?
Is this just some incredibly obvious, incredibly simple result from linear algebra that I'm forgetting?
I thought that if the determinant of a linear system is zero, that means the system is not invertible (i.e., not uniquely solvable)?
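For reference, here is the linear-algebra fact I half-remember, checked numerically in the 2x2 case (a sketch with a made-up matrix, not the Jacobian from the text): if M v = 0 for some nonzero vector v, then M is not invertible, so det(M) = 0.

```python
# A 2x2 matrix with linearly dependent rows has a nonzero vector v
# in its kernel (M v = 0), and its determinant is zero.

def det2(M):
    """Determinant of a 2x2 matrix given as nested lists."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def matvec(M, v):
    """Product of a 2x2 matrix with a length-2 vector."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

M = [[1.0, 2.0],
     [2.0, 4.0]]      # second row is 2 * first row
v = [2.0, -1.0]       # nonzero vector annihilated by both rows

print(matvec(M, v))   # [0.0, 0.0]  -- v lies in the kernel of M
print(det2(M))        # 0.0
```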