# The notion of independence of equations


## Main Question or Discussion Point

In my classical mechanics textbook I have a set of k equations

$$f_{\alpha}(x_1,...,x_N)=0, \ \ \ \ \ \ \alpha=1,...,k$$

and it is said that these k equations are independent when the rank of the matrix

$$A_{\alpha i}=\left(\frac{\partial f_{\alpha}}{\partial x_i}\right)$$

is maximal, i.e. equals k.

Could someone explain why this definition makes sense? That is, why does it match the intuitive notion of independence, and what exactly is this notion of independence when we're talking about equations? Some references would be nice too!

Thank you all.

arildno
Let us assume that there exists a continuous set of solutions about a solution point $\vec{x}_{0}$. Then, for some perturbation vector $d\vec{x}$, we would have
$$f_{\alpha}(\vec{x}_{0}+d\vec{x})=0$$
Now, expanding the left-hand side to first order, we get in the limit of a tiny perturbation:
$$f_{\alpha}(\vec{x}_{0})+A_{\alpha{i}}dx_{i}=0\to{A}_{\alpha{i}}dx_{i}=0$$

Thus, if each equation is to genuinely restrict the perturbations $d\vec{x}$ allowed in the neighbourhood of a solution $\vec{x}_{0}$, we must require that A has maximal rank k: the solutions of $A_{\alpha i}dx_{i}=0$ then form an (N-k)-dimensional subspace, so each equation removes exactly one degree of freedom. In the square case k = N, this means A is invertible and only $d\vec{x}=\vec{0}$ survives.
This is in tune with standard ideas of linear independence.
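As a rough numerical check of this linearization argument, here is a sketch with a hypothetical single constraint (k = 1, N = 2), the unit circle $f(x)=x_1^2+x_2^2-1$: perturbations with $A\,d\vec{x}=0$ leave f zero to first order, while other perturbations change it at first order.

```python
import numpy as np

# Hypothetical constraint (not from the textbook): the unit circle.
def f(x):
    return x[0]**2 + x[1]**2 - 1.0

x0 = np.array([1.0, 0.0])            # a solution point, f(x0) = 0
A = np.array([[2*x0[0], 2*x0[1]]])   # A[alpha, i] = df_alpha/dx_i at x0

eps = 1e-6
dx_tangent = eps * np.array([0.0, 1.0])  # A @ dx = 0: allowed direction
dx_normal  = eps * np.array([1.0, 0.0])  # A @ dx != 0: forbidden direction

# f changes only at second order along the tangent direction ...
print(abs(f(x0 + dx_tangent)))   # ~ eps**2
# ... but at first order along the normal direction.
print(abs(f(x0 + dx_normal)))    # ~ 2*eps
```

The tangent perturbation stays on the constraint surface up to $O(\epsilon^2)$, which is exactly the statement $A_{\alpha i}dx_i = 0$ above.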

HallsofIvy
I'm not sure what you mean by the "intuitive notion" of independence, but the standard definition (from linear algebra) is that the only way we can have $a_1f_1(x_1,...,x_N)+ a_2f_2(x_1,...,x_N)+ \cdots+ a_kf_k(x_1,...,x_N)= 0$ for all values of $x_1,...,x_N$ is to have $a_1= a_2= \cdots= a_k= 0$. That is the same as saying that the system of equations
$$a_1f_1(x_1,...,x_N)+ a_2f_2(x_1,...,x_N)+ \cdots+ a_kf_k(x_1,...,x_N)= 0$$
$$a_1\frac{\partial f_1}{\partial x_1}+ a_2\frac{\partial f_2}{\partial x_1}+ \cdots+ a_k\frac{\partial f_k}{\partial x_1}= 0$$
$$\vdots$$
$$a_1\frac{\partial f_1}{\partial x_N}+ a_2\frac{\partial f_2}{\partial x_N}+ \cdots+ a_k\frac{\partial f_k}{\partial x_N}= 0$$
regarded as linear equations in the unknowns $a_1,...,a_k$ at any specific values of the $x_i$, has only the trivial solution. That is true if and only if its coefficient matrix, which is built from the matrix $A_{\alpha i}$ you cite, has rank k.

I believe that is pretty much what arildno is saying from a slightly different point of view.
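To make the rank criterion concrete, here is a small numerical sketch with hypothetical constraints (k = 2, N = 3, not taken from the textbook): a point on the unit sphere intersected with the plane $x_3 = 0$ gives a full-rank Jacobian, while replacing the second constraint with a multiple of the first drops the rank.

```python
import numpy as np

# Hypothetical constraints on (x1, x2, x3):
#   f1 = x1^2 + x2^2 + x3^2 - 1   (unit sphere)
#   f2 = x3                       (equatorial plane)
def jacobian(x):
    x1, x2, x3 = x
    # Rows are the gradients: A[alpha, i] = df_alpha/dx_i
    return np.array([[2*x1, 2*x2, 2*x3],
                     [0.0,  0.0,  1.0]])

x0 = np.array([1.0, 0.0, 0.0])       # satisfies f1 = f2 = 0
A = jacobian(x0)
print(np.linalg.matrix_rank(A))      # 2: maximal rank, independent equations

# Now take f2 = 2*f1: the second gradient is a multiple of the first.
A_dep = np.array([jacobian(x0)[0], 2 * jacobian(x0)[0]])
print(np.linalg.matrix_rank(A_dep))  # 1 < 2: the equations are dependent
```

The dependent pair imposes only one genuine restriction at $x_0$, which is exactly what the rank deficiency detects.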