The notion of independence of equations

  • #1

quasar987

My classical mechanics textbook gives a set of k equations

[tex]f_{\alpha}(x_1,...,x_N)=0, \ \ \ \ \ \ \alpha=1,...,k[/tex]

and it is said that these k equations are independent when the rank of the matrix

[tex]A_{\alpha i}=\left(\frac{\partial f_{\alpha}}{\partial x_i}\right)[/tex]

is maximal, i.e. equals k.

Could someone explain why this definition makes sense? That is, why does it capture the intuitive notion of independence, and what exactly does independence mean when we're talking about equations? Some references would be nice too!

Thank you all.
 
  • #2
Let us assume that there exists a continuous set of solutions about a solution point [itex]\vec{x}_{0}[/itex].

Then, we would have for some perturbation vector [itex]d\vec{x}[/itex] that
[tex]f_{\alpha}(\vec{x}_{0}+d\vec{x})=0[/tex]
Now, rewriting the left-hand side we get in the limit of a tiny perturbation:
[tex]f_{\alpha}(\vec{x}_{0})+A_{\alpha{i}}dx_{i}=0\to{A}_{\alpha{i}}dx_{i}=0[/tex]

Thus, if we are to ensure that there does NOT exist some non-zero perturbation vector [itex]d\vec{x}[/itex] in the neighbourhood of a solution [itex]\vec{x}_{0}[/itex], we must require that the null space of A be trivial; in the square case k = N, this means A is invertible.
This is in tune with standard ideas of linear independence.
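This linearization is easy to check numerically. Below is a minimal sketch (not from the thread; the sphere-and-plane constraints are an illustrative choice): two constraints in three variables whose Jacobian A has maximal rank k = 2 at a solution point, and a tangent perturbation dx satisfying A dx ≈ 0.

```python
import numpy as np

# Illustrative constraints: f1 = x^2 + y^2 + z^2 - 1 (unit sphere)
# and f2 = z (equatorial plane); the solution set is the unit circle.
def f(x):
    return np.array([x[0]**2 + x[1]**2 + x[2]**2 - 1.0, x[2]])

def jacobian(f, x, h=1e-6):
    """Forward-difference approximation of A_{alpha i} = df_alpha/dx_i."""
    fx = f(x)
    A = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        A[:, i] = (f(xp) - fx) / h
    return A

x0 = np.array([1.0, 0.0, 0.0])   # a point on the circle
A = jacobian(f, x0)

# Maximal rank k = 2: the constraints are independent at x0.
print(np.linalg.matrix_rank(A, tol=1e-4))   # 2

# A perturbation tangent to the circle satisfies A dx ≈ 0,
# matching the linearized condition A_{alpha i} dx_i = 0.
dx = np.array([0.0, 1.0, 0.0])
print(np.abs(A @ dx).max() < 1e-4)          # True
```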
 
  • #3
I'm not sure what you mean by the "intuitive notion" of independence, but the standard definition (from linear algebra) is that the only way we can have [itex]a_1f_1(x_1,...,x_N)+ a_2f_2(x_1,...,x_N)+ \cdots+ a_kf_k(x_1,...,x_N)= 0[/itex] for all values of [itex]x_1,...,x_N[/itex] is to have [itex]a_1= a_2= \cdots= a_k= 0[/itex]. That is the same as saying that the system of equations
[itex]a_1f_1+ a_2f_2+ \cdots+ a_kf_k= 0[/itex]
[itex]a_1\frac{\partial f_1}{\partial x_1}+ a_2\frac{\partial f_2}{\partial x_1}+ \cdots+ a_k\frac{\partial f_k}{\partial x_1}= 0[/itex]
...
[itex]a_1\frac{\partial f_1}{\partial x_N}+ a_2\frac{\partial f_2}{\partial x_N}+ \cdots+ a_k\frac{\partial f_k}{\partial x_N}= 0[/itex]
(all evaluated at any specific values of the x's) has only the trivial solution [itex]a_1= \cdots= a_k= 0[/itex]. That is true if and only if the coefficient matrix of the derivative equations, which is just the matrix you cite, has rank k.

I believe that is pretty much what arildno is saying from a slightly different point of view.
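To see the rank criterion fail, here is a small sketch (my own illustrative example, not from the thread) of a dependent pair: the second equation is just twice the first, and the Jacobian rank drops below k.

```python
import numpy as np

# Illustrative dependent pair in two variables:
# f1 = x + y - 1 and f2 = 2*f1, so f2 adds no new information.
def f(x):
    f1 = x[0] + x[1] - 1.0
    return np.array([f1, 2.0 * f1])

# The Jacobian is constant here: rows [1, 1] and [2, 2].
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

# Rank 1 < k = 2, so by the textbook criterion the equations are
# NOT independent; equivalently, a1 = 2, a2 = -1 gives
# a1*f1 + a2*f2 = 0 identically.
print(np.linalg.matrix_rank(A))  # 1
```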
 
