The notion of independence of equations

quasar987 · Science Advisor · Homework Helper · Gold Member
In my classical mechanics textbook I have a set of k equations

f_{\alpha}(x_1,...,x_N)=0, \ \ \ \ \ \ \alpha=1,...,k

and it is said that these k equations are independent when the rank of the matrix

A_{\alpha i}=\left(\frac{\partial f_{\alpha}}{\partial x_i}\right)

is maximal, i.e. equals k.
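To make the definition concrete, here is a small numerical sketch (the constraint functions are made-up examples, not from the textbook): we evaluate the matrix A_{\alpha i} of partial derivatives at a point and check its rank with numpy.

```python
import numpy as np

def jacobian_rank(grads, x):
    """Rank of A_{alpha,i} = (df_alpha/dx_i) evaluated at the point x."""
    A = np.array([g(x) for g in grads], dtype=float)
    return np.linalg.matrix_rank(A)

# Hypothetical constraints: f1 = x1^2 + x2^2 - 1, f2 = x1 - x2.
# Only their gradients enter the rank test.
g1 = lambda x: [2 * x[0], 2 * x[1]]
g2 = lambda x: [1.0, -1.0]
# A redundant constraint f3 = 2*f1 has gradient 2 * grad(f1):
g3 = lambda x: [4 * x[0], 4 * x[1]]

x0 = [0.6, 0.8]  # a point on the unit circle
print(jacobian_rank([g1, g2], x0))  # 2: rank is maximal, constraints independent
print(jacobian_rank([g1, g3], x0))  # 1: rank deficient, constraints dependent
```

In the dependent case the rows of A are proportional, so the rank drops below k, matching the textbook's criterion.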

Could someone explain why this definition makes sense? That is, why does it capture the intuitive notion of independence, and what exactly is this notion of independence when we're talking about equations? Some references would be nice too!

Thank you all.
 
Let us assume that there exists a continuous set of solutions about a solution point \vec{x}_{0}.

Then, for some perturbation vector d\vec{x}, we would have

f_{\alpha}(\vec{x}_{0}+d\vec{x})=0

Now, rewriting the left-hand side, we get in the limit of a tiny perturbation:

f_{\alpha}(\vec{x}_{0})+A_{\alpha i}dx_{i}=0\to A_{\alpha i}dx_{i}=0

Thus, if we are to ensure that there does NOT exist some non-zero perturbation vector in the neighbourhood of a solution \vec{x}_{0}, we must require that A has maximal rank (in the case k = N, that A is invertible).
This is in tune with standard ideas of linear independence.
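arildno's linearization can be checked numerically. In this sketch (the unit-circle constraint is my own example, not from the thread), a perturbation d\vec{x} taken from the null space of A preserves the constraint to first order: the residual is quadratic in the step size.

```python
import numpy as np

# Hypothetical single constraint f(x) = x1^2 + x2^2 - 1 (the unit circle).
f = lambda x: x[0]**2 + x[1]**2 - 1.0
x0 = np.array([0.6, 0.8])            # a solution point: f(x0) = 0
A = np.array([[2 * x0[0], 2 * x0[1]]])  # A_{alpha,i} = df/dx_i at x0

# Null space of A = admissible first-order perturbations.
# The last row of Vt from the SVD spans it here (1 constraint, 2 unknowns).
_, _, Vt = np.linalg.svd(A)
v = Vt[-1]                            # A @ v = 0: tangent to the circle
eps = 1e-4
print(abs(f(x0 + eps * v)))           # O(eps^2), here ~1e-8: f stays zero to first order
```

This is why a full-rank A matters: its null space has dimension N - k, exactly the number of directions in which the solution set can extend, and no constraint is redundant.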
 
I'm not sure what you mean by the "intuitive notion" of independence, but the standard definition (from linear algebra) is that the only way we can have a_1f_1(x_1,...,x_N)+ a_2f_2(x_1,...,x_N)+ \cdots+ a_kf_k(x_1,...,x_N)= 0 for all values of x_1,...,x_N is to have a_1= a_2= \cdots= a_k= 0. That is the same as saying that the system of equations
a_1f_1(x_1,...,x_N)+ a_2f_2(x_1,...,x_N)+ \cdots+ a_kf_k(x_1,...,x_N)= 0
a_1\frac{\partial f_1}{\partial x_1}+ a_2\frac{\partial f_2}{\partial x_1}+ \cdots+ a_k\frac{\partial f_k}{\partial x_1}= 0
...
a_1\frac{\partial f_1}{\partial x_N}+ a_2\frac{\partial f_2}{\partial x_N}+ \cdots+ a_k\frac{\partial f_k}{\partial x_N}= 0
for any specific values of the x's has only the trivial solution a_1= \cdots= a_k= 0. That is true if and only if the coefficient matrix, whose derivative rows are just the transpose of the matrix you cite, has rank k.

I believe that is pretty much what arildno is saying from a slightly different point of view.
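That equivalence can be sketched numerically too. In this made-up example (not from either post), the second constraint is a multiple of the first, so the coefficient matrix of HallsofIvy's system has rank below k and a nonzero (a_1, a_2) exists.

```python
import numpy as np

# Hypothetical dependent pair: f2 = 2*f1.
f1  = lambda x: x[0]**2 + x[1]**2 - 1
f2  = lambda x: 2 * f1(x)
df1 = lambda x: [2 * x[0], 2 * x[1]]
df2 = lambda x: [4 * x[0], 4 * x[1]]

def coeff_matrix(fs, dfs, x):
    """First row: the functions themselves; remaining rows: their partials.
    Columns index j = 1..k, matching the unknowns a_j."""
    rows = [[f(x) for f in fs]]
    for i in range(len(x)):
        rows.append([df(x)[i] for df in dfs])
    return np.array(rows, dtype=float)

x0 = [0.3, 0.4]
M = coeff_matrix([f1, f2], [df1, df2], x0)
print(np.linalg.matrix_rank(M))  # 1 < k = 2, so nonzero (a1, a2) solve the system
```

Here the second column of M is exactly twice the first, so (a_1, a_2) = (2, -1) is a nontrivial solution, and the constraints fail both tests for independence at once.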
 
