
The notion of independence of equations

  1. Mar 31, 2007 #1



    My classical mechanics textbook gives a set of k equations

    [tex]f_{\alpha}(x_1,...,x_N)=0, \ \ \ \ \ \ \alpha=1,...,k[/tex]

    and it is said that these k equations are independent when the rank of the matrix

    [tex]A_{\alpha i}=\left(\frac{\partial f_{\alpha}}{\partial x_i}\right)[/tex]

    is maximal, i.e. equals k.

    Could someone explain why this definition makes sense? That is, why does it match the intuitive notion of independence, and what exactly is this notion of independence when we're talking about equations? Some references would be nice too!

    Thank you all.
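    A small numerical sketch of the rank criterion may help. The constraint functions, the evaluation point, and the helper `jacobian` below are my own illustrative choices, not from any textbook: one pair of constraints is redundant (one is a multiple of the other), the other pair is genuinely independent, and the rank of the Jacobian detects the difference.

    ```python
    import numpy as np

    # Two constraints on (x1, x2, x3):
    #   f1 = x1 + x2 + x3
    #   f2 = 2*x1 + 2*x2 + 2*x3   (just 2*f1, so f2 adds no new information)
    #   g2 = x1 - x3              (genuinely new information)

    def jacobian(fs, x, h=1e-6):
        """Numerical Jacobian A[alpha, i] = d f_alpha / d x_i (central differences)."""
        x = np.asarray(x, dtype=float)
        A = np.zeros((len(fs), len(x)))
        for i in range(len(x)):
            dx = np.zeros_like(x)
            dx[i] = h
            for a, f in enumerate(fs):
                A[a, i] = (f(x + dx) - f(x - dx)) / (2 * h)
        return A

    f1 = lambda x: x[0] + x[1] + x[2]
    f2 = lambda x: 2 * x[0] + 2 * x[1] + 2 * x[2]
    g2 = lambda x: x[0] - x[2]

    x0 = np.array([1.0, -2.0, 1.0])  # a common zero of f1, f2 and g2

    print(np.linalg.matrix_rank(jacobian([f1, f2], x0)))  # 1 < k=2: dependent
    print(np.linalg.matrix_rank(jacobian([f1, g2], x0)))  # 2 = k:   independent
    ```

    The redundant pair fails the maximal-rank test even though, on paper, it still looks like "two equations".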
  3. Mar 31, 2007 #2



    Let us assume that there exists a continuous set of solutions about a solution point [itex]\vec{x}_{0}[/itex].

    Then, for some perturbation vector [itex]d\vec{x}[/itex], we would have
    [tex]f_{\alpha}(\vec{x}_{0}+d\vec{x})=0, \ \ \ \ \ \ \alpha=1,...,k[/tex]
    Now, rewriting the left-hand side, we get in the limit of a tiny perturbation:
    [tex]f_{\alpha}(\vec{x}_{0})+\sum_{i=1}^{N}\frac{\partial f_{\alpha}}{\partial x_{i}}dx_{i}=\sum_{i=1}^{N}A_{\alpha i}\,dx_{i}=0[/tex]
    since [itex]f_{\alpha}(\vec{x}_{0})=0[/itex].

    Thus, if we are to ensure that there does NOT exist some non-zero perturbation vector in the neighbourhood of a solution [itex]\vec{x}_{0}[/itex], we must require that [itex]A[/itex] have a trivial null space, i.e. maximal rank (in the case [itex]k=N[/itex], that [itex]A[/itex] be invertible).
    This is in tune with standard ideas of linear independence.
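    The linearization argument above can be checked numerically. In this sketch (the circle constraint and the point on it are my own illustrative choices), a dependent pair of constraints gives a rank-deficient Jacobian, and the SVD exhibits a non-zero perturbation that the Jacobian annihilates, i.e. a direction along which every constraint is unchanged to first order:

    ```python
    import numpy as np

    # Dependent pair: f1 = x1^2 + x2^2 - 1 and f2 = 2*f1.  At a point x0 on the
    # circle, the Jacobian is A = [[2*x1, 2*x2], [4*x1, 4*x2]], which has rank 1.
    x0 = np.array([0.6, 0.8])
    A = np.array([[2 * x0[0], 2 * x0[1]],
                  [4 * x0[0], 4 * x0[1]]])

    # Null-space direction via SVD: the right singular vector belonging to the
    # zero singular value is a non-zero perturbation dx with A @ dx = 0.
    _, s, Vt = np.linalg.svd(A)
    dx = Vt[-1]

    print(np.allclose(A @ dx, 0))  # True: first-order change of every f vanishes
    ```

    Had the two constraints been independent (rank 2 = N here), no such non-zero [itex]d\vec{x}[/itex] would exist.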
  4. Apr 1, 2007 #3



    I'm not sure what you mean by the "intuitive notion" of independence, but the standard definition (from linear algebra) is that the only way we can have [itex]a_1f_1(x_1,...,x_N)+ a_2f_2(x_1,...,x_N)+ \cdots+ a_kf_k(x_1,...,x_N)= 0[/itex] for all values of [itex]x_1,...,x_N[/itex] is to have [itex]a_1= a_2= \cdots= a_k= 0[/itex]. That is the same as saying that the system of equations
    [tex]a_1f_1(x_1,...,x_N)+ a_2f_2(x_1,...,x_N)+ \cdots+ a_kf_k(x_1,...,x_N)= 0[/tex]
    [tex]a_1\frac{\partial f_1}{\partial x_1}+ a_2\frac{\partial f_2}{\partial x_1}+ \cdots+ a_k\frac{\partial f_k}{\partial x_1}= 0[/tex]
    [tex]\vdots[/tex]
    [tex]a_1\frac{\partial f_1}{\partial x_N}+ a_2\frac{\partial f_2}{\partial x_N}+ \cdots+ a_k\frac{\partial f_k}{\partial x_N}= 0[/tex]
    at any specific values of the x's has only the trivial solution. That is true if and only if the coefficient matrix, which is just the matrix you cite, has rank k.

    I believe that is pretty much what arildno is saying from a slightly different point of view.
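    This view can also be illustrated numerically. In the sketch below (the functions, the point, and the helper `value_and_gradient` are my own illustrative choices), each row of the matrix holds one function's value and partial derivatives at a point; a non-trivial choice of coefficients [itex]a_1,...,a_k[/itex] solving the system above exists exactly when the matrix has rank less than k:

    ```python
    import numpy as np

    # Hypothetical smooth functions of (x1, x2); f3 = f1 + f2 is a linear combination.
    f1 = lambda x: np.sin(x[0]) * np.cos(x[1])
    f2 = lambda x: x[0] * x[1]
    f3 = lambda x: f1(x) + f2(x)

    def value_and_gradient(f, x, h=1e-6):
        """Row (f, df/dx1, ..., df/dxN) at x, gradients by central differences."""
        x = np.asarray(x, dtype=float)
        row = [f(x)]
        for i in range(len(x)):
            dx = np.zeros_like(x)
            dx[i] = h
            row.append((f(x + dx) - f(x - dx)) / (2 * h))
        return row

    x0 = np.array([0.3, 0.7])
    M = np.array([value_and_gradient(f, x0) for f in (f1, f2, f3)])

    print(np.linalg.matrix_rank(M))  # 2 < k=3: a linear dependence exists
    ```

    Because f3's row is exactly the sum of the other two, the rank drops below k, flagging the dependence just as the symbolic criterion does.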