Several unknowns but only one equation

  • Thread starter stabu
  • Tags
    Unknowns
  • #1
I have a query please, if anybody can shed some light, thanks:

So from an early age we get this idea of needing one equation for each unknown variable whose unique value we need to discover. Three unknowns? Well, you need three nonsingular equations: it's a kind of rule of thumb, I guess.

However, I have noticed that in some areas, notably perturbation expansions and others, one arrives at a single equation and can actually discover not one, but several variables from that one equation. Smells of a free lunch, eh? Of course, it can't happen just like that. In perturbation theory, a very common step is to expand as a Taylor polynomial, rearrange it into coefficients of rising powers of the key variable (say x), and then say (drums rolling ...) that if the expression equates to zero, then each coefficient (with its own unknowns) also equates to zero.

This enables us to pull out three, four, even more equations from the original expansion. Golly.
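
To make it concrete, here's a toy version of the step I mean (numbers made up by me, not from any particular book). Suppose everything boils down to a single identity that has to hold for every x:

(a + b - 3)x + (a - b - 1) = 0

Matching coefficients gives a + b - 3 = 0 and a - b - 1 = 0, so a = 2 and b = 1. On the face of it one equation, yet two unknowns fall out.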

I've been over a few textbooks on this ... and they seem to treat it as a normal course of deduction. No flicker of the eyelids!

I admit this is a rough description ... I'll try to put more flesh on it later. Initially, however, I wanted to post about it, to see if anybody recognises what I'm describing.

Thanks!
 

Answers and Replies

  • #2
As you said yourself, the rule only applies to a nonsingular system of linear equations, not to arbitrary systems of equations (this is one of the most basic facts of linear algebra). Consider a^2 + b^2 = 0 with a, b unknown real variables: since squares of real numbers are nonnegative, both must be zero, so we have two unknowns, one equation, and one unique solution (0, 0). Some people may state "you need n equations to determine n unknowns", but they either implicitly take "equations" to mean nonsingular linear equations, or they are stating something that is often false but true in many simple cases. I don't really see what you have trouble understanding: you know that the rule only applies in a special case, and you can come up with cases where it doesn't apply.
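
If it helps to see where the rule does hold, here is a small numerical sketch (toy numbers of my own choosing, nothing deep):

Code:
import numpy as np

# Two independent (nonsingular) linear equations pin down two unknowns:
#   a + b = 3,  a - b = 1
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
rhs = np.array([3.0, 1.0])
print(np.linalg.solve(A, rhs))      # [2. 1.]

# A singular system: the second row is twice the first, so the "two"
# equations carry the information of only one, and there is no unique answer.
B = np.array([[1.0, 1.0],
              [2.0, 2.0]])
print(np.linalg.matrix_rank(B))     # 1

The rank tells you how many genuinely independent equations you really have, which is what the rule of thumb is actually counting.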
 
  • #3
"Nonsingular" is the wrong word here. You mean "independent" equations.
 
  • #4
Yes, it is certainly true that if a polynomial (or power series) is equal to 0 for all x, then every coefficient must be 0. But that's not one equation; that is an infinite number of equations, one for each value of x.
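
In case it helps, here is a small sketch with sympy of that coefficient bookkeeping (the toy identity is my own invention, not taken from any particular textbook):

Code:
import sympy as sp

x, a, b, c = sp.symbols('x a b c')

# Toy identity: suppose the expansion has collapsed to
#   (a + b - 3)*x**2 + (a - b - 1)*x + (c - 2) = 0   for every x.
expr = (a + b - 3)*x**2 + (a - b - 1)*x + (c - 2)

# "Every coefficient must vanish" turns the single identity into a system.
eqs = [sp.Eq(coeff, 0) for coeff in sp.Poly(expr, x).all_coeffs()]
print(eqs)                        # [Eq(a + b - 3, 0), Eq(a - b - 1, 0), Eq(c - 2, 0)]
print(sp.solve(eqs, [a, b, c]))   # {a: 2, b: 1, c: 2}

The single identity hands you as many equations as there are powers of x carrying unknown coefficients.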

 
