I have a query, if anybody can shed some light, thanks. From an early age we get this idea of needing one equation for each unknown variable whose unique value we want to discover. Three unknowns? Well, you need three independent equations: it's a kind of rule of thumb, I guess.

However, I have noticed that in some areas, notably perturbation expansions, one arrives at a single equation and can actually discover not one but several unknowns from that one equation. Smells of a free lunch, eh? Of course, it can't happen just like that. In perturbation theory, a very common step is to expand as a Taylor series, rearrange into coefficients of rising powers of the key variable (say x), and then say (drums rolling...) that if the expression equates to zero, then each coefficient (with its own unknowns) also equates to zero. This enables us to pull out three, four, even more equations from the original expansion. Golly.

I've been over a few textbooks on this, and they seem to treat it as a normal course of deduction. No flicker of the eyelids! I admit this is a rough description; I'll try to put more flesh on it later. Initially, however, I wanted to post about it to see if anybody recognises what I'm describing. Thanks!
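To make the coefficient-matching step concrete, here is a toy sketch in Python. The quadratic below is my own made-up example, not from any textbook: I perturb x^2 - (3 + eps)x + (2 + eps) = 0, substitute x = x0 + eps*x1 + O(eps^2), and collect powers of eps. Each power supplies its own equation (one for x0, one for x1), because the expansion has to vanish for every small eps, not just one value, so each coefficient must vanish separately.

```python
# Toy perturbation example (a made-up quadratic, purely illustrative):
# solve x^2 - (3 + eps)*x + (2 + eps) = 0 for small eps.
#
# Substituting x = x0 + eps*x1 + O(eps^2) and collecting powers of eps:
#   eps^0:  x0^2 - 3*x0 + 2 = 0          -> unperturbed roots x0 = 1, 2
#   eps^1:  2*x0*x1 - 3*x1 - x0 + 1 = 0  -> x1 = (x0 - 1) / (2*x0 - 3)
# One equation in, two equations out -- one per power of eps.

import math

def first_order_root(x0: float, eps: float) -> float:
    """First-order perturbative root x0 + eps*x1."""
    x1 = (x0 - 1.0) / (2.0 * x0 - 3.0)  # from the O(eps) coefficient
    return x0 + eps * x1

def exact_roots(eps: float):
    """Exact roots of x^2 - (3+eps)x + (2+eps) = 0, quadratic formula."""
    b, c = -(3.0 + eps), 2.0 + eps
    disc = math.sqrt(b * b - 4.0 * c)
    return (-b - disc) / 2.0, (-b + disc) / 2.0

eps = 0.01
approx = first_order_root(2.0, eps)  # perturbing the root near x0 = 2
exact = max(exact_roots(eps))        # exact root near 2
print(approx, exact)                 # both are approximately 2.01
```

For this particular quadratic the first-order answer happens to be exact (the polynomial factors as (x - 1)(x - 2 - eps)), but the mechanics are the same as in the general case: a single equation, read off power by power in the small parameter, yields a whole cascade of equations.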