Ah yes, linear. Here's the thing. If you can assume some system or circuit is linear, all sorts of calculations become easier. In some cases, calculations simply become possible (vs. impossible). Since engineers deal in the possible, this is important.
The reason comes from your high school algebra class: the distributive law applies to linear networks and systems. This means you can do algebraic transformations like ab + ac = a(b + c). Or, for an EE doing a KVL loop: Vcc - I·R1 - I·R2 = Vcc - I(R1 + R2) = 0. Another example is cascading amplifiers. If they are linear, you can say A_system = A1 × A2 for all frequency components when you cascade them. If they are nonlinear, you can't say that. See IMD/THD below.
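To make the cascaded-amplifier point concrete, here's a quick Python sketch. Everything in it (the gain values, the hard-clipping stage) is made up for illustration:

```python
# Cascaded gains multiply only when each stage is linear.
# All names/values (linear_amp, clipping_amp, a1, a2) are illustrative.

def linear_amp(gain):
    return lambda v: gain * v

def clipping_amp(gain, clip=1.0):
    # crude nonlinear stage: linear gain followed by hard clipping
    return lambda v: max(-clip, min(clip, gain * v))

a1, a2 = linear_amp(10.0), linear_amp(5.0)
v_in = 0.02
v_out = a2(a1(v_in))
print(v_out)                  # ~1.0
print(10.0 * 5.0 * v_in)      # same thing: A_system = A1 * A2

n1, n2 = clipping_amp(10.0), clipping_amp(5.0)
print(n2(n1(0.5)))            # 1.0, NOT 10 * 5 * 0.5 = 25: gains don't multiply
```

Once either stage clips, the "gain" depends on signal level and the simple A1 × A2 algebra is gone.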
For example, matrix math instantly applies. You can do things like Thevenin/Norton reductions and general analog circuit analysis, but strictly only if things are linear.
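Here's a quick sketch of a Thevenin reduction, with arbitrary example values (a resistive divider driving a load). The point is that the reduced two-element circuit predicts exactly the same load current as full nodal analysis of the original, and linearity is what makes that exact:

```python
# Sketch: Thevenin reduction of a resistive divider driving a load.
# Component values are arbitrary, chosen just for illustration.

Vcc, R1, R2, RL = 10.0, 1000.0, 1000.0, 2200.0

# Thevenin equivalent seen by RL at the divider tap:
Vth = Vcc * R2 / (R1 + R2)          # open-circuit voltage
Rth = R1 * R2 / (R1 + R2)           # source shorted -> R1 || R2
i_thevenin = Vth / (Rth + RL)

# Brute-force nodal analysis of the full circuit for comparison:
p = R2 * RL / (R2 + RL)             # R2 || RL
v_node = Vcc * p / (R1 + p)
i_full = v_node / RL

print(i_thevenin, i_full)           # identical: the reduction is exact
```

Try replacing one resistor with a diode and the whole trick falls apart, because the reduction relies on superposition.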
The opposite is, of course, nonlinear. This all gets back to polynomials and Taylor expansions. Being (approximately) linear means you can truncate the Taylor expansion of a nonlinear system after the first-order term without too much error. In fact, audio equipment specs like IMD and THD are nothing more than measures of that truncation error.
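Here's a rough illustration of THD as truncation error, using tanh as a stand-in soft-clipping nonlinearity (the drive level and the number of harmonics summed are arbitrary choices). The cubic and higher odd terms of tanh's Taylor series show up directly as odd harmonics:

```python
import math

# Sketch: THD measures the energy in the Taylor terms you wish weren't there.
# tanh(v) ~= v - v**3/3 + ...  so a pure sine in produces odd harmonics out.

N = 1024
drive = 0.5                        # arbitrary input amplitude
x = [drive * math.sin(2 * math.pi * k / N) for k in range(N)]
y = [math.tanh(v) for v in x]

def harmonic_amplitude(sig, h):
    # correlate against the h-th harmonic to extract its amplitude
    s = sum(v * math.sin(2 * math.pi * h * k / N) for k, v in enumerate(sig))
    c = sum(v * math.cos(2 * math.pi * h * k / N) for k, v in enumerate(sig))
    return 2.0 * math.hypot(s, c) / N

fund = harmonic_amplitude(y, 1)
thd = math.sqrt(sum(harmonic_amplitude(y, h) ** 2 for h in range(2, 10))) / fund
print(f"THD ~ {100 * thd:.2f}%")   # nonzero: the higher-order terms leak into harmonics
```

Drop the drive level and the THD figure shrinks, because the smaller the signal, the better the first-order (linear) truncation holds.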
You also get more powerful math like Fourier, Laplace, and Z-transforms if things are linear. This is usually covered in "Linear Systems" courses, as opposed to "Linear Circuits", which is analog circuits. For example, in the IC industry, analog IC circuits == linear circuits.
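And a sketch of why Fourier-type analysis works on linear systems: push a sine through a linear first-order filter and you get a sine back at the same frequency, just scaled and phase-shifted, with essentially nothing at new frequencies. The filter coefficient here is arbitrary:

```python
import math

# Sketch: a linear system maps a sine to a sine at the SAME frequency,
# which is exactly what makes Fourier/Laplace/Z analysis applicable.

N, f = 2048, 8                     # record length and input frequency (cycles/record)
x = [math.sin(2 * math.pi * f * k / N) for k in range(N)]

# first-order linear low-pass: y[k] = a*y[k-1] + (1-a)*x[k]
a, y, prev = 0.9, [], 0.0
for v in x:
    prev = a * prev + (1 - a) * v
    y.append(prev)

def amp(sig, h):
    s = sum(v * math.sin(2 * math.pi * h * k / N) for k, v in enumerate(sig))
    c = sum(v * math.cos(2 * math.pi * h * k / N) for k, v in enumerate(sig))
    return 2.0 * math.hypot(s, c) / N

print(amp(y, f))        # attenuated and phase-shifted, but present
print(amp(y, 2 * f))    # ~0 (only startup-transient residue): no new frequencies
```

Swap the filter line for the tanh stage from the THD example and that second number stops being near zero, which is the whole point.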
So you can think of a large part of engineering school as figuring out how to keep things linear (and predictable) rather than nonlinear and less predictable.
It's not just EE that uses linearity like this - first-order statics in ME does the same thing for the same reasons.