# Why normalize equations when solving numerically?

1. Sep 3, 2014

### GabDX

In most textbooks I've read and programs I've worked with, differential equations are normalized (made dimensionless) before being solved with some numerical method. What is the point of this? It seems to be a lot of work for no benefit.

So, after a lot of derivations, you end up with some nice equations, but you're not done. No, you have to make them dimensionless, increasing the chances of making a mistake somewhere! Then, the software users don't want to use dimensionless variables (with good reason), so their input has to be normalized. After processing the data, the results have to be converted back to dimensional units.

This is really frustrating since I don't see the point at all.

2. Sep 3, 2014

### Simon Bridge

There is a famous pair of problems - it goes as follows:

(a) You have a lighter. You are in a room with a table. There is a lamp on a hook on the wall. List, in order, the sequence of steps needed to light the lamp.

This is not difficult.
1. take the lamp off the hook
2. put lamp on table
3. expose the wick
... etc. something like that until you have a lit lamp.

The second question goes:
(b) You have a lighter. Same room as before, but the lamp is on the table. List, in order, the sequence of steps needed to light the lamp.

Most people would do something like 1. expose the wick... but a mathematician would do this:
1. hang the lamp on a hook on the wall
2. proceed as per the answer to the previous question.

Setting up dimensionless units at the start is a bit like hanging the lamp on the wall - it gives a standard starting place for a set of instructions making it easier to tell you what to do. One of the advantages is that you can use the same block of code, in many cases, for the working out part, even though you start with different units in the setup.
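A minimal sketch of that code-reuse idea (all names and numbers here are hypothetical): a solver written once for the dimensionless decay equation $dy/dt' = -y$ can serve any physical problem with that form, because the unit-specific part lives entirely in the setup and readout, not in the "working out" routine.

```python
import math

def solve_dimensionless_decay(y0, t_end, steps=10000):
    """Euler-integrate dy/dt' = -y in dimensionless time t'."""
    dt = t_end / steps
    y = y0
    for _ in range(steps):
        y -= y * dt
    return y

def decay(y0, t_seconds, tau_seconds):
    """Unit-aware wrapper: normalize t' = t / tau, then reuse the same core."""
    return solve_dimensionless_decay(y0, t_seconds / tau_seconds)

# Two different physical problems, one shared "working out" routine:
radioactive = decay(1.0, t_seconds=3600.0, tau_seconds=7200.0)   # nuclei
capacitor   = decay(5.0, t_seconds=0.002,  tau_seconds=0.004)    # volts

# Both correspond to t' = 0.5, so both should be close to y0 * exp(-0.5):
assert abs(radioactive - math.exp(-0.5)) < 1e-3
assert abs(capacitor - 5.0 * math.exp(-0.5)) < 5e-3
```

The "hang the lamp on the wall" step is the division by `tau_seconds`; after that, every problem looks identical to the solver.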

When you are designing an application and you find that the normalizing is superfluous, you can always just implement the process without normalizing ... make sure you understand what you are doing first, though.

3. Sep 4, 2014

### the_wolfman

When solving differential equations, most numerical methods aim to convert the system of differential equations into a system of algebraic equations of the form

$A \vec x = \vec b$.

They then solve for $\vec x$, formally by inverting the matrix $A$:
$\vec x =A^{-1} \vec b$.
(In practice, solvers factorize $A$ rather than forming $A^{-1}$ explicitly, but the cost considerations are the same.)

Inverting the matrix $A$ is often the hardest part of the computation, and a lot of effort goes into finding optimal ways to invert large matrices. The number of operations needed to invert $A$ grows with the number of degrees of freedom $n$ to some power; depending on the structure of $A$, the scaling is often $n^2$ or $n^3$. This means that if you double your system size, the effort required to solve it increases by a factor of 4 to 8. In large computations, the effort required to invert $A$ often limits what we can and cannot simulate. Also keep in mind that computers do finite-precision math, so every operation introduces a small error. While these errors are individually tiny, they add up quickly when you invert large matrices.

You can perform a number of algebraic manipulations to the original equation $A \vec x = \vec b$ to give yourself a new equation $A' \vec x = \vec {b'}$. These manipulations don't change the solution
$\vec x$, but they do change how much computational effort is needed to invert the matrix. This is not a trivial point! Some matrices are significantly easier to invert than others, and finding the right form of the equation $A' \vec x = \vec {b'}$ can make a huge difference.

As a rule of thumb, the algebraic system that results from normalized equations is often easier to solve than the one that results from the dimensional equations. So if you are looking for ways to improve performance, normalizing your equations is an easy fix to try.
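As an illustration (a pure-Python sketch with made-up matrix entries): diagonal rescaling of rows and columns, which is effectively what nondimensionalizing the variables and equations does, can shrink the condition number of $A$ dramatically without changing the solution.

```python
def frobenius(M):
    """Frobenius norm of a matrix given as nested lists."""
    return sum(x * x for row in M for x in row) ** 0.5

def cond2x2(M):
    """Rough condition estimate ||M||_F * ||M^-1||_F for a 2x2 matrix."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    Minv = [[d / det, -b / det], [-c / det, a / det]]
    return frobenius(M) * frobenius(Minv)

# "Dimensional" matrix: coefficients carry wildly mismatched units.
A = [[1e6, 0.5], [2.0, 3e-6]]

# Rescale rows and columns by diag(1e-3, 1e3) -- the effect of choosing
# natural units for each equation and variable.
r = [1e-3, 1e3]
c = [1e-3, 1e3]
A_scaled = [[r[i] * A[i][j] * c[j] for j in range(2)] for i in range(2)]

print(cond2x2(A))         # very large, ~5e11
print(cond2x2(A_scaled))  # small, ~7
assert cond2x2(A_scaled) < 100 < cond2x2(A)
```

An ill-conditioned $A$ amplifies the finite-precision roundoff mentioned above, so the rescaled system gives the same $\vec x$ with far less accumulated error.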

4. Sep 7, 2014

### X89codered89X

Often times it's to minimize roundoff error.

If you use different dimensions for things, the data you're working with can vary greatly in magnitude (e.g. measurements of 103957108 [unit a] and .001234 [unit b]), but computers are dumb: they represent both with floating-point types (float, double, etc.), and this can cause a whole slew of issues when numerical operations are repeated across large orders of magnitude. You can lose significant figures in your data with no way to recover them. Normalizing the data *first* is one way to help minimize these potential losses.
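A tiny demonstration of that loss (the numbers here are hypothetical): the spacing between adjacent doubles near 1e8 is about 1.5e-8, so a contribution below that simply vanishes when accumulated directly, while grouping values by scale first, which is what normalizing achieves, preserves it.

```python
big = 1.0e8      # one quantity in [unit a]
tiny = 1.0e-9    # a million samples in [unit b]
n = 1_000_000

# Naive: accumulate the small samples directly onto the large value.
# Each addition is below half an ulp of the accumulator and vanishes.
acc = big
for _ in range(n):
    acc += tiny
print(acc)  # 100000000.0 -- the entire 1e-3 contribution is lost

# Group by scale first: sum the small samples at their own magnitude,
# then combine with the large value once.
small_sum = 0.0
for _ in range(n):
    small_sum += tiny
result = big + small_sum

assert acc == big                        # contribution lost entirely
assert abs(result - big - 1e-3) < 1e-6   # contribution preserved
```

The same total arithmetic is done in both cases; only the order, and hence the relative magnitudes at each step, differs.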

Any decent numerical analysis textbook should at least discuss these issues for implementations on a computer with finite precision.