Why normalize equations when solving numerically?

  • Context: Undergrad
  • Thread starter: GabDX
  • Tags: Normalize

Discussion Overview

The discussion centers around the practice of normalizing differential equations before solving them numerically. Participants explore the rationale behind this approach, including its implications for computational efficiency and error management, while also expressing skepticism about its necessity.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant questions the necessity of normalizing equations, arguing that it complicates the process without clear benefits.
  • Another participant compares normalization to establishing a standard starting point for problem-solving, suggesting it allows for more efficient coding and application design.
  • A different viewpoint emphasizes that normalizing equations can lead to easier algebraic manipulation, which may reduce computational effort when inverting matrices.
  • One participant highlights that normalizing can help minimize roundoff errors, particularly when dealing with values that vary greatly in magnitude, which can affect numerical stability.

Areas of Agreement / Disagreement

Participants express differing opinions on the value of normalizing equations, with some advocating for its benefits in computational efficiency and error reduction, while others remain skeptical about its necessity and practicality.

Contextual Notes

Participants note that the computational challenges associated with inverting matrices can vary significantly based on the form of the equations used, and that normalization may influence these challenges. There is also mention of finite precision in computer calculations, which can introduce errors that normalization might help mitigate.

GabDX
In most textbooks I've read and programs I've worked with, differential equations are normalized (made dimensionless) before being solved with some numerical method. What is the point of this? It seems like a lot of work for no benefit.

So, after a lot of derivations, you end up with some nice equations but you're not done. No, you have to make them dimensionless, increasing the chances of making a mistake somewhere! Then, the software users don't want to use dimensionless variables (with good reasons) so their input has to be normalized. After processing the data, the results have to be converted back to dimensional units.

This is really frustrating since I don't see the point at all.
 
There is a famous pair of problems that goes as follows:

(a) You have a lighter. You are in a room with a table. There is a lamp on a hook on the wall. List, in order, the sequence of steps needed to light the lamp.

This is not difficult.
1. take the lamp off the hook
2. put lamp on table
3. expose the wick
... etc. something like that until you have a lit lamp.

The second question goes:
(b) You have a lighter. Same room as before but the lamp on the table. List, in order, the sequence of steps needed to light the lamp.

Most people would do something like 1. expose the wick... but a mathematician would do this:
1. hang the lamp on a hook on the wall
2. proceed as per answer to prev question.

Setting up dimensionless units at the start is a bit like hanging the lamp on the wall - it gives a standard starting place for a set of instructions, making it easier to tell you what to do. One of the advantages is that you can use the same block of code, in many cases, for the working-out part, even though you start with different units in the setup.

When you are designing an application, and you find that the normalizing is superfluous - you can always just implement the process without normalizing ... make sure you understand what you are doing first though.
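
A tiny sketch of the "same block of code" idea, with made-up function names and an exponential-decay example chosen purely for illustration: the dimensionless equation dN/dτ = -N is solved once, and different physical rate constants only change how physical time is mapped onto τ afterwards.

```python
import numpy as np

def solve_dimensionless_decay(n_steps=1000, tau_max=5.0):
    """Forward-Euler integration of dN/dtau = -N with N(0) = 1.
    This block never sees a physical unit: it is the reusable 'working out' part."""
    dtau = tau_max / n_steps
    N = np.empty(n_steps + 1)
    N[0] = 1.0
    for i in range(n_steps):
        N[i + 1] = N[i] * (1.0 - dtau)
    return N

def decay_at_time(t, k, N0):
    """Map a physical query (time t, rate constant k, initial amount N0)
    onto the single dimensionless solution via tau = k*t."""
    tau_max = 5.0
    N = solve_dimensionless_decay(tau_max=tau_max)
    tau = k * t
    idx = min(int(round(tau / tau_max * (len(N) - 1))), len(N) - 1)
    return N0 * N[idx]  # convert back to dimensional units at the very end
```

The unit conversions live only at the edges (input and output), while the numerical core stays identical for every choice of k and N0.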
 
When solving differential equations, most numerical methods aim to convert a system of differential equations into an algebraic equation of the form

A \vec x = \vec b.

They then solve for \vec x by inverting the matrix A
\vec x =A^{-1} \vec b.

Inverting the matrix A is often the hardest part of the computation. A lot of effort goes into finding optimal ways to invert large matrices. The number of operations needed to invert A grows with the number of degrees of freedom n to some power; depending on the nature of A, the scaling is often n^2 or n^3. This means that if you double your system size, the amount of effort required to solve your system increases by a factor of 4 to 8. In large computations, the effort required to invert A often limits what we can and cannot simulate. Also keep in mind that computers do finite-precision math, so every computation introduces a small error. While these errors are individually small, they quickly add up when you try to invert large matrices.
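
To make the finite-precision point concrete, here is a sketch using the Hilbert matrix, a classic textbook example of an ill-conditioned system (my choice of example, not from this thread): even a good solver loses essentially all accuracy once roundoff is amplified enough.

```python
import numpy as np

def hilbert(n):
    """Build the n x n Hilbert matrix H[i, j] = 1 / (i + j + 1)."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1.0)

def solve_error(n):
    """Relative error in recovering x_true = ones from H x = b,
    where b is constructed so the exact answer is known."""
    H = hilbert(n)
    x_true = np.ones(n)
    b = H @ x_true
    x = np.linalg.solve(H, b)
    return np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

print(solve_error(4))   # tiny: comfortably within float64 precision
print(solve_error(12))  # large: most of the digits are already gone
```

Nothing about the solver changed between the two calls; only the conditioning of the matrix did.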

You can perform a number of algebraic manipulations to the original equation A \vec x = \vec b to give yourself a new equation A' \vec x = \vec {b'}. These manipulations don't change the solution
\vec x, but they do change how much computational effort is needed to invert the matrix. This is not a trivial point! Some matrices are significantly easier to invert than others, and finding the right form of the equation A' \vec x = \vec {b'} can make a huge difference.
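
A small sketch of this with made-up numbers: the same 2x2 system written in raw units has entries spread over many orders of magnitude, and symmetrically rescaling rows and columns (which is exactly what a change of units does) produces an equivalent system with a tiny condition number.

```python
import numpy as np

# Entries spread across ~16 orders of magnitude, as can happen when
# quantities in very different units share one matrix. (Illustrative values.)
A = np.array([[1.0e8, 2.0e-3],
              [3.0e-3, 4.0e-8]])

# Diagonal equilibration: D = diag(1/sqrt(|a_ii|)), A' = D @ A @ D.
# Solving A' y = D b and setting x = D y recovers the original solution.
D = np.diag(1.0 / np.sqrt(np.abs(np.diag(A))))
A_scaled = D @ A @ D

print(np.linalg.cond(A))        # huge (~1e15): near the limit of float64
print(np.linalg.cond(A_scaled)) # ~1: trivially well-conditioned
```

The solution content is unchanged, but the scaled form is dramatically friendlier to a finite-precision solver.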

As a rule of thumb, the algebraic system of equations that results from normalized equations is often easier to solve than the one that results from the dimensional equations. So if you are looking for ways to improve performance, normalizing your equations is an easy fix to try.
 
Oftentimes it's to minimize roundoff error.

If you have different dimensions for things, the data you're working with can vary greatly in magnitude (e.g. measurements of 103957108 [unit a] and 0.001234 [unit b]), but computers are dumb: they represent both with floating-point data types (float, double, etc.), and this can cause a whole slew of issues when numerical operations are applied over and over across large orders of magnitude. You can lose significant figures in your data, with no way to recover them. Normalizing the data *first* is one way to help minimize these potential losses.

Any decent numerical analysis textbook should at least discuss these issues for implementations on a computer with finite precision.
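
A sketch of the lost-significant-figures problem with made-up numbers (mine, not from the thread): the textbook variance formula E[x^2] - E[x]^2 collapses in floating point when the data sit on a huge offset, while shifting ("normalizing") the data to order one first keeps the answer.

```python
import numpy as np

vals = np.array([0.000, 0.001, 0.002, 0.003])  # the actual signal, spread ~1e-3
x = 1.0e8 + vals                               # raw data riding on a 1e8 offset

naive = np.mean(x**2) - np.mean(x)**2          # cancellation between ~1e16 terms
centered = np.mean((x - np.mean(x))**2)        # subtract the offset first

true_var = np.mean((vals - vals.mean())**2)    # the answer we want, 1.25e-6

print(naive)     # garbage: the true value is far below float64 resolution at 1e16
print(centered)  # ~1.25e-06, correct to many digits
```

Both formulas are algebraically identical; only the order of operations relative to the magnitude of the data differs, and that difference decides whether the significant figures survive.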
 
