Adding very small values to very large numbers

In summary: the original poster wants sums like 1 + 6.626 x 10^(-34) to retain the small term without hand-factoring the scales out of his equations. The consensus answer is that fixed-precision IEEE floating point cannot do this, and that an arbitrary-precision arithmetic library (or a rescaling of the equations) is needed.
  • #1
TheCanadian
I am using complex numbers and was wondering if there's any way that I can get output to match my exact input when performing basic arithmetic on them. For example, if I use type = complex_ or complex256, with A = 1. and B = 6.626 x 10^(-34), then C = A*B yields C = 6.626 x 10^(-34) as wanted. But D = A + B yields D = 1., which is not what I want. Is there any way for me to get A + B to yield an output of sufficient precision (i.e., to the highest precision of the individual elements I am using to get D)?
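For reference, the behavior is easy to reproduce with plain Python complex numbers; the exact dtype doesn't matter, since the same absorption happens in any fixed-precision binary format (a minimal sketch):

Code:
# Multiplication keeps the small magnitude because floating point stores
# a separate exponent; addition does not, because both operands must
# first be aligned to a common exponent.
A = complex(1.0)
B = complex(6.626e-34)

print(A * B)       # (6.626e-34+0j)  -- as wanted
print(A + B)       # (1+0j)          -- B is absorbed entirely
print(A + B == A)  # True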
 
  • #2
You need to use an arbitrary-precision arithmetic library. complex is a class, but it still uses regular old IEEE floating-point numbers to store its data, and that's nowhere near precise enough for the scale you are trying to calculate. Try the GNU MP Bignum library.
 
  • #3
TheCanadian said:
I am using complex numbers and was wondering if there's any way that I can get output to match my exact input when performing basic arithmetic on them. For example, if I use type = complex_ or complex256, with A = 1. and B = 6.626 x 10^(-34), then C = A*B yields C = 6.626 x 10^(-34) as wanted. But D = A + B yields D = 1., which is not what I want. Is there any way for me to get A + B to yield an output of sufficient precision (i.e., to the highest precision of the individual elements I am using to get D)?
Do you understand how many significant digits are involved in the sum? Representing both 1 and 6.626 x 10^(-34) in a single value requires roughly 34 significant digits, while a double provides only about 16. As newjerseyrunner said, you need an arbitrary-precision math package.
 
  • #4
newjerseyrunner said:
You need to use an arbitrary-precision arithmetic library. complex is a class, but it still uses regular old IEEE floating points to store data, that's no where near precise enough for the scale you are trying to calculate. Try the GNU MP Bignum library.

Thank you for the reply. I'll definitely look into this library. Just curious, is it possible to determine the (relative) error on these values when performing a calculation of arbitrary precision? For example, I'd assume the error could grow quite large if I am running a loop that involves squaring an initially very small number, adding another small number to this value, and then taking the square root of the result on each iteration, no?
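One common empirical check (a sketch of the idea, not something prescribed in the thread: rerun the same loop at much higher precision with mpmath and treat that result as the reference):

Code:
import math
from mpmath import mp, mpf, sqrt as mpsqrt

mp.dps = 50  # 50 decimal digits serve as the "exact" reference

x64, eps64 = 1e-20, 1e-25                # double-precision run
xhi, epshi = mpf("1e-20"), mpf("1e-25")  # high-precision run

for _ in range(1000):
    x64 = math.sqrt(x64 * x64 + eps64)
    xhi = mpsqrt(xhi * xhi + epshi)

# Relative error of the double-precision result against the reference
print(abs((mpf(x64) - xhi) / xhi))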
 
  • #5
First off, I have to ask: Why do you think you need this kind of precision?

Python uses the IEEE floating point representation to represent real numbers. This represents non-zero numbers as sign*(1.fraction_bits)*2^(exponent-1023), where the fraction is 52 bits long and the exponent field is between 1 and 2046, inclusive. (An exponent field of zero is reserved for representing zero and the subnormal numbers, while the all-ones field, 2047, is reserved for infinity and the ever popular "NaN" (not a number).) In this representation, the smallest positive number x such that 1+x is not equal to 1 is about 2.2*10^16.

This representation is a built-in hardware capability on most computers nowadays. It is what you should prefer to use for scientific purposes unless there is a very good reason for not using it. There are a number of extended-precision packages for Python. Two of the more popular ones are mpmath and gmpy2, both of which can be obtained from https://pypi.python.org.

Errata: There's a typo in the above. That 2.2*10^16 should have been 2.2*10^-16.
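For concreteness, here is a minimal sketch of both points, using only the standard library and mpmath (mp.dps sets the working precision in decimal digits):

Code:
import sys
from mpmath import mp, mpf

# Machine epsilon: the gap between 1.0 and the next representable double.
print(sys.float_info.epsilon)   # 2.220446049250313e-16
print(1.0 + 1e-17 == 1.0)       # True: below epsilon, the addend is absorbed

# The same sum from post #1, carried out with 50 significant digits.
mp.dps = 50
print(mpf(1) + mpf("6.626e-34"))  # 1.000...0006626 -- the small term survives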
 
Last edited:
  • #6
newjerseyrunner said:
Try the GNU MP Bignum library.
That's not a good answer for Python, which is the language with which this thread was tagged.
 
  • #7
D H said:
First off, I have to ask: Why do you think you need this kind of precision?

Python uses the IEEE floating point representation to represent real numbers. This represents non-zero numbers as sign*(1.fraction_bits)*2^(exponent-1023), where the fraction is 52 bits long and the exponent field is between 1 and 2046, inclusive. (An exponent field of zero is reserved for representing zero and the subnormal numbers, while the all-ones field, 2047, is reserved for infinity and the ever popular "NaN" (not a number).) In this representation, the smallest positive number x such that 1+x is not equal to 1 is about 2.2*10^16.

This representation is a built-in hardware capability on most computers nowadays. It is what you should prefer to use for scientific purposes unless there is a very good reason for not using it. There are a number of extended-precision packages for Python. Two of the more popular ones are mpmath and gmpy2, both of which can be obtained from https://pypi.python.org.

It's to try to solve a few coupled differential equations over large scales that involve some very small values. I've been able to factor out these values thus far and just include them at the end, which has been successful. But I'm trying to see if there's a better way to make my code a bit more general, so that I don't have to change it drastically if I want to insert or remove a single term in the differential equations or modify my domain of interest. It's also somewhat out of curiosity.
 
  • #8
D H said:
Python uses the IEEE floating point representation to represent real numbers. This represents non-zero numbers as sign*(1.fraction_bits)*2^(exponent-1023), where the fraction is 52 bits long and the exponent field is between 1 and 2046, inclusive. (An exponent field of zero is reserved for representing zero and the subnormal numbers, while the all-ones field, 2047, is reserved for infinity and the ever popular "NaN" (not a number).) In this representation, the smallest positive number x such that 1+x is not equal to 1 is about 2.2*10^16.
No doubt you meant 2.2*10^-16.
 
  • #9
TheCanadian said:
It's to try and solve a few coupled differential equations over large scales that involve some very small values. I've been able to factor out these values thus far and just include them in the end, which has been successful. But I'm trying to see if there's a better way to make my code a bit more general so that I don't have to change it drastically if I wanted to insert/remove single term in the differential equations or modify my domain of interest. It's also somewhat out of curiosity.
But the issue is, of what possible utility is this? I've never heard of any actual physical measurement made to more than 12 significant digits and it astounds me that things can be done to THAT degree of precision. 34 or 35 digits is just ridiculous.
 
  • #10
phinds said:
But the issue is, of what possible utility is this? I've never heard of any actual physical measurement made to more than 12 significant digits and it astounds me that things can be done to THAT degree of precision. 34 or 35 digits is just ridiculous.

If I had this set of differential equations:

## \frac {\partial N}{\partial t} = \frac {i}{\hbar}(PE - E^*P^*) - \frac {N}{a} \\ \\

\frac {\partial P}{\partial t} = i \hbar E^*N - \frac {P}{b} \\ \\

\frac {\partial E}{\partial z} = \frac {i}{\hbar}P^* - \frac{E}{c} ##

where a, b, and c are constants ~ ##10^8 - 10^{12}##; E, P, and N are the complex-valued solutions I am seeking; and z and t are real numbers. The scales involved change whenever I add more terms or change the domain over which t and z are being solved (typical time intervals for the differential equations could be ##10^8 - 10^{12}##, and ##10^0 - 10^6## for z). I've thus far been factoring my equations for the few special cases I've analyzed, and would be interested in seeing if there's a better approach in case I remove or insert terms, or vary my t and z scales even more.
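To illustrate the factoring idea on a toy case (my example, not from the thread): for a single equation with one extreme constant, rescaling the independent variable removes the scale entirely,

## \frac{dy}{dt} = -\frac{y}{a}, \quad a \sim 10^{12}, \qquad \tau = t/a \;\Rightarrow\; \frac{dy}{d\tau} = -y. ##

Applied to the system above, the same idea means choosing reference scales for t, z, E, P, and N so that the remaining dimensionless coefficients are all of order one, which is what keeps double precision adequate.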
 
  • #11
If you're trying to increase the accuracy of the numbers, then as posted, you'll need to use some type of arbitrary precision math.

If you're trying to sum a large set of very small numbers (possibly along with some larger numbers), you can improve the accuracy by using a summation function that adds numbers with the same exponent. This requires being able to determine the exponent of a floating-point number. In C or C++, you can reinterpret the bits of a float or double in order to access the exponent. The alternative would be to take the natural log of the number divided by the natural log of 2 and then add an offset equal to the bias used in the floating-point representation, but this would be slower. Here is a C++ example, which copies each double into a 64-bit unsigned integer (the same size as a double) to read its exponent field:

Code:
//  SUM contains an array of 2048 doubles, indexed by exponent,
//  used to minimize rounding / truncation issues when summing
//  a large number of values

#include <cstdint>
#include <cstring>

class SUM{
    double asum[2048];
public:
    SUM(){for(int i = 0; i < 2048; i++)asum[i] = 0.;}
    void clear(){for(int i = 0; i < 2048; i++)asum[i] = 0.;}
//  getsum returns the current sum of the array
    double getsum(){double d = 0.; for(int i = 0; i < 2048; i++)d += asum[i]; return d;}
    void addnum(double);
};

void SUM::addnum(double d)      // add a number into the array
{
    while(1){
//      copy the bits of d into a 64-bit integer, then extract the
//      11-bit biased exponent; memcpy avoids the undefined behavior
//      of dereferencing a type-punned pointer
        std::uint64_t bits;
        std::memcpy(&bits, &d, sizeof bits);
        std::size_t i = (bits >> 52) & 0x7ff;   // mask drops the sign bit
        if(i == 0x7ff){         // max exponent: infinity or NaN
            asum[i] += d;
            return;
        }
        if(asum[i] == 0.){      // if the slot is empty, store d there
            asum[i] = d;
            return;
        }
        d += asum[i];           // else add the slot into d, clear the slot,
        asum[i] = 0.;           // and carry on until an empty slot is found
    }
}
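For what it's worth, in Python (the language this thread is tagged with) the exponent is available directly from math.frexp, and the standard library already ships an accurate, correctly rounded summation routine, math.fsum, so a sketch of the same idea needs no bit manipulation:

Code:
import math

# Classic demonstration from the Python docs:
print(sum([0.1] * 10))        # 0.9999999999999999  (rounding error accumulates)
print(math.fsum([0.1] * 10))  # 1.0                 (exact partial-sum tracking)

# math.frexp exposes the exponent directly, if you want to bucket by it:
m, e = math.frexp(6.626e-34)
print(m, e)                   # ~0.86 -110  (value == m * 2**e)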
 
Last edited:
  • #12
TheCanadian said:
If I had this set of differential equations:

## \frac {\partial N}{\partial t} = \frac {i}{\hbar}(PE - E^*P^*) - \frac {N}{a} \\ \\

\frac {\partial P}{\partial t} = i \hbar E^*N - \frac {P}{b} \\ \\

\frac {\partial E}{\partial z} = \frac {i}{\hbar}P^* - \frac{E}{c} ##
At first glance, you have a stiff system of partial differential equations here, and to make matters worse, the derivative functions are not analytic thanks to the complex conjugates. From the questions you've asked earlier, it appears you are trying to roll your own solver. This is almost always a mistake. Work on numerical solvers for ODEs and PDEs predates digital computers by over a hundred years. These efforts ramped up massively with the development of digital computers.

You should take advantage of the vast amount of technical literature on this. You should also take advantage of the numerous software packages for solving PDEs. Many of them are free, and some have Python wrappers. You might have to resort to using Fortran: most of these packages are written in Fortran and have not been ported to or wrapped in a more modern language. (Many are still in FORTRAN IV, although that might be a sign of an ancient codebase you might not want to use.)
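As one concrete starting point (my sketch, not a package recommended above): SciPy's ODE driver scipy.integrate.solve_ivp includes the implicit BDF and Radau methods, which are designed for stiff problems, so a method-of-lines discretization of a PDE system can be handed to it directly:

Code:
import numpy as np
from scipy.integrate import solve_ivp

# A classic stiff test problem: y' = -1000*(y - cos(t)), y(0) = 0.
def f(t, y):
    return -1000.0 * (y - np.cos(t))

# BDF is an implicit multistep method suited to stiff systems; an explicit
# method like the default RK45 would need absurdly small steps here.
sol = solve_ivp(f, (0.0, 5.0), [0.0], method="BDF", rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])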
 
  • #13
D H said:
At first glance, you have a stiff system of partial differential equations here, and to make matters worse, the derivative functions are not analytic thanks to the complex conjugates. From the questions you've asked earlier, it appears you are trying to roll your own solver. This is almost always a mistake. Work on numerical solvers for ODEs and PDEs predates digital computers by over a hundred years. These efforts ramped up massively with the development of digital computers.

You should take advantage of the vast amount of technical literature on this. You should also take advantage of the numerous software packages for solving PDEs. Many of them are free, and some have Python wrappers. You might have to resort to using Fortran: most of these packages are written in Fortran and have not been ported to or wrapped in a more modern language. (Many are still in FORTRAN IV, although that might be a sign of an ancient codebase you might not want to use.)

Thank you for the reply! Yes, I have been running into a bit more trouble with the above equations, since the ones I solved earlier were similar but lacked one of the terms in the third equation and were for specific conditions. I was looking into a few different PDE solvers, and I guess the main reason I moved away from them is that a lot of them (at least from what I saw) did not handle complex numbers well, and at this time I'm unsure about any additional methods I would like to implement, such as adaptive stretching and rezoning for my domain. Namely, I was a little unsure of how to combine all of this with existing solvers, but I think you are quite right in advising me to use the sophisticated tools that have been developed over many years before me instead of starting from scratch again.

I will certainly look into different Python wrappers and maybe dabble with Fortran for these purposes. Just curious, since it seems like you have a bit of experience with different solvers: would you have any particular recommendations? I have looked into FiPy and came across a suggestion to use two vectors for each solution, one for its real part and one for its imaginary part (see the sketch below). And the FEniCS tutorial (http://hplgit.github.io/fenics-tutorial/doc/pub/fenics-tutorial-4print.pdf) is something I am looking at more closely to see if it is suitable for my problem.

Edit: just to clarify, I was only able to solve a simpler case with 3 equations similar to the ones posted above by applying transformations and reproducing the sine-Gordon equation, which is easily solved (as an ODE). But I am looking for a more general method, since I will not be able to arrive at the sine-Gordon equation for the more complex cases I'll analyze in the future. As many of you have mentioned, the stiffness of the 3 coupled PDEs shown above has been creating problems.
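For reference, the real/imaginary splitting mentioned above looks like this for a toy complex ODE (a generic sketch with names of my own choosing, assuming a solver that only accepts real arrays):

Code:
import numpy as np
from scipy.integrate import solve_ivp

# Toy complex ODE: dz/dt = i*z, solved by storing z as [Re(z), Im(z)].
def rhs(t, y):
    re, im = y
    # i*z = i*(re + i*im) = -im + i*re
    return [-im, re]

sol = solve_ivp(rhs, (0.0, np.pi), [1.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1], sol.y[1, -1])  # ~(-1, 0): z(pi) = exp(i*pi) = -1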
 
Last edited by a moderator:
  • #14
I don't do much Python, but my first impression was that this is a "duck typing" issue. When you assign 1 to a variable, try assigning 1.0 instead, or even 1.1 and then subtract 0.1 from it. I wonder if the interpreter isn't just treating the variable that gets 1 assigned to it as an integer, so that you end up with mixed-type arithmetic, which causes the imprecision.

Maybe not, but it's worth checking. "Duck typing" (as in "if it walks like a duck, it must be a duck") languages (i.e., those like Python or VB that don't require you to explicitly specify the type of a variable before using it) can occasionally lead to these sorts of undesired outcomes.
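This particular guess is easy to test; the sketch below shows the small term is absorbed even when both operands are explicitly floats, so mixed int/float arithmetic is not the cause:

Code:
A = 1          # int
B = 6.626e-34  # float

print(A + B)        # 1.0 -- the int is promoted to float, then B is absorbed
print(1.0 + B)      # 1.0 -- same result with an explicit float
print(type(A + B))  # <class 'float'>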
 

1. Why is adding very small values to very large numbers a problem?

Floating-point numbers carry a fixed number of significant digits (about 16 decimal digits for a double). When two numbers are added, the smaller one must be aligned to the larger one's exponent, and any digits that fall below the larger number's last significant digit are rounded away. If the ratio of the two numbers is small enough, the smaller value disappears from the sum entirely, which is exactly what happens with 1 + 6.626 x 10^(-34).

2. When is a small value lost entirely in a sum?

Whenever the small value is less than the large value times the machine epsilon of the format. For IEEE double precision the machine epsilon is about 2.2*10^-16, so adding anything smaller than roughly 2.2*10^-16 to 1.0 leaves 1.0 unchanged.

3. Can these rounding errors accumulate in longer calculations?

Yes. Each addition can lose a little information, and in long loops or large sums the losses compound. The error can be estimated empirically by rerunning the computation at much higher precision and comparing, and it can be reduced by compensated-summation techniques such as the exponent-bucketed sum in post #11 or Python's math.fsum.

4. What are the alternatives to simply accepting the precision loss?

The main options discussed in this thread are: rescaling (nondimensionalizing) the equations so that all quantities are of order one; using an arbitrary-precision package such as mpmath or gmpy2; and using an accurate or compensated summation routine when the problem is accumulating many small terms.
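As a minimal sketch of the arbitrary-precision option, the standard library's decimal module (not mentioned in the thread, and real-valued only; mpmath's mpc type is the usual choice for complex work) already suffices:

Code:
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant decimal digits

A = Decimal(1)
B = Decimal("6.626e-34")
print(A + B)  # 1.0000000000000000000000000000000006626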

5. In what types of scientific calculations does this matter?

Any calculation that mixes widely separated scales: physics problems involving constants like ħ ~ 10^-34 alongside order-one quantities (as in this thread), long time integrations of differential equations, and large summations in statistics or signal processing, where maintaining precision is crucial for accurate results.
