Fortran Real vs. double precision in Fortran 90?

Heimisson
Hey, I'm trying to write my very first Fortran program. I'm a physics student and I don't know a lot about computer science, and since I'm new to Fortran I'm still having problems finding answers to my questions on my own. So... please bear with me.

I'm doing heavy numerical calculations that should be quite accurate without being too slow, so I was wondering if there is some rule of thumb for when one uses real variables versus double precision. My final values will be complex, so does that change anything? Will the complex values be as precise as double precision?

Thanks
 
It depends on the hardware and the compiler implementation. Generally on a PC platform the single precision calculations will only be faster if the compiler implements SIMD optimizations for them (SSE, SSE2, etc.). If the floating point is done via the "legacy" floating point unit then double precision will incur no speed penalty and may even be faster than single precision (due to the native precision of the floating point unit). I'm pretty sure the more recent SIMD implementations handle double precision natively anyway, so either way there's probably very little speed difference on a modern hardware platform.

Apart from FPU speed the other consideration is memory and memory speed. If you've got some very big data structures then single precision could still hold an advantage.

These days I almost always choose the double precision floating point type (over the traditional precision real or float) for number crunching applications.
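In Fortran 90 the portable way to ask for double precision is a kind parameter rather than the old DOUBLE PRECISION keyword. A minimal sketch, using only standard intrinsics (selected_real_kind and precision):

```fortran
program kinds_demo
  implicit none
  ! selected_real_kind(p, r) asks for at least p decimal digits and
  ! decimal exponent range r; (15, 307) maps to IEEE double precision
  ! on essentially every current platform.
  integer, parameter :: sp = kind(1.0)
  integer, parameter :: dp = selected_real_kind(15, 307)
  real(sp) :: a
  real(dp) :: b
  a = 1.0_sp / 3.0_sp
  b = 1.0_dp / 3.0_dp
  print *, 'single:', a, ' digits:', precision(a)
  print *, 'double:', b, ' digits:', precision(b)
end program kinds_demo
```

On an IEEE machine precision() should report 6 digits for the single precision kind and 15 for the double precision one.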
 
Heimisson said:
My final values will be complex, so does that change anything? Will the complex values be as precise as double precision?

Ok, I didn't read that last sentence before posting the previous reply. Yeah, that will affect the precision. You should check with your chosen compiler, but last time I used Fortran (the gnu g95 compiler) it didn't support double precision complex types. Depending on how much processing is done with the reals before you "cmplx()" them, you still might get a worthwhile precision improvement with double precision.

BTW. What compiler will you be using?
 
uart said:
Ok, I didn't read that last sentence before posting the previous reply. Yeah, that will affect the precision. You should check with your chosen compiler, but last time I used Fortran (the gnu g95 compiler) it didn't support double precision complex types. Depending on how much processing is done with the reals before you "cmplx()" them, you still might get a worthwhile precision improvement with double precision.

BTW. What compiler will you be using?

I'm using IFORT
 
Heimisson said:
I was wondering if there is some rule of thumb for when one uses real variables or double precision
Yes: it's better to worry first about writing correct code, and only later worry about writing fast code.

Use double precision first -- it's more likely to be correct. And when you've run and profiled your program and discovered there's some part that's too slow that could potentially be sped up by switching to single precision, then you consider making the switch. (And you profile it both ways, so you can see which one is faster, and hopefully understand why)
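A crude way to do that both-ways comparison in Fortran is the standard cpu_time intrinsic (Fortran 95, so any of the compilers mentioned in this thread should have it). The harmonic-sum loop below is just a stand-in for whatever your real kernel is:

```fortran
program time_both
  implicit none
  integer, parameter :: sp = kind(1.0), dp = kind(1.0d0)
  real(sp) :: xs
  real(dp) :: xd
  real :: t0, t1
  integer :: i

  ! Time the same loop in single precision...
  call cpu_time(t0)
  xs = 0.0_sp
  do i = 1, 10000000
     xs = xs + 1.0_sp / real(i, sp)
  end do
  call cpu_time(t1)
  print *, 'single precision:', t1 - t0, 's, sum =', xs

  ! ...and again in double precision.
  call cpu_time(t0)
  xd = 0.0_dp
  do i = 1, 10000000
     xd = xd + 1.0_dp / real(i, dp)
  end do
  call cpu_time(t1)
  print *, 'double precision:', t1 - t0, 's, sum =', xd
end program time_both
```

As a bonus, comparing the two sums also shows you directly how much accuracy the single precision version is giving up on this particular calculation.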
Some other quite significant things to note are:

  • Single precision (usually) takes less memory. This is significant if memory bandwidth is the limiting factor in your calculation, or if it affects whether your data fits in cache.

  • Some calculations make progress much faster when computed with double precision -- the single precision version, in essence, wastes a lot of progress due to errors in calculation.
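A tiny illustration of that last point. The exact answer below is 1000, but once the single precision running total is in the hundreds, each 1.0e-4 increment is comparable to one ulp of the total, so the sum drifts; the double precision sum stays essentially exact:

```fortran
program roundoff_demo
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  real :: s4
  real(dp) :: s8
  integer :: i
  s4 = 0.0
  s8 = 0.0_dp
  ! Add 0.0001 ten million times; the exact answer is 1000.
  do i = 1, 10000000
     s4 = s4 + 1.0e-4
     s8 = s8 + 1.0e-4_dp
  end do
  print *, 'single:', s4   ! drifts noticeably away from 1000
  print *, 'double:', s8   ! very close to 1000
end program roundoff_demo
```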
 
Hurkyl said:
Yes: it's better to worry first about writing correct code, and only later worry about writing fast code.

Use double precision first -- it's more likely to be correct. And when you've run and profiled your program and discovered there's some part that's too slow that could potentially be sped up by switching to single precision, then you consider making the switch. (And you profile it both ways, so you can see which one is faster, and hopefully understand why)

Thanks this sounds sensible.
 
uart said:
You should check with your chosen compiler, but last time I used Fortran (the gnu g95 compiler) it didn't support double precision complex types.

Sorry that info was wrong. Even the free gnu compiler can handle double precision complex so I'm certain Ifort will also be able to do so.

To get double precision complex in g95 you have to declare the variable with the "kind" specifier, e.g. "complex(kind = 8) MyComplexVar" (or, if you prefer the abbreviated syntax, just "complex(8) MyComplexVar").

When I tested this before I assumed that "kind" specified the total storage space (bytes) for both the Re and Im components, so I thought you needed complex(16) (which wouldn't compile) to get double precision. After further checking, however, I find that complex(8) specifies 8 bytes for each of the real and imaginary components, so complex(8) is in fact double precision and yes, it works. :biggrin:
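Worth noting: kind numbers are compiler-specific (8 happens to mean 8 bytes in g95 and ifort, but the standard doesn't promise that), so a slightly more portable spelling uses kind(1.0d0). A sketch:

```fortran
program dcomplex_demo
  implicit none
  ! kind(1.0d0) yields whatever kind double precision reals have,
  ! so this stays correct even on compilers where kinds aren't byte counts.
  integer, parameter :: dp = kind(1.0d0)
  complex(dp) :: z
  z = (1.0_dp, 2.0_dp)   ! complex literal built from dp-kind real parts
  print *, 'precision of z:', precision(z), 'decimal digits'
end program dcomplex_demo
```

On IEEE hardware the precision() inquiry should report 15 decimal digits for each component.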
 
What Hurkyl said.

You have got a basic tradeoff here. Either you go for the no-brain option of double precision for everything, and if necessary take the cost hit of buying some more memory or a slightly faster processor if you need them. Or, you take the cost of your time figuring out the "optimum" solution, running benchmarks to verify you made the correct decision, etc.

The "no-brain" method almost always wins, except in specialized situations like real-time computing.

Re "double precision complex", historically some Fortran implementations have had very poor implementations of 64-bit complex arithmetic. I would hope that modern compiler systems (and certainly open-source software) would not have that problem, but if I was using an unknown Fortran implementation for anything serious involving complex arithmetic my first program would probably be something like this (translate from Fortran 77 to whatever version you are using!)

complex*16 x,y,z
x = cmplx(1d0, +1d-50)
y = cmplx(1d0, -1d-60)
z = x/y
print *,z

and if the output is "1 + 0i", throw the software away and get something that works properly! (Of course the output should be 1 + 1.0000000001d-50i.)
 
AlephZero said:
Re "double precision complex", historically some Fortran implementations have had very poor implementations of 64-bit complex arithmetic. I would hope that modern compiler systems (and certainly open-source software) would not have that problem, but if I was using an unknown Fortran implementation for anything serious involving complex arithmetic my first program would probably be something like this (translate from Fortran 77 to whatever version you are using!)

complex*16 x,y,z
x = cmplx(1d0, +1d-50)
y = cmplx(1d0, -1d-60)
z = x/y
print *,z

and if the output is "1 + 0i", throw the software away and get something that works properly! (Of course the output should be 1 + 1.0000000001d-50i.)

Hmmm, good call AlephZero. I just tested both of the freely available Fortran 95 compilers that I have for Windows and both failed miserably. :cry:

I've got the Silverfrost (Salford) ftn95 freeware version and also the g95 windows port. Yep they both returned "1 + 0i".

I tested increasing the "1d-50" imaginary part and found that it starts "crapping out" at around "1e-38", which looks suspiciously close to 2^-127, so their "double precision" is behaving as if it only had an 8-bit exponent.

Surprisingly, however, it doesn't seem to have a problem with the precision of the mantissa. Doing stuff like roots of a quadratic with "complex*16" I'm getting at least 14 or 15 significant digits, so it's definitely not single precision. So why the seemingly 8-bit exponent? I'm really stumped.
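One standard-Fortran pitfall that would produce exactly these symptoms, assuming the test program used cmplx() without a kind argument as in the listing above: CMPLX(a, b) returns a *default* (single precision) complex result unless KIND= is given, even when its arguments are double precision. The imaginary part is therefore squeezed through a single precision value, whose exponent bottoms out near 1.18e-38 (the suspicious 8-bit-exponent cutoff), while direct double precision arithmetic elsewhere keeps its full mantissa. A quick check:

```fortran
program cmplx_kind_check
  implicit none
  integer, parameter :: dp = kind(1.0d0)
  complex(dp) :: a, b
  ! Without KIND=, cmplx() returns default (single) complex, so the
  ! 1.0e-50 imaginary part underflows to 0 before the assignment to a.
  a = cmplx(1.0_dp, 1.0e-50_dp)
  ! With KIND=dp the imaginary part survives intact.
  b = cmplx(1.0_dp, 1.0e-50_dp, kind=dp)
  print *, 'no kind: ', a
  print *, 'kind=dp: ', b
end program cmplx_kind_check
```

If the first line prints a zero imaginary part and the second doesn't, the compiler is fine and the cmplx() call was the culprit.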
 
Does the FORTRAN standard really require that the default format for printing a floating-point number (or a complex floating-point number) should include ten decimal digits of precision?
 
Hurkyl said:
Does the FORTRAN standard really require that the default format for printing a floating-point number (or a complex floating-point number) should include ten decimal digits of precision?

I know exactly what you're saying, Hurkyl, as I was initially thinking the same thing. Is the 1 + 0i thing nothing more than a rounding of the actual value stored in the variable, for printing only? However, I tested this and it didn't turn out to be the case. For example, I tried things like:
Code:
z = (x/y - 1d0) * 1d40
print *,z
and the result was still zero.

Even more telling was that the imaginary part of the result wasn't always zero in my tests. When I was testing with values around 10^-39 and 10^-40 I was getting spurious results that were non-zero. It really was behaving as if it had only an 8 bit exponent yet surprisingly (in other calculations) still behaves like it has a 48+ bit mantissa. Very weird.

BTW, just to be sure everyone is on the same page here: when you do the maths on AlephZero's example you see that it's not a mantissa-type issue. I mean it's not like you're trying to add a really small number to a really big one and just "max'ing out" on the precision of the mantissa. In essence this example really just involves adding two small numbers and not getting zero or an otherwise spurious result. (Adding 1d-50 and 1d-60 in the original example, but even 1d-40 and 1d-40 doesn't work properly in my two compilers.)
 
