
A significant figure problem in fortran77

  1. Jan 26, 2016 #1
    • Member advised to use the homework template for posts in the homework sections of PF.
    Programming with Fortran 77,
    I ran into a problem while coding.
    How can I set a variable to an exact value?
    For example, I want to set the variable 'a' to 0.0000045.
    But in the command window, the value of a comes out as 0.000004499876895.
    How can I get the value of 'a' that I intended?

    In my case, I declare the variable as a plain REAL (i.e. REAL*4, not REAL*8 or DOUBLE PRECISION)...

    program check
    implicit none
    real a
    integer j
    j=4.000
    a=0.0000045
    write(*,*) j , a

    end program

    ======> 4 0.000004499876895

    I don't want that value; I want 0.0000045.
     
  3. Jan 26, 2016 #2

    DrClaude

    Staff: Mentor

    That's the expected behavior as this is how the decimal number is stored in binary up to the precision of a real*4. What you may want to do is to use a formatted output.
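    For instance, something along these lines (my own sketch, not part of the original post) prints the value rounded to the digits you expect:
    Code (Fortran):

    program fmt_demo
    implicit none
    real a
    a = 0.0000045
    ! 1PE12.5 shows 6 significant digits in scientific notation
    write(*,'(1PE12.5)') a      ! prints  4.50000E-06
    end program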
     
  4. Jan 26, 2016 #3

    SteamKing

    Staff Emeritus
    Science Advisor
    Homework Helper

    You don't specify the brand of Fortran compiler which exhibits this behavior, but I respectfully disagree with Dr. Claude.

    If the decimal-binary conversion is done correctly by the compiler, you should get at least 6 decimal digits of precision for the value of a, if all you are doing in your code is a simple assignment.

    In order to make the correct conversion, the compiler should first convert 0.0000045 to a scientific notation representation such as 4.5E-6. There should be no problem obtaining an accurate binary conversion for the mantissa, namely 4.5, and you should get your 6 digits of precision on this mantissa.

    This tool

    http://www.binaryconvert.com/convert_float.html

    does conversions of decimal floating point numbers to 32-bit binary using the IEEE754 standard, which is what all modern Fortran compilers should be using.

    I would experiment with some other floating point assignments, but if you cannot reliably get at least 6 decimal digits of precision in a simple assignment, who knows what other floating point errors lurk in complex calculations? I would recommend that if you cannot find a reasonable explanation for this behavior, it's time to switch compilers.
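    As a quick test (my sketch, not from the post), one could print a couple of assignments with enough digits to see how many survive the conversion:
    Code (Fortran):

    program fp_test
    implicit none
    real x, y
    x = 0.0000045      ! not exactly representable in binary
    y = 0.375          ! exactly representable (1/4 + 1/8)
    ! show 8 significant digits of each value
    write(*,'(1P,2E15.7)') x, y
    end program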

    https://en.wikipedia.org/wiki/Single-precision_floating-point_format
     
  5. Jan 26, 2016 #4

    DrClaude

    Staff: Mentor

    Guilty as charged. I had misread the output as 40.000004499876895, not as two numbers o:)

    I still think this is an output problem. I remember seeing such output a long time ago: when a real*4 was printed as if it were a real*8, random garbage would be appended (garbage that was not present if the real*4 was first converted to real*8 and the converted value was printed).

    I've checked with two compilers, including gfortran, and I get

    4 4.50000016E-06
    or
    4 4.500000E-06
     
  6. Jan 26, 2016 #5
    Thanks, SteamKing and DrClaude!
    I think there are output problems.

    In my case, I used the compiler "Compaq Visual Fortran version 6.6" on Windows XP Professional, running in VMware Workstation.

    I have my code in two versions, i.e., Fortran 77 and MATLAB.
    First I wrote the code in Fortran 77 and compiled it with Visual Fortran, but an error occurred every time. So
    I wrote the same code in MATLAB (converting only the mathematical expressions, e.g. 2**beta -> 2^beta),
    and there were no errors and I got reasonable results in MATLAB. All variables and functions are the same and there are
    no compile or link errors, only a run-time error, but I don't know why the error occurs.
     
  7. Jan 27, 2016 #6

    Mark44

    Staff: Mentor

    This is not an error, and it's not due to output problems; it's an artifact of the way that floating point numbers are stored in computers. The value you set a to is a decimal (base-10) fraction. Most programming languages adhere to the IEEE 754 standard that SteamKing mentioned, which stores numbers as binary (base-2) fractions.

    For some numbers, the binary representation is exact. This includes any number whose fractional part is a sum of multiples of 1/2, 1/4, 1/8, 1/16, and so on; i.e., a sum of negative powers of 2. For example, 3/8 = 1/4 + 1/8 is stored exactly, so if you store .375 in a variable, that's what you get when you print it. On the other hand, fractions that involve multiples of 1/5 or 1/10 to various powers (such as the value you chose for a) aren't stored in exact form, which explains the discrepancy when you printed the number.

    In many programming languages, if you add .1 + .1 + .1 (with 10 terms in all), what you get won't be exactly equal to 1. You can verify this in Fortran or C or C++ or whatever by subtracting 1 from (.1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1). You'll find that the answer is not 0.
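    A minimal Fortran sketch of that check (my example, not from the post):
    Code (Fortran):

    program tenth
    implicit none
    real s
    integer i
    s = 0.0
    do i = 1, 10
       s = s + 0.1
    end do
    ! the difference is small but nonzero, roughly 1.19E-07 in single precision
    write(*,*) s - 1.0
    end program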

    This is a problem that software that performs calculations on money amounts needs to recognize and deal with.
     
  8. Jan 27, 2016 #7

    SteamKing

    Staff Emeritus
    Science Advisor
    Homework Helper

    What run-time error? This is the first time you have mentioned run-time errors.

    It's not clear what is happening here, but it suggests that you are doing more than making a simple assignment, such as a = 0.0000045. In my experience with Fortran compilers, if you execute the simple statement a = 0.0000045 and then print out a, you should get 0.0000045 back out, not 0.000004499876895. This amount of loss in precision would render the results of even simple programs pretty meaningless.

    Compaq Visual Fortran 6.6 is also pretty dated. It's at least 15 years old, and there should be some more modern versions of a Fortran 77 compiler to choose from, even some open-source ones.
     
  9. Jan 27, 2016 #8
    Thanks to all~!

    I solved the problems in my code and learned about the characteristics of Fortran 77 (floating point numbers and so on).
    Thanks to all^^
    Especially SteamKing, for the Wikipedia address; I really, really thank you ^*^
     
  10. Jan 27, 2016 #9

    Mark44

    Staff: Mentor

    I disagree. I don't have a Fortran compiler, so I can't test this directly. Here's a snippet from a C program, which I believe is behaving similarly to the Fortran code in this thread:
    Code (C):

    #include <stdio.h>              /* needed for printf */
    int main(void) {
        float a = .0000045f;        /* 4-byte float, like Fortran's REAL*4 */
        printf("%.14f\n", a);       /* prints 0.00000450000016 */
    }
    The output from printf is 0.00000450000016

    I'm storing .0000045 in a four-byte variable, a, which corresponds closely with the real type that the Fortran program in this thread is using. The value .0000045 can't be represented exactly in either four bytes or eight bytes, so when you print the value that is actually stored, you don't get back what you think you put in. This is what I said in my previous post.

    Also as I said before, this is a problem that financial software has to overcome. Some languages, Python and C# for example, have library modules that perform decimal arithmetic with more precision, specifically to minimize this problem.
     
    Last edited: Jan 27, 2016
  11. Jan 27, 2016 #10

    SteamKing

    Staff Emeritus
    Science Advisor
    Homework Helper

    I never claimed that the assigned number could be represented exactly, only to a certain number of sig figs, given the limitations of 4-byte floating point representations.

    In your case, 0.0000045 is represented as 0.00000450000016, a conversion which by my reckoning is good to at least seven sig figs, which is within the limits of a 4-byte real. The OP's output gave 0.000004499876895, which, for reasons that are still not clear, is IMO an unacceptable conversion of 0.0000045 to floating point format. You should not have to fiddle with formatted output to obtain a precise value which is within the limits of the FP format.

    The extra ...0000016 at the end of your FP value will have almost no impact on extended FP calculations, whereas who can say for sure that the OP's FP representation will not cause problems?

    Financial software has its own peculiarities, since trillions of dollars (or euros or whatever) must be represented to two-decimal-place accuracy, or about 15 decimal digits. In time, this problem only gets worse, since soon quadrillions of dollars (or euros or whatever) must be represented to two-decimal-place accuracy, unless a revaluation of common currencies is mandated in the future. 8-byte FP can give about 15-16 decimal digits; beyond that, you'll need a new FP format for the finance types, unless they change over to some sort of long int calculation format.
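    To illustrate that last point (my own sketch, assuming the common INTEGER*8 extension maps to a 64-bit integer): money can be carried exactly as an integer count of cents and only formatted as dollars for display.
    Code (Fortran):

    program cents
    implicit none
    integer*8 balance
    ! roughly 1.23 trillion dollars, held exactly as a count of cents
    balance = 123456789012345_8
    write(*,'(I13,A1,I2.2)') balance / 100_8, '.', mod(balance, 100_8)   ! prints 1234567890123.45
    end program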
     
  12. Jan 27, 2016 #11

    Mark44

    Staff: Mentor

    Yes, I agree. The OP's figure of 0.000004499876895 is pretty far off (by .000000000123105), so the two numbers agree in only 4 sig figures. Fortran77 is pretty old, and I don't know offhand if the way it stores four-byte reals adheres to the IEEE 754 spec.
     
  13. Jan 27, 2016 #12

    SteamKing

    Staff Emeritus
    Science Advisor
    Homework Helper

    The F77 standard was written before IEEE 754, but that doesn't necessarily preclude compilers written after the latter standard was developed from using IEEE 754 to implement FP calculations while maintaining compatibility with the Fortran standard, which IIRC is more about specifying how language features are supposed to work than the nuts and bolts of how the numbers are crunched.

    Before IEEE 754, computer makers had adopted a number of different FP formats which were incompatible with one another. With the rise of microcomputers and the increasing popularity of mixed language programs being developed (particularly C and Fortran), there was a need to standardize on a common FP format which could be utilized by different machines and programming languages to avoid future headaches, and IEEE 754 was the result. It was incorporated into hardware FPUs, like the famous Intel x86 family of CPUs, and compiler writers for various languages used to tout that their products were IEEE 754-compatible.

    When the Fortran 90 standard appeared, the representation of different data types was revised, so instead of just REAL and DOUBLE PRECISION there is a kind parameter, REAL(KIND=n), where n is typically 4 or 8 (the exact kind values are compiler-dependent).
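    For example (my sketch; kind numbers 4 and 8 are the usual values, but strictly they are compiler-dependent):
    Code (Fortran):

    program kinds
    implicit none
    real(4) :: a      ! typically IEEE 754 single precision
    real(8) :: b      ! typically IEEE 754 double precision
    a = 0.0000045
    b = 0.0000045d0   ! d0 makes the constant double precision
    write(*,*) a, b
    end program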

    http://www.cs.uwm.edu/~cs151/Bacon/Lecture/HTML/ch06s09.html
     