
Computers rational and irrational numbers

  1. Jul 18, 2011 #1

    rcgldr

    Homework Helper

    I think this needs its own thread.

    e and pi are transcendental numbers:

    http://en.wikipedia.org/wiki/Transcendental_number

    The square root of 2 is an irrational number:

    http://en.wikipedia.org/wiki/Irrational_number

    1/3 is a rational number:

    http://en.wikipedia.org/wiki/Rational_number
     
  3. Jul 18, 2011 #2

    chiro

    Science Advisor

    People should note that many mathematical and computational platforms use custom libraries that store numbers differently from the IEEE formats.

    Depending on what code is used, this issue can appear in a different light.
     
  4. Jul 18, 2011 #3
    Well, if we are gonna start a separate thread, I'll post my reply to his statements here as well (I originally sent it by PM since it was getting way off topic in the other thread).

    Edit: e and pi are still irrational; transcendence is just another property. All transcendental numbers are irrational, while the reverse is not true.
    Also, in any implementation similar to floating point you are going to have those errors, for the reasons explained above.
     
    Last edited by a moderator: Apr 26, 2017
  5. Jul 18, 2011 #4

    D H

    Staff Emeritus
    Science Advisor

    Exactly. There are ways to represent 1/3 exactly. For example (much abbreviated):
    Code (Text):

    class Rational {
    public:
       Rational (long int num, unsigned long int den);
       Rational operator+(const Rational&);
       Rational operator-(const Rational&);
       Rational operator*(const Rational&);
       Rational operator/(const Rational&);
       operator double ();
    private:
       long int numerator;
       unsigned long int denominator;
    };
     
    With this, Rational one_third (1, 3); is an exact representation of 1/3. Use some type of bignum for the numerator and denominator and voila! even more numbers can be represented exactly.
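    One way the declared arithmetic might be filled in is sketched below (assuming C++17 for std::gcd; overflow and negative denominators are ignored for brevity, and the operator==, reduce(), and main() here are purely illustrative additions):
    Code (Text):

    #include <cstdlib>   // std::abs(long)
    #include <iostream>
    #include <numeric>   // std::gcd

    class Rational {
    public:
       Rational (long int num, unsigned long int den)
          : numerator(num), denominator(den) { reduce(); }
       Rational operator+(const Rational& rhs) const {
          // a/b + c/d = (a*d + c*b) / (b*d), reduced to lowest terms
          return Rational(numerator * (long)rhs.denominator
                             + rhs.numerator * (long)denominator,
                          denominator * rhs.denominator);
       }
       bool operator==(const Rational& rhs) const {
          return numerator == rhs.numerator && denominator == rhs.denominator;
       }
       operator double () { return (double)numerator / (double)denominator; }
    private:
       void reduce() {
          unsigned long g = std::gcd((unsigned long)std::abs(numerator), denominator);
          if (g > 1) { numerator /= (long)g; denominator /= g; }
       }
       long int numerator;
       unsigned long int denominator;
    };

    int main () {
       Rational one_third (1, 3);
       Rational sum = one_third + one_third + one_third;
       std::cout << (sum == Rational(1, 1)) << "\n";   // prints 1: exactly 1/1
       std::cout << (0.1 + 0.1 + 0.1 == 0.3) << "\n";  // prints 0 with doubles
       return 0;
    }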

    The built-in floating point numbers that most computers provide are facsimiles of the real numbers. They are very, very fast compared to the Rational class defined above, or compared to any software implementation of the reals or rationals. They come pre-packaged with a lot of predefined functions such as sine, cosine, and exp.

    The typical computer programmer should know that these built-in representations are facsimiles. They are not the reals. For one thing, their arithmetic is not associative. The following exhibits just a few of the problems with those built-in floating point numbers:
    Code (Text):

    #include <cmath>     // std::sin; M_PI (POSIX)
    #include <iostream>
    int main () {
       double pi = M_PI;
       double big = 1e16;
       double a = 3;
       double b = 1;

       std::cout << "sin(pi) = " << std::sin(pi) << "\n";
       std::cout << "1e16-1e16+3-1 = " << big - big + a - b << "\n";
       std::cout << "1e16+3-1-1e16 = " << big + a - b - big << "\n";
       return 0;
    }
    Output on my computer is
    Code (Text):

    sin(pi) = 1.22465e-16
    1e16-1e16+3-1 = 2
    1e16+3-1-1e16 = 4
     
    Change that 1e16 to 1e18 and the errors become even more pronounced. Compare to Mathematica, which knows that [itex]\sin\pi[/itex] is identically zero and that [itex]10^{100}+3-1-10^{100}[/itex] is 2.

    The typical programmer does not need to know the details of the IEEE floating point standard. They should know that there are pitfalls associated with using float and double (or their equivalents in other languages).
     
    Last edited: Jul 18, 2011
  6. Jul 18, 2011 #5

    D H

    Staff Emeritus
    Science Advisor

    Correct. A hypothetical computer with an infinite amount of storage and an infinitely fast processor could represent all of the rationals exactly and even all of the algebraic numbers. Even that hypothetical computer would have a hard time (impossible time!) representing all of the reals because almost all real numbers are not computable.
     
  7. Jul 18, 2011 #6

    SixNein

    Gold Member

    Pi and e are both irrational and transcendental.
    http://en.wikipedia.org/wiki/Proof_that_π_is_irrational
    http://en.wikipedia.org/wiki/Proof_that_e_is_irrational
    http://en.wikipedia.org/wiki/Proof_that_π_is_transcendental#Transcendence_of_e_and_.CF.80

    In this context, I think irrational would be the term to use.
     
  8. Jul 18, 2011 #7

    SixNein

    Gold Member

    And here is my response:

    And all of this supports my point that the inability to work with fractions is the primary reason for this behaviour. This behaviour will show up in any base, because computers don't rationalize decimal point numbers.

    Truncation generally introduces rounding error.

    Many fractions can only be approximated in decimal-point style (positional) notation, and this is true regardless of the base. 1/3 was a good example.
     
  9. Jul 18, 2011 #8

    D H

    Staff Emeritus
    Science Advisor

    You are implicitly assuming that the number is being represented in IEEE floating point format. While that is the typical built-in representation scheme nowadays for types such as float and double (C), REAL*4 and REAL*8 (Fortran), etc., there are plenty of packages out there that do not use IEEE floating point (and do not use the floating point hardware).
     
  10. Jul 18, 2011 #9
    The point being that any fixed-length and/or literal representation is going to suffer from such loss of precision for some rational numbers, and for every irrational number. If you want to preserve precision, then you need to store the data as an expression.

    For example, in the case of the IEEE 754 standard, the decimal precision is given by log10(2^(significand bit length + 1)).
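    For double precision (52 stored significand bits plus the implicit leading bit) that works out to roughly 15.95 decimal digits; a one-line check, assuming IEEE 754 doubles:
    Code (Text):

    #include <cmath>
    #include <iostream>

    int main () {
       // decimal precision of an IEEE 754 double: log10(2^(52+1)) digits
       std::cout << std::log10(std::pow(2.0, 52 + 1)) << "\n";   // about 15.9546
       return 0;
    }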

    Also, it's still the base's fault (and the discrete nature of computers) when you lose precision.
     
    Last edited: Jul 18, 2011
  11. Jul 18, 2011 #10

    SixNein

    Gold Member

    I'm making the assumption that the number is being expressed in point notation. The approximation problems will occur regardless of whether one is working with binary, hex, or even decimal.
     
  12. Jul 18, 2011 #11

    SixNein

    Gold Member

    I don't understand why you think base is involved.
     
  13. Jul 18, 2011 #12

    Mark44

    Staff: Mentor

    It actually has quite a bit to do with the binary representation of some fractions. For example, assuming we're using the IEEE 754 floating point standard, computations done with fractions such as 1/2, 1/4, 1/8, 3/4, 5/8, and so on are done exactly. Notice that all of these numbers have terminating decimal and binary representations. OTOH, computations with fractions such as 1/5, 1/10, 3/10, and so on are not exact, even though their decimal representations terminate. The reason is that their binary representations repeat endlessly, so some precision is lost when they are truncated to fit into the number of bits used to store the number (typically 32 bits for floats and 64 bits for doubles).
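    A quick way to see the difference is to print a couple of these fractions to more digits than a double actually stores (a small illustrative snippet, assuming IEEE 754 doubles):
    Code (Text):

    #include <iomanip>
    #include <iostream>

    int main () {
       // 1/4 = 0.01 in binary: it terminates and is stored exactly
       // 1/10 = 0.000110011001100... in binary: it is rounded to fit 53 bits
       std::cout << std::setprecision(20);
       std::cout << 0.25 << "\n";   // prints 0.25
       std::cout << 0.1  << "\n";   // prints something like 0.10000000000000000555
       return 0;
    }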

    One other thing:
    That's like me saying, "Black dogs, like my dog Dylan, are always black." Of course, black dogs are black, and irrational numbers are irrational.
     
  14. Jul 18, 2011 #13
    If you haven't been following along, this may seem tautological, but there is a greater context that you don't appear to be aware of...

    In this thread, you can see that there are people who believe that an irrational number need not be irrational if you represent it in another base. With this context, does the apparent tautology make more sense?


    I recommend to everyone that if you are going to branch a thread into a new thread then you should reference the old thread from the new one...
     
  15. Jul 18, 2011 #14
    I explained it already several times.
    As did Mark44 just now.
    It's unlikely it can be made any clearer...
    You should just reread all explanations carefully if you still do not understand.

    Also - from the PM:
    It will not show up in any base...
    As referred to in https://www.physicsforums.com/showthread.php?p=3408657#post3408657: consider base 3; in that case you would get 1/3 as [itex]10^{-1}[/itex] (i.e. 0.1 in base 3). This is a terminating number that represents your decimal/binary repeating fraction with absolute precision / no loss of precision, as long as you provide 2 or more digits for the significand.

    Edit: @D H's post below: It probably has to do with them thinking that repeating patterns mean irrationality. That's one myth that needs to be dispelled quickly.
     
    Last edited by a moderator: Apr 26, 2017
  16. Jul 18, 2011 #15

    D H

    Staff Emeritus
    Science Advisor

    I see two people who mistakenly said that. Both were quickly corrected, and one recanted. There is little point in carrying that nonsense discussion forward into this thread.

    For those of you with this mistaken belief: Do not derail this thread with that mistaken belief. When represented in some integral base, a rational number will either have a terminating or repeating representation. An irrational does not have a terminating or repeating representation in any integral base. A number with a terminating or repeating representation in some integral base is necessarily a rational, and a number with a non-terminating, non-repeating representation is necessarily irrational. Please consider this the end of the discussion on this point.
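    Why terminating or repeating? Because in long division there are only finitely many possible remainders, so the remainder, and with it the digit pattern, must eventually repeat (or hit zero and terminate). A small illustrative sketch that makes this visible (digits are printed numerically, so it assumes a base of at most 10):
    Code (Text):

    #include <iostream>

    // print the first `digits` base-b digits of n/d (0 < n < d) by long division
    void expand(long n, long d, long b, int digits) {
       std::cout << n << "/" << d << " in base " << b << ": 0.";
       for (int i = 0; i < digits && n != 0; ++i) {
          n *= b;
          std::cout << n / d;   // next digit
          n %= d;               // the remainder drives the next step
       }
       std::cout << "\n";
    }

    int main () {
       expand(1, 4, 10, 20);   // terminates: 0.25
       expand(1, 7, 10, 20);   // repeats:    0.14285714285714285714...
       expand(1, 3, 3, 20);    // terminates: 0.1 (base 3)
       return 0;
    }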
     
    Last edited: Jul 18, 2011
  17. Jul 18, 2011 #16

    SixNein

    Gold Member

    But this issue always occurs regardless of the number system used.

    Since I'm growing tired of re-writing:
    http://en.wikipedia.org/wiki/Binary_numeral_system#Representing_real_numbers

    "The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radix-based numeral systems. See, for instance, the explanation in decimal."

    The nature of rational numbers is the underlying reason for this behaviour. In decimal form, many rational numbers can only be approximated.
     
    Last edited: Jul 18, 2011
  18. Jul 18, 2011 #17

    rcgldr

    Homework Helper

    Truncation is also a problem when adding up a large number of values. In the case of numerical integration, as the step size decreases and the number of values increases, the sum can end up smaller than it should be due to truncation error. This can be avoided by using a summation function that stores partial sums in an array indexed by exponent. For double precision the array size would be 2048, and the index is the 11-bit exponent field of the value, obtained by reinterpreting the double's bit pattern as a 64-bit integer and extracting bits 52 through 62 (index = (bits >> 52) & 0x7ff). The array is initialized to all zeroes. Each time a value is passed to the summation function, its exponent is used to index into the array. If array[index] is zero, the value is stored there; otherwise a sum is calculated, sum = input_value + array[index], array[index] is set to zero, and the process is repeated using the sum, until an empty array[index] is found and the sum stored, or until overflow occurs. A call is then made to produce the final sum, which just adds up all the array values by index (depending on gaps, there's some truncation, but it's minimized).
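    A rough sketch of what such a summation function might look like (illustrative only; it assumes IEEE 754 doubles and does not handle the overflow case mentioned above):
    Code (Text):

    #include <cstdint>
    #include <cstring>
    #include <iomanip>
    #include <iostream>

    // bucket index = the 11-bit exponent field of the value's IEEE 754 bit pattern
    static int exponent_index(double value) {
       std::uint64_t bits;
       std::memcpy(&bits, &value, sizeof bits);   // reinterpret the double's bits
       return (int)((bits >> 52) & 0x7ff);        // bits 52..62 are the exponent
    }

    class BucketSum {
    public:
       BucketSum() : bucket() {}                  // all 2048 slots start at zero
       void add(double value) {
          // combine the value with any partial sum stored under the same exponent,
          // moving the result to its new slot until an empty slot is found
          for (;;) {
             int i = exponent_index(value);
             if (bucket[i] == 0.0) { bucket[i] = value; return; }
             double sum = value + bucket[i];
             bucket[i] = 0.0;
             value = sum;
          }
       }
       double total() const {
          double sum = 0.0;                       // final pass: add up the buckets
          for (int i = 0; i < 2048; ++i) sum += bucket[i];
          return sum;
       }
    private:
       double bucket[2048];
    };

    int main () {
       BucketSum s;
       double naive = 0.0;
       for (int i = 0; i < 10000000; ++i) { s.add(0.1); naive += 0.1; }
       std::cout << std::setprecision(17)
                 << "bucketed: " << s.total() << "\n"    // stays much closer to 1e6
                 << "naive:    " << naive << "\n";
       return 0;
    }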
     
  19. Jul 19, 2011 #18

    Mark44

    Staff: Mentor

    You are making some very broad statements that are not true in general.
    "Don't rationalize decimal point numbers" - what do you mean by this?
    As are 1/5, 1/10, and many others.

    This is not true. I gave some examples earlier. We can add the decimal representation of 1/10 ten times, as .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1, which is exactly equal to 1. So in base-10, the addition is exact. On computers using floating point hardware that conforms to the IEEE 754 standard for floating point arithmetic, the resulting sum comes out slightly less than 1. The upshot is that in base-10, the arithmetic of this example gives exact results, but in a binary representation, it doesn't. It would appear that it does make a difference what base you use.
    The problems you cite don't occur for all rational numbers. Rational numbers such as 1/2, 1/4, or any sum of multiples of fractions that are negative integral powers of two don't exhibit this problem. In floating point math, the sum 1/8 + 3/32 + 5/64 is exact, with no truncation or other error.

    What you describe occurs for some, perhaps most, rational numbers, but it does not always occur.
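    Both of these claims are easy to check directly (a small sketch, assuming IEEE 754 doubles):
    Code (Text):

    #include <iomanip>
    #include <iostream>

    int main () {
       double a = 0.0;
       for (int i = 0; i < 10; ++i) a += 0.1;              // ten copies of 1/10
       double b = 1.0 / 8 + 3.0 / 32 + 5.0 / 64;           // power-of-two denominators
       std::cout << std::setprecision(17);
       std::cout << a << "  " << (a == 1.0) << "\n";       // just under 1; comparison prints 0
       std::cout << b << "  " << (b == 0.296875) << "\n";  // exactly 19/64; comparison prints 1
       return 0;
    }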
     
  20. Jul 19, 2011 #19

    SixNein

    Gold Member

    Converting from a decimal point notation to fraction notation for any given rational number.

    What about adding 1/3 + 1/3 + 1/3 in base 10 decimal point notation? Is this still exact?

    0.333333333 + 0.333333333 + 0.333333333 = 0.999999999, not 1.

    The fundamental reason this behaviour happens on computers is that computers do not recognise 1/3 as a number; instead, it is an operation upon two separate numbers, and the result of the operation is truncated to the available precision. There is no positional number system that will allow one to state all rational numbers in exact form: in every single number system there is, some of the numbers will repeat and some will terminate.
     
  21. Jul 19, 2011 #20

    SixNein

    Gold Member

    I'm going to take one last stab at explaining this, and then I'm going to throw in the towel.

    A rational number in any base can only be expressed exactly in point notation when its denominator (in lowest terms) is made up of prime factors of the base. So regardless of the number system used, the issue of comparing decimal point numbers will come up. This is true for all number systems, period.
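    One way to check that rule (an illustrative sketch; terminates() is a hypothetical helper, and it assumes C++17 for std::gcd): a fraction n/d in lowest terms terminates in base b exactly when stripping from d every prime factor it shares with b leaves 1.
    Code (Text):

    #include <iostream>
    #include <numeric>   // std::gcd

    // true if n/d has a terminating expansion in base b
    bool terminates(long n, long d, long b) {
       d /= std::gcd(n, d);                          // reduce to lowest terms
       for (long g = std::gcd(d, b); g > 1; g = std::gcd(d, b))
          while (d % g == 0) d /= g;                 // strip factors shared with b
       return d == 1;
    }

    int main () {
       std::cout << terminates(1, 20, 10) << "\n";   // 1: 20 = 2*2*5, all factors of 10
       std::cout << terminates(1, 3, 10) << "\n";    // 0: 1/3 repeats in base 10
       std::cout << terminates(1, 3, 3) << "\n";     // 1: 1/3 = 0.1 in base 3
       std::cout << terminates(1, 10, 2) << "\n";    // 0: 1/10 repeats in binary
       return 0;
    }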
     