Computers, rational and irrational numbers

In summary, the built-in floating point numbers that most computers provide are facsimiles of the real numbers. They are very fast compared to the Rational class defined below, or to any software implementation of the reals or rationals, and they come packaged with many predefined functions such as sine, cosine, and exp.
  • #1
rcgldr
I think this needs its own thread.

SixNein said:
The mathematical reason floating point numbers behave as they do on computers has nothing to do with binary; instead, it is due to the inability of processors to work with fractions. So when you ask the computer to store 1/3, it does the calculation as 1 divided by 3 in decimal notation. So the computer winds up with .33333333… which is truncated to a certain precision. If an operation such as 3 * (1/3) is performed, the computer takes .333333333 and multiplies it by 3 to arrive at .999999999. But it's still a rational number because it repeats.

For example, the .333333333 can be expressed as a fraction by taking 33 (the pattern) and dividing by 99. The .999999999 can be expressed as the fraction 99/99.

Irrational numbers, like PI, are always irrational. There are no patterns.

e and pi are transcendental numbers:

http://en.wikipedia.org/wiki/Transcendental_number

The square root of 2 is an irrational number:

http://en.wikipedia.org/wiki/Irrational_number

1/3 is a rational number

http://en.wikipedia.org/wiki/Rational_number
 
  • #2
People should note that many mathematical and computational platforms use custom libraries that store numbers differently from the IEEE formats.

Depending on what code is used, this issue can be addressed in a different light.
 
  • #3
Well, if we are going to start a separate thread, I'll post my reply to his statements here as well (I originally sent it by PM since it was getting way off topic in the other thread).

martix said:
Since this is sufficiently off topic already, let's try PM.

SixNein said:
No he's correct.

The mathematical reason floating point numbers behave as they do on computers has nothing to do with binary; instead, it is due to the inability of processors to work with fractions. So when you ask the computer to store 1/3, it does the calculation as 1 divided by 3 in decimal notation. So the computer winds up with .33333333… which is truncated to a certain precision. If an operation such as 3 * (1/3) is performed, the computer takes .333333333 and multiplies it by 3 to arrive at .999999999. But it's still a rational number because it repeats.

For example, the .333333333 can be expressed as a fraction by taking 33 (the pattern) and dividing by 99. The .999999999 can be expressed as the fraction 99/99.

Irrational numbers, like PI, are always irrational. There are no patterns.
Er... the pattern is 0.(3), not 0.(33)
You however are incorrect.
It has everything to do with binary, which is the point I was trying to make in my post in the thread (as in https://www.physicsforums.com/showthread.php?t=513202&page=2).
What it doesn't have to do with is (ir)rationality.

As I was saying,
martix said:
...there are numbers which can be represented accurately in a finite string of digits in one number system, while requiring a representation with repeating digits in another...
And since the computer cannot store it in fractional format (i.e. as the ratio of two integers, per the formal definition), and the floating point format (e.g. IEEE 754) does not support representation of repeating digits, your terminating decimal string, which in binary may only be representable with a repeating pattern, is stored in a truncated/rounded (not sure which) form with the loss of some precision.
This is why such equality tests might fail: the inability to exactly represent, in binary, numbers whose expansions repeat in base 2 even though they terminate in decimal.

Edit: e and pi are still irrational; transcendence is just another property. All transcendentals are irrational, while the reverse is not true.
Also in any implementation similar to floating point you are going to have those errors for the reasons explained above.
 
  • #4
chiro said:
Depending on what code is used, this issue can be addressed in a different light.
Exactly. There are ways to represent 1/3 exactly. For example (much abbreviated):
Code:
class Rational {
public:
   Rational (long int num, unsigned long int den);
   Rational operator+ (const Rational&) const;
   Rational operator- (const Rational&) const;
   Rational operator* (const Rational&) const;
   Rational operator/ (const Rational&) const;
   operator double () const;   // lossy conversion to a built-in floating point type
private:
   long int numerator;
   unsigned long int denominator;
};
With this, Rational one_third (1, 3); is an exact representation of 1/3. Use some type of bignum for the numerator and denominator and voilà: even more numbers can be represented exactly.
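To make the idea concrete, here is a self-contained, cut-down sketch of such a class (the gcd reduction and the use of plain longs are illustrative choices on my part; overflow handling is deliberately omitted):
Code:
#include <iostream>
#include <numeric>   // std::gcd (C++17)

// Illustrative sketch only: a minimal exact-rational type.
// A real implementation would use bignums and check for overflow.
struct Rat {
    long num, den;                        // den assumed positive
    Rat(long n, long d) : num(n), den(d) { reduce(); }
    void reduce() {                       // divide out the gcd
        long g = std::gcd(num, den);
        if (g > 1) { num /= g; den /= g; }
    }
    Rat operator+(const Rat& r) const {
        return Rat(num * r.den + r.num * den, den * r.den);
    }
    Rat operator*(const Rat& r) const {
        return Rat(num * r.num, den * r.den);
    }
};

int main() {
    Rat one_third(1, 3);
    Rat sum = one_third + one_third + one_third;    // exactly 3/3 = 1/1
    std::cout << sum.num << "/" << sum.den << "\n"; // prints 1/1
    return 0;
}
With this, the 3 * (1/3) == 1 test that keeps coming up in this discussion succeeds exactly.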

The built-in floating point numbers that most computers provide are facsimiles of the real numbers. They are very very fast compared to the Rational class defined above, or compared to any software implementation of the reals or rationals. They come pre-packaged with a lot of predefined functions such as sine, cosine, and exp.

The typical computer programmer should know that these built-in representations are facsimiles. They are not the reals. For one thing, their addition and multiplication are not associative. The following exhibits just a few of the problems with those built-in floating point numbers:
Code:
#include <cmath>     // std::sin; M_PI is POSIX, not standard C++
#include <iostream>

int main () {
   double pi = M_PI;
   double big = 1e16;
   double a = 3;
   double b = 1;

   std::cout << "sin(pi) = " << std::sin(pi) << "\n";
   std::cout << "1e16-1e16+3-1 = " << big - big + a - b << "\n";
   std::cout << "1e16+3-1-1e16 = " << big + a - b - big << "\n";
   return 0;
}
Output on my computer is
Code:
sin(pi) = 1.22465e-16
1e16-1e16+3-1 = 2
1e16+3-1-1e16 = 4
Change that 1e16 to 1e18 and the errors become even more pronounced. Compare to Mathematica, which knows that [itex]\sin\pi[/itex] is identically zero and that [itex]10^{100}+3-1-10^{100}[/itex] is 2.

The typical programmer does not need to know the details of the IEEE floating point standard. They should know that there are pitfalls associated with using float and double (or their equivalents in other languages).
 
  • #5
martix said:
Edit: e and pi are still irrational; transcendence is just another property. All transcendentals are irrational, while the reverse is not true.
Correct. A hypothetical computer with an infinite amount of storage and an infinitely fast processor could represent all of the rationals exactly and even all of the algebraic numbers. Even that hypothetical computer would have a hard time (impossible time!) representing all of the reals because almost all real numbers are not computable.
 
  • #6
rcgldr said:
I think this needs its own thread.

e and pi are transcendental numbers:

http://en.wikipedia.org/wiki/Transcendental_number

The square root of 2 is an irrational number:

http://en.wikipedia.org/wiki/Irrational_number

1/3 is a rational number:

http://en.wikipedia.org/wiki/Rational_number

Pi and e are both irrational and transcendental.
http://en.wikipedia.org/wiki/Proof_that_π_is_irrational
http://en.wikipedia.org/wiki/Proof_that_e_is_irrational
http://en.wikipedia.org/wiki/Proof_that_π_is_transcendental#Transcendence_of_e_and_.CF.80

In this context, I think irrational would be the term to use.
 
  • #7
martix said:
Edit: e and pi are still irrational; transcendence is just another property. All transcendentals are irrational, while the reverse is not true.
Also in any implementation similar to floating point you are going to have those errors for the reasons explained above.

And here is my response:

And all of this supports my point that the inability to work with fractions is the primary reason for this behaviour. This behaviour will show up in any base because computers don't rationalize decimal point numbers.

Truncation generally introduces rounding error.

Many fractions can only be approximated in decimal notation, and this fact is true regardless of base. 1/3 was a good example.
 
  • #8
SixNein said:
Many fractions can only be approximated in decimal notation, and this fact is true regardless of base. 1/3 was a good example.
You are implicitly assuming that the number is being represented in IEEE floating point format. While that is the typical built-in representation scheme nowadays for types such as float and double (C), REAL*4 and REAL*8 (Fortran), etc., there are plenty of packages out there that do not use IEEE floating point (and do not use the floating point hardware).
 
  • #9
Point being that any fixed-length and/or literal representation is going to suffer such loss of precision: for rational numbers in some cases, and for irrational numbers in every case. If you want to preserve precision, you need to store the data as an expression.

For example, in the case of the IEEE 754 standard, the decimal precision is given by [itex]\log_{10}(2^{\,\text{significand bits}+1})[/itex]; for a double's 52 stored significand bits this is [itex]\log_{10}(2^{53}) \approx 15.95[/itex] decimal digits.

Also, it's still the base's fault (and the discrete nature of computers) when you lose precision.
 
  • #10
D H said:
You are implicitly assuming that the number is being represented in IEEE floating point format. While that is the typical built-in representation scheme nowadays for types such as float and double (C), REAL*4 and REAL*8 (Fortran), etc., there are plenty of packages out there that do not use IEEE floating point (and do not use the floating point hardware).

I'm making the assumption that the number is expressed in point notation. The approximation problems will occur regardless of whether one is working with binary, hex, or even decimal.
 
  • #11
martix said:
Point being that any fixed-length and/or literal representation is going to suffer such loss of precision: for rational numbers in some cases, and for irrational numbers in every case. If you want to preserve precision, you need to store the data as an expression.

For example, in the case of the IEEE 754 standard, the decimal precision is given by [itex]\log_{10}(2^{\,\text{significand bits}+1})[/itex]; for a double's 52 stored significand bits this is [itex]\log_{10}(2^{53}) \approx 15.95[/itex] decimal digits.

Also, it's still the base's fault (and the discrete nature of computers) when you lose precision.

I don't understand why you think base is involved.
 
  • #12
SixNein said:
The mathematical reason floating point numbers behave as they do on computers has nothing to do with binary; instead, it is due to the inability of processors to work with fractions.
It actually has quite a bit to do with the binary representation of some fractions. For example, assuming we're using the IEEE 754 floating point standard, computations done with fractions such as 1/2, 1/4, 1/8, 3/4, 5/8, and so on are done exactly. Notice that all of these numbers have terminating decimal and binary representations. OTOH, computations with fractions such as 1/5, 1/10, 3/10, and so on, are not exact, even though their decimal representations terminate. The reason is that their binary representations repeat endlessly, and some precision is lost when they are truncated to fit the number of bits used to store them (typically 32 bits for floats and 64 for doubles).
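For anyone who wants to see this directly, a short test (a sketch assuming IEEE 754 doubles; the exact digits printed may vary by platform):
Code:
#include <iomanip>
#include <iostream>

int main() {
    // 1/10 repeats in binary (0.000110011001100...), so the stored
    // double is the nearest representable value, not 0.1 itself.
    double tenth = 0.1;
    std::cout << std::setprecision(20) << tenth << "\n";
    // typically prints 0.10000000000000000555
    std::cout << (0.1 + 0.1 + 0.1 == 0.3) << "\n";   // 0 (false)
    return 0;
}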

One other thing:
SixNein said:
Irrational numbers, like PI, are always irrational.
That's like me saying, "Black dogs, like my dog Dylan, are always black." Of course, black dogs are black, and irrational numbers are irrational.
 
  • #13
Mark44 said:
SixNein said:
Irrational numbers, like PI, are always irrational. There are no patterns.
That's like me saying, "Black dogs, like my dog Dylan, are always black." Of course, black dogs are black, and irrational numbers are irrational.
If you haven't been following along, this may seem tautological, but there is a greater context that you don't appear to be aware of...

In this thread, you can see that there are people who believe that an irrational number need not be irrational if you represent it in another base. With this context, does the apparent tautology make more sense?


I recommend to everyone that if you are going to branch a thread into a new thread then you should reference the old thread from the new one...
 
  • #14
SixNein said:
I don't understand why you think base is involved.

I explained it already several times.
As did Mark44 just now.
It's unlikely it can be made any clearer...
You should just reread all explanations carefully if you still do not understand.

Also - from the PM:
SixNein said:
And all of this supports my point that the inability to work with fractions is the primary reason for this behaviour. This behaviour will show up in any base because computers don't rationalize decimal point numbers.

Truncation generally introduces rounding error.
It will not show up in any base...
As referred to in https://www.physicsforums.com/showthread.php?p=3408657#post3408657: consider base 3, where 1/3 is simply [itex]0.1_3[/itex] (one digit times [itex]3^{-1}[/itex]). This is a terminating numeral that represents your decimal/binary repeating fraction with absolute precision / no loss in precision, as long as you provide two or more digits for the significand.

Edit: @D H's post below: It probably has to do with them thinking that repeating patterns mean irrationality. That's one myth that needs to be dispelled quickly.
 
  • #15
Jocko Homo said:
In this thread, you can see that there are people who believe that an irrational number need not be irrational if you represent it in another base.
I see two people who mistakenly said that. Both were quickly corrected, and one recanted. There is little point in carrying that nonsense discussion forward into this thread.

For those of you with this mistaken belief: do not derail this thread with it. When represented in some integral base, a rational number will have either a terminating or a repeating representation. An irrational number does not have a terminating or repeating representation in any integral base. A number with a terminating or repeating representation in some integral base is necessarily rational, and a number with a non-terminating, non-repeating representation is necessarily irrational. Please consider this the end of the discussion on this point.
 
  • #16
Mark44 said:
It actually has quite a bit to do with the binary representation of some fractions. ...

One other thing:

That's like me saying, "Black dogs, like my dog Dylan, are always black." Of course, black dogs are black, and irrational numbers are irrational.

But this issue always occurs regardless of the number system used.

Since I'm growing tired of re-writing:
http://en.wikipedia.org/wiki/Binary_numeral_system#Representing_real_numbers

"The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radix-based numeral systems. See, for instance, the explanation in decimal."

The nature of rational numbers is the underlying reason for this behaviour. In decimal form, many rational numbers can only be approximated.
 
  • #17
Truncation is also a problem when adding up a large number of values. In numerical integration, as the step size decreases and the number of values increases, the sum can end up smaller than it should due to truncation error. This can be avoided by using a summation function that stores partial sums in an array indexed by exponent value. For double precision the array size would be 2048, and the index is the 11-bit exponent field of the double's bit pattern (index = (bits >> 52) & 0x7ff, where bits is the double reinterpreted as a 64-bit integer). The array is initialized to all zeroes. Each time a value is passed to the summation function, its exponent is used to index into the array. If array[index] is zero, the value is stored there; otherwise the sum sum = input_value + array[index] is formed, array[index] is set to zero, and the process repeats with the sum until some array[index] is zero and the sum can be stored (or until overflow occurs). A final call then produces the total by adding up all the array values in index order; depending on gaps there is still some truncation, but it is minimized.
 
  • #18
You are making some very broad statements that are not true in general.
SixNein said:
And all of this supports my point that the inability to work with fractions is the primary reason for this behaviour. This behaviour will show up in any base because computers don't rationalize decimal point numbers.
"Don't rationalize decimal point numbers" - what do you mean by this?
SixNein said:
Many fractions can only be approximated in decimal notation, and this fact is true regardless of base. 1/3 was a good example.
As are 1/5, 1/10, and many others.

SixNein said:
I'm making the assumption that the number is expressed in point notation. The approximation problems will occur regardless of whether one is working with binary, hex, or even decimal.
This is not true. I gave some examples earlier. We can add the decimal representation of 1/10 ten times, as .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1 + .1, which is exactly equal to 1. So in base-10, the addition is exact. On computers using floating point hardware that conforms to the IEEE 754 standard for floating point arithmetic, the resulting sum differs from 1. The upshot is that in base-10 the arithmetic of this example gives exact results, but in a binary representation it doesn't. It would appear that it does make a difference what base you use.
The problems you cite don't occur for all rational numbers. Rational numbers such as 1/2, 1/4, or any sum of multiples of fractions that are negative integral powers of two don't exhibit this problem (provided the values fit within the significand). In floating point math, the sum 1/8 + 3/32 + 5/64 is exact, with no truncation or other error.
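A quick check of both halves of that claim (assuming IEEE 754 doubles):
Code:
#include <iostream>

int main() {
    // Dyadic fractions are exact: 1/8 + 3/32 + 5/64 = 19/64 = 0.296875
    double s = 1.0/8 + 3.0/32 + 5.0/64;
    std::cout << (s == 0.296875) << "\n";   // 1 (true): no rounding at all

    // 1/10 is not dyadic, so repeated addition drifts:
    double t = 0.0;
    for (int i = 0; i < 10; ++i) t += 0.1;
    std::cout << (t == 1.0) << "\n";        // 0 (false) on IEEE 754 machines
    return 0;
}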

SixNein said:
But this issue always occurs regardless of the number system used.
What you describe occurs for some, perhaps most, rational numbers, but it does not always occur.
 
  • #19
Mark44 said:
You are making some very broad statements that are not true in general.
"Don't rationalize decimal point numbers" - what do you mean by this?

Converting from a decimal point notation to fraction notation for any given rational number.

This is not true. I gave some examples earlier. We can add the decimal representation of 1/10 ten times, as .1 + .1 + … + .1, which is exactly equal to 1. So in base-10, the addition is exact.

What about adding 1/3 + 1/3 + 1/3 in base 10 decimal point notation? Is this still exact?

0.333333333 + 0.333333333 + 0.333333333 = 0.999999999, not 1.

The fundamental reason this behaviour happens on computers is that computers do not recognise 1/3 as a number; instead, it is an operation on two separate numbers, and the result of the operation is truncated according to register size. There is no number system that will allow one to state all rational numbers in exact form: some of the numbers will repeat and some will terminate in every single number system there is.
 
  • #20
I'm going to try one last stab at explaining this then I'm going to throw in the towel.

A rational number in any base can only be expressed in exact decimal point notation when the denominator is made up of prime factors of the base. So regardless of number system used, the issue of comparing decimal point numbers will come up. This is true for all number systems period.
 
  • #21
Let's have a go at it again:
For any practical application, the accuracy of a calculation is going to depend on the number base being used! That statement is true for rational numbers only:
all irrational numbers will lose some precision regardless of base.


SixNein said:
So regardless of number system used, the issue of comparing decimal point numbers will come up.
It will come up only if you work with decimal base at any point in the comparison.
SixNein said:
A rational number in any base can only be expressed in exact decimal point notation when the denominator is made up of prime factors of the base. ... This is true for all number systems period.
Not sure what you mean by that. Maybe standard notation? Still not true, though. See example 3 below: no exact (terminating) standard notation in decimal, yet it terminates in standard notation when represented in a prime-factor base.
Perhaps you meant to say: "A rational number in any base can only be expressed in terminating standard notation when the denominator is made up of prime factors of the base." That is a correct statement.
You either need to express yourself better or you need to grasp the explanations of the problem better.

There are lots of other bases beyond binary and decimal. Last time I gave you an example in base 3 which turns your decimal algebraic expression into a standard notation terminating number. Note the emphasis!

Here are some examples covering every case mentioned:
[tex]0.5_{10}=0.1_{2}[/tex]
[tex]0.2_{10}=0.\overline{0011}_{2}[/tex]
[tex]0.\overline{3}_{10}=0.\overline{01}_{2}=0.1_{3}=0.7_{21}[/tex] (base 21 has prime factors 3 and 7).
Hopefully this will make it clear enough that precision and number base are integrally linked for any practical application using single-number notation (be it standard notation as above or exponential notation as in IEEE 754's floating point specification).
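A brute-force check of these expansions (a throwaway sketch; digit values of 10 or more would need their own symbols, which doesn't arise in these examples):
Code:
#include <iostream>

// Print the first few radix-point digits of num/den in base b by long
// division. Repeating expansions are simply cut off with "...".
void expand(int num, int den, int b, int digits) {
    std::cout << num << "/" << den << " in base " << b << ": 0.";
    int r = num % den;
    for (int i = 0; i < digits && r != 0; ++i) {
        r *= b;
        std::cout << r / den;
        r %= den;
    }
    std::cout << (r == 0 ? " (terminates)" : "...") << "\n";
}

int main() {
    expand(1, 3, 2, 8);    // 0.01010101... (repeats)
    expand(1, 3, 3, 8);    // 0.1 (terminates)
    expand(1, 3, 10, 8);   // 0.33333333... (repeats)
    expand(1, 3, 21, 8);   // 0.7 (terminates, since 21 = 3 * 7)
    return 0;
}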

I'm done here...
 
  • #22
SixNein said:
What about adding 1/3 + 1/3 + 1/3 in base 10 decimal point notation? Is this still exact?

0.333333333 + 0.333333333 + 0.333333333 = 0.999999999, not 1.
No, it's not exact, nor did I claim that it was.
SixNein said:
The fundamental reason this behaviour happens on computers is because computers do not recognise 1/3 to be a number; instead, it is an operation upon two separate numbers, and the result of the operation is truncated according to register size.
You're assuming that rational numbers are going to be represented as a decimal (or binary or other) fraction. As D H pointed out in post #4, there are ways around this, so that every rational number can be represented exactly.
SixNein said:
There is no number system that will allow one to state all rational numbers in exact form. Some of the numbers will repeat and some will terminate in every single number system there is.
 
  • #23
martix said:
Not sure what you mean by that.

Let's say you want to work with rational numbers in base 10. The number 10 has two prime factors, 2 and 5; therefore, only fractions whose denominator is made up of the factors 2 and/or 5 will be exact in decimal form. All other fractions will repeat.

Example: 1/5 will terminate, 1/2 will terminate, 1/10 will terminate, and 1/20 will terminate, but 1/30 will not, because 30 has prime factors 2, 3, and 5, and 3 does not divide 10.

The same is true in binary. Binary is a base 2 number system, so only fractions whose denominator is a power of 2 can be expressed exactly in point form. All other rational numbers will repeat.

Any time you wish to know whether a rational number will terminate, simply factor the base into its primes. Then you will know which prime factors the denominator must consist of.
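That test is easy to mechanize: strip the base's prime factors out of the (reduced) denominator and see whether anything is left. A small sketch:
Code:
#include <iostream>
#include <numeric>   // std::gcd (C++17)

// A reduced fraction 1/den terminates in base b exactly when every
// prime factor of den also divides b.
bool terminates(long den, long b) {
    for (long g = std::gcd(den, b); g > 1; g = std::gcd(den, b))
        while (den % g == 0) den /= g;   // strip factors shared with b
    return den == 1;
}

int main() {
    std::cout << terminates(5, 10)  << "\n";  // 1: 1/5 terminates in decimal
    std::cout << terminates(20, 10) << "\n";  // 1: 1/20 terminates in decimal
    std::cout << terminates(30, 10) << "\n";  // 0: 1/30 repeats (factor 3)
    std::cout << terminates(10, 2)  << "\n";  // 0: 1/10 repeats in binary
    return 0;
}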
 
  • #24
You seriously NEED to learn to read more thoroughly and write more precisely!
And I really mean READ AND UNDERSTAND (or at least make an effort) what is being said.

Why is it that you understood my "Not sure what you mean by that." but nothing after it? Or if you did, you are not showing it very well.
 
  • #25
Mark44 said:
No, it's not exact, nor did I claim that it was.
You're assuming that rational numbers are going to be represented as a decimal (or binary or other) fraction. As D H pointed out in post #3, there are ways around this, so that every rational number can be represented exactly.

My assumption is that point notation will be used.

The original argument was that binary was the reason this behaviour occurs. But this behaviour occurs in all base number systems. There is nothing special about binary.
 
  • #26
martix said:
You seriously NEED to learn to read more thoroughly and write more precisely!
And really mean READ AND UNDERSTAND(or make an effort anyway) what is being said.

Why is it that you understood my "Not sure what you mean by that." but nothing after that. Or if you did, you are not showing it very well.

I stopped reading at 'not true' because it takes five seconds to demonstrate.
 
  • #27
In fact, let's recall the argument:

SixNein said:
The mathematical reason floating point numbers behave as they do on computers has nothing to do with binary; instead, it is due to the inability of processors to work with fractions.

martix said:
You however are incorrect.
It has everything to do with binary

So please either retract your argument or defend it with something. I have illustrated the problem with working in point notation. Now show me I'm wrong that it has everything to do with binary.
 
  • #28
I showed how in post #4. There are plenty of bignum / rational arithmetic packages out there that represent rationals exactly. The only constraint is that the numerator and denominator cannot be so huge that they consume all of the computer's memory.
 
  • #29
D H said:
I showed how in post #4. There are plenty of bignum / rational arithmetic packages out there that represent rationals exactly. The only constraint is that the numerator and denominator cannot be so huge that they consume all of the computer's memory.

Right, one can use a library to work with numbers as fractions. I think GMP (or MPIR if you're doing Windows programming) for C/C++ has support for this.
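For example, with GMP's C++ interface (a sketch assuming gmpxx is installed; link with -lgmpxx -lgmp):
Code:
#include <gmpxx.h>
#include <iostream>

int main() {
    mpq_class one_fifth(1, 5);        // exact rational 1/5
    mpq_class sum = one_fifth * 5;    // exactly 5/5
    sum.canonicalize();               // reduce to lowest terms
    std::cout << sum << "\n";         // prints 1
    std::cout << (sum == 1) << "\n";  // 1 (true): exact comparison succeeds
    return 0;
}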
 
  • #30
SixNein -- I really think you are so focused on the point you want to convey that you are overstating yourself (e.g. you repeatedly creep into assertions about "all number systems"), and you are not recognizing the variations that others are talking about (e.g. martix seems to point out that any particular rational arithmetic calculation can be done with finite precision in radix form for many bases).
 
  • #31
Hurkyl said:
SixNein -- I really think you are so focused on the point you want to convey that you are overstating yourself (e.g. you repeatedly creep into assertions about "all number systems"), and you are not recognizing the variations that others are talking about (e.g. martix seems to point out that any particular rational arithmetic calculation can be done with finite precision in radix form for many bases).

I think we are just on different pages, and it didn't help for the discussion to be broken into two threads. For example, I was the first to mention that number libraries can be used. The discussion began with the issue of making exact floating point comparisons in programming languages for rational numbers. A programmer might do a calculation like 5 * (1/5) and try to do an exact comparison of 1 and the result. When the test fails, the programmer is surprised.

My argument here is that it doesn't matter what number system is being used for the calculation because programmers will always get into trouble when doing floating point comparisons. The mathematical reason is that, for any particular base, only a subset of the rational numbers can be stated exactly in point notation, and the rest will be approximated. So the only way to get around this issue is to work with rational numbers as fractions and avoid real conversions altogether.

There is nothing special about binary that causes this issue of floating point comparisons to occur. The only difference between using binary and decimal is which subset of the rationals can be stated exactly in point notation, and that difference is due to the arithmetic being performed to come up with the representation.
 
  • #32
SixNein said:
My argument here is that it doesn't matter what number system is being used for the calculation because programmers will always get into trouble when doing floating point comparisons.
Depends on the language's implementation of comparisons. For languages with a "relative fuzz" factor (like APL), this normally isn't an issue. Comparisons using a fuzz factor in floating point use the following formulas:

A > B => A > (B - |B * fuzz|)
A < B => A < (B + |B * fuzz|)
A == B => A > (B - |B * fuzz|) && A < (B + |B * fuzz|)

The issue here is that it's possible for (A > B && A == B) || (A < B && A == B) to be true.

In languages that don't support a "relative fuzz" factor, the programmer normally compensates with his/her own fuzz factor, something like for (A = 0.0; A < 4.99; A += .1) // loop 50 times ... .

I don't understand why this got so much attention, as I doubt many programmers ever run into this problem more than once, if at all, since this and other such issues are typically covered in school or early in a programmer's training.
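A minimal version of such a relative-fuzz comparison in C++ might look like this (the fuzz value is an arbitrary illustrative choice):
Code:
#include <cmath>
#include <iostream>

// a "equals" b if they differ by less than fuzz * max(|a|, |b|).
bool nearly_equal(double a, double b, double fuzz = 1e-12) {
    return std::fabs(a - b) <= fuzz * std::fmax(std::fabs(a), std::fabs(b));
}

int main() {
    double t = 0.0;
    for (int i = 0; i < 10; ++i) t += 0.1;
    std::cout << (t == 1.0) << "\n";            // 0: exact comparison fails
    std::cout << nearly_equal(t, 1.0) << "\n";  // 1: fuzzy comparison passes
    return 0;
}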
 
  • #33
SixNein said:
A programmer might do a calculation like 5 * (1/5) and try to do an exact comparison of 1 and the result. When the test fails, the programmer is surprised.
Surprise is really just an artifact of inexperience. They put in a debug statement, see that the result is zero, and are properly embarrassed for a few moments. :wink:

And similarly it's general procedure never to compare floating point numbers for equality, and instead to check whether the absolute difference is sufficiently small (unless you really know what you're doing).

SixNein said:
My argument here is that it doesn't matter what number system is being used for the calculation
Again, you are in the habit of overstating yourself. You know very well that you are restricting yourself solely to radix representations of finite precision and there are other number systems that don't have the issues you're bringing up.

SixNein said:
because programmers will always get into trouble when doing floating point comparisons.
No, not always. For example, they will never get into trouble when they work with dyadic fractions that require sufficiently few significant bits.

In fact, on some machines, people use floating-point calculations so that they can do exact integer computations more efficiently (e.g. because a CPU has a really good floating point unit compared to its integer unit, or maybe because doing so lets them use both the integer and floating point units simultaneously).


SixNein said:
So the only way to get around this issue is to work with rational numbers as fractions and avoid real conversions altogether.
Actually, this is not the only way to get around the issue.

One particular example is that there are exact numerical algorithms that work by using approximate but fast floating-point arithmetic to obtain an approximate solution, and then use some means (e.g. continued fractions, lattice reduction) to convert it into a rational number. Upper bounds on the error of the calculation and on the size of the denominator can be used to guarantee exactly correct results.

Such algorithms, I believe, are very important in computational number theory.
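As a sketch of that last idea, here is a naive continued-fraction routine that recovers a small-denominator rational from a floating-point approximation (the termination tolerance and denominator cap are arbitrary illustrative choices):
Code:
#include <cmath>
#include <iostream>

// Expand x as a continued fraction, accumulating convergents p/q,
// and stop once the remainder is negligible.
void to_fraction(double x, long& p, long& q, double eps = 1e-9) {
    long p0 = 1, q0 = 0;                      // previous convergent
    long p1 = (long)std::floor(x), q1 = 1;    // current convergent
    double r = x - std::floor(x);
    while (r > eps && q1 < 1000000) {
        r = 1.0 / r;
        long a = (long)std::floor(r);         // next partial quotient
        r -= a;
        long p2 = a * p1 + p0, q2 = a * q1 + q0;
        p0 = p1; q0 = q1; p1 = p2; q1 = q2;
    }
    p = p1; q = q1;
}

int main() {
    long p, q;
    to_fraction(1.0 / 3.0, p, q);         // the double is only approximately 1/3
    std::cout << p << "/" << q << "\n";   // prints 1/3
    return 0;
}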
 
  • #34
Hurkyl said:
Again, you are in the habit of overstating yourself. You know very well that you are restricting yourself solely to radix representations of finite precision and there are other number systems that don't have the issues you're bringing up.
Is the whole problem in this thread that people are talking past each other?

In particular, when someone refers to floating point arithmetic, they're talking very specifically about a system that uses radix representations of finite precision. It looks like other people are using the term "floating point" to mean any representation of the real numbers...
 

1. What is the difference between rational and irrational numbers?

Rational numbers can be expressed as a ratio of two integers, while irrational numbers cannot be expressed as a ratio and have non-repeating, non-terminating decimal representations.

2. How are computers able to represent irrational numbers?

Computers use a finite number of bits to represent numbers, so they cannot store irrational numbers exactly. Instead, they use approximations or rounding to represent them.

3. Can irrational numbers be used in computer calculations?

Yes, irrational numbers can be used in computer calculations. However, due to their approximate representation, there may be small errors in calculations involving them.

4. Are there any special considerations when working with irrational numbers in computer programming?

Yes, when working with irrational numbers in computer programming, it is important to be aware of potential rounding errors and to use appropriate data types and precision levels to minimize these errors.

5. How are irrational numbers used in real-world applications?

Irrational numbers are used in many real-world applications, such as scientific calculations, financial modeling, and data encryption. Since they cannot be represented exactly, such applications rely on finite-precision approximations and must account for the resulting rounding error.
