Differences Between Machine Accuracy, Smallest Representable, and Smallest Normalized Numbers

In summary: machine accuracy, the smallest representable number, and the smallest normalized number are distinct properties of a floating point system. Machine accuracy is the maximum relative error introduced when a real number is rounded to the nearest representable value, the smallest representable number is the smallest positive number (including denormalized numbers) the system can store at all, and the smallest normalized number is the smallest positive number that can be stored in normalized form. All three depend on the number of bits allotted to the significand and the exponent.
  • #1
Imanbk

Homework Statement


"Explain the difference between the machine accuracy, the smallest representable number, and the smallest normalized number in a floating point system".

Homework Equations



There is the bit representation of floating point numbers, (-1)^S * M * b^(E-e). The fact that the significand can be shifted to the right tells us we can represent even smaller (un-normalized) numbers, but this comes at the cost of reduced precision.

The Attempt at a Solution



What I came to conclude was the following: machine accuracy is due to the machine's limited precision. The latter two are due to the limits of the variable types and reduce the precision of any floating point number, such that a value declared as 1.0 might not compare exactly equal to 1.0 afterwards. I also know the latter two have to do with the [itex](-1)^S \cdot M \cdot b^{E-e}[/itex] representation of floating point numbers.

I've been searching for 6 hours for an answer to this question via internet resources and textbooks. I have "Numerical Recipes", which doesn't go into much detail on any of the above topics for a beginner like myself. I've also been pointed to "What Every Computer Scientist Should Know About Floating-Point Arithmetic", but I left it more confused than I came in. I'd appreciate any help on this question.

Thanks a bunch,
imanbk
 
  • #2


Dear imanbk,

Thank you for your question regarding the difference between machine accuracy, smallest representable number, and smallest normalized number in a floating point system. I am happy to help you understand these concepts.

First, let's define what a floating point system is. A floating point system is a way of representing real numbers in a computer. It is based on scientific notation, where a number is represented as a sign, a significand (also called mantissa), and an exponent. For example, the number 123.45 can be represented as +1.2345 x 10^2 in scientific notation.
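To make that sign/significand/exponent split concrete, here is a minimal Python sketch (my own illustration, not part of the original reply) that pulls a double precision value apart with the standard library; math.frexp returns a significand in [0.5, 1) and a base-2 exponent.

[code=python]
import math

def decompose(x):
    """Split x into (sign, significand, exponent) with x == sign * significand * 2**exponent."""
    sign = -1 if math.copysign(1.0, x) < 0 else 1
    significand, exponent = math.frexp(abs(x))  # significand in [0.5, 1)
    return sign, significand, exponent

s, m, e = decompose(123.45)
print(s, m, e)       # 1 0.96445312... 7
print(s * m * 2**e)  # 123.45, reconstructed exactly
[/code]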

Now, let's look at the three terms you mentioned:

1. Machine accuracy: This refers to the maximum relative error that can occur when a real number is rounded to the nearest floating point value. It is determined by the number of bits used for the significand: the more bits, the smaller the error. In a single precision (32-bit) system, the machine accuracy is approximately 10^-7.

2. Smallest representable number: This is the smallest positive number that can be stored at all, including denormalized (subnormal) numbers. It is determined by the exponent range together with the width of the significand. In a single precision system, the smallest representable number is approximately 10^-45.

3. Smallest normalized number: This is the smallest positive number that can be stored in normalized form, meaning the significand has an implicit leading 1 and the exponent is adjusted accordingly; it is fixed by the most negative exponent alone. In a single precision system, the smallest normalized number is approximately 10^-38. (A short numerical check of these three values follows this list.)
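Here is a minimal Python sketch (my own addition, assuming NumPy is available; not part of the original answer) that reads off the three single precision values and shows machine accuracy in action:

[code=python]
import numpy as np

f32 = np.finfo(np.float32)
print("machine accuracy (eps):    ", f32.eps)   # ~1.19e-07 = 2**-23
print("smallest normalized (tiny):", f32.tiny)  # ~1.18e-38 = 2**-126
# The smallest representable positive value is a subnormal, 2**-149 ~ 1.4e-45:
print("smallest representable:    ", np.nextafter(np.float32(0), np.float32(1)))

# Machine accuracy in action: increments below eps/2 added to 1.0 are rounded away.
one = np.float32(1.0)
print(one + f32.eps != one)      # True: eps still changes 1.0
print(one + f32.eps / 2 == one)  # True: eps/2 is lost to rounding
[/code]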

To summarize, machine accuracy is the maximum relative rounding error, the smallest representable number is the smallest positive value that can be stored at all (a denormalized number), and the smallest normalized number is the smallest positive value that can be stored in normalized form.

I hope this explanation helps to clarify the differences between these terms. If you have further questions, please do not hesitate to ask.
 

FAQ: Differences Between Machine Accuracy, Smallest Representable, and Smallest Normalized Numbers

What is the difference between machine accuracy and smallest representable numbers?

Machine accuracy refers to the relative precision of a machine's arithmetic: the largest relative error made when a real number is rounded to the nearest representable value. It is set by the number of bits in the significand. Smallest representable numbers, on the other hand, refer to the smallest positive values the machine can store at all; they are set by the exponent range (and, when denormalized numbers are supported, by the significand width as well) and can vary depending on the data type used.
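As a small illustration of the first point (my own sketch, assuming standard CPython doubles), machine accuracy shows up as the smallest relative change that 1.0 can register:

[code=python]
import sys

eps = sys.float_info.epsilon  # gap between 1.0 and the next larger double, 2**-52
print(eps)                    # ~2.22e-16
print(1.0 + eps == 1.0)       # False: eps is large enough to change 1.0
print(1.0 + eps / 2 == 1.0)   # True: eps/2 is rounded away (round-to-nearest-even)
[/code]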

How do norm. numbers differ from smallest representable numbers?

Normalized numbers are floating point values stored in a scientific-notation-like form with a nonzero leading digit (an implicit leading 1 in binary) in the significand, so they carry the full precision of the format. The smallest representable number is smaller than the smallest normalized number because it is a denormalized (subnormal) value: the leading-1 requirement is dropped, which extends the range toward zero but sacrifices significant digits.
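To make the contrast concrete, here is a sketch of gradual underflow (my own addition, plain CPython, double precision): below the smallest normalized value the machine still returns nonzero denormalized numbers, but with fewer significant bits.

[code=python]
import sys

tiny = sys.float_info.min      # smallest normalized double, ~2.2e-308
print(tiny / 2 != 0.0)         # True: the result is a denormalized (subnormal) number
print(tiny / 2)                # ~1.1e-308, below the normalized range

# Precision degrades in the subnormal range: low-order significand bits are lost.
x = tiny * 1.4142135623730951  # a normalized value using the full 53-bit significand
y = (x / 2**40) * 2**40        # push it deep into the subnormal range and back
print(x == y)                  # False: bits were lost on the way down
[/code]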

Can a machine have more than one level of accuracy?

Yes, a machine can have different levels of accuracy depending on the data type used in its calculations. For example, double precision (64-bit) floating point arithmetic carries roughly 15-16 significant decimal digits, while single precision (32-bit) carries only about 7.
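For instance (a hedged sketch assuming NumPy), the single and double precision types on the same machine have very different machine accuracies:

[code=python]
import numpy as np

for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    print(dtype.__name__, "eps:", info.eps, "decimal digits:", info.precision)
# float32 eps: ~1.19e-07, about 6 decimal digits
# float64 eps: ~2.22e-16, about 15 decimal digits
[/code]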

How does the choice of data type affect the accuracy of a machine?

The data type determines how many bits are available for the significand and the exponent, and therefore both the relative accuracy and the range of numbers that can be represented. Choosing a data type with higher precision gives more accurate results, but each value also takes more memory.

Are there any limitations to the accuracy of machines?

Yes, there are limitations due to the finite precision of machine arithmetic. Results are in general rounded, so there is always a potential margin of error, and it is most visible when combining numbers of very different magnitude or subtracting nearly equal numbers. Accumulated rounding error over many operations and hardware-dependent details of the arithmetic can also affect the final accuracy.
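A final small example of such rounding error (my own, plain CPython): even simple decimal fractions are rounded when converted to binary, and the error surfaces in exact comparisons.

[code=python]
import math

print(0.1 + 0.2 == 0.3)              # False: each decimal literal is rounded to the nearest double
print(f"{0.1 + 0.2:.17g}")           # 0.30000000000000004
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead of exact equality
[/code]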
