Standard representation for arbitrary size/precision numbers

AI Thread Summary
When representing numbers of arbitrary size or precision for storage in text files, JSON messages, or variables, the standard approach is decimal or scientific notation. Integers are commonly represented as decimal strings (e.g., "-12345678901234567"), and an effective method for floating-point numbers is an ordered pair (array) of strings holding the decimal mantissa and exponent (e.g., ["-1.2345678901234567", "17"]). However, the choice of representation often depends on the requirements of the text file and the capabilities of the parser being used. For instance, JavaScript has limitations with large integers and high-precision numbers, making it necessary to choose a format the parser can handle.
Is there a standard way of representing numbers of arbitrary size or precision for storage in a text file, JSON message, variable etc.?

I am thinking of representing integers as decimal strings e.g. "-12345678901234567" and floats as an ordered pair (array) of strings representing decimal mantissa and exponent e.g. ["-1.2345678901234567", "17"], but if there is some existing standard I would rather follow that.
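
For illustration, a JSON message under this scheme might look like the following (the field names are just placeholders, not part of any standard):

```json
{
  "bigInteger": "-12345678901234567",
  "bigFloat": ["-1.2345678901234567", "17"]
}
```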
 
Short answer: depends on what the text file is for.

Just for any old text file, the standard form is decimal or scientific notation.
Usually an 'e' is used between the mantissa and exponent where ×10^ is not available, e.g. -1.2345678901234567e17.

The only reason to depart from these is if the text file must be parsed by something that cannot cope with the numbers, in which case you use the format recognized by the parser.
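
For example, JavaScript's JSON.parse reads every JSON number as an IEEE-754 double, so a 17-digit integer silently loses precision; a quick sketch:

```javascript
// JSON numbers become doubles in JavaScript, so integers beyond
// Number.MAX_SAFE_INTEGER (2^53 - 1) are silently rounded.
const parsed = JSON.parse('{"n": -12345678901234567}');
console.log(parsed.n);                       // -12345678901234568 (last digit wrong)
console.log(Number.isSafeInteger(parsed.n)); // false
```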
 
Thanks. JavaScript in particular is going to have problems with big integers or high-precision numbers, so I think I will invoke the "format recognised by the parser" pattern.
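
A minimal sketch of how that could look on the JavaScript side, using the string formats from my first post (the field and variable names are just illustrative):

```javascript
// Encode: keep full precision by storing the digits as strings.
const message = JSON.stringify({
  bigInt: (-12345678901234567n).toString(), // "-12345678901234567"
  bigFloat: ["-1.2345678901234567", "17"],  // [mantissa, exponent]
});

// Decode: the strings survive JSON.parse untouched, so the exact
// values can be reconstructed afterwards.
const data = JSON.parse(message);
const exact = BigInt(data.bigInt);          // -12345678901234567n
const [mantissa, exponent] = data.bigFloat; // still exact decimal strings
console.log(exact, mantissa, exponent);
```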
 