Most Unreliable Technique in the World to compute pi

John Creighto
In "http://users.info.unicaen.fr/~karczma/arpap/lazypi.ps.gz " Jerzy Karczmarczuk develops some functions for computing pi lazily.
http://www.haskell.org/haskellwiki/Libraries_and_tools/Mathematics

This thread is motivated by the following response by Hurkyl:
Computers can only deal with approximations to real numbers.
:confused:

Therefore, when one talks about "real number" in a colloquial sense, I assume they mean "approximate real number" in my sense of the word.
(moderator hat on) This is unacceptable. You'll note that this is a math subforum. Also, one of the primary goals of physicsforums.com is to promote education in science and math -- this cannot happen if you fill readers' heads with errors and misinformation. To be sure, the theory of computation is a very interesting subject, but you do the reader a great disservice to masquerade it as if you were actually talking about the real numbers. Hijacking threads is similarly problematic.

Maybe I should have taken some action earlier to split the computability stuff into a separate topic. *shrug* Nobody's complained, though; I think unless someone does, I'll let things continue. (moderator hat off)
in the thread:
https://www.physicsforums.com/showthread.php?t=294420&page=4

I looked at csprof2000's claim that computers can only deal with approximations to rational numbers and thought about Hurkyl's answer. My first thought was: what about symbolic computation, such as that done in Maple and Mathematica?

However, I wondered why, in such symbolic computations, we might eventually need to represent numbers in decimal form, and about the utility of symbolic results whose size grows with the complexity of the calculation. Do we eventually need the number in decimal form for it to be useful to us, and if so, what utility does a complicated symbolic result really give us?

Wondering what alternatives there might be to straight symbolic calculation, two things about Haskell struck me as possibly relevant to the above questions. First, Haskell uses something called lazy evaluation, which means that results are only computed if they are needed. Knowing this, and having read that Haskell offers "infinite precision", I wondered how pi might be represented in Haskell with infinite precision.
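As a minimal illustration of lazy evaluation (my own sketch, not from Karczmarczuk's paper; the names squares and main are just mine): the list below is conceptually infinite, but only the prefix that is actually demanded ever gets computed.

Code:
-- squares is an infinite list; laziness means only demanded elements are built
squares :: [Integer]
squares = [ n * n | n <- [0 ..] ]

main :: IO ()
main = print (take 5 squares)  -- prints [0,1,4,9,16]; the rest is never evaluated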

It turns out that "infinite precision" is not really infinite precision but an unfortunate term sometimes used in computer science for arbitrary precision. However, I did not give up my search, because I thought that lazy evaluation might still be relevant.

It turns out that we can use lazy evaluation to compute pi. Moreover, by using the BBP formula for pi you can compute any hexadecimal digit of pi without needing any information about the previous digits. This means that in symbolic terms we can separate the digits we compute from the digits we don't compute, and only go to further precision as needed for the desired accuracy.
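For concreteness, here is a rough Haskell sketch of BBP digit extraction (my own code, not taken from the paper; the names powMod, partialSum and piHexDigit are mine, and the digits produced are base-16). It uses the BBP series
$$\pi = \sum_{k\ge 0} \frac{1}{16^k}\left(\frac{4}{8k+1} - \frac{2}{8k+4} - \frac{1}{8k+5} - \frac{1}{8k+6}\right)$$
together with modular exponentiation, so the digit at any position is obtained without knowing the earlier ones:

Code:
-- b^e mod m by repeated squaring
powMod :: Integer -> Integer -> Integer -> Integer
powMod _ _ 1 = 0
powMod b e m = go (b `mod` m) e 1
  where
    go _    0 acc = acc
    go base k acc
      | odd k     = go (base * base `mod` m) (k `div` 2) (acc * base `mod` m)
      | otherwise = go (base * base `mod` m) (k `div` 2) acc

-- fractional part, always in [0,1)
frac :: Double -> Double
frac x = x - fromIntegral (floor x :: Integer)

-- fractional part of 16^n * S_j, where S_j = sum_{k>=0} 1 / (16^k * (8k+j))
partialSum :: Integer -> Integer -> Double
partialSum n j = frac (finite + rest)
  where
    finite = foldl (\acc k -> frac (acc + fromIntegral (powMod 16 (n - k) (8*k + j))
                                        / fromIntegral (8*k + j)))
                   0 [0 .. n]
    rest   = sum [ 16 ** fromIntegral (n - k) / fromIntegral (8*k + j)
                 | k <- [n + 1 .. n + 25] ]  -- a short tail suffices for a Double

-- hex digit of pi at position n+1 after the point; piHexDigit 0 == 2,
-- since pi = 3.243F6A88... in base 16
piHexDigit :: Integer -> Int
piHexDigit n = floor (16 * frac x)
  where
    x = 4 * partialSum n 1 - 2 * partialSum n 4 - partialSum n 5 - partialSum n 6

So, for example, piHexDigit 999999 gives a hex digit near the millionth hexadecimal place without ever materialising the digits before it (subject to the round-off of Double arithmetic at very large positions).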

Say we had some function in the form:

digitsAfter(pi,100)

Then we can represent the first 100 digits we need for our calculations using arbitrary precision, and if at a later time we need further accuracy we can evaluate to higher precision without having to repeat the steps required to compute the first 100 digits, and with much smaller memory requirements than would be needed to store the number at the precision already computed.
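Keeping the hypothetical digitsAfter interface above (specialised to pi, assuming the piHexDigit sketch from the earlier post, so these are hexadecimal rather than decimal digits), a Haskell version could simply be a lazy stream:

Code:
-- a lazy stream of pi's hex digits, built digit by digit from the BBP extractor
piHexDigits :: [Int]
piHexDigits = map piHexDigit [0 ..]

-- hypothetical interface: the first n digits after the point
digitsAfter :: Int -> [Int]
digitsAfter n = take n piHexDigits

Because piHexDigits is an ordinary shared lazy list, a later call like digitsAfter 200 reuses the hundred cells already forced by digitsAfter 100; only the new digits get computed.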

Thus the basic idea is to break the computation into two parts, an arbitrary-precision part and a symbolic part: the arbitrary precision is used to compute results to the needed accuracy, and the symbolic part is used to represent the remaining error in such a way that we can always refine the precision without having to repeat the computation or consume large amounts of memory.

Now, there seem to be some drawbacks to this technique in terms of reliability and how well things are defined. After further reading I'll comment more.
 
It turns out that "infinite precision" is not really infinite precision but an unfortunate term sometimes used in computer science for arbitrary precision.
You can make an infinite precision representation - but you need an inconveniently large universe to store the memory chips in.
 
mgb_phys said:
You can make an infinite precision representation - but you need an inconveniently large universe to store the memory chips in.

Either that, or a universe without such inconvenient limitations as quantum mechanics. Is the interval from zero to one inconveniently large?
 
You may find http://www.brics.dk/~barnie/RealLib/ interesting; it is a full implementation of computable real arithmetic, limited only by your hardware*.



Incidentally, one shouldn't forget that an (infinitely-long) decimal numeral is (in a typical formulation) quite literally just a function whose domain is the integers and whose image is the set {0,1,2,3,4,5,6,7,8,9}. (For the numeral to represent a real number, we also impose that on the left, it is eventually all zeroes)
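In symbols, one way to write out that remark: such a numeral is a function
$$d : \mathbb{Z} \to \{0,1,2,\dots,9\},$$
and the real number it represents is
$$x = \sum_{n \in \mathbb{Z}} d(n)\,10^{n},$$
with the extra condition that there is some ##N## with ##d(n) = 0## for all ##n > N## (eventually all zeroes on the left), so that only finitely many digits appear to the left of the decimal point.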





*: Okay, that's probably not strictly true; there are probably some limitations in the actual implementation, if you push it to ridiculous (and probably impossible with current technology) extremes.
 
Hurkyl said:
You may find http://www.brics.dk/~barnie/RealLib/ interesting; it is a full implementation of computable real arithmetic, limited only by your hardware*.

I'm looking at it a bit now. I like this statement:

"In the cases where machine precision is sufficient, exact real computations can be executed at a speed comparable to the speed of double precision arithmetic, sometimes even running on par with it. "
 