My first topic here. I've written a small program that first calculates pi in decimal and then converts it to binary. As you know, most decimal fractions have no exact finite binary representation (even 0.1 doesn't), so I can't convert the whole 20000-digit decimal representation of pi into binary exactly. I therefore compute only x fractional digits in binary. The question is: how many correct decimal digits do I need in the base-10 form to get x accurate digits in the binary form?
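For what it's worth, here is a minimal sketch of the kind of conversion I mean, assuming the decimal fraction is available as a digit string (the function name and digit string are just placeholders, not my actual program). Each decimal digit carries log2(10) ≈ 3.32 bits of information, so to guarantee x correct binary digits you need roughly x·log10(2) ≈ 0.302·x correct decimal digits, plus a few guard digits to absorb rounding at the boundary:

```python
from math import log10

def frac_to_binary(dec_digits: str, n_bits: int) -> str:
    """Convert a fractional part given as a decimal digit string
    (e.g. "14159" for 0.14159) to n_bits binary digits, using
    exact integer arithmetic (repeated doubling)."""
    num = int(dec_digits)          # fractional part as an integer
    den = 10 ** len(dec_digits)    # implicit denominator 10^d
    bits = []
    for _ in range(n_bits):
        num *= 2                   # double the fraction
        bit, num = divmod(num, den)  # integer part is the next bit
        bits.append(str(bit))
    return "".join(bits)

# d decimal digits pin the value down to within 10^-d, which is
# about d * log2(10) binary digits; conversely, x binary digits
# need at least ceil(x * log10(2)) decimal digits.
x = 20
decimal_digits_needed = int(x * log10(2)) + 1   # ≈ 0.302 * x

# fractional part of pi, 0.14159265358979323846...
print(frac_to_binary("14159265358979323846", x))
# → 00100100001111110110
```

With 20 input digits (≈ 66 bits of information) the 20 printed bits are all correct; feeding in only 6 or 7 decimal digits would already be enough for those 20 bits by the bound above, but cutting it that close leaves no margin when the doubled fraction lands near a power-of-two boundary.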