
A Physics and Integer Computation with Eisenstein Integers

  1. Dec 17, 2016 #1
    I realize this question may not have an obvious answer, but I am curious: I am using Gaussian and Eisenstein integer domains for geometry research. The Gaussian integers can be described using pairs of rational integers (referring to the real and imaginary dimensions of the complex plane). And so I can do purely Diophantine math (only integers: no real numbers required).

    But Eisenstein integers (occupying a triangular lattice) require non-integers for doing any computational geometry (specifically, ½ and √3). My scheme uses only rational integers for a compact and efficient set of parameters. In the case of the Eisenstein domain, I must apply a transformation requiring the irrational number √3 to map points in the plane.
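    To make the distinction concrete: the Eisenstein integers themselves can be stored and multiplied exactly as pairs of rational integers ##(a, b)## representing ##a + b\omega## with ##\omega = e^{2\pi i/3}##, using the identity ##\omega^2 = -1 - \omega##. The irrational ##\sqrt{3}## only appears when embedding the lattice into Cartesian coordinates. A minimal sketch in Python (the helper names are my own, not from any library):

```python
import math

# Eisenstein integers a + b*w, w = exp(2*pi*i/3), stored as integer
# pairs (a, b). All arithmetic below stays in the rational integers.

def e_add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def e_mul(p, q):
    a, b = p
    c, d = q
    # (a + b*w)(c + d*w) = ac + (ad + bc)*w + bd*w^2
    #                    = (ac - bd) + (ad + bc - bd)*w,  using w^2 = -1 - w
    return (a*c - b*d, a*d + b*c - b*d)

def e_norm(p):
    a, b = p
    # N(a + b*w) = a^2 - a*b + b^2, always a rational integer
    return a*a - a*b + b*b

def to_cartesian(p):
    a, b = p
    # Only here, mapping lattice points into the plane, does sqrt(3) appear
    return (a - b / 2, b * math.sqrt(3) / 2)

w = (0, 1)
print(e_mul(w, w))     # w^2 = -1 - w, i.e. (-1, -1)
print(e_norm((2, 1)))  # N(2 + w) = 4 - 2 + 1 = 3
```

    So the Diophantine bookkeeping is just as compact as in the Gaussian case; it is only the coordinate transformation that forces irrationals in.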

    This is not impeding my work, but I am curious: does the physical nature of computer hardware impose a constraint that requires the irrational number √3 in the case of triangular lattices? Physics and nature prefer triangular (hexagonal) arrangements over orthogonal (square) ones, and yet our computers cannot represent these arrangements exactly without the use of an irrational number.

    If the answer to my question requires the design of a new kind of computer, then I would be curious how (or if) that can be done! (I suspect it is not possible).

    Meanwhile, I will have to make do with the fact that all geometry defined with Eisenstein integers can never be as precise (or computationally compact) as with the Gaussian integers. This is obvious in the pragmatic sense, but the fundamental reason is unclear - and it may fall into the domains of meta-math, physics, and ontology.
  3. Dec 17, 2016 #2


    Staff: Mentor

    Because of the way floating point numbers are stored in memory, computers work exclusively with rational numbers (specifically, binary fractions), and many rational numbers can't be represented exactly in hardware. For example, numbers such as 0.1 and 0.2 are stored as approximations. There are software libraries that can store floating point numbers with much greater precision, and there probably are libraries that can work with symbolic representations of numbers, such as ##\sqrt{3}##, but I don't know about them.
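    Both points above can be demonstrated in a few lines of Python. The first two lines show the binary-fraction approximation; the rest is a sketch (the helper is hypothetical, not a library API) of keeping ##\sqrt{3}## exact by storing numbers of the form ##p + q\sqrt{3}## as pairs of exact rationals and multiplying symbolically via ##(\sqrt{3})^2 = 3##:

```python
from fractions import Fraction

# 0.1 and 0.2 have no exact binary representation, so this is False:
print(0.1 + 0.2 == 0.3)
print(f"{0.1:.20f}")  # shows the stored approximation of 0.1

# Exact arithmetic on p + q*sqrt(3): keep (p, q) as exact rationals
# and use (sqrt(3))^2 = 3 when multiplying.
def mul_sqrt3(x, y):
    p, q = x
    r, s = y
    # (p + q*sqrt(3))(r + s*sqrt(3)) = (pr + 3qs) + (ps + qr)*sqrt(3)
    return (p * r + 3 * q * s, p * s + q * r)

half_sqrt3 = (Fraction(0), Fraction(1, 2))  # sqrt(3)/2
print(mul_sqrt3(half_sqrt3, half_sqrt3))    # (3/4, 0): exact, no rounding
```

    This is essentially what computer algebra systems do internally, so the OP's lattice coordinates could in principle stay exact end to end, at the cost of the compactness that plain integer pairs give in the Gaussian case.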
