How can we translate real-world positions into a computer language? An example would be to simulate a ball bouncing around in a closed box. How can we say with confidence that the program will represent reality? This is my current thinking on how to manage space. Any advice or critique would be greatly appreciated!

Code (Text):

    import numpy as np
    import numpy.random as rand

    def gen_space(xsize=10, ysize=10, zsize=10):
        '''
        Create and fill a 3D space with random values on the interval [0, 1).

        When accessing points in this space, we can slice like so:

            space[:, Y, Z]   # Returns all x at y=Y, z=Z
            space[X, :, Z]   # Returns all y at x=X, z=Z
            space[X, Y, :]   # Returns all z at x=X, y=Y
            space[:, :, Z]   # Returns the (x, y) plane at z=Z
            space[:, Y, :]   # Returns the (x, z) plane at y=Y
            space[X]         # Returns the (y, z) plane at x=X
            space[X, Y, Z]   # Returns the value at the point (X, Y, Z)

        Returns:
            space, [xsize, ysize, zsize]
        '''
        # space = np.array([[[None] * zsize] * ysize] * xsize)  # Safe method
        space = np.empty([xsize, ysize, zsize])  # Fast method; axis order must match the indexing above
        for x in range(xsize):
            for y in range(ysize):
                for z in range(zsize):
                    space[x, y, z] = rand.random()
        return space, [xsize, ysize, zsize]

    if __name__ == '__main__':
        space, sizes = gen_space()

Thank you very much for your time and thoughts!
The only way we can say a program represents reality is to take its results and compare them to the system it is trying to model. I did a course in Computational Physics where we modelled simple systems like oscillating springs and pendulums, and depending on the choice of ODE solver you picked, you would either lose energy or add energy in your simulation; after a while you would notice why that ODE solver was a bad idea. Even the best computer simulations only approximate the system they are trying to model, and for some cases that is okay, but we can never truly know whether a program represents reality without constantly comparing its results. This is one of the reasons why trial lawyers object to simulations of events: with bad physics you can make a plausible-looking movie that is not true to reality, and the jury would believe it...
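To make the energy-drift point concrete, here is a minimal sketch (my own example, not from the course above) comparing explicit Euler with symplectic Euler on a unit harmonic oscillator, where the true total energy should stay at 0.5 forever:

```python
# Simple harmonic oscillator: x'' = -x (unit mass and spring constant),
# starting at x = 1, v = 0, so the exact total energy E = (v**2 + x**2)/2 = 0.5.
def simulate(method, dt=0.05, steps=2000):
    x, v = 1.0, 0.0
    for _ in range(steps):
        if method == "euler":
            # Explicit Euler: both updates use the OLD state -> energy grows each step
            x, v = x + dt * v, v - dt * x
        else:
            # Symplectic Euler: update v first, then use the NEW v -> energy stays bounded
            v = v - dt * x
            x = x + dt * v
    return (v**2 + x**2) / 2

e_euler = simulate("euler")
e_symp = simulate("symplectic")
print(e_euler)  # far above 0.5: the explicit Euler orbit spirals outward
print(e_symp)   # close to 0.5: symplectic Euler oscillates around the true energy
```

For explicit Euler the energy is multiplied by (1 + dt^2) every step, so the error compounds exponentially no matter how small dt is; the symplectic scheme instead conserves a slightly perturbed energy, which is why it is the usual choice for long-running physical simulations.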
Storing a continuous space as a lattice of points is indeed the most common representation. In almost all cases, this can be done in a controlled manner. Specifically, for a space of linear length ##L##, discretized using ##N## points on a lattice (stored in an array), there is a lattice spacing ##a = L/N##. Although technically we only recover continuous space in the limit ##a \rightarrow 0## (or ##N \rightarrow \infty##), many numerical results can be "converged" to a given accuracy. By "converged," we essentially mean that if we make ##a## smaller (say by a factor of two, by doubling ##N##), we find that the results do not change (to our desired level of accuracy). Many computational algorithms can be analyzed to understand how the error depends on ##a##; for example, an algorithm whose error scales as ##a## is worse than one whose error scales as ##a^2##, because the latter allows for a larger ##a## (and thus smaller ##N##, and therefore less memory and faster calculations).
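A quick sketch of that error scaling (my own illustrative example): estimating the derivative of ##\sin(x)## on a lattice, a forward difference has error ##\mathcal{O}(a)## while a central difference has error ##\mathcal{O}(a^2)##, so halving ##a## halves one error but quarters the other:

```python
import math

def forward_diff(f, x, a):
    # One-sided stencil: error scales as O(a)
    return (f(x + a) - f(x)) / a

def central_diff(f, x, a):
    # Symmetric stencil: error scales as O(a^2)
    return (f(x + a) - f(x - a)) / (2 * a)

exact = math.cos(1.0)  # d/dx sin(x) at x = 1
errs = {}
for a in (0.1, 0.05):  # halve the lattice spacing, i.e. double N
    errs[a] = (abs(forward_diff(math.sin, 1.0, a) - exact),
               abs(central_diff(math.sin, 1.0, a) - exact))

print(errs[0.1][0] / errs[0.05][0])  # ~2: halving a halves the O(a) error
print(errs[0.1][1] / errs[0.05][1])  # ~4: halving a quarters the O(a^2) error
```

Measuring these ratios empirically, exactly as above, is also a standard sanity check that a code's convergence matches its advertised order.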