# Understanding Grids and Units in Computer Graphics and Physics Simulations

Raj Harsh
Hello,

I am having trouble comprehending how grids are made and defined in computers. What unit do they use, and how is it defined? I know that software uses standardized units of measure such as the centimetre. Basically, how is a 3-dimensional space created in computers when they are actually restricted to two dimensions? (I am referring to the pixels which ultimately form the image.) Even the 3D image we see is a 2D representation.

This is also why I am having a hard time understanding and grasping real-world physics simulations. How do computer-based physics simulations work? And are the scales even relevant? What I mean is: could you simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well? If the units affect performance, then they are somewhat relevant; otherwise, not at all.

Wow. Those are very broad questions. It would take a whole textbook and a semester's study to answer all that.

Raj Harsh said:
What I mean is: could you simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well? If the units affect performance, then they are somewhat relevant; otherwise, not at all.

Consider simulating a violin string (one dimension makes it easier). I might divide the string into N equal sub-lengths, then simulate each of them as a separate subproblem. That is analogous to a grid in 2D or 3D. The next question is how big must N be? I can run experiments. Try N=1000, then N=100. If the results change significantly, then I need 100 < N < 1000. If there is almost no change in the results, then I try N=10. I am searching for the smallest value of N (to use fewer computer resources) that does not significantly change the answers. Once I find that N, I might study many cases of plucking that string, all using the same value of N.
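That search for a workable N can be sketched in code. Below is a hypothetical toy, not anyone's actual solver: a plucked string simulated with a simple finite-difference wave equation. The function name, the triangular initial pluck, and the stability choice dt = 0.5·dx/c are all my own illustration.

```python
def simulate_string(N, T=1.0, c=1.0, L=1.0):
    """Midpoint displacement of a plucked string after time T, using N segments."""
    dx = L / N
    dt = 0.5 * dx / c               # time step chosen small enough for stability
    steps = int(T / dt)
    r2 = (c * dt / dx) ** 2
    # initial triangular "pluck", ends held fixed at zero
    u = [min(i, N - i) / (N / 2) for i in range(N + 1)]
    u_prev = u[:]                   # zero initial velocity
    for _ in range(steps):
        u_next = [0.0] * (N + 1)
        for i in range(1, N):
            # standard second-order update for the 1D wave equation
            u_next[i] = 2*u[i] - u_prev[i] + r2 * (u[i+1] - 2*u[i] + u[i-1])
        u_prev, u = u, u_next
    return u[N // 2]

# The experiment described above: increase N until the answer stops changing
for N in (10, 100, 1000):
    print(N, simulate_string(N))
```

If the printed values for N=100 and N=1000 agree to the precision you care about, N=100 is good enough and costs a tenth of the work.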

Raj Harsh
To understand how 3D is displayed by 2D technology, recall that in art you can use perspective to trick the brain into seeing a 3D image.

https://en.m.wikipedia.org/wiki/Perspective_(graphical)

In computer graphics we relate pixels to measurements. Consider how some computer fonts are created from tiny dots on a grid; each dot is a pixel.

russ_watters
Raj Harsh said:
I am having trouble comprehending how grids are made and defined in computers.
Have you taken a high school level geometry (math) class? Are you familiar with how functions are graphed/data is plotted on a graph? Visual representation of data is just x,y,z coordinates on a defined coordinate system. There are a number of techniques for actually drawing objects, but typically they are a collection of equations describing shapes.
What unit do they use, and how is it defined? I know that software uses standardized units of measure such as the centimetre.
There is no need for units, and while I'm not a software engineer, I would think units are applied after the fact if necessary. All you need to make the pictures are the x,y,z coordinates.
Basically, how is a 3-dimensional space created in computers when they are actually restricted to two dimensions? (I am referring to the pixels which ultimately form the image.) Even the 3D image we see is a 2D representation.
Computers don't have dimensions at all. They are just devices that organize and manipulate data. Want a 12 dimensional space? You can just create it with the data: instead of x,y,z coordinates, make x,y,z,a,b,c,d,e...etc. We live in a 3 dimensional world though, so that's how things are drawn...though when you add colors, it's like having 12 dimensions (x,y,z for each l,r,g,b).

Making a 2-d projection of a 3d image in order to display it on the screen though is simple geometry to mimic what our eyes do. It's about finding the angles between objects and plotting them in 2d as distances.
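That 2-D projection really is simple geometry once you fix a camera. A minimal sketch, assuming a camera at the origin looking along +z; the function name and focal length are illustrative, not any particular graphics API:

```python
def project(point, focal_length=1.0):
    """Project a 3D point onto a 2D image plane (pinhole camera at the origin)."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    # similar triangles: screen position is x and y scaled by depth
    return (focal_length * x / z, focal_length * y / z)

# Two points at the same (x, y) but different depths: the farther one lands
# closer to the centre of the image, which is exactly the perspective effect.
near = project((1.0, 1.0, 2.0))    # (0.5, 0.5)
far  = project((1.0, 1.0, 10.0))   # (0.1, 0.1)
```

Everything else in a renderer (rotation, camera placement) is coordinate transformations applied before this final divide-by-depth.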
This is also why I am having a hard time understanding and grasping real-world physics simulations. How do computer-based physics simulations work? And are the scales even relevant? What I mean is: could you simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well? If the units affect performance, then they are somewhat relevant; otherwise, not at all.
Changing the units does not save computing power; I'm not sure what you are getting at there. 25.4 mm and 1.00 inches are the same amount of data. Scientifically you have 3 significant figures, though the data is probably actually stored as 16- or 32-bit numbers.

CWatters
Here’s an article on computer graphics showing how differently colored pixels can be used to do shading and create 3D-like imagery, among other things. It has some history of computer graphics too, which you should read.

https://en.m.wikipedia.org/wiki/Computer_graphics

Raj Harsh said:
Hello,

I am having trouble comprehending how grids are made and defined in computers. What is the unit that they use and how is it defined ? I know that softwares use standardized units of measure (measurement) such as centimetre. Basically, how is a 3-Dimensional Space created in computers where they are actually restricted two dimensions (I am referring to the pixels which ultimately form the image). Even the 3D image we see is a 2D representation/image.
With one type of computer graphics modeling called ray tracing, the computer creates a virtual 3D space. Here is a screen capture from a modeler called Moray.
It shows four views of the space: front, top, side, and camera. In the space are the objects (in this case a cylinder, a sphere, and two boxes), the position and aim of the camera, and a light source. The camera view is created from this info as a wire-frame image.

To create a graphics image from this, the ray-tracing program calculates the path of rays leaving the light source, striking a surface, and bouncing to the camera. The properties of the surface where a ray strikes an object determine what color the camera "sees" from that point in its field of view, and thus what color the pixel at that part of the image will be.
Here's a simple image created from the above model.

Note that it gives simple shading and shadows for the objects, all by calculating the paths of imaginary light rays. Here the object surfaces were also given color.

You can add more complexity by adding surface roughness, highlights, reflections, transparency, refraction, and other effects.
Here is the same scene with some of these additional effects added.

Again, this is done by calculating the paths of rays from light to camera, how they interact with objects in the scene along the way, and working out the color of each pixel from the assigned properties of the objects in the scene.

#### Attachments

• wireframe.png
• cylinder.png
• cylinder2.png
Raj Harsh said:
How do computer-based physics simulations work? And are the scales even relevant?
Here's a simple simulation of a planet orbiting a star.

First you define the star to be at rest at the origin and you decide on its mass, M, and store this in a variable.

Then you decide on the initial x, y, and z coordinates of your planet and the initial x, y, and z velocities of your planet and store all six numbers in variables.

Then you begin a loop. Each time around the loop you take the values you have and overwrite them with the values for a small time ##\delta t## later. So the x position changes to ##x+v_x\delta t## and similarly for the y and z values. And the velocities change due to the gravitational acceleration. So ##v_x## changes to ##v_x-GMx\delta t/r^3## (note that this is the component of Newtonian gravitational acceleration in the x direction, ##GM/r^2## multiplied by ##x/r##) - again, there are similar expressions for the y and z velocities. Then you just go round this loop, updating the positions and velocities each time.

So what you've done is generate the positions of the planet over time. What do you do with them? It depends what you want. If all you want to know is what shape the orbit is, you might just add a line inside the loop instructing the computer to save the position of the planet to a file. Then you could load it in Excel and plot a graph. Alternatively, you might feed the locations live to a program similar to the one Janus showed, tell it to draw a planet centered at these coordinates, and get a pretty animation for a film or a game.

Note that units, in some senses, don't matter as long as you use a consistent set (all SI, all geometric, whatever). However, you do have to be aware of your computer's limits. You won't easily be able to store a planet's position to the nearest millimetre because millions of kilometres to the millimetre is a number in the millions of millions, and the rounding errors the computer makes will be at or above this scale. Use sensible precision. And you need to pick a sensible value for ##\delta t## - see anorlunda's discussion on picking N for a violin string model.
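That precision point is easy to demonstrate. The numbers below are hypothetical, but they show the mechanism: round-tripped through 32-bit storage, a planetary distance expressed in millimetres cannot register a millimetre of motion, because a float32 carries only about 7 significant digits.

```python
import struct

def to_float32(x):
    """Round-trip a Python float through 32-bit IEEE storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

distance_mm = 150e12                      # ~150 million km, in millimetres
moved = to_float32(distance_mm + 1.0)     # nudge the planet by 1 mm
print(moved == to_float32(distance_mm))   # True: the millimetre vanished
```

At this magnitude the gap between adjacent float32 values is tens of kilometres, so any update smaller than that is silently lost; 64-bit doubles push the problem down but do not eliminate it.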

You can add a lot of bells and whistles to the sketch I've given. More sophisticated programming techniques like arrays and object orientation make managing the information easier. And you could add more planets, or account for the gravitational effect of the planet on the star.

I should also note that the algorithm I described is somewhat naive, and you'll find that planets are quite likely to suddenly escape their solar system as rounding errors accumulate. You can do a lot better, but it gets harder to describe easily.
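For concreteness, here is how the loop described above might look in code; a minimal sketch using the naive forward-Euler update just warned about, in hypothetical units where GM = 1 (all names are illustrative):

```python
GM = 1.0
dt = 0.001
x, y, z = 1.0, 0.0, 0.0        # start the planet at distance 1 from the star
vx, vy, vz = 0.0, 1.0, 0.0     # speed giving a circular orbit when GM = 1

positions = []
for _ in range(10_000):
    r3 = (x*x + y*y + z*z) ** 1.5
    # gravitational acceleration components: -GM * x / r^3, and likewise y, z
    ax, ay, az = -GM * x / r3, -GM * y / r3, -GM * z / r3
    # overwrite positions using the current velocities...
    x, y, z = x + vx*dt, y + vy*dt, z + vz*dt
    # ...and velocities using the acceleration
    vx, vy, vz = vx + ax*dt, vy + ay*dt, vz + az*dt
    positions.append((x, y, z))
```

Swapping the order of the last two updates (velocity first, then position using the new velocity) gives semi-implicit Euler, which keeps orbits from spiralling outward nearly as fast; that is one of the "do a lot better" options mentioned.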

Hope that helps.

Raj Harsh
Thank you all for the answers, they have helped me learn a lot more.

russ_watters said:
Have you taken a high school level geometry (math) class? Are you familiar with how functions are graphed/data is plotted on a graph? Visual representation of data is just x,y,z coordinates on a defined coordinate system. There are a number of techniques for actually drawing objects, but typically they are a collection of equations describing shapes.

Yes, I have, and I am aware of the method of representing data on a graph, but that is exactly what baffles me. We do it on a simple piece of paper, where you can move up and down, left and right, and our movement is restrained. And after some time, we stopped plotting graphs and our focus shifted to simply solving equations, even during calculus. While I am not an excellent student, that concept was clear to me. But in computers, you actually create a 3D object which you can manipulate in real time. That would imply that there exists a 3D space of some sort. A simple cube on paper and in a computer is the same, as long as it is still. Moving along the z-axis can be considered the object being scaled bigger or smaller, but where it really comes into perspective is when it is rotated. The rotation is partly the screen updating the pixels based on the shading information provided by the computer, but what I find mind-boggling is that all the points stay well connected and do not break. Instead of asking about 3D space in computers, I should be asking about the cameras.

A big thank you to all the participants and to Janus sir, but I think my question is actually about how cameras are created and how they work in computers, and about the arrays which store the point-cloud information and how those points are connected.

Raj Harsh said:
Yes, I have, and I am aware of the method of representing data on a graph, but that is exactly what baffles me.
I don't see why. There is no meaningful difference.
But in computers, you actually create a 3D object which you can manipulate in real time. That would imply that there exists a 3D space of some sort.
Sure. There is a set of allowable values of x,y,z coordinates, just like a piece of graph paper has a certain number of blocks.
A simple cube on paper and in a computer is the same, as long as it is still.
All you need to make the cube move on paper is a second piece of paper.
Moving along the z-axis can be considered the object being scaled bigger or smaller, but where it really comes into perspective is when it is rotated. The rotation is partly the screen updating the pixels based on the shading information provided by the computer, but what I find mind-boggling is that all the points stay well connected and do not break.
I don't understand what you are saying there.

A big thank you to all the participants and to Janus sir, but I think my question is actually about how cameras are created and how they work in computers, and about the arrays which store the point-cloud information and how those points are connected.
You mean virtual cameras that enable displaying the picture? As I and others said, it's just angles and perspective, just like in art. As the artist or programmer, you pick a location and orientation for the camera and a field-of-view angle, and plot what you see!

Raj Harsh said:
But in computers, you actually create a 3D object which you can manipulate in real time. That would imply that there exists a 3D space of some sort.
No. What is there is only a data set, a rule set, and an 'engine' that processes them and creates a kind of representation of the objects. The representation is not 'some sort' of existence in any conventional manner. The whole thing is more detailed, but not entirely different from a few words on a piece of paper about the placement of the furniture inside a room, written to inform the staff of the moving company about the requirements.
Nobody would take those few words to be a 3D space, right?

Raj Harsh said:
Yes, I have, and I am aware of the method of representing data on a graph, but that is exactly what baffles me. We do it on a simple piece of paper, where you can move up and down, left and right, and our movement is restrained. And after some time, we stopped plotting graphs and our focus shifted to simply solving equations, even during calculus.
For computers, that is the key. They work with the equations and then output a picture.
Consider the following pseudo code as an example:
Code:
note: units are seconds, meters, meters/sec, meters/sec/sec
ballA.type = "sphere"
ballA.color = "red"
ballA.radius = 10
ballA.x,y,z = -30, 20, 100
ballA.vx,vy,vz = 0, 0, 0
ballB.type = "sphere"
ballB.color = "blue"
ballB.radius = 10
ballB.x,y,z = -25, 20, 10
ballB.vx,vy,vz = 0, 0, 0

t = 0
g = 10

camera.x,y,z = 0, -10, 20
camera.dirx,diry,dirz = -1, 0, 0
camera.zoom = 0.5

So I have now defined two balls, both stationary, both 10 meters in radius.
If we presume that the ground is at z=0, then one is sitting on the ground.

Here "t" might represent the current time (starting at zero), and "g" might represent our gravity.
So with calculations, we could start to change the positions and velocities of the balls as time progresses.
ballB will not move right away, because it is sitting on the ground. But ballA will fall toward the floor and strike ballB.

Let's say we did this 1/30 of a second at a time (t = 0, 0.033, 0.067, 0.100, ...). We could compute the position of each ball at each point in time.

Then we could use the colors of the balls and the camera information to generate 2D images for each of these frames.

But notice that we model the motion numerically before rendering it to the display. We do not use the 2D display to compute anything.
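The falling part of the pseudocode above can be made runnable. This sketch keeps only ballA's vertical coordinate and replaces collision detection with a crude "stop when it reaches the ground" test, so it is a simplification of the idea, not a physics engine:

```python
dt = 1.0 / 30.0       # one frame at 30 frames per second
g = 10.0              # downward acceleration, meters/sec/sec
radius = 10.0         # both balls are 10 m spheres, as in the pseudocode

z = 100.0             # ballA starts 100 m up (z is "up" here)
vz = 0.0

t = 0.0
frames = []           # one (time, height) pair per rendered frame
while z > radius:     # fall until ballA's surface reaches the ground
    vz -= g * dt      # velocity update each frame
    z += vz * dt      # position update each frame
    t += dt
    frames.append((t, z))
```

Each entry in `frames` is exactly what a renderer would consume to draw that 1/30-second frame; the motion is computed numerically first, and only then turned into pictures.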

Raj Harsh said:
That would imply that there exists a 3D Space of some sort.
As the example above shows, the computer does not hold a 3D space in the sense you are thinking. Instead, it holds a numerical description of a 3D space.

Raj Harsh said:
The rotation is partly the screen updating the pixels based on the shading information provided by the computer, but what I find mind-boggling is that all the points stay well connected and do not break. Instead of asking about 3D space in computers, I should be asking about the cameras.

Raj Harsh said:
I think my question is actually about how cameras are created and how they work in computers, and about the arrays which store the point-cloud information and how those points are connected.

In the example above, there does not have to be a "point cloud". Instead, there is simply an array of pixels. Given the camera location (the camera's focal point in 3D space) and a description of its "retina", you can compute the 3D location of any point (pixel) on that retina. Then the 3D location of that point and the focal point define a line through the 3D space. That line may intersect either of the two balls. If it does not, assign a background color to the pixel. If it does, determine which surface it intersects first, what its color is, and at what angle the light strikes it (I guess I need a lamp object in my code). This method is called ray tracing, albeit a primitive form of it. It is one method of generating a 2D image of a 3D model.
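The "does this line intersect a ball" step reduces to solving a quadratic equation. A sketch of that primitive ray-tracing test (the function name and the scene values are illustrative):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the first intersection, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # solve |o + t*d|^2 = r^2, a quadratic a*t^2 + b*t + c = 0 in t
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None                        # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2*a)     # nearer of the two roots
    return t if t > 0 else None            # ignore hits behind the camera

# A ray from the focal point straight along z hits a unit sphere 5 units away:
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))   # 4.0
```

A renderer runs this test once per pixel against every object, keeps the smallest positive t, and colors the pixel from that surface.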

In response to the last two posts from @russ_watters and @Rive, I beg to differ. Computers make use of point-cloud information. Each plane has a certain number of vertices, and two adjoining surfaces or planes will have overlapping points. The point-cloud information could be stored either in a linear fashion, where the progression is from one end to the other, or by using common points (noticing the same coordinates and simply removing the extra points to avoid overlapping).

And maybe you did not understand my example of the cube on paper. When you draw a cube on paper from one perspective, it is fixed and cannot be changed. You can indeed take another paper and draw it from a different perspective, and that is similar to the frames in a computer, but the difference is that the cube in the computer is one object. So regardless of whether you change the frame or the perspective, you will still have a cube with all of its sides; the same cannot be said for the cube on the paper. The 3D cube in the computer does not need to be recreated every time we change the perspective; only the screen needs to be refreshed to display the new information based on the transformation parameters of the cube. But the transformation only happens because you actually have a 3D object. Consider this: in 2D animation, each frame needs to be drawn (or a few, with interpolation between them), and you do need to draw again and again from each and every perspective. But if you have a 3D model, you do not have to do that. Once you make it, you can watch it from any angle you want. Now consider this: you draw a cube on a piece of paper and make one out of wood. To see the wooden cube from a different angle, you would have to rotate it, but to see the one on the paper from a different angle, you would have to redraw it.
You can simply move around the wooden cube yourself (like a camera in a computer) to look at it from other angles, but the one on the paper will always remain the same, even if you draw it with shading to create three-dimensional depth. I hope this makes my point clearer.

And now about the grid. What I meant is that we get the option to snap something to the grid; take SolidWorks or Maya, for example, or AutoCAD. How does it do that? What defines the spacing? This is why I asked whether the units are relevant, what they are, and how they are defined in computers. Let us say my workspace is set to metres and everything is at the scale of metres, but I can go down to the scale of a nanometre or a yoctometre, maybe to add something. How would the computer differentiate between those scales? What are the parameters that help computers construct things proportionate to each other with respect to units and scales? If I have an object 10 metres long and then I need to add something very small, I switch to millimetres and choose snap to grid; what is it going to do? Will the computer actually bring those grid lines so close together? If you think about it, it should be able to get smaller to a vast extent, but that does not happen. There is a limit, and everything is so well defined that you have to work within the parameters set by the developer. Many programs start clipping, so that you need to change the project's unit scale and sometimes the attributes of the camera too.

@.Scott Thank you so very much! That is very helpful! Would you please look at my new post? I have updated my question to make it clearer. I was never asking about the rendered images on a screen, but about the 3D models or objects themselves; I wanted to know how they are created. I was aware that the values are stored numerically in the form of vectors or matrices, and that is not what I was asking about. What I meant to ask was: there are values assigned to each point or vertex, and a surface is created between them. That is fine. But it works with a z-axis, and you cannot do the same on a piece of paper without intersections and overlaps. If I give you balls, you can suspend them in a room from the ceiling, hold them up with supports from the walls, and then add a surface between each four points with cardboard, and you will be able to make an actual object; on paper, that is very hard, not even possible.

Or, I may be thinking and reading too much into this.

Raj Harsh said:
In response to the last two posts from @russ_watters and @Rive, I beg to differ...
...maybe you did not understand my example of the cube on paper. When you draw a cube on paper from one perspective, it is fixed and cannot be changed. You can indeed take another paper and draw it from a different perspective, and that is similar to the frames in a computer, but the difference is that the cube in the computer is one object.
You're misunderstanding the analogy. The computer is your brain and the paper is the monitor. When drawn on paper, the "virtual" cube exists in the mind of the artist and the 2d representation drawn on paper is like the computer screen...or more directly, like a paper print-out. Same thing.
So regardless of whether you change the frame or the perspective, you will still have a cube with all of its sides; the same cannot be said for the cube on the paper.
To repeat for emphasis: the "virtual" cube is not on the paper or screen, whether printed out or drawn by a human or computer. The "virtual" cube is in the CPU and the artist's brain.
The 3D cube in the computer does not need to be recreated every time we change the perspective; only the screen needs to be refreshed to display the new information based on the transformation parameters of the cube. But the transformation only happens because you actually have a 3D object. Consider this: in 2D animation, each frame needs to be drawn (or a few, with interpolation between them), and you do need to draw again and again from each and every perspective. But if you have a 3D model, you do not have to do that. Once you make it, you can watch it from any angle you want.
Again, for emphasis: there is no actual difference in the logic of the two processes. You make the hand-drawn animation sound harder, but it strikes me that you don't realize just how difficult the 3D rendering output of the model is: it is the vast majority of the workload the computer has to do.
Now consider this: you draw a cube on a piece of paper and make one out of wood. To see the wooden cube from a different angle, you would have to rotate it, but to see the one on the paper from a different angle, you would have to redraw it. You can simply move around the wooden cube yourself (like a camera in a computer) to look at it from other angles, but the one on the paper will always remain the same, even if you draw it with shading to create three-dimensional depth. I hope this makes my point clearer.
What I would actually like to know is what your point is here. You started by asking questions about how computers work and now you are telling us how they work. It seems you have a larger agenda in mind. Perhaps some philosophical/existential belief about reality?
And now about the grid. What I meant is that we get the option to snap something to the grid; take SolidWorks or Maya, for example, or AutoCAD. How does it do that? What defines the spacing?
Typically it is just a pre-chosen number size (such as 16 bit) or extent (1,000 or 1,000,000 units).
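Mechanically, "snap to grid" amounts to little more than rounding each coordinate to the nearest multiple of a chosen spacing; the spacing is just a number, with no units attached. A hypothetical sketch (the function name is my own):

```python
def snap(value, spacing):
    """Round a coordinate to the nearest multiple of the grid spacing."""
    return round(value / spacing) * spacing

# The same point snapped to a coarse grid and a finer one:
print(snap(10.37, 0.5))    # 10.5
print(snap(10.37, 0.25))   # 10.25
```

Whether `0.25` "means" metres or millimetres is entirely up to the user interface; the arithmetic is identical either way, which is why the computer itself does not care about the unit.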
This is why I asked whether the units are relevant, what they are, and how they are defined in computers. Let us say my workspace is set to metres and everything is at the scale of metres, but I can go down to the scale of a nanometre or a yoctometre, maybe to add something. How would the computer differentiate between those scales?
There is no need to differentiate unless the user feels the need to. The computer doesn't care. Think about a blank piece of graph paper. Do you need to label the axes with units in order to draw on it?
What are the parameters that help computers construct things proportionate to each other with respect to units and scales? If I have an object 10 metres long and then I need to add something very small, I switch to millimetres and choose snap to grid; what is it going to do? Will the computer actually bring those grid lines so close together? If you think about it, it should be able to get smaller to a vast extent, but that does not happen. There is a limit, and everything is so well defined that you have to work within the parameters set by the developer. Many programs start clipping, so that you need to change the project's unit scale and sometimes the attributes of the camera too.
It seems to me that you are making this way more complicated than it really is. A 10m cube is 10x10x10. A 1mm cube is 0.001x0.001x0.001. In CAD, you literally just type in the numbers. The computer doesn't care.

Raj Harsh said:
I was never asking about the rendered images on a screen, but about the 3D models or objects themselves; I wanted to know how they are created. I was aware that the values are stored numerically in the form of vectors or matrices, and that is not what I was asking about. What I meant to ask was: there are values assigned to each point or vertex, and a surface is created between them. That is fine. But it works with a z-axis, and you cannot do the same on a piece of paper...

Or, I may be thinking and reading too much into this.
I think you are reading too much into this. The piece of paper with the 2D representation is not analogous to the 3D model. Printed out to show what the computer is actually "thinking", the 3D model is a list of numbers or equations.

russ_watters said:
You're misunderstanding the analogy. The computer is your brain and the paper is the monitor. When drawn on paper, the "virtual" cube exists in the mind of the artist and the 2d representation drawn on paper is like the computer screen...or more directly, like a paper print-out. Same thing.
Thank you. This helps.

russ_watters said:
What I would actually like to know is what your point is here. You started by asking questions about how computers work and now you are telling us how they work. It seems you have a larger agenda in mind. Perhaps some philosophical/existential belief about reality?
No, nothing like that. And I am not telling you how they work; I was only giving an example to clarify my question.

russ_watters
Raj Harsh said:
I am having trouble comprehending how grids are made and defined in computers.
There are no grids in a computer. A programmer can write a program that will display a two-dimensional grid on a screen, but the computer itself has no such grid.

Raj Harsh said:
What unit do they use, and how is it defined?
There are no units. The contents of a computer's memory are just numbers.

Raj Harsh said:
I know that software uses standardized units of measure such as the centimetre.
No, there are no standardized units. Software can be written to indicate units of measure, but the computer deals only with numbers.

Raj Harsh said:
Basically, how is a 3-dimensional space created in computers when they are actually restricted to two dimensions? (I am referring to the pixels which ultimately form the image.) Even the 3D image we see is a 2D representation.
All there is in a computer is memory, which is laid out in a one-dimensional form. The computer's operating system maps part of this memory to pixels on the screen, using a variety of formats and resolutions. A computer program can display an image that appears to be three-dimensional, using perspective, lighting, and shading, the same as how an artist depicts a similar scene on a flat, two-dimensional piece of paper or canvas. As already mentioned, some software uses ray tracing to compute how each point in the image will be lit by an assumed light source.

russ_watters
Raj Harsh said:
Computers make use of point-cloud information ... The point-cloud information could be stored either in a linear fashion
Well, not really. There are two different things here. One is modelling, the other is visualisation. Modelling is about specific coordinates and dimensions: the center and radius of a sphere, the corner points of a square or cube, the endpoints and width of a track; all coordinates and linear dimensions. Who would want to use a point cloud to define a triangle, when it is perfectly defined by just nine numbers? For (graphics) modelling, it is all about the coordinates (and some other properties) of objects.

For visualization, it is about breaking all objects down into small graphics elements (usually triangles) which can be processed in a fast and uniform manner using just the coordinates of their corners (look up: tessellation, triangle mesh). So there is no real point cloud here either: only a big set of 3D coordinates and a GPU that chews through them endlessly, transforming them into a 2D image according to the required viewpoint. (There are many other parts to this, like textures and more, but this is the starting point of what makes it '3D'.)

There are other ways to do this, usually with more advanced math and more calculations. But there is no real 'point cloud' here: no 3D space inside.
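The shared-vertex idea above can be made concrete. Here is a hypothetical cube stored as a triangle mesh: eight corner points stored once, and twelve triangles that refer to them by index, so adjoining faces share corners instead of duplicating them.

```python
# the 8 corners of a unit cube, each stored exactly once
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom face corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top face corners
]

# each face is two triangles; a triangle is three indices into `vertices`,
# so adjoining faces reuse the same corner points
triangles = [
    (0, 1, 2), (0, 2, 3),   # bottom
    (4, 6, 5), (4, 7, 6),   # top
    (0, 4, 5), (0, 5, 1),   # front
    (1, 5, 6), (1, 6, 2),   # right
    (2, 6, 7), (2, 7, 3),   # back
    (3, 7, 4), (3, 4, 0),   # left
]

# 12 triangles stored point-by-point would need 36 points; indexing needs 8
assert len(vertices) == 8 and len(triangles) == 12
```

This vertex-plus-index layout is what gets handed to the GPU; there is no "cloud" of free-floating points, only coordinates and the bookkeeping that connects them.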

Mark44 said:
There are no grids in a computer. A programmer can write a program that will display a two-dimensional grid on a screen, but the computer itself has no such grid.
You seem to have misunderstood my statement. The computer is nothing if you come to think of it; it is simply electricity. According to science, we function on electricity, and so does all the equipment we may use to test for the speed of light, so no matter what, the speed of electricity should not be exceeded, as any information would only be conveyed and interpreted at the speed of electricity, which, even if only slightly, is less than and different from the speed of light; yet we still have a different value for it. And light is electromagnetic radiation; magnets can be made out of electricity, since a change in an electric field causes a magnetic field and moving charges cause magnetism, so on the surface it is all just electricity. My question and my point were not what you thought they were.

Mark44 said:
There are no units. The contents of a computer's memory are just numbers.
Everyone keeps misunderstanding my question and telling me the same thing over and over again: that there is nothing in computers. I am very well aware of that. My school taught me about binary very early on, with respect to computers as well. According to all of you, there is nothing in a computer, and that is true; everything needs to be defined. But that would also mean that physics simulations and such are bogus and mean nothing in computers. You basically instruct the computer to modify the transformation attributes of an object to make it fall towards a surface which you would call the ground plane, and call it gravity. But since there are no metres or centimetres or anything like that, how do you know whether what the computer is doing is accurate or not ? The software I have used has (so far) allowed me to select a unit of length. I have taught myself modelling and digital sculpting, but I want to learn more from the scientific aspect and learn about the inner workings of it.

When I move the joystick on my controller, the game character tends to move forward. But what is it moving on ? A surface. But where is he going ? Forward ? How could he go forward when there is no forward or backward but just numbers ? That was the question, in a nutshell. The game character moves as if there is an entirely new world inside the computer, just like ours; it moves similar to how we can move. The question is much harder than its answer would be. It will remain an internal struggle for me to find an answer, as it is hard for me to explain what I want to ask.

Raj Harsh said:
How could he go forward when there is no forward or backward but just numbers ?
Philosophy... Any 'meaning' is always a user-defined parameter for a computer. No help for this. The game engine modifies the object set according to the rule set that belongs to a specified action, the 3D engine renders it into images, but it moves 'forward' only for you.

Raj Harsh said:
When I move the joystick on my controller, the game character tends to move forward. But what is it moving on ? A surface. But where is he going ? Forward ? How could he go forward when there is no forward or backward but just numbers ?
All that happens is that the coordinates of your character are updated, subject to constraints like "the z coordinate must be equal to 1 plus the z coordinate of a specified plane". Since the camera is usually attached to your character, the screen needs to be redrawn from another point of view. But nothing's moving, any more than it is in a film. You're just generating a new drawing.

Regarding units, there's nothing in the program that defines the units. It's true that your character is about twice as tall as a table. But there's nothing in the program that says you are 2m tall or 2km tall. You assume the former because you assume that the program is simulating a typical human in something like the real world - but that's your prejudice.

When you get to a full physics simulation, the units must be consistent, otherwise the answers won't work. But there are no units in the computer. When you write the maths you may make some assumptions (for example, G = 6.67×10⁻¹¹ assumes that we're working in SI units). But the computer assigns no significance to that. You can ask the user what units they want to use, and write code to convert to the units your other values assume if you like. But if you don't do it, and you enter r = 1000 while using G = 6.67×10⁻¹¹, then the answer will come out as if you entered the distance in metres, whatever you intend.

I hope that makes sense. The point is that the maths of a physics system will only work if you use consistent units. Because if you use inconsistent units your maths is not a good description of the real world. But interpreting the input and output as having physical meaning is up to you.
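This unit-consistency point can be made concrete with a small sketch (Python, with illustrative values): the computer evaluates G·M/r² identically whether or not the inputs are in the units the constant assumes.

```python
# Sketch of the point above: the computer just multiplies bare numbers; *we*
# decide they mean SI units. Illustrative code, not any particular library.

G = 6.67e-11          # gravitational constant -- SI units only by our convention
M_EARTH = 5.97e24     # Earth's mass, in kg by convention
R_EARTH = 6.371e6     # Earth's radius, in metres by convention

def surface_gravity(mass, radius):
    """g = G*M/r^2 -- physically meaningful only if the inputs use SI units."""
    return G * mass / radius**2

g_ok = surface_gravity(M_EARTH, R_EARTH)         # ~9.8, because the inputs are SI
g_bad = surface_gravity(M_EARTH, R_EARTH * 100)  # radius entered "in cm": silently wrong
# The computer happily computes both; only the programmer knows g_bad is nonsense.
```

The second call is the r = 1000 trap from the post above: nothing in the machine flags the mismatch.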

Raj Harsh
I think I have found a better way to present this question. There are co-ordinate points, which may act as vertices. But where do you put or draw those points ? Putting the grid aside, people have kept on telling me that there is nothing but numbers, but those numbers act like co-ordinate points. And if there is nothing else, where would those points be plotted ? If you do not have a paper, you cannot make the graph you are asked to draw. I am addressing the paper here, that is what I want to know about. Even if you just make points in air, you do have an environment, a surrounding to move about. What is that surrounding and how is that created ? This is my question. And if it is still not clear, this particular topic can be closed.

Ibix said:
All that happens is that the coordinates of your character are updated, subject to constraints like "the z coordinate must be equal to 1 plus the z coordinate of a specified plane". Since the camera is usually attached to your character, the screen needs to be redrawn from another point of view. But nothing's moving, any more than it is in a film. You're just generating a new drawing.

Regarding units, there's nothing in the program that defines the units. It's true that your character is about twice as tall as a table. But there's nothing in the program that says you are 2m tall or 2km tall. You assume the former because you assume that the program is simulating a typical human in something like the real world - but that's your prejudice.

When you get to a full physics simulation, the units must be consistent, otherwise the answers won't work. But there are no units in the computer. When you write the maths you may make some assumptions (for example, G = 6.67×10⁻¹¹ assumes that we're working in SI units). But the computer assigns no significance to that. You can ask the user what units they want to use, and write code to convert to the units your other values assume if you like. But if you don't do it, and you enter r = 1000 while using G = 6.67×10⁻¹¹, then the answer will come out as if you entered the distance in metres, whatever you intend.

I hope that makes sense. The point is that the maths of a physics system will only work if you use consistent units. Because if you use inconsistent units your maths is not a good description of the real world. But interpreting the input and output as having physical meaning is up to you.

Thank you. This is the answer I think I was looking for regarding units. Thank You Sir !

Raj Harsh said:
I think I have found a better way to present this question. There are co-ordinate points, which may act as vertices. But where do you put or draw those points ?
Depends on where you decided your camera was and what camera properties (field of view etc) you decided to simulate.
Raj Harsh said:
And if there is nothing else, where would those points be plotted ?
If you don't have a camera, or some kind of reference, then it's up to you. Plot 'em where you want 'em.

Ibix said:
Depends on where you decided your camera was and what camera properties (field of view etc) you decided to simulate.
If this makes any sense, the camera would also require a space to be in, would it not ?

Raj Harsh said:
You seem to have misunderstood my statement. The computer is nothing if you come to think of it; it's simply electricity. We function on electricity, and so does all the equipment we might use to test the speed of light, so no matter what, the speed of electricity should not be exceeded, as any information would only be conveyed and interpreted at the speed of electricity - which, even if by a little, is less than and different from the speed of light, and we have a different value for it. And light is electromagnetic radiation; magnets can be made out of electricity, as a changing electric field causes a magnetic field, and moving charges cause magnetism. So on the surface, it is all just electricity. My question and my point were not what you thought they were.
Part of the problem in this thread is that your posts appear to contain a lot of irrelevant information, such as all of the above.
Everyone keeps on misunderstanding my question and keep telling the same thing over and over again, that there is nothing in computers. I am very well aware of that.
It does not appear that you do...
But that would also mean that physics simulations and such are bogus and mean nothing in computers. You basically instruct the computer to either modify the transformation attributes of another object to make it fall towards the a surface which you would call ground plane and call it gravity. But since there are no metres or centimetres or anything like such, how do you know whether what the computer is doing is accurate or not ?
It is up to the programmer to make it accurate. Some physics simulations - like video games - are not accurate. Often they aren't meant to be.

But again with the units: the computer doesn't care about the units. It is up to the programmer to keep track of them. Think about what you do with a calculator. Do you type in units or do you keep track of them separately? Or when you graph something in a spreadsheet: if you forget to tell Excel the units, does it affect the shape of the graph? The units are just labels you add for your own benefit. The computer doesn't care if you use them or not.
When I move the joystick on my controller, the game character tends to move forward. But what is it moving on ? A surface. But where is he going ? Forward ? How could he go forward when there is no forward or backward but just numbers ?
Starting from 0, +1 is forwards, -1 is backwards. But the answer to your existential conundrum is that since he doesn't exist he doesn't go anywhere (unless you choose to plot him on a monitor; then he can be said to move from left to right, for example).
That was the question, in a nutshell. The game character moves as if there is an entirely new world inside the computer, just as ours, it moves similar to how we can move. The question is much harder than the answer of it would be. It will remain an internal struggle for me to find an answer as I it is hard for me to explain what I want to ask.
This and other statements make it sound like you think there is a physical universe inside the computer. There isn't. It's just a big list of numbers.

Consider an Excel spreadsheet. It's a 2-dimensional array of numbers and letters. An Excel spreadsheet has a pre-defined size of 2^20 rows (about a million) and 2^14 columns (about 16,000). But even though that is the defined space, the spreadsheet does not necessarily have 17 billion pieces of data in it. You can tell by the size of the file that only the space actually filled with numbers is fully described in the computer's memory.
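The "only the filled cells exist" idea can be sketched as a sparse grid: a hedged Python illustration using a dictionary keyed by cell position, which is not how Excel is actually implemented but shows the principle.

```python
# Illustrative sketch: a huge "spreadsheet" where empty cells consume no
# memory at all -- only explicitly written cells are stored.

cells = {}                       # empty sheet: nothing stored yet
cells[(0, 0)] = "height"         # fill three cells out of billions of possible ones
cells[(1, 0)] = 1.82
cells[(1048575, 16383)] = "end"  # last cell of a 2^20-row by 2^14-column grid

def read(row, col):
    """Unfilled cells exist only implicitly; reading one yields a blank."""
    return cells.get((row, col), "")

# Memory holds 3 entries, not 2**20 * 2**14 of them.
assert len(cells) == 3
```

The "defined space" is just an addressing convention; storage grows only with actual content.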

Raj Harsh said:
The computer is nothing if you come to think of it, it's simply electricity.
This is completely untrue. A computer is much more than just electricity -- it has memory, a central processing unit, input devices like keyboards, mice, and joysticks, output devices like monitors and printers, and non-volatile storage.

Raj Harsh said:
Everyone keeps on misunderstanding my question and keep telling the same thing over and over again, that there is nothing in computers. I am very well aware of that.
No one is saying that there is nothing in computers. What they are saying is that some scenario is modeled by a program that simulates reality in some fashion. However, that scenario involving objects in some physical space doesn't exist -- at heart, the whole thing is just a long list of numbers that are manipulated by the CPU or GPU (central processing unit, graphics processing unit).

Raj Harsh said:
When I move the joystick on my controller, the game character tends to move forward. But what is it moving on ? A surface. But where is he going ? Forward ? How could he go forward when there is no forward or backward but just numbers ?
There is no surface -- the program has set up memory with a coordinate system that models a surface or a space. Moving the joystick or mouse causes the program to adjust the position of the character to some other place.
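What "the program adjusts the position of the character" amounts to can be sketched in a few lines. This is a minimal, hypothetical Python example; names like `on_joystick` and the speed constant are invented for illustration.

```python
# Illustrative sketch: "moving forward" is just arithmetic on stored numbers.

character = {"x": 0.0, "y": 0.0, "z": 1.0}  # z pinned 1 unit above the ground plane
SPEED = 0.1                                  # arbitrary units per update

def on_joystick(dx, dy):
    """Joystick tilt just nudges the stored coordinates."""
    character["x"] += dx * SPEED
    character["y"] += dy * SPEED  # "forward" is whatever axis we decide it is

on_joystick(0.0, 1.0)  # push the stick "forward"
# character["y"] is now 0.1; the renderer draws the next frame from the new numbers
```

Nothing "moves" in any physical sense: a value in memory changed, and the next frame is drawn from it.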

Raj Harsh said:
Hello,

I am having trouble comprehending how grids are made and defined in computers. What is the unit that they use and how is it defined ? I know that softwares use standardized units of measure (measurement) such as centimetre. Basically, how is a 3-Dimensional Space created in computers where they are actually restricted two dimensions (I am referring to the pixels which ultimately form the image). Even the 3D image we see is a 2D representation/image.

This is also the reason I am having a hard time understanding and grasping (the concept of) real-world Physics simulations. How do computer based Physics simulations work ? And are the scales even relevant ? What I mean is that you can simply change the scale from metres to centimetres and save yourself some computing power and effectively, some time as well, i.e; if the units affect the performance, in the case of which, the units are somewhat relevant but otherwise not at all.

Consider 3D space, where each point is defined as having coordinates (x,y,z). To display this on a 2D surface, use the perspective transformation which maps (x,y,z) in 3D space onto a point (x',y') in 2D space. You can look up perspective transformation. It's simple mathematically. Think of looking at a real world scene outside your window, and trying to draw that scene on your window, with your eye at a fixed point. Or think of a camera, where the 3D reality is projected onto the 2D film surface.

Once you have coordinates in 2D space, then it's just a question of mapping the 2D coordinates to a point on the screen, which is also 2D. In other words, you map (x',y') to a pixel (x'',y''). You can look this up also.
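The two mappings just described, perspective projection and then pixel mapping, can be sketched as follows. This is a minimal pinhole-camera illustration in Python; the focal length, screen size, and scale factor are made-up assumptions, and the eye sits at the origin looking down +z.

```python
# Illustrative sketch of the two-stage mapping:
#   (x, y, z) in 3D  ->  (x', y') on the image plane  ->  pixel (x'', y'')

FOCAL = 1.0               # distance from the eye to the image plane (assumed)
WIDTH, HEIGHT = 640, 480  # screen resolution in pixels (assumed)
SCALE = 200               # pixels per world unit on the image plane (assumed)

def project(x, y, z):
    """Perspective divide: farther points land closer to the centre."""
    xp = FOCAL * x / z
    yp = FOCAL * y / z
    return xp, yp

def to_pixel(xp, yp):
    """Map image-plane coordinates to pixel coordinates (origin at top-left)."""
    xpp = int(WIDTH / 2 + xp * SCALE)
    ypp = int(HEIGHT / 2 - yp * SCALE)  # pixel y grows downward
    return xpp, ypp

# A point straight ahead of the camera projects to the screen centre:
assert to_pixel(*project(0.0, 0.0, 5.0)) == (320, 240)
```

Real pipelines use matrix forms of these same operations, but the division by z is the whole "perspective" idea.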

The units are up to you. A distance of one unit in my 3D world space might represent any number of cm I want. What I select depends on the application.

As far as simulations or animation in general, one is moving the scene, or the camera, or both. The computational challenge in real-time 3D animation is that one must normally recalculate the transformation of 3D objects onto the computer screen many times per second in order to achieve realistic-looking animation.

I am just giving you some main points. This is not a hard subject provided you find the right book. The math is basically high school level, with vectors and matrices playing a central role in coordinate transformations.

Years ago I learned by reading Ian Angell's book "Computer Graphics in C." It's a bit outdated now in terms of how he uses the C language, but perhaps you can find another suitable book. There are some books on game programming which may help. You can also find websites which explain 3D computer graphics from a game programming point of view. IMO this is a good place to start.

Raj Harsh
russ_watters said:
Part of the problem in this thread is that your posts appear to contain a lot of irrelevant information, such as all of the above.
It may seem irrelevant to you, but it is not. That was just to show that anything can be narrowed down to a simple idea. You keep saying it's just numbers, but where are those numbers put to use ? The numbers you are referring to are the co-ordinate points, the locations of the points that make up the object we see on screen, but to give directions, you need some sort of spatial reference. You have a telescope in your photo. Planets have co-ordinate points, do they not ? If I am not mistaken, they are called celestial co-ordinates. Regardless of what they are called, you do require a reference for them.

russ_watters said:
This and other statements make it sound like you think there is a physical universe inside the computer. There isn't. It's just a big list of numbers.
I do not but it definitely seems like it. If it is in fact only about numbers, then we would not have GPUs today. A GPU is not just used to display the images.

I think it is best to close this case. My question is not making sense to any of you. I am not being disrespectful by saying that; it is only a remark on my own failure. Thank you all for your contribution. People here are nice; they take time out to help others, which is very gracious of them.

Mark44 said:
This is completely untrue. A computer is much more than just electricity -- it has memory, a central processing unit, input devices like keyboards, mice, and joysticks, output devices like monitors and printers, and non-volatile storage.
All of which work on electricity. And they are made up of electrons, and electrons are charged particles - so basically, electricity. Even we run on electricity; once it sparks out, that usually marks the end of life. This was just a way for me to show how anything can be narrowed down to one simple thing.
Mark44 said:
There is no surface -- the program has set up memory with a coordinate system that models a surface or a space. Moving the joystick or mouse causes the program to adjust the position of the character to some other place.
Thank you ! Now we are getting somewhere. The co-ordinates are stored in memory in relation to one another, which is then used to form surfaces, which are then displayed on the screen. Just to trouble you a little bit more: is the creation of a surface predicated upon the existence of a display unit, or is some information generated separately ?

To make it clear, what I mean to ask is: are the instructions to create the pixels, based on the properties of the shader material, lighting etc., given to the computer when the model is loaded up (meaning that it is built into the model's instruction set), or is it dictated by whether the model will be rendered or not (essentially meaning that only the locations of the points are stored for the model and the rest depends on other factors - which would also mean that a possible scenario is: say points a, b, c, d are meant to form a square, but they instead connect diagonally and form a surface that way) ? This is what I have been asking for so long.

By 3D Space in computers, I meant that the points for a cube know how to connect and create planes to form that cube. So, thinking of them as pixels, it does not need to be recreated each time. The model itself contains the relative information of the points, and their relation is so well defined (the question itself is becoming clearer over time) that the pixels are instantly manipulated if you, say, rotate it. And this is why I gave the example of the wooden cube. Let us say we are given an armature, a basic structure. We can add in blocks anywhere we want, as we please, which means that the cube may or may not be created (I will reiterate the diagonal-connection example). But if we know that four points need to form a plane surface, we can do that. Such an instruction can be given, but what I am having a hard time with is the co-ordinate points themselves.
Is it that the co-ordinate points are essentially pixel information which exists in relation to the transformation of the cube with respect to the angle from which the cube is being rendered ?

Aufbauwerk 2045 said:
Consider 3D space, where each point is defined as having coordinates (x,y,z). To display this on a 2D surface, use the perspective transformation which maps (x,y,z) in 3D space onto a point (x',y') in 2D space. You can look up perspective transformation. It's simple mathematically. Think of looking at a real world scene outside your window, and trying to draw that scene on your window, with your eye at a fixed point. Or think of a camera, where the 3D reality is projected onto the 2D film surface.
If I draw something from an angle, say a cuboid which stretches from x'y to xy, it will be a flat image. A picture taken with a camera, or a sketch of what I see from the window, is also flat. Once done, I cannot manipulate the data; no further changes can be made. Co-ordinates in computers are like me telling you where to plot the points on a numbered graph and instructing you how to connect those points; I know this much. But what I do not understand, or need confirmation on (I had a thought, which is mentioned in my previous post), is how the spacing between the points is defined, and how the image of the model is instantly rendered on screen if I rotate it or change the camera angle.

Raj Harsh said:
If I draw something from an angle, say a cuboid which stretches from x'y to xy, it will be a flat image. A picture taken with a camera, or a sketch of what I see from the window, is also flat. Once done, I cannot manipulate the data; no further changes can be made. Co-ordinates in computers are like me telling you where to plot the points on a numbered graph and instructing you how to connect those points; I know this much. But what I do not understand, or need confirmation on (I had a thought, which is mentioned in my previous post), is how the spacing between the points is defined, and how the image of the model is instantly rendered on screen if I rotate it or change the camera angle.

Taking the second part of your question first: the image of the model appears to be rendered instantly (hopefully, at least) because the calculations are done very quickly. Behind the scenes, the program is recalculating the new projection of the 3D coordinates of the object onto the 2D pixel coordinates. Then the new 2D pixel coordinates are used to draw the next frame. So if, for example, we have a stationary camera and the object is a rotating cube, then the 3D coordinates of the cube's vertices are changing, and those new vertices need to be transformed into pixel coordinates.

What happens during the actual drawing is that we have triangles (think of two triangles per cube face) which are drawn and filled in with the appropriate color or texture. Each triangle is defined by three vertices. We use triangles because the GPU is good at drawing triangles very quickly. For more details, I really suggest finding a good book on real-time 3D graphics. You may also be interested in a book on physics for game programmers - not that you are necessarily a game programmer; I just mention it because that is a good place to find this information.

As for spacing between points: if you mean mapping real-world spacing into a 3D coordinate system to begin with, that's somewhat arbitrary. I could represent a real-world object using many different coordinates, depending on my choice.

That's about all I can say on this topic. I need to get back to work now! Best wishes. :)

Raj Harsh said:
The computer is nothing if you come to think of it, it's simply electricity.
Mark44 said:
This is completely untrue. A computer is much more than just electricity -- it has memory, a central processing unit, input devices like keyboards, mice, and joysticks, output devices like monitors and printers, and non-volatile storage.

Raj Harsh said:
All of which work on electricity. And they are made up of electrons.
True, computer components operate on electricity, but that's very different from saying that a computer is "simply electricity."
Raj Harsh said:
And electrons are charged particles. So basically, electricity. Even we work on electricity, once it sparks out, that usually marks the end of life. This was just a way for me to show how anything can be narrowed down to one simple thing.
There's a saying that I believe is due to Einstein: "Make things as simple as possible, but no simpler." By saying that a computer is "simply electricity" you have vastly oversimplified things.

Raj Harsh said:
The co-ordinates are stored in memory in relation to one another which is then used to form surfaces, which is then displayed on the screen. Just to trouble you a little bit more, Is the creation of surface predicated upon the existence of a display unit or is some information generated separately ?
Just to be clear, a surface is not actually created, but rather, memory is modified so that when it is displayed on a monitor, the resulting image appears to be a surface. Of course, if you want to see the image, you need a monitor.

Raj Harsh said:
And to make it clear, what I mean to ask is - are the instructions to create the pixels
The pixels aren't created -- they are built into the monitor. What the program does is turn on the appropriate red, green, and blue pixels to form the image. Part of the computer's memory (video memory) is used to hold a bit pattern that will be displayed on the monitor. That's how it works for most graphics, although there are some computers that use what is called vector graphics rather than a bitmap.
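The "bit pattern in video memory" idea can be sketched like this - an illustrative Python toy (a tiny framebuffer as a 2D array of RGB triples), not real driver code:

```python
# Illustrative sketch: video memory as a 2D array of RGB values that display
# hardware would scan out to the monitor, row by row.

WIDTH, HEIGHT = 8, 4
BLACK, WHITE = (0, 0, 0), (255, 255, 255)

# The "framebuffer": one RGB triple per pixel, all initially black.
framebuffer = [[BLACK for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, color):
    """'Turning on' a pixel is nothing more than writing to memory."""
    framebuffer[y][x] = color

# "Draw" a horizontal white line; the monitor would simply show this pattern.
for x in range(WIDTH):
    set_pixel(x, 2, WHITE)

assert framebuffer[2][0] == WHITE and framebuffer[1][0] == BLACK
```

Everything the renderer does ultimately reduces to writes like these into a region of memory the display hardware reads.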
Raj Harsh said:
based on the properties of the shader material, lighting etc. given to the computer when the model is loaded up (meaning that it is built into the model's instruction set) or is it dictated by whether the model will be rendered or not (essentially meaning that it is only the location of the points which is stored for the model and the rest is codependent of other factors, which would also mean that a possible scenario is - say points a,b,c,d are meant to form a square but they instead connect diagonally and form a surface that way) ? This is what I have been asking for so long. By 3D Space in computers, I meant that the points for a cube know how to connect and create planes to form that cube.
I'm not sure I understand what you're trying to say here. If you intend for points a, b, c, and d to form a square, the program has to "know" to do this. The points for a cube don't "know" anything, especially how to connect or that they are part of some geometric shape. The program has to do all that.

At its simplest, the 3D coordinate system for an image has its origin at the upper left corner of the screen, with the x-axis extending to the right, the positive y-axis extending down from the origin, and the z-axis extending back into the screen. For a simple object like a cube that we're viewing head-on, a program can determine that the front face should be shown but the rear face should not, based solely on the z-coordinates.
Raj Harsh said:
So thinking of them as pixels, it does not need to be recreated each time. The model itself contains the relative information of the points and their relation is so well defined (the question itself is becoming more clear over time) that the pixels are instantly manipulated if you say, rotate it.
The rotation may seem instantaneous, but there are thousands of machine instructions that have to be executed to rotate the points of even a small object. My understanding is that a matrix transformation has to be applied to each point to calculate its new location.
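The per-point matrix transformation mentioned here can be sketched in plain Python, rotating a square's vertices about the z-axis; the shape and angle are made up for illustration.

```python
# Illustrative sketch: applying the same rotation matrix to every vertex.

import math

def rotate_z(point, angle):
    """Apply a rotation-about-z matrix to one (x, y, z) point."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

square = [(1, 0, 0), (0, 1, 0), (-1, 0, 0), (0, -1, 0)]

# Every vertex gets the same matrix applied -- a few multiply-adds per point,
# repeated thousands of times per frame for a detailed model.
rotated = [rotate_z(p, math.pi / 2) for p in square]

# A quarter turn sends (1, 0, 0) to (0, 1, 0), up to floating-point error.
assert abs(rotated[0][0]) < 1e-9 and abs(rotated[0][1] - 1) < 1e-9
```

This is the "thousands of machine instructions" in miniature: simple arithmetic, done very fast, over every point.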
Raj Harsh said:
And this is why I gave the example of the wooden cube. Let us say we are given an armature, a basic structure. We can add in blocks anywhere we want as we please, which means that the cube may or may not be created (I will reiterate the diagonal connection example). But if we know that four points need to form a plane surface, we can do that. Such an instruction can be given but what I am having a hard time with is regarding those co-ordinate points themselves. Is it that the co-ordinate points are essentially pixel information which exist in relation to the transformation of the cube with the respect to the angle from which the cube is being rendered ?
This doesn't make a lot of sense to me. On the one hand, there are the points that form the framework of the object, and these have nothing to do with pixels. There are other concepts, such as pixel shaders, meshes, textures, and others, that are used to transform the framework of an object into what you see on a monitor (the pixels).

Take a look at this wiki article: https://en.wikipedia.org/wiki/Computer_graphics
