# Computer Graphics & More

Hello,

I am having trouble comprehending how grids are made and defined in computers. What is the unit they use, and how is it defined? I know that software uses standardized units of measure such as the centimetre. Basically, how is a three-dimensional space created in computers when they are actually restricted to two dimensions (I am referring to the pixels which ultimately form the image)? Even the 3D image we see is a 2D representation.

This is also the reason I am having a hard time understanding and grasping real-world physics simulations. How do computer-based physics simulations work? And are the scales even relevant? What I mean is that you could simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well; if the units affect performance, then they are somewhat relevant, but otherwise not at all.

anorlunda
Staff Emeritus
Wow. Those are very broad questions. It would take a whole textbook and a semester's study to answer all that.

What I mean is that you could simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well; if the units affect performance, then they are somewhat relevant, but otherwise not at all.
Consider simulating a violin string (one dimension makes it easier). I might divide the string into N equal sub-lengths, then simulate each of them as a separate sub-problem. That is analogous to a grid in 2D or 3D. The next question is: how big must N be? I can run experiments. Try N=1000, then N=100. If the results change significantly, then I need 100 < N < 1000. If there is almost no change in the results, then I try N=10. I am searching for a value of N as small as I can make it (to use less computer resources) that does not significantly change the answers. Once I find that N, I might study many cases of plucking that string, all using the same value of N.
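The refinement experiment described above can be sketched in a few lines of Python. The "model" here is deliberately a stand-in (a grid-based estimate of a known integral rather than an actual string simulation), but the convergence test is the same idea:

```python
import math

def simulate(n):
    # Stand-in for the string model: a quantity computed on a grid of n
    # cells -- here, a midpoint-rule estimate of the integral of
    # sin(pi*x) over [0, 1], whose exact value is 2/pi.
    h = 1.0 / n
    return sum(math.sin(math.pi * (i + 0.5) * h) * h for i in range(n))

# Refine the grid until the answer stops changing significantly.
n, prev = 10, simulate(10)
while True:
    n *= 2
    cur = simulate(n)
    if abs(cur - prev) < 1e-6:
        break
    prev = cur

print(n, cur)  # grid size at convergence, and the converged value
```

In a real simulation `simulate` would be expensive, which is exactly why you want the coarsest grid that still gives stable answers.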

jedishrfu
Mentor
To understand how 3D can be displayed by 2D technology, recall that in art you can use perspective to trick the brain into seeing a 3D image.

https://en.m.wikipedia.org/wiki/Perspective_(graphical)

In computer graphics, we relate pixels to measurements. Consider how some computer fonts are created via tiny dots on a grid; each dot is a pixel.

russ_watters
Mentor
I am having trouble comprehending how grids are made and defined in computers.
Have you taken a high school level geometry (math) class? Are you familiar with how functions are graphed/data is plotted on a graph? Visual representation of data is just x,y,z coordinates on a defined coordinate system. There are a number of techniques for actually drawing objects, but typically they are a collection of equations describing shapes.
What is the unit they use, and how is it defined? I know that software uses standardized units of measure such as the centimetre.
There is no need for units, and while I'm not a software engineer, I would think units are applied after the fact if necessary. All you need to make the pictures are the x, y, z coordinates.
Basically, how is a three-dimensional space created in computers when they are actually restricted to two dimensions (I am referring to the pixels which ultimately form the image)? Even the 3D image we see is a 2D representation.
Computers don't have dimensions at all. They are just devices that organize and manipulate data. Want a 12-dimensional space? You can just create it with the data: instead of x, y, z coordinates, make x, y, z, a, b, c, d, e, etc. We live in a three-dimensional world though, so that's how things are drawn... though when you add colors, it's like having 12 dimensions (x, y, z for each of l, r, g, b).

Making a 2D projection of a 3D scene in order to display it on the screen, though, is simple geometry mimicking what our eyes do. It's about finding the angles between objects and plotting them in 2D as distances.
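That projection step can be sketched with a hypothetical pinhole-camera function (the name `project` and the focal-length parameter are illustrative, not from any particular graphics library):

```python
def project(x, y, z, f=1.0):
    # Pinhole-camera projection: by similar triangles, a 3D point in
    # camera space lands at (f*x/z, f*y/z) on the 2D image plane.
    if z <= 0:
        raise ValueError("point is behind the camera")
    return f * x / z, f * y / z

# Two points at the same (x, y) but different depths: the farther one
# lands closer to the centre of the image, which is what 'perspective' is.
print(project(1.0, 1.0, 2.0))  # (0.5, 0.5)
print(project(1.0, 1.0, 4.0))  # (0.25, 0.25)
```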
This is also the reason I am having a hard time understanding and grasping real-world physics simulations. How do computer-based physics simulations work? And are the scales even relevant? What I mean is that you could simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well; if the units affect performance, then they are somewhat relevant, but otherwise not at all.
Changing the units does not save computing power; I'm not sure what you are getting at there. 25.4 mm and 1.00 inches are the same amount of data. Scientifically you have three significant figures, though the data is probably actually stored as 16- or 32-bit numbers.
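That point is easy to check in Python with the standard `struct` module: a floating-point value occupies a fixed number of bytes regardless of which unit we have in mind for it:

```python
import struct

# The same physical length expressed in two units: each value is just a
# double and occupies the same 8 bytes of memory.
as_mm = struct.pack("d", 25.4)     # interpreted by us as millimetres
as_inches = struct.pack("d", 1.0)  # interpreted by us as inches
print(len(as_mm), len(as_inches))  # 8 8
```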

jedishrfu
Mentor
Here's an article on computer graphics showing how differently colored pixels can be used to do shading and create 3D-like imagery, among other things. It also has some history of computer graphics, which is worth reading.

https://en.m.wikipedia.org/wiki/Computer_graphics

Janus
Staff Emeritus
Gold Member
Hello,

I am having trouble comprehending how grids are made and defined in computers. What is the unit they use, and how is it defined? I know that software uses standardized units of measure such as the centimetre. Basically, how is a three-dimensional space created in computers when they are actually restricted to two dimensions (I am referring to the pixels which ultimately form the image)? Even the 3D image we see is a 2D representation.
With one type of computer graphics modelling, called ray tracing, the computer creates a virtual 3D space. Here is a screen capture from a modeller called Moray.
It shows four views of the space: front, top, side, and camera. In the space are the objects (in this case a cylinder, a sphere, and two boxes), the position and aim of the camera, and a light source. The camera view is created from this information as a wire-frame image.

To create a graphics image from this, the ray-tracing program calculates the path of rays leaving the light source, striking a surface, and bouncing to the camera. The properties of the surface where a ray strikes an object determine what color the camera "sees" from that point in its field of view, and thus what color the pixel at that part of the image will be.
Here's a simple image created from the above model.

Note that it gives simple shading and shadows for the objects, all by calculating the paths of imaginary light rays. Here the object surfaces were also given color.

You can add more complexity with surface roughness, highlights, reflection, transparency, refraction and other effects.
Here is the same scene with some of these additional effects added.

Again, this is done by calculating the paths of rays from light source to camera, working out how they interact with objects in the scene along the way, and determining the color of each pixel from the assigned properties of the objects in the scene.

Ibix
2020 Award
How do computer-based physics simulations work? And are the scales even relevant?
Here's a simple simulation of a planet orbiting a star.

First you define the star to be at rest at the origin and you decide on its mass, M, and store this in a variable.

Then you decide on the initial x, y, and z coordinates of your planet and the initial x, y, and z velocities of your planet and store all six numbers in variables.

Then you begin a loop. Each time around the loop you take the values you have and overwrite them with the values for a small time ##\delta t## later. So the x position changes to ##x+v_x\delta t## and similarly for the y and z values. And the velocities change due to the gravitational acceleration. So ##v_x## changes to ##v_x-GMx\delta t/r^3## (note that this is the component of Newtonian gravitational acceleration in the x direction, ##GM/r^2## multiplied by ##x/r##) - again, there are similar expressions for the y and z velocities. Then you just go round this loop, updating the positions and velocities each time.

So what you've done is generate the positions of the planet over time. What do you do with them? It depends what you want. If all you want to know is what shape the orbit is, you might just add a line inside the loop instructing the computer to save the position of the planet to a file. Then you could load it in Excel and plot a graph. Alternatively, you might feed the locations live to a program similar to the one Janus showed, tell it to draw a planet centred at those coordinates, and get a pretty animation for a film or a game.

Note that units, in some senses, don't matter as long as you use a consistent set (all SI, all geometric, whatever). However, you do have to be aware of your computer's limits. You won't easily be able to store a planet's position to the nearest millimetre because millions of kilometres to the millimetre is a number in the millions of millions, and the rounding errors the computer makes will be at or above this scale. Use sensible precision. And you need to pick a sensible value for ##\delta t## - see anorlunda's discussion on picking N for a violin string model.

You can add a lot of bells and whistles to the sketch I've given. More sophisticated programming techniques like arrays and object orientation make managing the information easier. And you could add more planets, or account for the gravitational effect of the planet on the star.

I should also note that the algorithm I described is somewhat naive, and you'll find that planets are quite likely to suddenly escape their solar system as rounding errors accumulate. You can do a lot better, but it gets harder to describe easily.
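For concreteness, here is a minimal Python sketch of the loop described above. The star's GM, the time step, and the initial conditions are assumed values, chosen so that a circular orbit has a period of one time unit; none of them come from the original post:

```python
import math

# Star at the origin; units chosen so GM = 4*pi^2 (think AU and years).
GM = 4 * math.pi ** 2
x, y, z = 1.0, 0.0, 0.0              # initial position: 1 unit from the star
vx, vy, vz = 0.0, 2 * math.pi, 0.0   # speed for a circular orbit
dt = 0.0001                          # the small time step, delta t

for _ in range(10000):               # 10000 * dt = one orbital period
    # update positions from the current velocities
    x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
    # update velocities from the acceleration -GM*x/r^3 (and y, z likewise)
    r = math.sqrt(x * x + y * y + z * z)
    vx -= GM * x * dt / r ** 3
    vy -= GM * y * dt / r ** 3
    vz -= GM * z * dt / r ** 3

# After one period the planet should be back near (1, 0, 0), apart from
# the accumulating integration error mentioned above.
print(x, y, z)
```

As written, computing the acceleration from the freshly updated position makes this the "semi-implicit Euler" variant, which drifts less than the fully naive version; it is still only a sketch.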

Hope that helps.

Raj Harsh
Thank you all for the answers, they have helped me learn a lot more.

Have you taken a high school level geometry (math) class? Are you familiar with how functions are graphed/data is plotted on a graph? Visual representation of data is just x,y,z coordinates on a defined coordinate system. There are a number of techniques for actually drawing objects, but typically they are a collection of equations describing shapes.
Yes, I have, and I am aware of how to represent data on a graph, but that is exactly what baffles me. We do it on a simple piece of paper, where you can move up and down, left and right; our movement is restrained. And after some time we stopped plotting graphs and our focus shifted to simply solving equations, even during calculus. While I am not an excellent student, that concept was clear to me. But in computers you actually do create a 3D object which you can manipulate in real time. That would imply that there exists a 3D space of some sort. A simple cube on paper and in a computer is the same, as long as it is still. Moving along the z-axis can be considered the object being scaled bigger or smaller, but where it really comes into perspective is when it is rotated. The rotation is partly the screen updating the pixels based on the shading information provided by the computer, but what I find mind-boggling is that all the points stay well connected and do not break. Instead of asking about 3D space in computers, I should be asking about the cameras.

A big thank you to all the participants, and to Janus sir, but I think my question is actually about how cameras are created and how they work in computers, and about the arrays which store the point-cloud information and how those points are connected.

russ_watters
Mentor
Yes, I have, and I am aware of how to represent data on a graph, but that is exactly what baffles me.
I don't see why. There is no meaningful difference.
But in computers you actually do create a 3D object which you can manipulate in real time. That would imply that there exists a 3D space of some sort.
Sure. There is a set of allowable values of x,y,z coordinates, just like a piece of graph paper has a certain number of blocks.
A simple cube on paper and in a computer is the same, as long as it is still.
All you need to make the cube move on paper is a second piece of paper.
Moving along the z-axis can be considered the object being scaled bigger or smaller, but where it really comes into perspective is when it is rotated. The rotation is partly the screen updating the pixels based on the shading information provided by the computer, but what I find mind-boggling is that all the points stay well connected and do not break.
I don't understand what you are saying there.

A big thank you to all the participants, and to Janus sir, but I think my question is actually about how cameras are created and how they work in computers, and about the arrays which store the point-cloud information and how those points are connected.
You mean virtual cameras that enable displaying the picture? As I and others said, it's just angles and perspective, just like in art. As the artist or programmer, you pick a location and orientation for the camera and a field-of-view angle, and plot what you see!

But in computers you actually do create a 3D object which you can manipulate in real time. That would imply that there exists a 3D space of some sort.
No. What is there is only a data set, a rule set, and an 'engine' that processes them and creates a kind of representation of the objects. The representation is not 'some sort' of existence in any conventional manner. The whole thing is more detailed than, but not entirely different from, a few words on a piece of paper about the placement of the furniture inside a room, written to inform the staff of the moving company about the requirements.
Nobody would take those few words as 3D space, right?

.Scott
Homework Helper
Yes, I have, and I am aware of how to represent data on a graph, but that is exactly what baffles me. We do it on a simple piece of paper, where you can move up and down, left and right; our movement is restrained. And after some time we stopped plotting graphs and our focus shifted to simply solving equations, even during calculus.
For computers, that is the key. They work with the equations and then output a picture.
Consider the following pseudocode as an example:
Code:
note: units are seconds, meters, meters/sec, meters/sec/sec
ballA.type = "sphere"
ballA.color = "red"
ballA.radius = 10
ballA.x,y,z = -30, 20, 100
ballA.vx,vy,vz = 0, 0, 0
ballB.type = "sphere"
ballB.color = "blue"
ballB.radius = 10
ballB.x,y,z = -25, 20, 10
ballB.vx,vy,vz = 0, 0, 0

t = 0
g = 10

camera.x,y,z = 0, -10, 20
camera.dirx,diry,dirz = -1, 0, 0
camera.zoom = 0.5
So I have now defined two balls, both stationary, both 10 meters in radius.
If we presume that the ground is at z = 0, then ballB is sitting on the ground.

"t" might represent the current time (starting at zero), and "g" might represent our gravity.
So with calculations, we could start to change the positions and velocities of the balls as time progresses.
ballB will not move right away, because it is sitting on the ground. But ballA will fall towards the floor and strike ballB.

Let's say we did this 1/30 of a second at a time (t = 0, 0.033, 0.067, 0.100, ...). We could compute the position of each ball at each point in time.

Then we could use the colors of the balls and the camera information to generate 2D images for each of these frames.

But notice that we model the motion numerically before rendering it to the display. We do not use the 2D display to compute anything.

That would imply that there exists a 3D Space of some sort.
As the example above shows, the computer does not hold a 3D space in the sense you are thinking. Instead, it holds a numerical description of a 3D space.

The rotation is partly the screen updating the pixels based on the shading information provided by the computer, but what I find mind-boggling is that all the points stay well connected and do not break. Instead of asking about 3D space in computers, I should be asking about the cameras.
I think my question is actually about how cameras are created and how they work in computers, and about the arrays which store the point-cloud information and how those points are connected.
In the example above, there does not have to be a "point cloud". Instead, there is simply an array of pixels. Given the camera location (the camera's focal point in 3D space) and a description of its "retina", you can compute the 3D location of any point (pixel) on that retina. Then, the 3D location of that point and the focal point define a line through the 3D space. That line may intersect either of the two balls. If it does not, assign a background color to the pixel. If it does, determine which surface it intersects first, what its color is, and at what angle the light is striking it (I guess I need a lamp object in my code). This method is called ray tracing, although this is a primitive form of it. It is one method of generating a 2D image of a 3D model.
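The per-pixel test described above reduces to a ray–sphere intersection. Here is a minimal sketch in Python (function names are illustrative, not from any library):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_hits_sphere(origin, direction, centre, radius):
    # Distance along a ray (unit direction vector) to the nearest
    # intersection with a sphere, or None if the ray misses -- the
    # per-pixel test described above.  Solves the quadratic
    # |origin + t*direction - centre|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # miss: pixel gets background colour
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None          # intersection behind camera: miss

# Ray from the origin straight down +z towards a sphere at z=10, radius 2:
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 10), 2))  # 8.0
```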

In response to the last two posts from @russ_watters and @Rive, I beg to differ. Computers make use of point-cloud information. Each plane has a certain number of vertices, and two adjoining surfaces or planes will have overlapping points. The point-cloud information could be stored either in a linear fashion, where the progression is from one end to the other, or by using common points (noticing the same coordinates and simply removing the extra points to avoid overlapping).

And maybe you did not understand my example of the cube on paper. When you draw a cube on paper from one perspective, it is fixed and cannot be changed. You can indeed take another paper and draw it from a different perspective, and that is similar to the frames in a computer, but the difference is that the cube in the computer is one object. So regardless of whether you change the frame or the perspective, you will still have a cube with all of its sides; the same cannot be said for the cube on the paper. The 3D cube in the computer does not need to be recreated every time we change the perspective; only the screen needs to be refreshed to display the new information based on the transformation parameters of the cube. But the transformation only happens because you actually have a 3D object. Consider this: in 2D animation, each frame needs to be drawn (or a few, and then they use interpolation), but you do need to draw again and again from each and every perspective. If you have a 3D model, you do not have to do that; once you make it, you can watch it from any angle you want.

Now consider this. You draw a cube on a piece of paper and make one out of wood. To see the wooden cube from a different angle, you would have to rotate it, but to see the one on the paper from a different angle, you would have to redraw it. You can simply move around the wooden cube yourself (like a camera in a computer) to look at it from other angles, but the one on the paper will always remain the same, even if you draw it with shading to create three-dimensional depth. I hope this makes my point clearer.

And now about the grid. What I meant is that we get the option to snap something to the grid; take SolidWorks, Maya, or AutoCAD for example. But how does it do that? What defines the spacing? This is why I asked whether the units are relevant, what they are, and how they are defined in computers. Let us say my workspace is set to metres; everything is on a scale of metres, but I can go down to the scale of a nanometre or a yoctometre, maybe to add something. How would the computer differentiate between those scales? What are the parameters that help computers construct things proportionate to each other with respect to units and scales? If I have an object 10 metres long and I need to add something very small, I switch to millimetres and choose snap to grid; what is it going to do? Will the computer actually bring those grid lines so close together? If you think about it, it should be able to get smaller to a vast extent, but that does not happen. There is a limit, and everything is so well defined that you have to work within the parameters set by the developer. Many programs start clipping, because you need to change the project's unit scale and sometimes the attributes of the camera too.

@.Scott Thank you so very much! That is very helpful! Would you please look at my new post? I have updated my question to make it clearer. I was never asking about the rendered images on a screen, but about the 3D models or objects themselves; I wanted to know how they are created. I was aware that the values are stored numerically in the form of vectors or matrices, and that is not what I was asking about. What I meant to ask was: there are values assigned to each point or vertex, and a surface is created between them. That is fine. But it works with a z-axis, and you cannot do the same on a piece of paper without intersections and overlaps. If I give you balls, you can suspend them in a room from the ceiling, hold them up with supports from the walls, and then add a surface between each set of four points with cardboard; you would be able to make an actual object. On paper, that is very hard, not even possible.

Or, I may be thinking and reading too much into this.

russ_watters
Mentor
In response to the last two posts from @russ_watters and @Rive, I beg to differ....
...maybe you did not understand my example of the cube on paper. When you draw a cube on paper from one perspective, it is fixed and cannot be changed. You can indeed take another paper and draw it from a different perspective, and that is similar to the frames in a computer, but the difference is that the cube in the computer is one object.
You're misunderstanding the analogy. The computer is your brain and the paper is the monitor. When drawn on paper, the "virtual" cube exists in the mind of the artist and the 2d representation drawn on paper is like the computer screen....or more directly, like a paper print-out. Same thing.
So regardless of whether you change the frame or the perspective, you will still have a cube with all of its sides; the same cannot be said for the cube on the paper.
To repeat for emphasis: the "virtual" cube is not on the paper or screen, whether printed out or drawn by a human or computer. The "virtual" cube is in the CPU and the artist's brain.
The 3D cube in the computer does not need to be recreated every time we change the perspective; only the screen needs to be refreshed to display the new information based on the transformation parameters of the cube. But the transformation only happens because you actually have a 3D object. Consider this: in 2D animation, each frame needs to be drawn (or a few, and then they use interpolation), but you do need to draw again and again from each and every perspective. If you have a 3D model, you do not have to do that; once you make it, you can watch it from any angle you want.
Again, for emphasis: there is no actual difference in the logic of the two processes. You make the hand-drawn animation sound harder, but it strikes me that you don't realize just how difficult the 3D rendering output of the model is: it is the vast majority of the workload the computer has to do.
Now consider this. You draw a cube on a piece of paper and make one out of wood. To see the wooden cube from a different angle, you would have to rotate it, but to see the one on the paper from a different angle, you would have to redraw it. You can simply move around the wooden cube yourself (like a camera in a computer) to look at it from other angles, but the one on the paper will always remain the same, even if you draw it with shading to create three-dimensional depth. I hope this makes my point clearer.
What I would actually like to know is what your point is here. You started by asking questions about how computers work and now you are telling us how they work. It seems you have a larger agenda in mind. Perhaps some philosophical/existential belief about reality?
And now about the grid. What I meant is that we get the option to snap something to the grid; take SolidWorks, Maya, or AutoCAD for example. But how does it do that? What defines the spacing?
Typically it is just a pre-chosen number size (such as 16-bit) or extent (1,000 or 1,000,000 units).
And this is why I asked whether the units are relevant, what they are, and how they are defined in computers. Let us say my workspace is set to metres; everything is on a scale of metres, but I can go down to the scale of a nanometre or a yoctometre, maybe to add something. How would the computer differentiate between those scales?
There is no need to differentiate unless the user feels the need to. The computer doesn't care. Think about a blank piece of graph paper. Do you need to label the axes with units in order to draw on it?
What are the parameters that help computers construct things proportionate to each other with respect to units and scales? If I have an object 10 metres long and I need to add something very small, I switch to millimetres and choose snap to grid; what is it going to do? Will the computer actually bring those grid lines so close together? If you think about it, it should be able to get smaller to a vast extent, but that does not happen. There is a limit, and everything is so well defined that you have to work within the parameters set by the developer. Many programs start clipping, because you need to change the project's unit scale and sometimes the attributes of the camera too.
It seems to me that you are making this way more complicated than it really is. A 10 m cube is 10x10x10; a 1 mm cube is 0.001x0.001x0.001. In CAD, you literally just type in the numbers. The computer doesn't care.

russ_watters
Mentor
I was never asking about the rendered images on a screen, but about the 3D models or objects themselves; I wanted to know how they are created. I was aware that the values are stored numerically in the form of vectors or matrices, and that is not what I was asking about. What I meant to ask was: there are values assigned to each point or vertex, and a surface is created between them. That is fine. But it works with a z-axis, and you cannot do the same on a piece of paper...

Or, I may be thinking and reading too much into this.
I think you are reading too much into this. The piece of paper with the 2D representation is not analogous to the 3D model. Printed out to show what the computer is actually "thinking", the 3D model is a list of numbers or equations.

You're misunderstanding the analogy. The computer is your brain and the paper is the monitor. When drawn on paper, the "virtual" cube exists in the mind of the artist and the 2d representation drawn on paper is like the computer screen....or more directly, like a paper print-out. Same thing.
Thank you. This helps.

What I would actually like to know is what your point is here. You started by asking questions about how computers work and now you are telling us how they work. It seems you have a larger agenda in mind. Perhaps some philosophical/existential belief about reality?
No, nothing like that. And I am not telling you how they work; I was only giving an example to clarify my question.

Mark44
Mentor
I am having trouble comprehending how grids are made and defined in computers.
There are no grids in a computer. A programmer can write a program that will display a two-dimensional grid on a screen, but the computer itself has no such grid.

What is the unit they use, and how is it defined?
There are no units. The contents of a computer's memory are just numbers.

I know that software uses standardized units of measure such as the centimetre.
No, there are no standardized units. Software can be written to indicate units of measure, but the computer deals only with numbers.

Basically, how is a three-dimensional space created in computers when they are actually restricted to two dimensions (I am referring to the pixels which ultimately form the image)? Even the 3D image we see is a 2D representation.
All there is in a computer is memory, which is laid out in a one-dimensional form. The computer's operating system maps part of this memory to pixels on the screen, using a variety of formats and resolutions. A computer program can display an image that appears to be three-dimensional, using perspective, lighting, and shading, the same as how an artist depicts a similar scene on a flat, two-dimensional piece of paper or canvas. As already mentioned, some software uses ray tracing to compute how each point in the image will be lit by an assumed light source.

Computers make use of point-cloud information ... The point-cloud information could be stored either in a linear fashion
Well, not really. There are two different things here: one is modelling, the other is visualisation. For modelling, it is about specific coordinates and dimensions: the centre and radius of a sphere, the points of a square or cube, the endpoints and width of a track; all coordinates and linear dimensions. Who would want to use a point cloud to define a triangle, when it is perfectly defined by just nine numbers? For (graphics) modelling, it is all about the coordinates (and some other properties) of objects.

For visualisation, it is about breaking all objects down into small graphics elements (usually triangles) which can be processed in a fast and uniform manner using just the coordinates of their corners (look up: tessellation, triangle mesh). So no real point cloud here either, only a big set of 3D coordinates and an advanced GPU that chews through them endlessly, transforming them into a 2D image according to the required viewpoint. (There are many other parts to this, like textures and more, but this is the starting point of what makes it '3D'.)

There are other ways to do this, usually with more advanced math and more calculations. But there is no real 'point cloud' here: no 3D space inside.
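As a concrete illustration of a triangle mesh rather than a point cloud, here is a hypothetical unit cube stored as eight shared vertices plus twelve triangles that reference them by index (the indexing scheme and winding order are arbitrary choices for this sketch):

```python
# Eight corners of a unit cube; vertex index = 4*x + 2*y + z.
vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Twelve triangles (two per face), each a triple of vertex indices.
# Shared corners are stored once and referenced, not duplicated.
triangles = [
    (0, 1, 3), (0, 3, 2),  # x = 0 face
    (4, 6, 7), (4, 7, 5),  # x = 1 face
    (0, 4, 5), (0, 5, 1),  # y = 0 face
    (2, 3, 7), (2, 7, 6),  # y = 1 face
    (0, 2, 6), (0, 6, 4),  # z = 0 face
    (1, 5, 7), (1, 7, 3),  # z = 1 face
]

print(len(vertices), len(triangles))  # 8 12
```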

There are no grids in a computer. A programmer can write a program that will display a two-dimensional grid on a screen, but the computer itself has no such grid.
You seem to have misunderstood my statement. The computer is nothing if you come to think of it; it's simply electricity. According to science, we function on electricity, and so does all the equipment we may use to test the speed of light; so no matter what, the speed of electricity should not be exceeded, as any information would only be conveyed and interpreted at the speed of electricity, which, even if only slightly, is less than and different from the speed of light, yet we still use a different value for it. And light is electromagnetic radiation; magnets can be made out of electricity, since a changing electric field can cause a magnetic field and moving charges cause magnetism, so on the surface it is all just electricity. My question and my point were not what you thought they were.

There are no units. The contents of a computer's memory are just numbers.
Everyone keeps misunderstanding my question and telling me the same thing over and over again: that there is nothing in computers. I am very well aware of that. My school taught me about binary very early on, with respect to computers as well. According to all of you, there is nothing in a computer. And that is true; everything needs to be defined. But that would also mean that physics simulations and such are bogus and mean nothing in computers. You basically instruct the computer to modify the transformation attributes of an object to make it fall towards a surface which you call the ground plane, and call it gravity. But since there are no metres or centimetres or anything like that, how do you know whether what the computer is doing is accurate or not? The software I have used has (so far) allowed me to select a unit of length. I have learnt modelling and digital sculpting myself, but I want to learn more from the scientific aspect and learn about the inner workings of it.

When I move the joystick on my controller, the game character moves forward. But what is it moving on? A surface. And where is it going? Forward? How can it go forward when there is no forward or backward, just numbers? That was the question, in a nutshell. The character moves as if there were an entirely new world inside the computer, just like ours, and it moves much as we can move. The question is much harder to ask than its answer will be; it is a struggle for me to explain exactly what I want to know.

How can it go forward when there is no forward or backward, just numbers?
Philosophy... Any 'meaning' is always a user-defined parameter for a computer; there is no way around that. The game engine modifies the object set according to the rule set belonging to a specified action, the 3D engine renders it into images, but it moves 'forward' only for you.

Ibix
When I move the joystick on my controller, the game character moves forward. But what is it moving on? A surface. And where is it going? Forward? How can it go forward when there is no forward or backward, just numbers?
All that happens is that the coordinates of your character are updated, subject to constraints like "the z coordinate must be equal to 1 plus the z coordinate of a specified plane". Since the camera is usually attached to your character, the screen needs to be redrawn from another point of view. But nothing's moving, any more than it is in a film. You're just generating a new drawing.
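The coordinate update described above can be sketched directly. In this Python fragment (all names and the heading convention are illustrative), "forward" is nothing more than the direction the code chooses to add to a pair of stored numbers each frame:

```python
import math

# A "character" is just numbers; "forward" is whatever direction
# we decide to add to those numbers each frame.
x, y = 0.0, 0.0             # position
heading = math.radians(90)  # facing "north", purely by our convention

def step_forward(x, y, heading, speed=1.0):
    """Return updated coordinates; nothing 'moves', numbers change."""
    return x + speed * math.cos(heading), y + speed * math.sin(heading)

x, y = step_forward(x, y, heading)
# The renderer then redraws the scene from the new camera position.
```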

Regarding units, there's nothing in the program that defines the units. It's true that your character is about twice as tall as a table. But there's nothing in the program that says you are 2m tall or 2km tall. You assume the former because you assume that the program is simulating a typical human in something like the real world - but that's your prejudice.

When you get to a full physics simulation, the units must be consistent, otherwise the answers won't work. But there are no units in the computer. When you write the maths you may make some assumptions (for example, G = 6.67×10⁻¹¹ assumes that we're working in SI units). But the computer assigns no significance to that. You can ask the user what units they want to use, and write code to convert to the units your other values assume if you like. But if you don't do it, and you enter r = 1000 while using G = 6.67×10⁻¹¹, then the answer will come out as if you entered the distance in metres, whatever you intend.

I hope that makes sense. The point is that the maths of a physics system will only work if you use consistent units. Because if you use inconsistent units your maths is not a good description of the real world. But interpreting the input and output as having physical meaning is up to you.
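The point about G can be made concrete with a short sketch (the function name is hypothetical; the constants are the standard SI values). The computer evaluates the same formula either way; only the call with r in metres describes the real Earth:

```python
G = 6.674e-11  # SI units: m^3 kg^-1 s^-2 -- a convention we impose

def grav_accel(mass_kg, r):
    """a = G * M / r^2; physically meaningful only if r is in metres."""
    return G * mass_kg / r**2

M = 5.972e24                       # Earth's mass in kg
a_metres = grav_accel(M, 6.371e6)  # Earth's radius in metres: ~9.8
a_km = grav_accel(M, 6371)         # same distance typed in kilometres
# The computer happily returns both numbers; only one matches reality.
```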

Raj Harsh
I think I have found a better way to present this question. There are coordinate points, which may act as vertices. But where do you put or draw those points? Setting the grid aside, people keep telling me there is nothing but numbers, yet those numbers act like coordinate points. If there is nothing else, where are those points plotted? If you do not have paper, you cannot draw the graph you are asked to draw. It is the paper I am asking about; that is what I want to know. Even if you just trace points in the air, you have an environment, a surrounding, to move about in. What is that surrounding and how is it created? That is my question, and if it is still not clear, this topic can be closed.
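One concrete answer to "where is the paper?": the only surface that physically exists is the framebuffer, a grid of pixel values in memory, and 3D coordinates reach it through a projection. A minimal sketch (a text-mode stand-in for a framebuffer; the pinhole projection and all numbers are illustrative):

```python
WIDTH, HEIGHT = 80, 24  # the "paper": a grid of memory cells
frame = [[' '] * WIDTH for _ in range(HEIGHT)]

def project(x, y, z, focal=20.0):
    """Pinhole projection: a 3D point becomes 2D pixel indices.
    Until this happens, the 'point' is only three stored numbers."""
    col = int(WIDTH / 2 + focal * x / z)
    row = int(HEIGHT / 2 - focal * y / z)
    return col, row

# Three 3D "vertices" exist only as numbers until projected:
for x, y, z in [(0, 0, 10), (2, 1, 10), (-2, 1, 5)]:
    c, r = project(x, y, z)
    if 0 <= c < WIDTH and 0 <= r < HEIGHT:
        frame[r][c] = '*'

print('\n'.join(''.join(row) for row in frame))
```

A real GPU does the same thing with matrices and millions of pixels, but the principle is identical: there is no 3D space inside the machine, just numbers fed through a projection onto a 2D grid of memory.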

Thank you. This is the answer I think I was looking for regarding units. Thank You Sir !

Ibix