Understanding Grids and Units in Computer Graphics and Physics Simulations

AI Thread Summary
Grids in computer graphics are defined by x, y, and z coordinates, allowing for the representation of 3D objects within a 2D pixel framework. The creation of 3D space involves using mathematical equations to describe shapes and their interactions, with techniques like ray tracing simulating light paths to generate images. Changing measurement units, such as from meters to centimeters, does not inherently save computing power, as the data remains consistent; however, precision and scale can impact performance. Understanding physics simulations in computers requires recognizing that consistent units are crucial for accurate calculations, particularly in dynamic systems like planetary orbits. Ultimately, the manipulation of 3D objects in real-time relies on the effective use of coordinate systems and algorithms to maintain visual coherence.
Raj Harsh
Hello,

I am having trouble comprehending how grids are made and defined in computers. What is the unit that they use and how is it defined? I know that software uses standardized units of measure, such as the centimetre. Basically, how is a 3-dimensional space created in computers when they are actually restricted to two dimensions (I am referring to the pixels which ultimately form the image)? Even the 3D image we see is a 2D representation.

This is also the reason I am having a hard time understanding and grasping real-world physics simulations. How do computer-based physics simulations work? And are the scales even relevant? What I mean is that you could simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well; that is, if the units affect performance, then they are somewhat relevant, but otherwise not at all.
 
Wow. Those are very broad questions. It would take a whole textbook and a semester's study to answer all that.

But maybe I can answer one part of your query.
Raj Harsh said:
What I mean is that you could simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well; that is, if the units affect performance, then they are somewhat relevant, but otherwise not at all.

Consider simulating a violin string (one dimension makes it easier). I might divide the string into N equal sub-lengths, then simulate each of them as a separate subproblem. That is analogous to a grid in 2D or 3D. The next question is how big N must be. I can run experiments: try N=1000, then N=100. If the results change significantly, then I need 100 < N < 1000. If there is almost no change in results, then I try N=10. I am searching for a value of N as small as I can make it (to use fewer computer resources) that does not significantly change the answers. Once I find that N, I might study many cases of plucking that string, all using the same value of N.
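Here is a minimal sketch of such a grid-refinement experiment in Python. The string length, wave speed, pluck shape, and run time are all assumed values invented for this illustration, and the finite-difference scheme is deliberately crude:
Code:
import numpy as np

def midpoint_after(N, T=0.05):
    """Split the string into N segments, simulate T seconds of motion
    with a basic finite-difference wave scheme, and return the final
    displacement of the midpoint."""
    L, c = 1.0, 100.0                    # string length (m), wave speed (m/s)
    dx = L / N
    steps = 2 * int(c * T / dx) + 1      # keeps c*dt/dx ~ 0.5 for stability
    dt = T / steps
    x = np.linspace(0.0, L, N + 1)
    u = 0.01 * np.where(x < 0.5, 2 * x, 2 * (1 - x))   # triangular "pluck"
    u_prev = u.copy()                    # zero initial velocity
    r2 = (c * dt / dx) ** 2
    for _ in range(steps):
        u_next = np.empty_like(u)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[0] = u_next[-1] = 0.0     # the ends of the string are fixed
        u_prev, u = u, u_next
    return u[N // 2]

# Refine the grid until the answer stops changing significantly.
for N in (10, 100, 1000):
    print(N, midpoint_after(N))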
 
To understand how 3D is displayed by 2D technology, recall that in art you can use perspective to trick the brain into seeing a 3D image.

https://en.m.wikipedia.org/wiki/Perspective_(graphical)

In computer graphics we relate pixels to measurements. Consider how some computer fonts are created via tiny dots on a grid; each dot is a pixel.
 
Raj Harsh said:
I am having trouble comprehending how grids are made and defined in computers.
Have you taken a high school level geometry (math) class? Are you familiar with how functions are graphed/data is plotted on a graph? Visual representation of data is just x,y,z coordinates on a defined coordinate system. There are a number of techniques for actually drawing objects, but typically they are a collection of equations describing shapes.
What is the unit that they use and how is it defined? I know that software uses standardized units of measure, such as the centimetre.
There is no need for units, and while I'm not a software engineer, I would think the units are applied after the fact if necessary. All you need to make the pictures are the x,y,z coordinates.
Basically, how is a 3-dimensional space created in computers when they are actually restricted to two dimensions (I am referring to the pixels which ultimately form the image)? Even the 3D image we see is a 2D representation.
Computers don't have dimensions at all. They are just devices that organize and manipulate data. Want a 12-dimensional space? You can just create it with the data: instead of x,y,z coordinates, make x,y,z,a,b,c,d,e... etc. We live in a 3-dimensional world though, so that's how things are drawn... though when you add colors, it's like having 12 dimensions (x, y, z for each of l, r, g, b).

Making a 2D projection of a 3D model in order to display it on the screen, though, is simple geometry that mimics what our eyes do. It's about finding the angles between objects and plotting them in 2D as distances.
This is also the reason I am having a hard time understanding and grasping real-world physics simulations. How do computer-based physics simulations work? And are the scales even relevant? What I mean is that you could simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well; that is, if the units affect performance, then they are somewhat relevant, but otherwise not at all.
Changing the units does not save computing power; I'm not sure what you are getting at there. 25.4 mm and 1.00 inches are the same amount of data. Scientifically you have 3 significant figures, though the data is probably actually stored as 16- or 32-bit numbers.
 
Raj Harsh said:
Hello,

I am having trouble comprehending how grids are made and defined in computers. What is the unit that they use and how is it defined? I know that software uses standardized units of measure, such as the centimetre. Basically, how is a 3-dimensional space created in computers when they are actually restricted to two dimensions (I am referring to the pixels which ultimately form the image)? Even the 3D image we see is a 2D representation.
With one type of computer graphics modelling, called ray tracing, the computer creates a virtual 3D space. Here is a screen capture from a modeler called Moray.
It shows four views of the space: front, top, side, and camera. In the space are the objects (in this case a cylinder, a sphere, and two boxes), the position and aim of the camera, and a light source. The camera view is created from this info as a wireframe image.
[Image: wireframe.png]

To create a graphics image from this, the ray-tracing program calculates the path of rays leaving the light source, striking a surface, and bouncing to the camera. The properties of the surface where a ray strikes an object determine what color the camera "sees" from that point in its field of view, and thus what color the pixel at that part of the image will be.
Here's a simple image created from the above model.
[Image: cylinder.png]

Note that it gives simple shading and shadows for the objects, all by calculating the paths of imaginary light rays. Here the object surfaces were also given color.

You can add more complexity by adding surface roughness, highlights, reflection, transparency, refraction, and other effects.
Here is the same scene with some of these additional effects added.

[Image: cylinder2.png]

Again, this is done by calculating the paths of rays from light to camera, how they interact with objects in the scene along the way, and working out the color of each pixel depending on the assigned properties of the objects in the scene.
 

Raj Harsh said:
How do computer-based physics simulations work? And are the scales even relevant?
Here's a simple simulation of a planet orbiting a star.

First you define the star to be at rest at the origin and you decide on its mass, M, and store this in a variable.

Then you decide on the initial x, y, and z coordinates of your planet and the initial x, y, and z velocities of your planet and store all six numbers in variables.

Then you begin a loop. Each time around the loop you take the values you have and overwrite them with the values for a small time ##\delta t## later. So the x position changes to ##x+v_x\delta t## and similarly for the y and z values. And the velocities change due to the gravitational acceleration. So ##v_x## changes to ##v_x-GMx\delta t/r^3## (note that this is the component of Newtonian gravitational acceleration in the x direction, ##GM/r^2## multiplied by ##x/r##) - again, there are similar expressions for the y and z velocities. Then you just go round this loop, updating the positions and velocities each time.
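As a concrete illustration, here is that loop written out as a minimal Python sketch. The star's mass, the initial conditions, and the time step are assumed values chosen to look roughly Earth-like; they are not part of any particular program:
Code:
import math

G = 6.674e-11                  # gravitational constant, SI units assumed
M = 1.989e30                   # mass of the star (kg)
dt = 3600.0                    # time step: one hour

# Initial position (m) and velocity (m/s) of the planet
x, y, z = 1.496e11, 0.0, 0.0
vx, vy, vz = 0.0, 2.978e4, 0.0

for step in range(24 * 365):   # one simulated year, hour by hour
    r = math.sqrt(x*x + y*y + z*z)
    ax, ay, az = -G*M*x/r**3, -G*M*y/r**3, -G*M*z/r**3
    x, y, z = x + vx*dt, y + vy*dt, z + vz*dt        # update positions...
    vx, vy, vz = vx + ax*dt, vy + ay*dt, vz + az*dt  # ...then velocities
    # here you could save (x, y, z) to a file, or hand it to a renderer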

So what you've done is generate the positions of the planet over time. What do you do with them? It depends what you want. If all you want to know is what shape the orbit is, you might just add a line inside the loop instructing the computer to save the position of the planet to a file. Then you could load it in Excel and plot a graph. Alternatively, you might feed the locations live to a program similar to the one Janus showed, and tell it to draw a planet centered at these coordinates, and get a pretty animation for a film or a game.

Note that units, in some senses, don't matter as long as you use a consistent set (all SI, all geometric, whatever). However, you do have to be aware of your computer's limits. You won't easily be able to store a planet's position to the nearest millimetre because millions of kilometres to the millimetre is a number in the millions of millions, and the rounding errors the computer makes will be at or above this scale. Use sensible precision. And you need to pick a sensible value for ##\delta t## - see anorlunda's discussion on picking N for a violin string model.
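To make that precision limit concrete, here is a quick check using 32-bit floats via NumPy (with 64-bit doubles the same effect appears, just at a larger scale):
Code:
import numpy as np

# Millions of kilometres expressed in millimetres is around 10**15.
x = np.float32(1.0e15)
print(x + np.float32(1.0) == x)   # True: a whole millimetre vanishes in rounding error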

You can add a lot of bells and whistles to the sketch I've given. More sophisticated programming techniques like arrays and object orientation make managing the information easier. And you could add more planets, or account for the gravitational effect of the planet on the star.

I should also note that the algorithm I described is somewhat naive, and you'll find that planets are quite likely to suddenly escape their solar system as rounding errors accumulate. You can do a lot better, but it gets harder to describe easily.

Hope that helps.
 
Thank you all for the answers, they have helped me learn a lot more.
 
russ_watters said:
Have you taken a high school level geometry (math) class? Are you familiar with how functions are graphed/data is plotted on a graph? Visual representation of data is just x,y,z coordinates on a defined coordinate system. There are a number of techniques for actually drawing objects, but typically they are a collection of equations describing shapes.

Yes, I have, and I am aware of the method to represent data on a graph, but that is exactly what baffles me. We do it on a simple piece of paper where you can move up and down, left and right, and our movement is restrained. And after some time, we had stopped plotting graphs and our focus had shifted to simply solving equations, even during Calculus. While I am not an excellent student, that concept was clear to me. But in computers, you actually do create a 3D Object which you can manipulate in real-time. That would imply that there exists a 3D Space of some sort. A simple cube both on paper and in a computer is the same, as long as it is still. Moving along the z-axis can be considered the object being scaled bigger or smaller, but when it really comes into perspective is when it is rotated. The rotation is partly the screen updating the pixels based on the shading information provided by the computer, but what I find mind-boggling is that all the points are well-connected and they do not break. Instead of 3D Space in Computers, I should be asking about the cameras.

A big thank you to all the participants and Janus Sir, but I think my question is actually about how cameras are created and how they work in computers and about the arrays which store the point-cloud information and how those points are connected.
 
  • #10
Raj Harsh said:
Yes, I have, and I am aware of the method to represent data on a graph, but that is exactly what baffles me.
I don't see why. There is no meaningful difference.
But in computers, you actually do create a 3D Object which you can manipulate in real-time. That would imply that there exists a 3D Space of some sort.
Sure. There is a set of allowable values of x,y,z coordinates, just like a piece of graph paper has a certain number of blocks.
A simple cube both on paper and in a computer is the same, as long as it is still.
All you need to make the cube move on paper is a second piece of paper.
Moving along the z-axis can be considered the object being scaled bigger or smaller, but when it really comes into perspective is when it is rotated. The rotation is partly the screen updating the pixels based on the shading information provided by the computer, but what I find mind-boggling is that all the points are well-connected and they do not break.
I don't understand what you are saying there.
Instead of 3D Space in Computers, I should be asking about the cameras.

A big thank you to all the participants and Janus Sir, but I think my question is actually about how cameras are created and how they work in computers and about the arrays which store the point-cloud information and how those points are connected.
You mean virtual cameras that enable displaying the picture? As I and others said, it's just angles, and perspective just like in art. As the artist or programmer, you pick a location and orientation for the camera and a field of view angle and plot what you see!
 
  • #11
Raj Harsh said:
But in computers, you actually do create a 3D Object which you can manipulate in real-time. That would imply that there exists a 3D Space of some sort.
No. What is there is only a data set, a rule set, and an 'engine' that processes them and creates a kind of representation of the objects. The representation is not 'some sort' of existence in any conventional manner. The whole thing is more detailed than, but not entirely different from, a few words on a piece of paper about the placement of the furniture inside a room, written to inform the staff of the moving company about the requirements.
Nobody would take those few words as 3D space, right?
 
  • #12
Raj Harsh said:
Yes, I have, and I am aware of the method to represent data on a graph, but that is exactly what baffles me. We do it on a simple piece of paper where you can move up and down, left and right, and our movement is restrained. And after some time, we had stopped plotting graphs and our focus had shifted to simply solving equations, even during Calculus.
For computers, that is the key. They work with the equations and then output a picture.
Consider the following example (written here as minimal runnable Python rather than pseudocode):
Code:
# units are seconds, meters, meters/sec, meters/sec^2
ballA = {"type": "sphere", "color": "red", "radius": 10,
         "pos": [-30.0, 20.0, 100.0],    # x, y, z
         "vel": [0.0, 0.0, 0.0]}         # vx, vy, vz
ballB = {"type": "sphere", "color": "blue", "radius": 10,
         "pos": [-25.0, 20.0, 10.0],
         "vel": [0.0, 0.0, 0.0]}

t = 0.0    # current simulation time
g = 10.0   # gravitational acceleration

camera = {"pos": [0.0, -10.0, 20.0],
          "dir": [-1.0, 0.0, 0.0],       # viewing direction
          "zoom": 0.5}

So I have now defined two balls, both stationary, both 10 meters in radius.
If we presume that the ground is at z=0, then one is sitting on the ground.

The variable t represents the current time (time zero) and g represents our gravity.
So with calculations, we could start to change the position and velocity of the balls as time progresses.
ballB will not move right away, because it is sitting on the ground. But ballA will fall towards the floor and strike ballB.

Let's say we did this 1/30 of a second at a time (t = 0, 0.033, 0.067, 0.100, ...). We could compute the position of each ball at each point in time.

Then we could use the colors of the balls and the camera information to generate 2D images for each of these frames.

But notice that we model the motion numerically before rendering it to the display. We do not use the 2D display to compute anything.


Raj Harsh said:
That would imply that there exists a 3D Space of some sort.
As the example above shows, the computer does not hold a 3D space in the sense you are thinking. Instead, it holds a numerical description of a 3D space.

Raj Harsh said:
The rotation is partly the screen updating the pixels based on the shading information provided by the computer, but what I find mind-boggling is that all the points are well-connected and they do not break. Instead of 3D Space in Computers, I should be asking about the cameras.

Raj Harsh said:
I think my question is actually about how cameras are created and how they work in computers and about the arrays which store the point-cloud information and how those points are connected.

In the example above, there does not have to be a "point cloud". Instead, there is simply an array of pixels. Given the camera location (the camera's focal point in 3D space) and a description of its "retina", you can compute the 3D location of any point (pixel) on that retina. Then the 3D location of that point and the focal point will define a line through the 3D space. That line may intersect with either of the two balls. If it does not, assign a background color to the pixel. If it does, determine which surface it intersects first, what its color is, and at what angle the light is striking it (I guess I need a lamp object in my code). This method is called ray tracing, although in a primitive form. It is one method of generating a 2D image of a 3D model.
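For illustration only, here is a minimal version of that intersection test in Python. The function name and the sample numbers are invented for this sketch:
Code:
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere intersection, or None.
    direction is assumed to be a unit vector, so the quadratic's a = 1."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))  # origin relative to sphere
    dx, dy, dz = direction
    b = 2.0 * (dx*ox + dy*oy + dz*oz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4.0*c
    if disc < 0:
        return None                   # ray misses: the pixel gets the background color
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None       # nearest hit in front of the focal point

# A ray fired down the z-axis at a unit sphere 5 units away hits at t = 4:
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))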
 
  • #13
In response to the last two posts from @russ_watters and @Rive, I beg to differ. Computers make use of point-cloud information. Each plane has a certain number of vertices, and two adjoining surfaces or planes will have overlapping points. The point-cloud information could be stored either in a linear fashion, where the progression is from one end to the other, or by using common points (noticing the same coordinates and simply removing the extra points to avoid overlapping).

And maybe you did not understand my example of the cube on paper. When you draw a cube on paper from one perspective, it is fixed and cannot be changed. You can indeed take another paper and draw it from a different perspective, and that is similar to the frames in a computer, but the difference is that the cube in the computer is one object. So regardless of whether you change the frame or the perspective, you will still have a cube with all of its sides; the same cannot be said for the cube on the paper. The 3D cube in the computer does not need to be recreated each and every time we need to change the perspective; only the screen needs to be refreshed to display the new information based on the transformation parameters of the cube. But the transformation only happens because you actually have a 3D Object. Consider this: in 2D animation, each frame needs to be drawn (or a few, and then they use interpolation), but you do need to draw again and again from each and every single perspective. If you have a 3D model, you do not have to do that. Once you make it, you can watch it from any angle you want.

Now, consider this. You draw a cube on a piece of paper and make one out of wood. To see the wooden cube from a different angle, you would have to rotate it, but to see the one on the paper from another angle, you would have to redraw it. You can simply move around the wooden cube yourself (like a camera in a computer) to look at it from other angles, but the one on the paper will always remain the same, even if you draw it with shades to create three-dimensional depth. I hope this makes my point clearer.

And now about the grid. What I meant by that is that we get the option to snap something to the grid; take SolidWorks or Maya for example, or AutoCAD. But how does it do that? What defines the spacing? And this is the reason why I asked whether the units are relevant or not, what they are, and how they are defined in computers. Let us say my workspace is set to metres and everything is in a scale of metres, but I can go to the scale of a nanometre or a yoctometre, maybe to add something. How would the computer differentiate between those scales? What are the parameters that help computers construct things proportionate to each other with respect to units and scales? If I have an object 10 metres long and then I need to add something very small, so I switch to millimetres and choose snap to grid, what is it going to do? Will the computer actually bring those lines so close together? If you think about it, it should be able to get smaller and closer to a vast extent, but that does not happen. There is a limit, and each and every single thing is so well defined that you have to work within the parameters set by the developer. Many programs start clipping, because you need to change the project's unit scale and sometimes change the attributes of the camera too.
 
  • #14
@.Scott Thank you so very much! That is very helpful! Would you please look at my new post? I have updated my question to make it clearer. I was never asking about the rendered images on a screen but about the 3D models or objects themselves; I wanted to know how they are created. I was aware that the values are stored numerically in the form of vectors or matrices, and that is not what I was asking about. What I meant to ask was: there are values assigned to each point or vertex, and a surface is created between them. That is fine. But it works with a z-axis, and you cannot do the same on a piece of paper without intersections and overlaps. If I give you balls, you can suspend them from the ceiling of a room, hold them up with supports from the walls, and then add a cardboard surface between each set of four points; you would be able to make an actual object. On paper, that is very hard, if not impossible.

Or, I may be thinking and reading too much into this.
 
  • #15
Raj Harsh said:
In response to the last two posts from @russ_watters and @Rive, I beg to differ...
...maybe you did not understand my example of the cube on paper. When you draw a cube on paper from one perspective, it is fixed and cannot be changed. You can indeed take another paper and draw it from a different perspective, and that is similar to the frames in a computer, but the difference is that the cube in the computer is one object.
You're misunderstanding the analogy. The computer is your brain and the paper is the monitor. When drawn on paper, the "virtual" cube exists in the mind of the artist and the 2d representation drawn on paper is like the computer screen...or more directly, like a paper print-out. Same thing.
So regardless of whether you change the frame or the perspective, you will still have a cube with all of its sides; the same cannot be said for the cube on the paper.
To repeat for emphasis: the "virtual" cube is not on the paper or screen, whether printed out or drawn by a human or computer. The "virtual" cube is in the CPU and the artist's brain.
The 3D cube in the computer does not need to be recreated each and every time we need to change the perspective; only the screen needs to be refreshed to display the new information based on the transformation parameters of the cube. But the transformation only happens because you actually have a 3D Object. Consider this: in 2D animation, each frame needs to be drawn (or a few, and then they use interpolation), but you do need to draw again and again from each and every single perspective. If you have a 3D model, you do not have to do that. Once you make it, you can watch it from any angle you want.
Again, for emphasis: there is no actual difference in the logic of the two processes. You make the hand-drawn animation sound harder, but it strikes me that you don't realize just how difficult rendering the output of the 3D model is: it's the vast majority of the workload the computer has to do.
Now, consider this. You draw a cube on a piece of paper and make one out of wood. To see the wooden cube from a different angle, you would have to rotate it, but to see the one on the paper from another angle, you would have to redraw it. You can simply move around the wooden cube yourself (like a camera in a computer) to look at it from other angles, but the one on the paper will always remain the same, even if you draw it with shades to create three-dimensional depth. I hope this makes my point clearer.
What I would actually like to know is what your point is here. You started by asking questions about how computers work and now you are telling us how they work. It seems you have a larger agenda in mind. Perhaps some philosophical/existential belief about reality?
And now about the grid. What I meant by that is that we get the option to snap something to the grid; take SolidWorks or Maya for example, or AutoCAD. But how does it do that? What defines the spacing?
Typically it is just a pre-chosen number size (such as 16 bit) or extent (1,000 or 1,000,000 units).
And this is the reason why I asked whether the units are relevant or not, what they are, and how they are defined in computers. Let us say my workspace is set to metres and everything is in a scale of metres, but I can go to the scale of a nanometre or a yoctometre, maybe to add something. How would the computer differentiate between those scales?
There is no need to differentiate unless the user feels the need to. The computer doesn't care. Think about a blank piece of graph paper. Do you need to label the axes with units in order to draw on it?
What are the parameters that help computers construct things proportionate to each other with respect to units and scales? If I have an object 10 metres long and then I need to add something very small, so I switch to millimetres and choose snap to grid, what is it going to do? Will the computer actually bring those lines so close together? If you think about it, it should be able to get smaller and closer to a vast extent, but that does not happen. There is a limit, and each and every single thing is so well defined that you have to work within the parameters set by the developer. Many programs start clipping, because you need to change the project's unit scale and sometimes change the attributes of the camera too.
It seems to me that you are making this way more complicated than it really is. A 10m cube is 10x10x10. A 1mm cube is 0.001x0.001x0.001. In CAD, you literally just type in the numbers. The computer doesn't care.
 
  • #16
Raj Harsh said:
I was never asking about the rendered images on a screen but about the 3D models or objects themselves; I wanted to know how they are created. I was aware that the values are stored numerically in the form of vectors or matrices, and that is not what I was asking about. What I meant to ask was: there are values assigned to each point or vertex, and a surface is created between them. That is fine. But it works with a z-axis, and you cannot do the same on a piece of paper...

Or, I may be thinking and reading too much into this.
I think you are reading too much into this. The piece of paper with the 2D representation is not analogous to the 3D model. Printed out to show what the computer is actually "thinking", the 3D model is a list of numbers or equations.
 
  • #17
russ_watters said:
You're misunderstanding the analogy. The computer is your brain and the paper is the monitor. When drawn on paper, the "virtual" cube exists in the mind of the artist and the 2d representation drawn on paper is like the computer screen...or more directly, like a paper print-out. Same thing.
Thank you. This helps.

russ_watters said:
What I would actually like to know is what your point is here. You started by asking questions about how computers work and now you are telling us how they work. It seems you have a larger agenda in mind. Perhaps some philosophical/existential belief about reality?
No, nothing like that. And I am not telling you how they work; I was only giving you an example to clarify my question.
 
  • #18
@Raj Harsh, you had a lot of misconceptions about computers in your first post in this thread.
Raj Harsh said:
I am having trouble comprehending how grids are made and defined in computers.
There are no grids in a computer. A programmer can write a program that will display a two-dimensional grid on a screen, but the computer itself has no such grid.

Raj Harsh said:
What is the unit that they use and how is it defined?
There are no units. The contents of a computer's memory are just numbers.

Raj Harsh said:
I know that software uses standardized units of measure, such as the centimetre.
No, there are no standardized units. Software can be written to indicate units of measure, but the computer deals only with numbers.

Raj Harsh said:
Basically, how is a 3-dimensional space created in computers when they are actually restricted to two dimensions (I am referring to the pixels which ultimately form the image)? Even the 3D image we see is a 2D representation.
All there is in a computer is memory, which is laid out in a one-dimensional form. The computer's operating system maps part of this memory to pixels on the screen, using a variety of formats and resolutions. A computer program can display an image that appears to be three-dimensional, using perspective, lighting, and shading, the same as how an artist depicts a similar scene on a flat, two-dimensional piece of paper or canvas. As already mentioned, some software uses ray tracing to compute how each point in the image will be lit by an assumed light source.
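As a rough sketch of that memory-to-pixels mapping (the resolution and the three-bytes-per-pixel layout here are assumptions for the example, not how any particular operating system lays out video memory):
Code:
width, height = 640, 480
framebuffer = bytearray(width * height * 3)   # one R,G,B byte triple per pixel

def set_pixel(x, y, r, g, b):
    """Map 2D screen coordinates into the 1D block of memory."""
    i = (y * width + x) * 3
    framebuffer[i:i + 3] = bytes((r, g, b))

set_pixel(320, 240, 255, 0, 0)   # a red dot at the centre of the "screen"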
 
  • #19
Raj Harsh said:
Computers make use of point-cloud information ... The point-cloud information could be stored either in a linear fashion
Well, not really. There are two different things here. One is modelling, the other is visualisation. For modelling, it is about specific coordinates and dimensions: the center and radius of a sphere, the points of a square or cube, the endpoints and width of a track - all coordinates and linear dimensions. Who would want to use a point cloud to define a triangle when it is perfectly defined by just nine numbers? For (graphics) modelling, it is all about the coordinates (and some other properties) of objects.

For visualization, it is about breaking all objects down into small graphics elements (usually triangles) which can be processed in a fast and uniform manner using just the coordinates of their corners (look up: tessellation, triangle mesh). So there is no real point cloud here either, only a big set of 3D coordinates and an advanced GPU that chews through them endlessly, transforming them into a 2D image according to the required viewpoint. (There are many other parts to this, like textures and more, but this is the starting point of what makes it '3D'.)

There are other ways to do this, usually with more advanced math and more calculations. But there is no real 'point cloud' here: no 3D space inside.
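For a feel of what a triangle mesh looks like as plain data, here is a tiny invented example (a unit square split into two triangles):
Code:
# Four shared vertices (x, y, z)...
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
# ...and two triangles, each just three indices into the vertex list.
triangles = [(0, 1, 2), (0, 2, 3)]

# "Connectedness" is nothing more than triangles sharing vertex indices:
print(set(triangles[0]) & set(triangles[1]))   # -> {0, 2}: a shared edge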
 
  • #20
Mark44 said:
There are no grids in a computer. A programmer can write a program that will display a two-dimensional grid on a screen, but the computer itself has no such grid.
You seem to have misunderstood my statement. The computer is nothing if you come to think of it, it's simply electricity. According to science, we function on electricity, and so does all the equipment that we may use to test for the speed of light. So no matter what, the speed of electricity should not be exceeded, as any information would only be conveyed and interpreted at the speed of electricity, which, even if by a little, is less than and different from the speed of light, and we still have a different value for it. And light is electromagnetic radiation; magnets can be made out of electricity, as a change in an electric field can cause a magnetic field, and moving charges cause magnetism. So on the surface, it is all just electricity. My question and my point were not what you thought they were.

Mark44 said:
There are no units. The contents of a computer's memory are just numbers.
Everyone keeps on misunderstanding my question and telling me the same thing over and over again: that there is nothing in computers. I am very well aware of that. My school taught me about binary very early on, with respect to computers as well. According to all of you, there is nothing in a computer. And that is true; everything needs to be defined. But that would also mean that physics simulations and such are bogus and mean nothing in computers. You basically instruct the computer to modify the transformation attributes of an object to make it fall towards a surface which you would call a ground plane, and call it gravity. But since there are no metres or centimetres or anything like that, how do you know whether what the computer is doing is accurate or not? The software I have used has (so far) allowed me to select a unit of length. I have learned modelling and digital sculpting myself, but I want to learn more from the scientific aspect and learn about the inner workings of it.

When I move the joystick on my controller, the game character tends to move forward. But what is it moving on? A surface. But where is he going? Forward? How could he go forward when there is no forward or backward but just numbers? That was the question, in a nutshell. The game character moves as if there is an entirely new world inside the computer, just like ours; it moves similarly to how we can move. The question is much harder than its answer would be. It will remain an internal struggle for me to find an answer, as it is hard for me to explain what I want to ask.
 
  • #21
Raj Harsh said:
How could he go forward when there is no forward or backward but just numbers?
Philosophy... Any 'meaning' is always a user-defined parameter for a computer. There is no help for this. The game engine modifies the object set according to the rule set that belongs to a specified action, and the 3D engine renders it into images, but it moves 'forward' only for you.
 
  • #22
Raj Harsh said:
When I move the joystick on my controller, the game character tends to move forward. But what is it moving on? A surface. But where is he going? Forward? How could he go forward when there is no forward or backward but just numbers?
All that happens is that the coordinates of your character are updated, subject to constraints like "the z coordinate must be equal to 1 plus the z coordinate of a specified plane". Since the camera is usually attached to your character, the screen needs to be redrawn from another point of view. But nothing's moving, any more than it is in a film. You're just generating a new drawing.
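As a toy illustration of that update (every name and number here is invented for the sketch):
Code:
dt, speed, floor_z = 1.0 / 60.0, 4.0, 0.0   # assumed frame time, walk speed, floor height
char = {"x": 0.0, "y": 0.0, "z": floor_z + 1.0}

def tick_forward(char):
    """One joystick tick: 'forward' is just +x by convention."""
    char["x"] += speed * dt
    char["z"] = floor_z + 1.0   # constraint: keep the character standing on the plane
    return char

print(tick_forward(char))   # nothing "moved"; three numbers changed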

Regarding units, there's nothing in the program that defines the units. It's true that your character is about twice as tall as a table. But there's nothing in the program that says you are 2m tall or 2km tall. You assume the former because you assume that the program is simulating a typical human in something like the real world - but that's your prejudice.

When you get to a full physics simulation, the units must be consistent, otherwise the answers won't work. But there are no units in the computer. When you write the maths you may make some assumptions (for example, ##G=6.67\times 10^{-11}## assumes that we're working in SI units). But the computer assigns no significance to that. You can ask the user what units they want to use, and write code to convert to the units your other values assume if you like. But if you don't do it, and you enter r=1000 while using ##G=6.67\times 10^{-11}##, then the answer will come out as if you entered the distance in metres, whatever you intend.

I hope that makes sense. The point is that the maths of a physics system will only work if you use consistent units. Because if you use inconsistent units your maths is not a good description of the real world. But interpreting the input and output as having physical meaning is up to you.
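A quick numerical illustration of that point, using Earth's mass and radius as assumed sample values:
Code:
G = 6.674e-11            # assumes SI units: m^3 kg^-1 s^-2
M = 5.972e24             # Earth's mass (kg)

r = 6.371e6              # Earth's radius entered in metres
print(G * M / r**2)      # ~9.8 m/s^2: sensible, because the units are consistent

r = 6371.0               # the same radius entered in kilometres
print(G * M / r**2)      # ~9.8e6: the computer happily returns nonsense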
 
  • #23
I think I have found a better way to present this question. There are coordinate points, which may act as vertices. But where do you put or draw those points? Putting the grid aside, people have kept on telling me that there is nothing but numbers, but those numbers act like coordinate points. And if there is nothing else, where would those points be plotted? If you do not have paper, you cannot draw the graph you are asked to draw. I am addressing the paper here; that is what I want to know about. Even if you just make points in the air, you do have an environment, a surrounding to move about in. What is that surrounding and how is it created? This is my question. And if it is still not clear, this particular topic can be closed.
 
  • #24
Ibix said:
All that happens is that the coordinates of your character are updated, subject to constraints like "the z coordinate must be equal to 1 plus the z coordinate of a specified plane". Since the camera is usually attached to your character, the screen needs to be redrawn from another point of view. But nothing's moving, any more than it is in a film. You're just generating a new drawing.

Regarding units, there's nothing in the program that defines the units. It's true that your character is about twice as tall as a table. But there's nothing in the program that says you are 2m tall or 2km tall. You assume the former because you assume that the program is simulating a typical human in something like the real world - but that's your prejudice.

When you get to a full physics simulation, the units must be consistent, otherwise the answers won't work. But there are no units in the computer. When you write the maths you may make some assumptions (for example, ##G=6.67\times 10^{-11}## assumes that we're working in SI units). But the computer assigns no significance to that. You can ask the user what units they want to use, and write code to convert to the units your other values assume if you like. But if you don't do it, and you enter r=1000 while using ##G=6.67\times 10^{-11}##, then the answer will come out as if you entered the distance in metres, whatever you intend.

I hope that makes sense. The point is that the maths of a physics system will only work if you use consistent units. Because if you use inconsistent units your maths is not a good description of the real world. But interpreting the input and output as having physical meaning is up to you.

Thank you. This is the answer I think I was looking for regarding units. Thank you, Sir!
 
  • #25
Raj Harsh said:
I think I have found a better way to present this question. There are coordinate points, which may act as vertices. But where do you put or draw those points?
Depends on where you decided your camera was and what camera properties (field of view etc) you decided to simulate.
Raj Harsh said:
And if there is nothing else, where would those points be plotted?
If you don't have a camera, or some kind of reference, then it's up to you. Plot 'em where you want 'em.
 
  • #26
Ibix said:
Depends on where you decided your camera was and what camera properties (field of view etc) you decided to simulate.
If this makes any sense, the camera would also require a space to be in, would it not?
 
  • #27
Raj Harsh said:
You seem to have misunderstood my statement. The computer is nothing if you come to think of it, it's simply electricity. According to science, we function on electricity, and so does all the equipment that we may use to test for the speed of light. So no matter what, the speed of electricity should not be exceeded, as any information would only be conveyed and interpreted at the speed of electricity, which, even if by a little, is less than and different from the speed of light, and we still have a different value for it. And light is electromagnetic radiation; magnets can be made out of electricity, as a change in an electric field can cause a magnetic field, and moving charges cause magnetism. So on the surface, it is all just electricity. My question and my point were not what you thought they were.
Part of the problem in this thread is that your posts appear to contain a lot of irrelevant information, such as all of the above.
Everyone keeps on misunderstanding my question and telling me the same thing over and over again: that there is nothing in computers. I am very well aware of that.
It does not appear that you do...
But that would also mean that physics simulations and such are bogus and mean nothing in computers. You basically instruct the computer to modify the transformation attributes of an object to make it fall towards a surface which you would call a ground plane, and call it gravity. But since there are no metres or centimetres or anything like that, how do you know whether what the computer is doing is accurate or not?
It is up to the programmer to make it accurate. Some physics simulations - like video games - are not accurate. Often they aren't meant to be.

But again with the units: the computer doesn't care about the units. It is up to the programmer to keep track of them. Think about what you do with a calculator. Do you type in units or do you keep track of them separately? Or when you graph something in a spreadsheet: if you forget to tell Excel the units, does it affect the shape of the graph? The units are just labels you add for your own benefit. The computer doesn't care if you use them or not.
When I move the joystick on my controller, the game character tends to move forward. But what is it moving on? A surface. But where is he going? Forward? How could he go forward when there is no forward or backward but just numbers?
Starting from 0, +1 is forwards, -1 is backwards. But the answer to your existential conundrum is that since he doesn't exist he doesn't go anywhere (unless you choose to plot him on a monitor; then he can be said to move from left to right, for example).
That was the question, in a nutshell. The game character moves as if there is an entirely new world inside the computer, just like ours; it moves similarly to how we can move. The question is much harder than its answer would be. It will remain an internal struggle for me to find an answer, as it is hard for me to explain what I want to ask.
This and other statements make it sound like you think there is a physical universe inside the computer. There isn't. It's just a big list of numbers.

Consider an Excel spreadsheet. It's a 2-dimensional array of numbers and letters. An Excel spreadsheet has a pre-defined size of 2^20 rows (about a million) and 2^14 columns (about 16,000). But even if that is the defined space, the spreadsheet does not necessarily have about 17 billion pieces of data in it. You can tell by the size of the file that only the space actually filled by numbers is fully described in the computer's memory.
 
  • #28
Raj Harsh said:
The computer is nothing if you come to think of it, it's simply electricity.
This is completely untrue. A computer is much more than just electricity -- it has memory, a central processing unit, input devices like keyboards, mice, and joysticks, output devices like monitors and printers, and non-volatile storage.

Raj Harsh said:
Everyone keeps on misunderstanding my question and telling me the same thing over and over again: that there is nothing in computers. I am very well aware of that.
No one is saying that there is nothing in computers. What they are saying is that some scenario is modeled by a program that simulates reality in some fashion. However, that scenario involving objects in some physical space doesn't exist -- at heart, the whole thing is just a long list of numbers that are manipulated by the CPU or GPU (central processing unit, graphics processing unit).

Raj Harsh said:
When I move the joystick on my controller, the game character tends to move forward. But what is it moving on? A surface. But where is he going? Forward? How could he go forward when there is no forward or backward but just numbers?
There is no surface -- the program has set up memory with a coordinate system that models a surface or a space. Moving the joystick or mouse causes the program to adjust the position of the character to some other place.
 
  • #29
Raj Harsh said:
Hello,

I am having trouble comprehending how grids are made and defined in computers. What is the unit that they use and how is it defined? I know that software uses standardized units of measure, such as the centimetre. Basically, how is a 3-dimensional space created in computers when they are actually restricted to two dimensions (I am referring to the pixels which ultimately form the image)? Even the 3D image we see is a 2D representation.

This is also the reason I am having a hard time understanding and grasping real-world physics simulations. How do computer-based physics simulations work? And are the scales even relevant? What I mean is that you could simply change the scale from metres to centimetres and save yourself some computing power, and effectively some time as well; that is, if the units affect performance, then they are somewhat relevant, but otherwise not at all.

Your first question is easy enough to answer.

Consider 3D space, where each point is defined as having coordinates (x,y,z). To display this on a 2D surface, use the perspective transformation which maps (x,y,z) in 3D space onto a point (x',y') in 2D space. You can look up perspective transformation. It's simple mathematically. Think of looking at a real world scene outside your window, and trying to draw that scene on your window, with your eye at a fixed point. Or think of a camera, where the 3D reality is projected onto the 2D film surface.

Once you have coordinates in 2D space, then it's just a question of mapping the 2D coordinates to a point on the screen, which is also 2D. In other words, you map (x',y') to a pixel (x'',y''). You can look this up also.
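A bare-bones sketch of both steps together, in Python; the eye position, screen size, and scale factor are invented values for the example:
Code:
def project(point, eye_z=-5.0, screen_w=640, screen_h=480, scale=200.0):
    """Perspective-project a 3D point (x, y, z) onto 2D pixel coordinates."""
    x, y, z = point
    d = z - eye_z                         # distance from the eye along the view axis
    xp, yp = x / d, y / d                 # perspective divide: far points shrink inward
    px = int(screen_w / 2 + scale * xp)   # map the 2D projection onto the pixel grid
    py = int(screen_h / 2 - scale * yp)   # screen y grows downward
    return px, py

# The same cube corner, near and far: the far one lands nearer the screen centre.
print(project((1.0, 1.0, 5.0)))    # -> (340, 220)
print(project((1.0, 1.0, 15.0)))   # -> (330, 230)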

The units are up to you. A distance of one unit in my 3D world space might represent any number of cm I want. What I select depends on the application.

As far as simulations or animation in general, one is moving the scene, or the camera, or both. The computational challenge in real-time 3D animation is that one must normally recalculate the transformation of 3D object onto the computer screen many times per second in order to achieve a realistic looking animation.

I am just giving you some main points. This is not a hard subject provided you find the right book. The math is basically high school level, with vectors and matrices playing a central role in coordinate transformations.

Years ago I learned by reading Ian Angell's book "Computer Graphics in C." It's a bit outdated now in terms of how he uses the C language, but perhaps you can find another suitable book. There are some books on game programming which may help. You can also find websites which explain 3D computer graphics from a game programming point of view. IMO this is a good place to start.
 
  • #30
russ_watters said:
Part of the problem in this thread is that your posts appear to contain a lot of irrelevant information, such as all of the above.
It may seem irrelevant to you, but it is not. That was just to show that anything can be narrowed down to a simple idea. You keep saying it's just numbers, but where are those numbers put to use? The numbers you are referring to are the coordinate points, the locations of the points that make up the object we see on screen, but to give directions, you need to have some sort of spatial reference. You have a telescope in your photo. Planets have coordinate points, do they not? If I am not mistaken, they are called celestial coordinates. Regardless of what they are called, you do require a reference for them.

russ_watters said:
This and other statements make it sound like you think there is a physical universe inside the computer. There isn't. It's just a big list of numbers.
I do not, but it definitely seems like it. If it were in fact only about numbers, then we would not have GPUs today. A GPU is not just used to display the images.

I think it is best to close this case. My question is not making sense to any of you. I am not being disrespectful by saying that; it is only a remark on my own failure. Thank you all for your contribution. People here are nice; they take time out to help others, which is very gracious of them.
 
  • #31
Mark44 said:
This is completely untrue. A computer is much more than just electricity -- it has memory, a central processing unit, input devices like keyboards, mice, and joysticks, output devices like monitors and printers, and non-volatile storage.
All of which work on electricity. And they are made up of electrons. And electrons are charged particles. So basically, electricity. Even we work on electricity; once it sparks out, that usually marks the end of life. This was just a way for me to show how anything can be narrowed down to one simple thing.
Mark44 said:
There is no surface -- the program has set up memory with a coordinate system that models a surface or a space. Moving the joystick or mouse causes the program to adjust the position of the character to some other place.
Thank you! Now we are getting somewhere. The coordinates are stored in memory in relation to one another, which is then used to form surfaces, which are then displayed on the screen.

Just to trouble you a little bit more: is the creation of a surface predicated upon the existence of a display unit, or is some information generated separately? And to make it clear, what I mean to ask is: are the instructions to create the pixels, based on the properties of the shader material, lighting, etc., given to the computer when the model is loaded up (meaning that it is built into the model's instruction set), or is it dictated by whether the model will be rendered or not (essentially meaning that only the locations of the points are stored for the model and the rest is dependent on other factors, which would also mean that a possible scenario is: say points a, b, c, d are meant to form a square, but they instead connect diagonally and form a surface that way)? This is what I have been asking for so long.

By 3D Space in computers, I meant that the points for a cube know how to connect and create planes to form that cube. So, thinking of them as pixels, it does not need to be recreated each time. The model itself contains the relative information of the points, and their relation is so well defined (the question itself is becoming clearer over time) that the pixels are instantly manipulated if you, say, rotate it. And this is why I gave the example of the wooden cube. Let us say we are given an armature, a basic structure. We can add in blocks anywhere we want, as we please, which means that the cube may or may not be created (I will reiterate the diagonal connection example). But if we know that four points need to form a plane surface, we can do that. Such an instruction can be given, but what I am having a hard time with is the coordinate points themselves. Is it that the coordinate points are essentially pixel information which exists in relation to the transformation of the cube with respect to the angle from which the cube is being rendered?
 
  • #32
Aufbauwerk 2045 said:
Consider 3D space, where each point is defined as having coordinates (x,y,z). To display this on a 2D surface, use the perspective transformation which maps (x,y,z) in 3D space onto a point (x',y') in 2D space. You can look up perspective transformation. It's simple mathematically. Think of looking at a real world scene outside your window, and trying to draw that scene on your window, with your eye at a fixed point. Or think of a camera, where the 3D reality is projected onto the 2D film surface.
If I draw something from an angle, say a cuboid which stretches from x'y to xy, it will be a flat image. A picture taken with a camera, or a sketch of what I see from the window, is also flat. Once done, I cannot manipulate the data; no further changes can be made. Coordinates in computers are like me telling you where to plot the points on a numbered graph and instructing you how to connect those points; I know this much. But what I do not understand, or need confirmation on (I had a thought which is mentioned in my previous post), is how the spacing between the points is defined, and how the image of the model is instantly rendered on screen if I rotate it or change the camera angle.
 
  • #33
Raj Harsh said:
If I draw something from an angle, say a cuboid which stretches from x'y to xy, it will be a flat image. A picture taken with a camera, or a sketch of what I see from the window, is also flat. Once done, I cannot manipulate the data; no further changes can be made. Coordinates in computers are like me telling you where to plot the points on a numbered graph and instructing you how to connect those points; I know this much. But what I do not understand, or need confirmation on (I had a thought which is mentioned in my previous post), is how the spacing between the points is defined, and how the image of the model is instantly rendered on screen if I rotate it or change the camera angle.

Taking the second part of your question first: the image of the model appears to be rendered instantly (hopefully, at least) because the calculations are done very quickly. Behind the scenes, the program is recalculating the new projection of the 3D coordinates of the object onto the 2D pixel coordinates. Then the new 2D pixel coordinates are used to draw the next frame. So if, for example, we have a stationary camera, the object is a cube, and the cube is rotating, then the 3D coordinates of the cube's vertices are changing, and those new vertices need to be transformed into pixel coordinates.

What happens during the actual drawing is that we have triangles (think of two triangles per cube face) which are drawn and filled in with the appropriate color or texture. Each triangle is defined by three vertices. We use triangles because the GPU is good at drawing triangles very quickly. For more details, I really suggest finding a good book on 3D real-time graphics. You may also be interested in a book on physics for game programmers. Not that you are necessarily a game programmer; I just mention it because that is a good place to find this information.

As for spacing between points: if you mean mapping real-world spacing into a 3D coordinate system to begin with, that's somewhat arbitrary. I could represent a real-world object using many different coordinates, depending on my choice.

That's about all I can say on this topic. I need to get back to work now! Best wishes. :)
 
  • #34
Raj Harsh said:
The computer is nothing if you come to think of it, it's simply electricity.
Mark44 said:
This is completely untrue. A computer is much more than just electricity -- it has memory, a central processing unit, input devices like keyboards, mice, and joysticks, output devices like monitors and printers, and non-volatile storage.

Raj Harsh said:
All of which work on electricity. And they are made up of electrons.
True, computer components operate on electricity, but that's very different from saying that a computer is "simply electricity."
Raj Harsh said:
And electrons are charged particles. So basically, electricity. Even we work on electricity; once it sparks out, that usually marks the end of life. This was just a way for me to show how anything can be narrowed down to one simple thing.
There's a saying that I believe is due to Einstein: "Make things as simple as possible, but no simpler." By saying that a computer is "simply electricity" you have vastly oversimplified things.
 
  • #35
Raj Harsh said:
The coordinates are stored in memory in relation to one another, which is then used to form surfaces, which are then displayed on the screen. Just to trouble you a little bit more: is the creation of a surface predicated upon the existence of a display unit, or is some information generated separately?
Just to be clear, a surface is not actually created, but rather, memory is modified so that when it is displayed on a monitor, the resulting image appears to be a surface. Of course, if you want to see the image, you need a monitor.

Raj Harsh said:
And to make it clear, what I mean to ask is - are the instructions to create the pixels
The pixels aren't created -- they are built into the monitor. What the program does is to turn on the appropriate red, green, or blue pixels to form the image. Part of the computer's memory (video memory) is used to represent a bit pattern that will be displayed on the monitor. That's how it works for most graphics, although there are some computers that use what is called vector graphics rather than a bitmap.
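As a toy illustration of that bit pattern (the 0x00RRGGBB packing below is just an assumption for the example; real video memory formats vary by hardware), drawing amounts to writing colour values into an array:

Code:
#include <cstdint>
#include <vector>

// Toy sketch: a bitmap is just an array of packed colour values, and
// "drawing" means writing into it.
int main() {
    const int width = 640, height = 480;
    std::vector<uint32_t> framebuffer(width * height, 0x00000000); // all black

    // Turn the pixel at (x, y) red by writing into the right array slot.
    int x = 100, y = 50;
    framebuffer[y * width + x] = 0x00FF0000;

    // A display controller would then scan this memory out to the monitor.
    return 0;
}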
Raj Harsh said:
based on the properties of the shader material, lighting etc. given to the computer when the model is loaded up (meaning that it is built into the model's instruction set) or is it dictated by whether the model will be rendered or not (essentially meaning that it is only the location of the points which is stored for the model and the rest is codependent of other factors, which would also mean that a possible scenario is - say points a,b,c,d are meant to form a square but they instead connect diagonally and form a surface that way) ? This is what I have been asking for so long. By 3D Space in computers, I meant that the points for a cube know how to connect and create planes to form that cube.
I'm not sure I understand what you're trying to say here. If you intend for points a, b, c, and d to form a square, the program has to "know" to do this. The points for a cube don't "know" anything, especially how to connect or that they are part of some geometric shape. The program has to do all that.
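To show what the program "knowing" can look like, here is a simplified sketch of how a mesh typically stores connectivity explicitly, as a vertex list plus an index list (an illustrative layout, not any particular file format):

Code:
#include <cstdio>

// Sketch: a square face stored as four vertices plus explicit triangle
// indices. Nothing connects "by itself"; the index list is the knowledge.
struct Vec3 { float x, y, z; };

int main() {
    Vec3 vertices[4] = {
        {0.0f, 1.0f, 0.0f},   // 0: top-left
        {1.0f, 1.0f, 0.0f},   // 1: top-right
        {1.0f, 0.0f, 0.0f},   // 2: bottom-right
        {0.0f, 0.0f, 0.0f},   // 3: bottom-left
    };
    // Two triangles tile the square. Scrambling these indices would give
    // exactly the unwanted "diagonal connection" described earlier.
    int indices[2][3] = { {0, 1, 2}, {0, 2, 3} };

    for (int t = 0; t < 2; ++t) {
        std::printf("triangle %d:", t);
        for (int k = 0; k < 3; ++k) {
            const Vec3& v = vertices[indices[t][k]];
            std::printf(" (%.0f, %.0f, %.0f)", v.x, v.y, v.z);
        }
        std::printf("\n");
    }
    return 0;
}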

At its simplest, a 3D image is drawn on a plane, with the origin at the upper left corner, the x-axis extending to the right, and the positive y-axis extending down from the origin. The third dimension is the z-axis, which extends back into the screen. For a simple object like a cube that we're viewing head-on, a program can determine that the front face should be shown, but that the rear face should not be shown, based solely on the z-coordinates.
Raj Harsh said:
So thinking of them as pixels, it does not need to be recreated each time. The model itself contains the relative information of the points, and their relation is so well defined (the question itself is becoming clearer over time) that the pixels are instantly manipulated if you, say, rotate it.
The rotation may seem instantaneous, but there are thousands of machine instructions that have to be executed to rotate the points of even a small object. My understanding is that a matrix transformation has to be applied to each point to calculate its new location.
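For example, a rotation about the y-axis applies the same matrix to every point. A rough, self-contained sketch, with no graphics API involved:

Code:
#include <cmath>
#include <cstdio>

// Sketch: rotate one point about the y-axis with the standard 3x3
// rotation matrix. A real renderer applies such a transform (usually as
// a 4x4 matrix) to every vertex of the model, typically on the GPU.
struct Vec3 { double x, y, z; };

Vec3 rotateY(const Vec3& v, double angle) {
    double c = std::cos(angle), s = std::sin(angle);
    //  | c  0  s |   |x|
    //  | 0  1  0 | * |y|
    //  |-s  0  c |   |z|
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}

int main() {
    const double kPi = 3.14159265358979;
    Vec3 vertex{1.0, 1.0, 1.0};              // one corner of a cube
    Vec3 r = rotateY(vertex, kPi / 4.0);     // rotate by 45 degrees
    std::printf("(%.3f, %.3f, %.3f)\n", r.x, r.y, r.z);
    return 0;
}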
Raj Harsh said:
And this is why I gave the example of the wooden cube. Let us say we are given an armature, a basic structure. We can add in blocks anywhere we please, which means that the cube may or may not be created (I will reiterate the diagonal-connection example). But if we know that four points need to form a plane surface, we can do that. Such an instruction can be given, but what I am having a hard time with is the co-ordinate points themselves. Is it that the co-ordinate points are essentially pixel information which exists in relation to the transformation of the cube, with respect to the angle from which the cube is being rendered?
This doesn't make a lot of sense to me. On the one hand, there are the points that form the framework of the object, and these have nothing to do with pixels. There are other concepts, such as pixel shaders, meshes, textures, and others, that are used to transform the framework of an object into what you see on a monitor (the pixels).

Take a look at this wiki article: https://en.wikipedia.org/wiki/Computer_graphics
 
  • #36
Aufbauwerk 2045 said:
Taking the second part of your question first, the image of the model appears to be rendered instantly (hopefully at least) because the calculations are done very quickly. Behind the scenes, the program is recalculating the new projection of the 3D coordinates of the object onto the 2D pixel coordinates. Then the new 2D pixel coordinates are used to draw the next frame.

So if, for example, we have a stationary camera, and the object is a cube, and the cube is rotating, then the 3D coordinates of the cube's vertices are changing, and those new vertices need to be transformed into pixel coordinates. What happens during the actual drawing is that we have triangles (think of two triangles per cube face) which are drawn and filled in with the appropriate color or texture. Each triangle is defined by three vertices. We use triangles because the GPU is good at drawing triangles very quickly.

For more details, I really suggest finding a good book on real-time 3D graphics. You may also be interested in a book on Physics for Game Programmers. Not that you are necessarily a game programmer; I just mention that because that is a good place to find this information.

As for spacing between points, if you mean mapping real-world spacing into a 3D coordinate system to begin with, that's somewhat arbitrary. I could represent a real-world object using many different coordinates, depending on my choice.

That's about all I can say on this topic. I need to get back to work now! Best wishes. :)

This, combined with many of the other explanations, has solved my conundrum, or so I think. I believe I had said something similar: that the instruction set for a 3D model contains the information needed to construct the model for display on a 2D screen in terms of pixels (such as the points and how they connect to form surfaces), where the pixels work in conjunction with the translation properties of the model. Essentially, even a 3D model is not actually 3D but a 2D image, one so well defined that its instruction set includes the construction information and connectivity of all the points from each and every perspective/angle, but only on a 2D plane, i.e. for displaying whichever side or part of the model is facing the camera, or the screen.

It is like how movie sets used to be made. The computer is the set dresser and we are the director. When we want the front facing us, they only build the front of it, but from our given instructions they prepare a plan for the entire set without constructing it. When we switch from orthographic to perspective and view it at an angle, they immediately build from the plan they had (which, in a way, we give to the computer when we make a model; the plan is the instruction, it seems): prepare half of the front, the entire side, and maybe a little of the top or the bottom, and show that to us. And the computer, or the set dresser, does it so quickly (since we have already given them the instructions) that we may be led to believe there is construction of an actual 3D model (although I do not think I was, since my query was always about the space).

But the difference between the set and the computer-generated image is that the image is flat, and so is the model. The space for the computer is the pixel; that is the ground they build the set on. And the pixels also act as the paint; we simply pick the colour. So a 3D model is a collection of instruction sets for 2D images, conveying information about how something presumably looks when viewed from different angles. This is such a relief!
Edit - I was thinking about what I have written, and a thought struck me about 3D printing. Those things come from 3D models, which I had somewhat conclusively decided are 2D information from all angles. A sculptor is capable of making a sculpture just by looking at something or someone from different angles. Is that the principle on which 3D printing works? Is that applicable? If not, my entire understanding will turn out to be a disappointment.

I would like to apologize to Sir @russ_watters, because I think I dismissed your analogy simply due to my incompetence and my misinterpretation of it. You drew a cube on paper and, as soon as I asked to look at the other side, you immediately drew the other side. Like I said, not the brightest student, but I like to think it helps me tread carefully. And if I may, I would also like to blame studying ray tracing for this, because ray tracing (speaking from my limited knowledge) makes it sound like there is a 3D space, a 3D object, and light. This has also raised another question: what is light in computers? It has to be pixels. So how can there be simulations of illumination in computers?
 
  • #37
Raj Harsh said:
Co-ordinates in computers are like me telling you where to plot points on a numbered graph and instructing you how to connect them; this much I know. But what I do not understand, or need confirmation on (I had a thought which is mentioned in my previous post), is how the spacing between the points is defined, and how the image of the model is instantly rendered on screen if I rotate it or change the camera angle.
I was one of the original developers on ComputerVision's CADDS-3 system, precursor to their CADDS-5 system.

In general, you have (in the computer) more than points. You also have ways of forming the polygons (or other 2D surface shapes) that are anchored by those points.

Here's another example:
Code:
units = "mm"

splineA.pt1.x,y,z = 100,0,15
splineA.pt2.x,y,z = 100,20,25
splineA.pt3.x,y,z = 100,40,18
splineA.pt4.x,y,z = 100,60,18
splineA.pt5.x,y,z = 100,80,20
splineA.pt6.x,y,z = 100,100,28

splineB.pt1.x,y,z = -100,0,24
splineB.pt2.x,y,z = -100,20,22
splineB.pt3.x,y,z = -100,40,32
splineB.pt4.x,y,z = -100,60,10
splineB.pt5.x,y,z = -100,80,15
splineB.pt6.x,y,z = -100,100,30

ruledsurface.edge1 = splineA
ruledsurface.edge2 = splineB
ruledsurface.ystart = 0
ruledsurface.yend = 100
So what we are doing here is:
1) defining our units of measure (millimeters)
2) fitting a cubic spline "splineA" to a series of points (a way of specifying a smooth curve)
3) fitting another cubic spline "splineB" to another series of points
4) defining a surface which is constructed by sliding a rule (straight-edge) along both splines simultaneously. So, with the data provided, the rule will run from y=0 to y=100 and will remain parallel to the x/z plane.

We now have a smooth, ripply 3D surface defined by just 12 3D points - plus a lot of context information. It's that context information, specifying "spline" or "ruled surface", that allows general-purpose software to reconstruct exactly what you want.
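To make the sliding rule concrete, here is a rough sketch of evaluating one point on such a surface. For brevity it treats the two edge curves as straight polylines through the points above; a real system would evaluate the fitted cubic splines instead:

Code:
#include <cstdio>

// Sketch: a ruled-surface point is a straight-line blend between matching
// points on the two edge curves: P(u, v) = (1 - u) * A(v) + u * B(v).
struct Vec3 { double x, y, z; };

Vec3 lerp(const Vec3& a, const Vec3& b, double t) {
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}

int main() {
    // The same 12 points as in the example above.
    Vec3 splineA[6] = {{100,0,15},{100,20,25},{100,40,18},
                       {100,60,18},{100,80,20},{100,100,28}};
    Vec3 splineB[6] = {{-100,0,24},{-100,20,22},{-100,40,32},
                       {-100,60,10},{-100,80,15},{-100,100,30}};

    double u = 0.5, v = 0.3;              // surface parameters in [0, 1]
    int seg = static_cast<int>(v * 5);    // which polyline segment v is in
    if (seg > 4) seg = 4;                 // keep v = 1.0 in the last segment
    double t = v * 5 - seg;               // position within that segment

    Vec3 a = lerp(splineA[seg], splineA[seg + 1], t);  // point on edge A
    Vec3 b = lerp(splineB[seg], splineB[seg + 1], t);  // point on edge B
    Vec3 p = lerp(a, b, u);               // slide the rule between them
    std::printf("P(%.1f, %.1f) = (%.1f, %.1f, %.2f)\n", u, v, p.x, p.y, p.z);
    return 0;
}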

Once you have that surface, you understand how you could generate a 2D picture that can be displayed or printed to paper.

But, depending on the application, that may not be what is most important...

Let's say we specify a CNC milling machine (things like its command structure, units of measure, bed dimensions, tool home locations, etc.) and a tool (cross-section shape, coolant requirements, etc.), and now we want to generate a list of instructions for that machine that will cut that surface from a block of aluminum.

So you have one data set that describes the geometry you are creating and other data sets describing camera set-ups and CNC machines.
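As a hedged illustration of "geometry in, machine instructions out": the sketch below emits naive straight-line milling moves that trace the surface row by row. The G-code words used (G21 for millimetres, G0 rapid, G1 linear cut, F feed rate) are common ones, but the feed rate and the lack of tool-shape compensation or collision checking are simplifying assumptions:

Code:
#include <cstdio>

// Sketch: turn the ruled-surface data into naive milling instructions.
// For each data row (fixed y), rapid to the edge-A point, then cut in a
// straight line across to the edge-B point; the G1 move itself performs
// the linear blend that the rule represents.
struct Vec3 { double x, y, z; };

int main() {
    Vec3 splineA[6] = {{100,0,15},{100,20,25},{100,40,18},
                       {100,60,18},{100,80,20},{100,100,28}};
    Vec3 splineB[6] = {{-100,0,24},{-100,20,22},{-100,40,32},
                       {-100,60,10},{-100,80,15},{-100,100,30}};

    std::printf("G21\n");                      // units: millimetres
    for (int i = 0; i < 6; ++i) {
        std::printf("G0 X%.1f Y%.1f Z%.1f\n",  // rapid move to row start
                    splineA[i].x, splineA[i].y, splineA[i].z);
        std::printf("G1 X%.1f Y%.1f Z%.1f F300\n",  // cutting move across
                    splineB[i].x, splineB[i].y, splineB[i].z);
    }
    return 0;
}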

That said, there may be times when all you have is a point cloud. In those cases, you will need to discover the surfaces (little triangles) before you can do the kind of rendering (display, CNC machining, whatever) we have been discussing. For a dense cloud, creating those triangles is often easy: the points hook up to the other points that are closest to them. So for each point, find its closest neighbors in each of, say, 6 directions, then work out the surfaces based on whatever heuristic method works for you.
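Here is a simplified sketch of that first step, finding a point's closest neighbour by brute force; a real system would use a spatial index (a grid or k-d tree) and would look in several directions before forming triangles:

Code:
#include <cstdio>
#include <vector>

// Sketch: brute-force nearest neighbour in a point cloud. This is only
// the distance test; the triangle-building heuristic comes afterwards.
struct Vec3 { double x, y, z; };

double dist2(const Vec3& a, const Vec3& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;   // squared distance avoids a sqrt
}

int nearestNeighbour(const std::vector<Vec3>& cloud, int i) {
    int best = -1;
    double bestD = 1e300;
    for (int j = 0; j < static_cast<int>(cloud.size()); ++j) {
        if (j == i) continue;             // don't match a point with itself
        double d = dist2(cloud[i], cloud[j]);
        if (d < bestD) { bestD = d; best = j; }
    }
    return best;
}

int main() {
    std::vector<Vec3> cloud = {{0,0,0},{1,0,0},{0,2,0},{5,5,5}};
    std::printf("nearest to point 0 is point %d\n", nearestNeighbour(cloud, 0));
    return 0;
}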
 
  • Like
Likes Raj Harsh
  • #38
Mark44 said:
The pixels aren't created -- they are built into the monitor.
By pixel, I meant the display information for a point, which is usually represented by the physical form of a pixel. I was somewhat referring to the digital form, because even vector art is ultimately represented by pixels, the physical type, in display units.

Mark44 said:
Just to be clear, a surface is not actually created, but rather, memory is modified so that when it is displayed on a monitor, the resulting image appears to be a surface. Of course, if you want to see the image, you need a monitor.
That is basically what I asked for confirmation on, and I have said it again in my previous post.
Mark44 said:
This doesn't make a lot of sense to me. On the one hand, there are the points that form the framework of the object, and these have nothing to do with pixels. There are other concepts, such as pixel shaders, meshes, textures and others, that are used to tranform the framework of an object to what you see on a monitor (the pixels).
I am well aware of shaders and meshes, as I studied them while learning digital modelling. I used the word framework/armature to indicate the points, but also the entire frame. The frame of a cube is an empty cube with borders; if I give you, say, a plank, you can connect a top corner to a bottom corner. So in this case, I think my question was about how the computer knows how to create the surface when it only works with points. I know I did say that computers store the information about how those points connect as well, but my question was actually aimed at tessellation.

In computers, models contain the information about the points, and then tessellation takes care of the rest by reading the information on how the points are connected and what surface needs to be created, and by lighting the required pixels on the display unit to the shade defined by the modeller, in accordance with the environment and its behaviour and reaction to light. But I think this has been rendered moot, since I have arrived at the conclusion that it is not actually 3D but 2D.

Still, I think it should be hard for computers to work out the connection of the points on their own if the instructions are not clear. Connecting in a linear fashion would mean: connect the top-left point to the top-right, then the top-right to the bottom-right, then the bottom-right to the bottom-left, and then back again to the top-left. But this is where it can get tricky. It is supposed to connect the front two points only to the bottom two in the front, and the top points in the back only to the bottom points on the back side, yet the computer usually knows by itself what it needs to connect to round out the model (most often, although I have seen this merge function, and others, fail at times). It can make use of deletion of overlap and follow the same linear path from all sides. It does not follow a linear progression model, yet it handles tessellation very well in most cases. That is what I wanted to learn about.
 
  • #39
If you really want to see what's going on, learn to program it yourself. I can recommend the excellent series "games with go", which is currently appearing on Twitch and is archived on YouTube. https://gameswithgo.org/

It goes all the way from simple text adventures to full 3D graphics.

--hendrik
 
  • #40
Hendrik Boom said:
If you really want to see what's going on, learn to program it yourself. I can recommend the excellent series "games with go", which is currently appearing on Twitch and is archived on YouTube. https://gameswithgo.org/

It goes all the way from simple text adventures to full 3D graphics.

--hendrik
Thank you for the suggestion. I will look into it.
 
  • #41
Raj Harsh said:
Thank you for the suggestion. I will look into it.
He's an entertaining lecturer even though he just sits there, programs, and talks about what he's doing. You don't have to pay the ten dollars or so for access to his github archive of all the code; you can just type everything in off the video, though I've found myself wondering sometimes what kind of brackets he's using.

He uses Visual Studio on Windows; I use Emacs and command line in Linux. I can assist with setup if you use Linux too.

Above all, have fun.
 
  • #42
Hendrik Boom said:
He's an entertaining lecturer even though he just sits there, programs, and talks about what he's doing. You don't have to pay the ten dollars or so for access to his github archive of all the code; you can just type everything in off the video, though I've found myself wondering sometimes what kind of brackets he's using.

He uses Visual Studio on Windows; I use Emacs and command line in Linux. I can assist with setup if you use Linux too.

Above all, have fun.

Thank you very much, but I use Windows and I am more familiar with Visual Studio's interface (toolbars etc.), so I will stick with that. I have yet to start my studies in graphics programming. I am also wondering which API would be good to begin with. OpenGL was the mainstream API when I was growing up, but then came DirectX, and now everything runs on DirectX; there is also another library called Vulkan on the rise. I have tried searching for books on computer graphics programming, but all of them make use of one or the other API. The knowledge may be transferable, but it would be better to focus on one.
 
  • #43
Raj Harsh said:
Thank you very much, but I use Windows and I am more familiar with Visual Studio's interface (toolbars etc.), so I will stick with that. I have yet to start my studies in graphics programming. I am also wondering which API would be good to begin with. OpenGL was the mainstream API when I was growing up, but then came DirectX, and now everything runs on DirectX; there is also another library called Vulkan on the rise. I have tried searching for books on computer graphics programming, but all of them make use of one or the other API. The knowledge may be transferable, but it would be better to focus on one.

I've used DirectX and OpenGL. I prefer OpenGL, but I can't recommend which one to use. I suppose it depends on what exactly you are doing and who are the intended users. Of course I don't know anything about the future of either API.

One reason I like the Angell book I mentioned is that he concentrates on the basic concepts and mathematics of computer graphics, and he does not lock his explanations to a single API. On the contrary, he uses a small set of graphics primitives which can be implemented by whichever API you want to use. He gives a few examples of implementing the primitives. I think that is a good approach for learning, because you can focus on the math and not need to worry about mastering the huge DirectX or OpenGL API at the same time.

Since people are discussing development environments, the one I've settled on for my private efforts is the following. I use Code::Blocks as my IDE. I use MinGW for compiling. I develop on my Windows 10 machine because right now it's all I have. Setting up a Linux system is on my to do list. I prefer Linux to Windows for development purposes, but again everyone's needs are different.

I've also used Visual C++. What can I say? I don't want to seem like a MS-basher, but I will say that some people really need to read Wirth's essay "A Plea for Lean Software" and take its message to heart. I would say that about everyone, not just MS.

Here are links to what I like to use.

http://www.mingw.org

http://www.codeblocks.org

As long as I'm recommending stuff, I like this simple text editor for when I don't need to use Code::Blocks.

https://www.notetab.com

For document preparation, I use LaTeX.

https://www.texstudio.org
 
  • Like
Likes Raj Harsh
  • #44
Aufbauwerk 2045 said:
I've used DirectX and OpenGL. I prefer OpenGL, but I can't recommend which one to use. I suppose it depends on what exactly you are doing and who are the intended users. Of course I don't know anything about the future of either API.

One reason I like the Angell book I mentioned is that he concentrates on the basic concepts and mathematics of computer graphics, and he does not lock his explanations to a single API. On the contrary, he uses a small set of graphics primitives which can be implemented by whichever API you want to use. He gives a few examples of implementing the primitives. I think that is a good approach for learning, because you can focus on the math and not need to worry about mastering the huge DirectX or OpenGL API at the same time.

Since people are discussing development environments, the one I've settled on for my private efforts is the following. I use Code::Blocks as my IDE. I use MinGW for compiling. I develop on my Windows 10 machine because right now it's all I have. Setting up a Linux system is on my to do list. I prefer Linux to Windows for development purposes, but again everyone's needs are different.

I've also used Visual C++. What can I say? I don't want to seem like a MS-basher, but I will say that some people really need to read Wirth's essay "A Plea for Lean Software" and take its message to heart. I would say that about everyone, not just MS.

Here are links to what I like to use.

http://www.mingw.org

http://www.codeblocks.org

As long as I'm recommending stuff, I like this simple text editor for when I don't need to use Code::Blocks.

https://www.notetab.com

For document preparation, I use LaTeX.

https://www.texstudio.org

I wonder what leads to the requirement for, and development of, a new API. Could they not modify/update the old one to meet today's requirements and standards? I have seen more OpenGL code than DirectX, and the library is very neat. I keep going back and forth, but I think I will first learn a little OpenGL, as it is older than DirectX, and then move to Microsoft's API because it is much more widely used today, or study both to keep my options open. Their functionality is more or less the same. I think I am most likely to go towards game development, which is why I learned modelling and a little bit of VFX, and I do not want that to go to waste.

I could not find the exact book you mentioned, but I have found other books by the author, one with a similar title: High Resolution Computer Graphics Using C. I will get it. Hopefully I will finish it, because many of my books end up on the shelf after a few days, collecting dust.

Thank you for the recommendations. I have chosen Windows because I run many applications on my computer, and video games too, and Linux is not suitable for that. I may buy a new system with Ubuntu, but not now. And as the saying goes, to each their own. I do not have any issues with Visual C++, which is only available as part of Visual Studio now, I think. But I do not like that Microsoft's offline installer is really an online installer: you download it only to download the setup. I currently do not care much about other things; my perspective may change once I step out of the learning phase. I have used Code::Blocks too, though I do not use it currently, but I do like that it is lightweight.

Coincidentally, I read an excerpt from that not too long ago. I have only very recently updated to Windows 10, because I did not like the idea of being aware of Microsoft's data collection from my system and, despite that, assenting to it simply to use their product. I can turn off certain settings, but we have to accept their terms. There is also Cortana, which I never use; it is basically bloatware and should have been optional. (I think Cortana is a custom-made version of Nuance's Dragon, from the NaturallySpeaking series.) But what upset me the most is that Microsoft's old and simple Photo Viewer was nowhere to be seen, and when you install an application and choose to make it the default for certain extensions, it does not work.

Seeing people complain about this on forums is what led me to that excerpt, which said that there has not been much advancement in the functionality of software, but that aesthetics have massively increased its size and price. I do agree with it; it is something I have thought about too. Apple and Google both release updates for their mobile OS frequently, but the changes are very minor. They change the menus and the arrangement, but most of the features stay the same.

And today's design is not very good. People are keen on making everything look futuristic, which to them means sharp and crisp and a little dull. There was a time when design was well-rounded, quite literally: the edges were smoothed out, the colour palette was big, and they tried to add depth to each design, whereas today it is flat. Take Windows, for example. The Aero design is much better than the current design of Windows 8 and Windows 10, so the change was definitely not for the better, and it was unnecessary. And Windows 10's design is even flatter. Regardless of whether people agree or not, the current design is not much different from the Windows 98 or Classic Windows layout, which is better because it can help one save memory; many people switch to it on their workstations.
 
  • Like
Likes Aufbauwerk 2045
  • #45
Raj Harsh said:
I wonder what leads to the requirement for, and development of, a new API. Could they not modify/update the old one to meet today's requirements and standards? I have seen more OpenGL code than DirectX, and the library is very neat. I keep going back and forth, but I think I will first learn a little OpenGL, as it is older than DirectX, and then move to Microsoft's API because it is much more widely used today, or study both to keep my options open. Their functionality is more or less the same. I think I am most likely to go towards game development, which is why I learned modelling and a little bit of VFX, and I do not want that to go to waste.

I could not find the exact book you mentioned, but I have found other books by the author, one with a similar title: High Resolution Computer Graphics Using C. I will get it. Hopefully I will finish it, because many of my books end up on the shelf after a few days, collecting dust.

Thank you for the recommendations. I have chosen Windows because I run many applications on my computer, and video games too, and Linux is not suitable for that. I may buy a new system with Ubuntu, but not now. And as the saying goes, to each their own. I do not have any issues with Visual C++, which is only available as part of Visual Studio now, I think. But I do not like that Microsoft's offline installer is really an online installer: you download it only to download the setup. I currently do not care much about other things; my perspective may change once I step out of the learning phase. I have used Code::Blocks too, though I do not use it currently, but I do like that it is lightweight.

Coincidentally, I read an excerpt from that not too long ago. I have only very recently updated to Windows 10, because I did not like the idea of being aware of Microsoft's data collection from my system and, despite that, assenting to it simply to use their product. I can turn off certain settings, but we have to accept their terms. There is also Cortana, which I never use; it is basically bloatware and should have been optional. (I think Cortana is a custom-made version of Nuance's Dragon, from the NaturallySpeaking series.) But what upset me the most is that Microsoft's old and simple Photo Viewer was nowhere to be seen, and when you install an application and choose to make it the default for certain extensions, it does not work.

Seeing people complain about this on forums is what led me to that excerpt, which said that there has not been much advancement in the functionality of software, but that aesthetics have massively increased its size and price. I do agree with it; it is something I have thought about too. Apple and Google both release updates for their mobile OS frequently, but the changes are very minor. They change the menus and the arrangement, but most of the features stay the same.

And today's design is not very good. People are keen on making everything look futuristic, which to them means sharp and crisp and a little dull. There was a time when design was well-rounded, quite literally: the edges were smoothed out, the colour palette was big, and they tried to add depth to each design, whereas today it is flat. Take Windows, for example. The Aero design is much better than the current design of Windows 8 and Windows 10, so the change was definitely not for the better, and it was unnecessary. And Windows 10's design is even flatter. Regardless of whether people agree or not, the current design is not much different from the Windows 98 or Classic Windows layout, which is better because it can help one save memory; many people switch to it on their workstations.

Sorry, I did not quote the exact title. I haven't used the book for a while. Here are the correct details.

Ian O. Angell, High-resolution Computer Graphics in C. New York: John Wiley & Sons. (Halsted Press).

ISBN: 0-470-21634-4

I think Windows was improving for a few years, but I really did not like Vista, and I strongly dislike Windows 10.

For an alternative, which it seems few people are using, there is Oberon, designed by Wirth. At least he shows how to design good software.
 
  • #46
Raj Harsh said:
I wonder what leads to the requirement for, and development of, a new API. Could they not modify/update the old one to meet today's requirements and standards?

Sure, but at some point that gets harder, messier, involves ugly compromises, etc., and the best way forward becomes a clean break. One could argue that this was done with OpenGL 2.0 (OpenGL 1.4 and earlier are very different). OpenGL is old: it's been more than 20 years since the release of OpenGL 1.1 and 14 years since OpenGL 2.0, and a lot has happened to CPUs/GPUs in those years. So a few years ago the people behind OpenGL (the Khronos Group) released an updated API that again broke with the old API. They could have called it OpenGL 5.0 (originally it was the "Next Generation OpenGL Initiative", or "OpenGL Next"), but they decided to name it Vulkan.
 
  • Like
Likes Aufbauwerk 2045
  • #47
Vulkan is quite new, and one must have very strong reasons to use it; otherwise OpenGL is still just fine.

By the way, you may want to look over a couple of projects of mine, described here:

https://compphys.go.ro/Newtonian-gravity/
https://compphys.go.ro/event-driven-molecular-dynamics/

Both contain very similar OpenGL code (in fact, I simply carried the code from the first project over into the second one); the difference is that in the second project I added instancing. Of course, the GL programs are different between the two, but a lot of code is common.

The projects are open source, available on GitHub. Hopefully they could help. You should be able to compile them with the latest Visual Studio.
 
  • #49
Raj Harsh said:
Thank you very much, but I use Windows and I am more familiar with Visual Studio's interface (toolbars etc.) so I will stick with that. I have yet to start my studies on graphical programming. I am also wondering which API would be good to begin with. OpenGL was the mainstream API when I was growing up but then came DirectX and now, everything runs on DirectX and there is also another library called Vulkan on the rise. I have tried searching for books on computer graphics programming but all of them make use of one or the API. The knowledge may be transferable but it would be better to focus on one.
OpenGL exists on a variety of platforms; DirectX is Microsoft only. So if you or your users might ever want to run your code on, say, Linux, you'd want to be using OpenGL.

Vulkan is new. I'm not familiar with it, so I don't know how widely available it is.
 