#36

Raj Harsh

Aufbauwerk 2045 said:

Taking the second part of your question first: the image of the model appears to be rendered instantly (hopefully, at least) because the calculations are done very quickly. Behind the scenes, the program is recalculating the projection of the object's 3D coordinates onto 2D pixel coordinates, and those new 2D pixel coordinates are used to draw the next frame. If, for example, we have a stationary camera and the object is a rotating cube, then the 3D coordinates of the cube's vertices are changing, and those new vertices need to be transformed into pixel coordinates.

What happens during the actual drawing is that we have triangles (think of two triangles per cube face) which are drawn and filled in with the appropriate color or texture. Each triangle is defined by three vertices. We use triangles because the GPU is very good at drawing them quickly. For more detail, I really suggest finding a good book on real-time 3D graphics. You may also be interested in a book on physics for game programmers; not that you are necessarily a game programmer, I just mention it because that is a good place to find this information.

As for spacing between points: if you mean mapping real-world spacing into a 3D coordinate system to begin with, that is somewhat arbitrary. I could represent a real-world object using many different coordinates, depending on my choice.

That's about all I can say on this topic. I need to get back to work now! Best wishes. :)
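The per-frame loop described in the quote above can be sketched in a few lines. This is my own minimal illustration, not anything from the quoted post: all the names (`rotate_y`, `project`, the screen size, the field of view, the camera distance) are assumptions chosen for the sketch. It shows the two steps the renderer repeats each frame: rotate the cube's 3D vertices, then project each one onto 2D pixel coordinates.

```python
import math

# Cube vertices in 3D model space (a unit cube centered at the origin).
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def rotate_y(v, angle):
    """Rotate a vertex around the Y axis by `angle` radians."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

def project(v, width=640, height=480, fov=90.0, camera_z=5.0):
    """Perspective-project a 3D vertex to 2D pixel coordinates."""
    x, y, z = v
    z += camera_z                           # place the cube in front of the camera
    f = 1.0 / math.tan(math.radians(fov) / 2)
    ndc_x, ndc_y = f * x / z, f * y / z     # normalized device coordinates
    px = (ndc_x + 1) / 2 * width            # map [-1, 1] onto the pixel grid
    py = (1 - ndc_y) / 2 * height           # pixel Y grows downward
    return (px, py)

# One "frame": rotate every vertex, then project it to the screen.
angle = math.radians(30)
pixels = [project(rotate_y(v, angle)) for v in cube]
for v, p in zip(cube, pixels):
    print(v, "->", (round(p[0], 1), round(p[1], 1)))
```

Redrawing the next frame just means running the same loop with a slightly larger angle; the division by `z` is what makes nearer vertices spread farther apart on screen, which is the whole of "perspective" here.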

This, combined with many of the other explanations, has solved my conundrum, or so I think. I had been thinking, and had said, something similar: that the instruction set for a 3D model contains the information needed to construct the model for display on a 2D screen in terms of pixels (the points and how they connect to form surfaces), and that the pixels work in conjunction with the transformation properties of the model. Essentially, even a 3D model is not actually 3D but a 2D image; it is just so well defined that its instruction set includes the construction information and connectivity of all the points from each and every perspective and angle, but only on a 2D plane, i.e. for display of whatever side of the model is facing the camera, or the screen.

It is like how movie sets used to be made. The computer is the set dresser and we are the director. When we want the front facing us, they build only the front; but from our instructions, they prepare a plan for the entire set without constructing it. When we switch from orthographic to perspective and view the model at an angle, they immediately build from the plan they already had (which we give to the computer when we make the model; the plan is the instruction, it seems): half of the front, the entire side, and maybe a little of the top or the bottom, and show that to us. And the computer, like the set dresser, does it so quickly (since we have already given the instructions) that we may be, or in this case I was, led to believe that an actual 3D model is being constructed (although I do not think I was, since my query was always about the space). The difference between the movie set and the computer-generated image is that the image is flat, and so is the model. For the computer, the pixels are the space: the ground the set is built on. The pixels also act as the paint; we simply pick the colour.

So, a 3D model is a collection of instruction sets for 2D images, conveying information about how something presumably looks when viewed from different angles. This is such a relief!

Edit - I was thinking about what I have written, and a thought struck me about 3D printing. Printed objects come from 3D models, which I had somewhat conclusively decided are 2D information from all angles. A sculptor is capable of making a sculpture just by looking at something or someone from different angles. Is that the principle 3D printing works by? Is it applicable? If not, my entire understanding would now turn out to be a disappointment.
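For what it's worth, my understanding is that 3D printing does not work from angled views at all: a slicer program takes the model's triangles (genuinely 3D coordinates) and cuts them with horizontal planes, and each cut gives a flat 2D outline the printer traces layer by layer. Here is a toy sketch of one such cut; the function name `slice_triangle` and the example triangle are my own illustration, not any real slicer's code.

```python
def slice_triangle(tri, z):
    """Return the 2D segment where a 3D triangle crosses the plane at height z."""
    points = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:           # this edge crosses the plane
            t = (z - z1) / (z2 - z1)          # how far along the edge the crossing sits
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return points                             # 0 points (no crossing) or 2 (a segment)

# One triangle standing upright, sliced halfway up its height.
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
print(slice_triangle(tri, 0.5))
```

Running this cut over every triangle at every layer height yields the stack of 2D outlines the printer actually follows, so the printed object comes from cross-sections, not from views.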

I would like to apologize to Sir @russ_watters, because I think I dismissed your analogy simply due to my incompetence and my misinterpretation of it. You drew a cube on paper, and as soon as I asked to look at the other side, you immediately drew the other side. Like I said, I am not the brightest student, but I like to think that helps me tread carefully. And if I may, I would also like to blame my study of ray tracing for this, because ray tracing (speaking from my limited knowledge) makes it sound as if there is a 3D space, a 3D object, and light. This has also raised another question: what is light in computers? It has to be pixels. So how can illumination be simulated in a computer?
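On the illumination question: there is no light inside the computer, only arithmetic that decides each pixel's brightness. One common rule (Lambert's cosine law for diffuse surfaces) sets brightness to the dot product of the surface normal and the direction toward the light, clamped at zero. The sketch below is my own toy illustration of that rule, not code from any renderer.

```python
import math

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def lambert(normal, light_dir):
    """Brightness in [0, 1] of a diffuse surface, per Lambert's cosine law."""
    return max(0.0, dot(normalize(normal), normalize(light_dir)))

# A surface facing straight up, lit from directly above: full brightness.
print(lambert((0, 1, 0), (0, 1, 0)))      # 1.0
# The same surface with the light off to the side: no direct light.
print(lambert((0, 1, 0), (1, 0, 0)))      # 0.0
```

A ray tracer runs a calculation like this (plus visibility tests along rays) once per pixel, and the resulting numbers become the pixel colours; so "light" in a computer really is just pixels whose values were computed as if light existed.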
