UV Map View Plane: Calculate 2D Position?

  • Thread starter: ADDA
  • Tags: Map, Plane, Uv
SUMMARY

This discussion focuses on calculating the 2D position of a viewing plane using UV mapping without expanding into three dimensions. The conversation highlights the importance of calculating normals through the cross product of partial derivatives, specifically for a sphere defined by parametric equations. The participants emphasize the role of shaders, particularly Gouraud and Phong shaders, in handling normal calculations on the GPU, suggesting that CPU calculations may be less efficient. The main inquiry revolves around optimizing rendering by directly mapping UV regions to the viewing plane.

PREREQUISITES
  • Understanding of UV mapping and texture coordinates
  • Familiarity with parametric equations in 3D space
  • Knowledge of normal vector calculations using cross products
  • Basic concepts of shader programming, particularly Gouraud and Phong shaders
NEXT STEPS
  • Research "UV mapping techniques in OpenGL" for practical applications
  • Learn about "cross product and normal vector calculations" in 3D graphics
  • Explore "Gouraud vs. Phong shading" to understand their differences and use cases
  • Investigate "optimizing rendering pipelines in OpenGL" for performance improvements
USEFUL FOR

3D graphics developers, shader programmers, and anyone interested in optimizing rendering techniques in computer graphics.

ADDA (Messages: 67, Reaction score: 2)
Would there be a way to take a UV map and compute the two-dimensional position of a viewing plane (or eye, or camera) without the need to expand the parametric equation into three dimensions?

(embedded video, no longer available)
In the above video the UV map in the background (yellow) is calculated first. Then only the bright portions are translated to the foreground by means of expansion into three dimensions, then direct projection onto a viewing plane.
 
That video is no longer available. I'm completely confused by your question: are you just trying to project an object without actually sending it to the GPU? UV mapping is usually handled by a shader on the graphics card by dedicated hardware, which would probably be faster than trying to calculate it on the processor.
 
Yeah, I guess I need to explain my thoughts better. The background was the inner product of the camera's direction with the computed normal of the surface. The normal of the surface is computed as the cross product of the partial derivative with respect to u and the partial derivative with respect to v. So take for instance the parametric equation for a sphere:

EDIT: that symbol is meant to be phi, I think that it is omega?

x = r * sin(πΦ) * cos(2πθ)
y = r * sin(πΦ) * sin(2πθ)
z = r * cos(πΦ)

θ = u, range [0.0,1.0]
Φ = v, range [0.0,1.0]

The partial derivative with respect to u (du):

x = r * sin(πΦ) * -1.0 * sin(2πθ) * 2π
y = r * sin(πΦ) * cos(2πθ) * 2π
z = 0 (z does not depend on θ)

for dv:

x = r * cos(πΦ) * π * cos(2πθ)
y = r * cos(πΦ) * π * sin(2πθ)
z = r * -1.0 * sin(πΦ) * π
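
A minimal CPU-side sketch of the calculation described above, in plain C: the sphere parametrization, its partial derivatives with respect to u and v, and the normal as their cross product. All names (Vec3, sphere_du, and so on) are illustrative, not from the thread.

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y, z; } Vec3;

/* Cross product of two 3-space vectors. */
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 c = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return c;
}

/* Sphere parametrization with theta = u and phi = v, both in [0,1]. */
static Vec3 sphere(double r, double u, double v) {
    Vec3 p = { r * sin(M_PI * v) * cos(2.0 * M_PI * u),
               r * sin(M_PI * v) * sin(2.0 * M_PI * u),
               r * cos(M_PI * v) };
    return p;
}

/* Partial derivative with respect to u; z is independent of u. */
static Vec3 sphere_du(double r, double u, double v) {
    Vec3 p = { -2.0 * M_PI * r * sin(M_PI * v) * sin(2.0 * M_PI * u),
                2.0 * M_PI * r * sin(M_PI * v) * cos(2.0 * M_PI * u),
                0.0 };
    return p;
}

/* Partial derivative with respect to v. */
static Vec3 sphere_dv(double r, double u, double v) {
    Vec3 p = {  M_PI * r * cos(M_PI * v) * cos(2.0 * M_PI * u),
                M_PI * r * cos(M_PI * v) * sin(2.0 * M_PI * u),
               -M_PI * r * sin(M_PI * v) };
    return p;
}

int main(void) {
    double r = 1.0, u = 0.25, v = 0.5;
    Vec3 p = sphere(r, u, v);
    Vec3 n = cross(sphere_du(r, u, v), sphere_dv(r, u, v));
    printf("point:  (%f, %f, %f)\n", p.x, p.y, p.z);
    printf("normal: (%f, %f, %f)\n", n.x, n.y, n.z);
    return 0;
}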

With the above video, I'm referring to UV in terms of points in 2-space, then translated to a vertex (a point in 3-space) via a parametric equation. I just did a bit of online research and found that a UV map is a texture; I was confused, newjersyrunner, I apologize. I rely mostly on textbooks. So my main question is whether I could just calculate the normal and form the inner (dot) product: since the arc cosine of the dot product of unit vectors is the angle between them, the dot product of the viewing plane's direction with the surface normal yields a negative value when the angle is obtuse.
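
A hedged sketch of that sign test, continuing the Vec3 type and cross helper from the sketch above; dot and faces_camera are illustrative names:

/* Dot (inner) product of two 3-space vectors. */
static double dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* view_dir points from the camera into the scene. As argued above, a
   surface point is front-facing when the angle between view_dir and its
   outward normal is obtuse, i.e. when the dot product is negative. */
static int faces_camera(Vec3 view_dir, Vec3 normal) {
    return dot(view_dir, normal) < 0.0;
}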

The foreground in the above video was a three-dimensional representation of the UV region mapped through a parametric equation. To do this, I took the 3-space vector and projected it onto the orthonormal basis of the viewing plane; I simply call these the up and right vectors. So if we project the point in 3-space onto the right vector of the viewing plane, we are given an x value for plotting to the screen, and likewise the up vector gives y.
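
A sketch of that projection step, again continuing the Vec3 and dot helpers above; eye, right, and up are illustrative names for the camera position and the view plane's orthonormal basis:

typedef struct { double x, y; } Vec2;

static Vec3 sub3(Vec3 a, Vec3 b) {
    Vec3 c = { a.x - b.x, a.y - b.y, a.z - b.z };
    return c;
}

/* Project a point in 3-space onto the view plane's right/up vectors to
   get the 2D plotting coordinates described in the post. */
static Vec2 project_to_plane(Vec3 p, Vec3 eye, Vec3 right, Vec3 up) {
    Vec3 rel = sub3(p, eye);
    Vec2 s = { dot(rel, right), dot(rel, up) };
    return s;
}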

Now, after further explanation, I would like to avoid those extra steps of mapping to the view plane. So my question involves mapping one UV region (the surface) to another UV region (the viewing plane). How would it be possible to calculate the x, y coordinates on the viewing plane, given the two parametric equations and the surface's UV region?

As far as GPU and CPU calculations go, I am calculating on the CPU right now, with the plan to write GLSL later to optimize the rendering. I'm thinking that my idea would optimize GL's rendering pipeline if the 3-space calculations are omitted.
 
Seems like Google would be your best bet for the math; you just don’t know the terminology that you want to look for.

All that calculation of normals is handled by something called the “shader.” The exact normal you get depends on the algorithm of the shader. From what it sounds like, you are using an algorithm similar to a “Gouraud shader.” You can find lots of examples of that. A “Phong shader” is slightly more complex, but it is the usual per-fragment approach in modern pipelines, so it’s a good starting point.

This assumes that your texture itself is perfectly flat. Bump mapping also affects normal calculation.
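
A rough conceptual sketch, in plain C rather than actual shader code, of the distinction drawn in this reply: Gouraud shading lights the vertices and interpolates the resulting intensities, while Phong shading interpolates the normals and lights per fragment. It reuses the Vec3 and dot helpers above; lerp, lambert, and the shade_* names are illustrative:

static double lerp(double a, double b, double t) { return a + t * (b - a); }

static Vec3 lerp3(Vec3 a, Vec3 b, double t) {
    Vec3 c = { lerp(a.x, b.x, t), lerp(a.y, b.y, t), lerp(a.z, b.z, t) };
    return c;
}

static Vec3 normalize3(Vec3 a) {
    double len = sqrt(dot(a, a));
    Vec3 c = { a.x / len, a.y / len, a.z / len };
    return c;
}

/* Simple Lambert term; assumes light_dir is unit length. */
static double lambert(Vec3 n, Vec3 light_dir) {
    double d = dot(normalize3(n), light_dir);
    return d > 0.0 ? d : 0.0;
}

/* Gouraud-style: light at the vertices, interpolate the intensities. */
static double shade_gouraud(Vec3 n0, Vec3 n1, Vec3 light, double t) {
    return lerp(lambert(n0, light), lambert(n1, light), t);
}

/* Phong-style: interpolate the normal, then light per fragment. */
static double shade_phong(Vec3 n0, Vec3 n1, Vec3 light, double t) {
    return lambert(lerp3(n0, n1, t), light);
}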
 
newjersyrunner, thanks for the reply; it seems like you are as lost as I am as to how to accomplish this.
 
ADDA said:
EDIT: that symbol is meant to be phi, I think that it is omega?

x = r * sin(πΦ) * cos(2πθ)
The Greek letter in the sine factor is phi (upper case).
Omega (upper case): ##\Omega## and lower case: ##\omega##
Phi (upper case): ##\Phi## and lower case: ##\phi##
##\theta## and ##\phi## (both in lower case) are commonly used in spherical coordinates and trigonometry in general.
 