UV Map View Plane: Calculate 2D Position?

  • Thread starter: ADDA
  • Tags: Map, Plane, UV

Discussion Overview

The discussion revolves around the computation of a 2D position on a viewing plane from a UV map, exploring the possibility of avoiding the expansion into three dimensions. Participants examine the relationship between UV mapping, surface normals, and projection techniques in rendering.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant inquires about calculating the 2D position of a viewing plane from a UV map without expanding into three dimensions, referencing a video that illustrates this process.
  • Another participant expresses confusion about the original question, suggesting that UV mapping is typically handled by shaders on the GPU, which may be more efficient than CPU calculations.
  • A participant elaborates on their understanding of the camera's direction and surface normals, providing parametric equations for a sphere and discussing the computation of normals using partial derivatives.
  • There is mention of the dot product and its relation to angles between vectors, with a focus on how this could relate to projecting points onto a viewing plane.
  • One participant suggests that the terminology used may hinder the search for relevant mathematical concepts, recommending looking into shader algorithms like Gouraud and Phong shading.
  • Another participant acknowledges the confusion and expresses a desire to optimize rendering by calculating directly on the CPU before transitioning to GLSL for GPU optimization.
  • Clarifications are made regarding the notation of Greek letters used in the parametric equations, specifically phi and omega, in the context of spherical coordinates.

Areas of Agreement / Disagreement

Participants do not appear to reach a consensus on the best approach to calculate the 2D position from the UV map, with multiple competing views and uncertainties expressed throughout the discussion.

Contextual Notes

The discussion includes various assumptions about the nature of UV mapping and the calculations involved, with some participants relying on specific algorithms and others expressing confusion about terminology and methods.

ADDA
Messages
67
Reaction score
2
Would there be a way to take a uv map and compute the 2 dimensional position of a viewing plane or eye or camera, without the need to expand the parametric equation into three dimensions?



In the above video the uv map in the background (yellow) is calculated first. Then only the bright portions are translated to the foreground by expansion into three dimensions, followed by direct projection onto a viewing plane.
 
That video is no longer available. I'm completely confused by your question: are you just trying to project an object without actually sending it to the GPU? UV mapping is usually handled by a shader on the graphics card by dedicated hardware, which would probably be faster than trying to calculate it on the processor.
 
Yeah, I guess I need to explain my thoughts better. The background was the inner product of the camera's direction with the computed normal of the surface. The normal of the surface is computed by the cross product of the partial derivative with respect to u and the partial derivative with respect to v. So take for instance the parametric equation for a sphere:

EDIT: that symbol is meant to be phi, I think that it is omega?

x = r * sin(πΦ) * cos(2πθ)
y = r * sin(πΦ) * sin(2πθ)
z = r * cos(πΦ)

θ = u, range [0.0,1.0]
Φ = v, range [0.0,1.0]

The partial derivative with respect to u (du):

x = r * sin(πΦ) * -1.0 * sin(2πθ) * 2π
y = r * sin(πΦ) * cos(2πθ) * 2π
z = 0 (z has no dependence on θ)

for dv:

x = r * cos(πΦ) * π * cos(2πθ)
y = r * cos(πΦ) * π * sin(2πθ)
z = r * -1.0 * sin(πΦ) * π
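The cross product of those two partials gives the surface normal. A minimal Python sketch of that computation (not from the thread; function names are my own, and note that with this cross-product order the normal points inward for this parametrization — swap the arguments for outward normals):

```python
import math

def sphere_partials(u, v, r=1.0):
    """Partials of x = r*sin(pi*v)*cos(2*pi*u), y = r*sin(pi*v)*sin(2*pi*u),
    z = r*cos(pi*v), with theta = u and phi = v, both in [0, 1]."""
    du = (-2.0 * math.pi * r * math.sin(math.pi * v) * math.sin(2.0 * math.pi * u),
           2.0 * math.pi * r * math.sin(math.pi * v) * math.cos(2.0 * math.pi * u),
           0.0)  # z does not depend on u
    dv = (math.pi * r * math.cos(math.pi * v) * math.cos(2.0 * math.pi * u),
          math.pi * r * math.cos(math.pi * v) * math.sin(2.0 * math.pi * u),
          -math.pi * r * math.sin(math.pi * v))
    return du, dv

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def surface_normal(u, v, r=1.0):
    """Unit normal as cross(du, dv); inward-pointing for this ordering."""
    du, dv = sphere_partials(u, v, r)
    n = cross(du, dv)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n) if length > 0 else n
```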

With the above video, I'm referring to uv in terms of points in 2-space, then translated to a vertex in 3-space via a parametric equation. I just did a bit of online research and found that a uv map is a texture; I was confused, newjersyrunner, I apologize. I rely mostly on textbooks. So my main question is whether I could just calculate the normal and form the inner (dot) product: since the arc cosine of the dot product of unit normal vectors is the angle between them, the dot product of the viewing plane's direction with the surface normal would yield a negative value when that angle is obtuse.
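That sign test is essentially back-face culling. A small sketch (my own illustration, with an assumed camera looking down -z):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Assumed setup: the camera looks down -z, so the view direction
# points into the scene.
view_dir = (0.0, 0.0, -1.0)

# A surface normal pointing back toward the camera makes an obtuse
# angle with view_dir, so the dot product is negative: front-facing.
normal = (0.0, 0.0, 1.0)
front_facing = dot(view_dir, normal) < 0.0
```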

The foreground in the above video was a three-dimensional representation of the uv region mapped through a parametric equation. To do this, I took the 3-space vector and projected it onto the orthonormal basis of the viewing plane; I simply call them the up and right vectors. So if we project the point in three space onto the right vector of the viewing plane, we are given an x value for plotting to the screen, and likewise the up vector gives y.
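The projection described above can be sketched as two dot products against the plane's basis (a minimal illustration, not the poster's code; `eye`, `right`, and `up` are assumed names, and `right`/`up` are assumed orthonormal):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_to_view_plane(point, eye, right, up):
    """Project a 3-space point onto the view plane spanned by the
    orthonormal right/up vectors: the two dot products of the
    eye-relative position give the plane's (x, y) coordinates."""
    rel = tuple(p - e for p, e in zip(point, eye))
    return dot(rel, right), dot(rel, up)
```

This is an orthographic projection; a perspective projection would additionally divide by the point's distance along the view direction.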

Now, after further explanation, I would like to avoid the extra steps for mapping to the view plane. So my question involves mapping one uv region (the surface) to another uv region (the viewing plane). How would it be possible to calculate the x, y coordinates on the viewing plane, given the two parametric equations and the surface's uv region?

As far as GPU and CPU calculations go, I am calculating on the CPU right now, planning to write GLSL later to optimize the rendering. I'm thinking that my idea would optimize the GL rendering pipeline, if 3-space calculations are omitted.
 
Seems like Google would be your best bet for the math; you just don't know the terminology that you want to look for.

All that calculation of normals is handled by something called the "shader." The exact normal you get depends on the algorithm of the shader. From what it sounds like, you are using an algorithm similar to a "Gouraud shader." You can find lots of examples of that. A "Phong shader" is slightly more complex, but is what OpenGL uses by default, so it's a good starting point.

This assumes that your texture itself is perfectly flat. Bump mapping also affects normal calculation.
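For reference, the per-vertex quantity a Gouraud-style shader evaluates and then interpolates across a triangle is the clamped Lambert (cosine) term — a sketch of my own, assuming unit vectors:

```python
def lambert(normal, light_dir):
    """Clamped Lambert term: cosine of the angle between the unit
    surface normal and the unit direction toward the light,
    clamped to zero for surfaces facing away from the light."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)
```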
 
newjersyrunner, thanks for the reply; it seems like you are as lost as I am as to how to accomplish this.
 
ADDA said:
EDIT: that symbol is meant to be phi, I think that it is omega?

x = r * sin(πΦ) * cos(2πθ)
The Greek letter in the sine factor is phi (upper case).
Omega (upper case): ##\Omega## and lower case: ##\omega##
Phi (upper case): ##\Phi## and lower case: ##\phi##
##\theta## and ##\phi## (both in lower case) are commonly used in spherical coordinates and in trigonometry in general.
 
