Why Do We Minimize the Squared Length in Surface Distance Calculations?

  • Thread starter: Noxide
  • Tags: Dot, Surfaces
Noxide
A little clarification is required for the following technique.

THE TECHNIQUE
Given a surface z = f(x,y), and some point Q in R3 (not on the surface)
The point P = (x, y, f(x,y)) on the surface for which the distance from P to Q is the shortest distance from the surface to Q (i.e. the vector PQ has minimal length) is determined by minimizing the squared length of PQ (that is, PQ · PQ).

A REMARK
The next paragraph is dauntingly long; I think it asks two questions...

THE QUESTIONS
That's fine and dandy as techniques go, but I'm having trouble understanding exactly why we do that to the squared length. I can understand wanting to minimize the length... but minimizing the dot product/squared length seems foreign to me. Clearly there's some gap in my knowledge as to why this is done.

Also, we find the minimum of the new function g(x,y) = PQ · PQ by setting its first-order partial derivatives with respect to x and y equal to zero, but we then substitute those same values of x and y into the surface f(x,y)... I understand that x and y carry through, but it just seems odd that the values of x and y for which g(x,y) is a minimum (I'm not sure that's always the case, but the solution manual seems to indicate it is) will yield a point P whose distance to Q is minimal.
 
Hi Noxide! :smile:
Noxide said:
… I'm having trouble understanding exactly why we do that to the squared length. I can understand wanting to minimize the length... but minimizing the dot product/squared length seems foreign to me.

The length of PQ is defined as √(PQ.PQ).

Call that r … it doesn't matter whether we minimise (positive) r or r², they'll be at a minimum or maximum together …

i] this is obvious!
ii] alternatively, if ∂r/∂x = 0, then ∂r²/∂x = 2r ∂r/∂x = 0 …

and we choose to do it to r² because that avoids using square roots, so it's quicker and easier! :wink:

(sorry, but I don't understand your second question :confused:)
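The point above can be checked numerically. Here's a minimal Python sketch: the surface f(x, y) = x² + y² and the point Q = (1, 2, 0) are a made-up example (not from the thread), and we minimize the squared length g(x, y) = PQ · PQ by crude gradient descent, never taking a square root until the very end.

```python
import math

def f(x, y):
    # hypothetical surface z = f(x, y); any smooth surface works
    return x**2 + y**2

Q = (1.0, 2.0, 0.0)  # hypothetical point off the surface

def g(x, y):
    # squared length of PQ, where P = (x, y, f(x, y));
    # note there is no square root here, so the partials are simple
    return (x - Q[0])**2 + (y - Q[1])**2 + (f(x, y) - Q[2])**2

# crude gradient descent on g, using central finite-difference partials
x, y, h, step = 0.0, 0.0, 1e-6, 0.01
for _ in range(20000):
    gx = (g(x + h, y) - g(x - h, y)) / (2 * h)  # approx. dg/dx
    gy = (g(x, y + h) - g(x, y - h)) / (2 * h)  # approx. dg/dy
    x, y = x - step * gx, y - step * gy

# the same (x, y) that minimizes g also minimizes the true distance,
# since sqrt is increasing; take the root only once, at the end
P = (x, y, f(x, y))
dist = math.sqrt(g(x, y))
```

Because √ is an increasing function, the (x, y) found by minimizing g is exactly the (x, y) that minimizes the distance itself, which is why substituting it back into f(x, y) gives the closest point P.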
 