In particular, now I am trying to see more clearly the connections between surfaces described implicitly (F(r) = 0) or parametrically (r = r(u, v)): how and when is it possible to pass from one description to the other?
I know the answer to that one! Your question is a direct consequence of the implicit and inverse function theorems.
First, the implicit function theorem solves one direction.
Suppose you have a mapping T : R^m × R^n → R^n. Also suppose there exist vectors x0 ∈ R^m and y0 ∈ R^n such that

T(x0, y0) = 0
Aside: this is just the multidimensional generalization of your implicit surface F(r) = 0. The point of decomposing the whole vector space into a product of two subspaces is that you are signifying which variables you want to be independent and which ones you want to be dependent when you produce a parametrization.
Suppose also that the Jacobian of the transformation,

J(x, y) = |∂T(x, y)/∂y|,

is nonzero in a neighborhood of (x0, y0)
Aside: since Jacobians can be interpreted as the local scaling factor of a transformation, this guarantees that T is nondegenerate on the dependent-variable space, because it maps all local regions with nonzero hypervolume onto regions with nonzero hypervolume.
Then the implicit function theorem guarantees the existence of a mapping S : R^m → R^n such that

T(x, S(x)) = 0 near (x0, y0)
Which yields the following parametrization of your surface:
(x, y) = (t, S(t))
Aside: the guaranteed function is exactly what you'd get if you used the constraint T = 0 to solve for y in terms of x.
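To make this direction concrete, here is the standard unit-circle example worked through in the notation above (my own illustration, not part of the original question), with m = n = 1:

\[
T(x, y) = x^2 + y^2 - 1, \qquad \left|\frac{\partial T}{\partial y}\right| = |2y| \neq 0 \ \text{ near } (x_0, y_0) = (0, 1),
\]
\[
\Rightarrow \quad S(x) = \sqrt{1 - x^2}, \qquad (x, y) = \left(t, \sqrt{1 - t^2}\right).
\]

Near (1, 0), on the other hand, ∂T/∂y = 2y vanishes, and indeed you cannot solve for y in terms of x there; you would have to swap the roles of the variables instead.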
The other way, old chum, is a job for the inverse function theorem! (cue superhero music)
Suppose you have a surface parametrized by

(x, y) = (σ(t), φ(t))

(the dimension of x is the same as that of t)
Aside: again we're separating the variables into independent and dependent groups.
Suppose also that the Jacobian

J(t) = |∂σ(t)/∂t|

is nonzero in a neighborhood of t0
Then the inverse function theorem guarantees that σ is locally invertible, and we can locally rewrite the parametrization as

t = σ⁻¹(s)
(x, y) = (s, φ(σ⁻¹(s)))
Which, near (x0, y0) = (σ(t0), φ(t0)), we can write as the implicit function

0 = T(x, y) = φ(σ⁻¹(x)) − y
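And here's the same circle run in the reverse direction (again, just my illustration): parametrize the upper half of the unit circle by

\[
(x, y) = (\sigma(t), \varphi(t)) = (\sin t, \cos t), \qquad \left|\frac{\partial \sigma}{\partial t}\right| = |\cos t| \neq 0 \ \text{ near } t_0 = 0,
\]
\[
\Rightarrow \quad \sigma^{-1}(x) = \arcsin x, \qquad 0 = T(x, y) = \cos(\arcsin x) - y = \sqrt{1 - x^2} - y,
\]

which recovers exactly the implicit description we started from in the first example.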
The key to both theorems is that the underlying mappings have to be nondegenerate, which is checked via the Jacobian. Heuristically, for any nondegenerate mapping you can find a suitable subspace onto which the projection of your mapping remains nondegenerate, use that subspace as your independent variables, and apply the appropriate theorem to convert to the other representation.
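If you want to poke at this computationally, here is a rough sketch in Python with sympy that checks both directions on the circle example above (my own toy check, nothing more):

import sympy as sp

x, y, t = sp.symbols('x y t', real=True)

# Implicit -> parametric: T(x, y) = 0 with nonzero dT/dy near (x0, y0) = (0, 1)
T = x**2 + y**2 - 1
print(sp.diff(T, y).subs({x: 0, y: 1}))             # 2, nonzero, so the theorem applies
branches = sp.solve(sp.Eq(T, 0), y)                 # [-sqrt(1 - x**2), sqrt(1 - x**2)]
S = [b for b in branches if b.subs(x, 0) == 1][0]   # pick the branch through y0 = 1
print(S)                                            # sqrt(1 - x**2)

# Parametric -> implicit: (x, y) = (sin t, cos t) near t0 = 0
sigma, phi = sp.sin(t), sp.cos(t)
print(sp.diff(sigma, t).subs(t, 0))                 # 1, nonzero, so sigma is locally invertible
sigma_inv = sp.asin(x)                              # local inverse of sigma near t0 = 0
print(sp.simplify(phi.subs(t, sigma_inv) - y))      # sqrt(1 - x**2) - y, the same surface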
Did that help any, or is it too messy?
Differential Geometry is a subject I would like to learn more about, but I haven't managed to make the time to sit down and actually do more than skim through the material. I'll definitely tag along if the thread keeps moving.
P.S. I was 99.9% positive I could write [pard] as &pard; has that been removed, or am I just having a brain fart and forgetting how to spell &pard?