# Differential Geometry: Surfaces

dg
I am trying to learn a little bit more about the geometry of surfaces and some differential geometry concepts like principal, Gaussian and mean curvatures.

I have found some interesting material at mathworld.wolfram.com but, as usual, there are quite a few small incorrect details.

Is anybody willing to study a little bit of this stuff together? I am sure it can give a lot of insight into the interpretation of physical problems and equations as well.

In particular now I am trying to see better the connections between surfaces described implicitly (F(r)=0) or parametrically (r=&sigma;(u,v)): how and when is it possible to pass from one to the other description?

Everybody willing to contribute/learn is more than welcome! :)

Dario

marcus
Gold Member
Dearly Missed
Dario, please have a look at this free online diff geom textbook

http://people.hofstra.edu/faculty/Stefan_Waner/diff_geom/tc.html [Broken]

I think the link was given here in a thread that Tom started.

From your standpoint would this be a useful book?

I have also gone to Wolfram and believe I noticed some
trouble with details, though as an overall encyclopedia it is
helpful. Google often points to it, but it is not a textbook.

dg
I know this link. It looks pretty good and it is also relativity oriented.
This modern differential geometry approach is a little bit too abstract for me. I am trying to develop an intuitive, visual understanding of the formulas, and that is not the way my intuition works here. So before getting there I need to create a link between what I know from calculus about surfaces and older concepts like the fundamental forms. I do not want to jump into exterior form calculus the way Misner et al. do, because to me that is just math; I need to think of a surface in terms of its shape (curvatures), not in terms of twelve layers of abstract concepts that separate me from it.

So I would like to begin with concepts like the surface coordinate basis, the normal field, the surface metric and the other two fundamental forms, and understand the kind of information each contains and how they are related. Again, Wolfram's website is a pretty good starting point.
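To make the surface metric concrete, here is a small sympy sketch (my own example, not from any of the pages: the standard angle parametrization of the unit sphere) computing the coordinate basis {r_u, r_v} and the coefficients of the first fundamental form:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
# Unit sphere parametrized by the two angles u (longitude) and v (colatitude)
r = sp.Matrix([sp.cos(u)*sp.sin(v), sp.sin(u)*sp.sin(v), sp.cos(v)])

ru = r.diff(u)   # coordinate basis vector r_u
rv = r.diff(v)   # coordinate basis vector r_v

# First fundamental form (surface metric) coefficients E, F, G
E = sp.simplify(ru.dot(ru))
F = sp.simplify(ru.dot(rv))
G = sp.simplify(rv.dot(rv))
print(E, F, G)   # sin(v)**2 0 1
```

Here E = r_u·r_u, F = r_u·r_v, G = r_v·r_v; F = 0 says the coordinate curves of this parametrization are orthogonal.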

Hurkyl
Staff Emeritus
Gold Member
In particular now I am trying to see better the connections between surfaces described implicitly (F(r)=0) or parametrically (r=σ(u,v)): how and when is it possible to pass from one to the other description?
I know the answer to that one! Your question is a direct consequence of the implicit and inverse function theorems.

First, the implicit function theorem solves one direction.

Suppose you have a mapping T from R^m × R^n -> R^n.

Also suppose there exist vectors x₀ in R^m and y₀ in R^n such that
T(x₀, y₀) = 0

Aside: this is just the multidimensional generalization of your implicit surface F(r) = 0. The point of the decomposition of the whole vector space into a product of two subspaces is that you are signifying which variables you want to be independent variables and which ones you want to be dependent variables when you produce a parametrization.

Suppose also that the Jacobian of the transformation:
J(x) = |∂T(x, y)/∂y| is nonzero in a neighborhood of (x₀, y₀)

Aside: since Jacobians can be interpreted as the local scaling factor of a transformation, this guarantees that T is nondegenerate on the dependent variable space, because it maps all local regions with nonzero hypervolume onto regions with nonzero hypervolume.

Then the implicit function theorem guarantees the existence of a mapping S from R^m -> R^n such that:

T(x, S(x)) = 0 near (x₀, y₀)

Which yields the following parametrization of your surface:

(x, y) = (t, S(t))

Aside: the guaranteed function is exactly what you'd get if you used the constraint T = 0 to solve for y in terms of x
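As a concrete sanity check (a toy example of mine, using sympy): take the unit sphere F = x² + y² + z² − 1 = 0 and solve for z near the point (0, 0, 1), where the theorem's hypothesis ∂F/∂z ≠ 0 holds:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
F = x**2 + y**2 + z**2 - 1           # implicit surface F(r) = 0 (unit sphere)

# Hypothesis of the theorem: dF/dz is nonzero at the base point (0, 0, 1)
assert F.diff(z).subs({x: 0, y: 0, z: 1}) != 0

# Solve the constraint for z and keep the branch through z = 1;
# this is the S(x, y) the theorem guarantees, valid near the base point
S = [b for b in sp.solve(F, z) if b.subs({x: 0, y: 0}) == 1][0]
# the branch z = S(x, y) = sqrt(1 - x**2 - y**2)
assert sp.simplify(F.subs(z, S)) == 0   # (x, y, S(x, y)) lies on the surface
```

Note that the other branch (the lower hemisphere) passes through (0, 0, −1); the theorem is purely local, which is why we must pick the branch through the base point.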

The other way, old chum, is a job for the inverse function theorem! (cue superhero music)

Suppose you have a surface parametrized by
(x, y) = (σ(t), φ(t)) (the dimension of x is the same as that of t)

Aside: again we're separating the variables into independent and dependent groups.

Suppose also that the Jacobian
J(t) = |∂σ(t)/∂t| is nonzero in a neighborhood of t₀

Then the inverse function theorem guarantees that σ is locally invertible, and we can locally rewrite the parametrization by:

t = σ⁻¹(s)
(x, y) = (s, φ(σ⁻¹(s)))

Which, near (x₀, y₀) = (σ(t₀), φ(t₀)), we can write as the implicit function
0 = T(x, y) = φ(σ⁻¹(x)) - y

The key to both theorems is that the underlying mappings have to be nondegenerate, which is checked via the Jacobian. Heuristically, for any nondegenerate mapping, you can find a suitable subspace onto which the projection of your mapping remains nondegenerate, use that subspace as your independent variables, and apply the appropriate theorem to convert to the other representation.
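And a matching sketch for this direction (again a toy example of mine): the paraboloid (x, y, z) = (u, v, u² + v²), where σ = (u, v) is trivially invertible, converted back to an implicit equation T(x, y, z) = 0:

```python
import sympy as sp

u, v, x, y, z = sp.symbols('u v x y z', real=True)
sigma = sp.Matrix([u, v])        # the "independent" part of the parametrization
phi = u**2 + v**2                # the "dependent" part: z = phi(u, v)

# The Jacobian det(d sigma / dt) is 1, so sigma is invertible (globally, here)
assert sigma.jacobian([u, v]).det() == 1

# Invert sigma (u = x, v = y) and form T(x, y, z) = phi(sigma^-1(x, y)) - z
inv = sp.solve([sigma[0] - x, sigma[1] - y], [u, v], dict=True)[0]
T = phi.subs(inv) - z
print(T)   # x**2 + y**2 - z
```

For a less trivial σ the inversion would only hold locally, exactly as the theorem says, but the recipe T = φ∘σ⁻¹ − z is the same.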

Did that help any, or is it too messy?

Differential Geometry is a subject I would like to learn more about, but I haven't managed to make myself make the time to sit down and actually do more than skim through material. I'll definitely tag along if the thread keeps moving. P.S. I was 99.9% positive I could write [pard] as &pard. Has that been removed, or am I just having a brain fart and forgetting how to spell &pard?

dg
Thank you so much for now, Hurkyl. I know the theorem, but I just had not thought of applying it to this problem... I will have a more thorough look at your long post and reply soon!

I will do my best to keep this post alive... for now I have to go!

Thanks again :) Dario

climbhi
&part; It works for me, but that's because I'm using & part ; (no spaces of course). Remembering all these different spellings is hard; it drives me nuts when trying to post.

Hurkyl
One day I was in a lecture and the teacher meant to write 'm' in a proof but wrote 'n' instead. After realizing the mistake, he apologized that he "misspelled 'm'"!

I guess I'm not quite so bad, but I feel silly having used & part all this time, and then somehow switched to &pard a few days ago and hadn't been able to figure out why it wasn't working!

Thank you so much for now Hurkyl I know the theorem but I just had not thought of applying it to this problem
Glad I could help! I had presumed you knew of the theorems, but I wasn't sure so I was being overly informative.

Incidentally, I think one of the greatest realizations I got out of my advanced calc course was that for a parametric mapping σ(t) from R^m to R^n, the rank of the (usually nonsquare) matrix ∂σ/∂t corresponds exactly with our intuitive concept of the dimensionality of a surface.

And this is closely related to how tangent spaces are defined. Imagine how you would go about giving a detailed rationale of why the rank of ∂σ/∂t should correspond to our intuitive idea of dimensionality, and I bet you'll practically write down the definition of a tangent space. (I don't know how far you've already gotten into the subject, so if you've already figured out tangent spaces, ignore me.)
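A throwaway sympy illustration of that remark (both maps are my own examples): a cylinder patch genuinely uses both parameters, while a degenerate map whose image collapses onto a line loses a rank:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2', real=True)

good = sp.Matrix([sp.cos(t1), sp.sin(t1), t2])          # a cylinder: a true 2D surface
bad  = sp.Matrix([t1 + t2, 2*(t1 + t2), 3*(t1 + t2)])   # image is only a line

# The rank of the (3x2) Jacobian matches the intuitive dimensionality of the image
print(good.jacobian([t1, t2]).rank())   # 2
print(bad.jacobian([t1, t2]).rank())    # 1
```

The columns of the Jacobian are exactly the coordinate basis vectors r_{t1}, r_{t2}, so "full rank" is the same statement as "the tangent vectors are linearly independent."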

climbhi
Originally posted by Hurkyl
One day I was in a lecture and the teacher meant to write 'm' in a proof but wrote 'n' instead. After realizing the mistake, he apologized that he "misspelled 'm'"!
Only a mathematician could misspell a letter

marcus
Originally posted by dg
I am trying to learn a little bit more about the geometry of surfaces and some differential geometry concepts like principal, Gaussian and mean curvatures.

I have found some interesting material at mathworld.wolfram.com but, as usual, there are quite a few small incorrect details.

Is anybody willing to study a little bit of this stuff together? ...

....

Dario
What wolfram pages are you currently going over?
I just looked at holonomy and
Riemannian metric and some related things
at wolfram. I will try what you mentioned as keywords
(principal, Gaussian, mean curvature) later today.
Maybe you should post the URL of pages that are
focal points for you, in case people want to do what
you say, namely study some differential geometry together.

damgo
I could use a refresher on this stuff, so I'll follow the thread. The book we used was do Carmo's (he has several; the one on curves and surfaces in 3D), which was pretty good.

There is an online text at http://www.cs.elte.hu/geometry/csikos/dif/dif.html which seems to take the approach you want, too. I might suggest skipping the "hypersurface" stuff at first -- normally you do curves, surfaces, and then go straight to general manifolds.

dg
Very little time today (SOB!)

Hurkyl, I have read your post and it seems to work pretty well. My only doubt is: how restrictive is the condition of a non-zero Jacobian? It is a sufficient condition for finding a parametrization, but is it also necessary? It seems to me that the parametrization we find that way is quite special... Any thoughts about this?
I will post a more detailed reply tomorrow...

Marcus, the pages I am referring to are exactly what I have found by searching for Gaussian and mean curvature and following the related links. In particular, now I am studying the relations between the shape operator, the metric and the fundamental forms, represented either in the tangent space or in the immersion space.

Damgo, you totally got it! Awesome link, thank you! I will shop around for do Carmo's book.

I'll write you tomorrow about what I have understood so far about the fundamental forms and co., and the related doubts...

Dario

Hurkyl
Staff Emeritus
Gold Member
It is also necessary.

A zero Jacobian means that the Jacobian does not have full rank. Thus we may choose a coordinate system in which, for one of the variables we chose as "independent", the derivative of the mapping with respect to that variable is zero. By definition this does not happen for smooth surfaces, so there is no smooth parametrization of the implicit surface (in terms of the chosen variables) when the Jacobian is zero.

It does seem special, but remember we are also free to apply any smooth change of variables we like before applying the implicit function theorem, and this procedure will allow us to arrive at any smooth parametrization of the surface in question.
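The 1D analogue makes the necessity easy to see (my example, not from the thread): on the unit circle x² + y² − 1 = 0, at the point (1, 0) we have ∂F/∂y = 0 and there is no smooth branch y = f(x), while swapping the roles of the variables works fine:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
F = x**2 + y**2 - 1                  # unit circle, a 1D "surface"

# At (1, 0) the Jacobian dF/dy vanishes: no smooth y = f(x) there...
assert F.diff(y).subs({x: 1, y: 0}) == 0
# ...but dF/dx = 2 is nonzero, so solving for x in terms of y works instead:
assert F.diff(x).subs({x: 1, y: 0}) != 0
X = [b for b in sp.solve(F, x) if b.subs(y, 0) == 1][0]
# the smooth branch x = sqrt(1 - y**2) through (1, 0)
```

Geometrically, (1, 0) is exactly where the y direction is tangent to the circle, which matches the tangency reading of the Jacobian condition discussed above.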

dg
Ok today I have some time to dedicate to PF eventually :)

Let me see, Hurkyl, if I followed you right:

Originally posted by Hurkyl
I know the answer to that one! Your question is a direct consequence of the implicit and inverse function theorems.

First, the implicit function theorem solves one direction.

Suppose you have a mapping T from R^m × R^n -> R^n.

Suppose there exist vectors x₀ in R^m and y₀ in R^n such that
T(x₀, y₀) = 0

Aside: this is just the multidimensional generalization of your implicit surface F(r) = 0. The point of the decomposition of the whole vector space into a product of two subspaces is that you are signifying which variables you want to be independent variables and which ones you want to be dependent variables when you produce a parametrization.

Suppose also that the Jacobian of the transformation:
J(x) = |∂T(x, y)/∂y| is nonzero in a neighborhood of (x₀, y₀)

Aside: since Jacobians can be interpreted as the local scaling factor of a transformation, this guarantees that T is nondegenerate on the dependent variable space, because it maps all local regions with nonzero hypervolume onto regions with nonzero hypervolume.

Then the implicit function theorem guarantees the existence of a mapping S from R^m -> R^n such that:

T(x, S(x)) = 0 near (x₀, y₀)

Which yields the following parametrization of your surface:

(x, y) = (t, S(t))

Aside: the guaranteed function is exactly what you'd get if you used the constraint T = 0 to solve for y in terms of x
Ok so if I want to apply this to my special case of a surface in 3D:

I have a mapping T from R^2 × R -> R.

Also suppose there exist (X₀,Y₀) in R^2 and Z₀ in R such that
T(X₀,Y₀;Z₀) = 0

Suppose that the Jacobian of the transformation (the determinant reduces to the single entry of the matrix itself):
J(X,Y;Z) = ∂T(X,Y;Z)/∂Z is nonzero in a neighborhood of (X₀,Y₀;Z₀)

Aside: since Jacobians can be interpreted as the local scaling factor of a transformation, this guarantees that T is nondegenerate on the dependent variable (Z) space, because it maps all local regions with nonzero hypervolume (here just length, since the Z space is one-dimensional) onto regions with nonzero hypervolume (length).

Then the implicit function theorem guarantees the existence of a mapping S from R^2 -> R such that:

T(X,Y;S(X,Y)) = 0 near (X₀,Y₀;Z₀)

Which yields the following parametrization of my surface:

(X,Y;Z) = (u,v;S(u,v))

The guaranteed function is exactly what I'd get if I used the constraint T(X,Y,Z) = 0 to solve for Z in terms of (X,Y)

Now if my starting point is a surface given as F(x,y,z)=0, I can apply a smooth transformation K from R^3 -> R^3 such that (x,y,z)=K(X,Y,Z), and use the above procedure to get my parametrization.
In such a case T = F ∘ K and the condition on the Jacobian becomes

0 ≠ ∂T/∂Z = ∂F/∂(x,y,z) · ∂K/∂Z =
= [∂F/∂(x,y,z)](r) · [∂K/∂Z](K⁻¹(r))

where the first factor is a (1x3) matrix (a row: the gradient of F) and the second a (3x1) matrix (a column). Such a condition can be read as follows:
for a surface defined implicitly as F(r)=0, you can parametrize the surface with a new set of coordinates as long as the derivative of the position r with respect to the coordinate chosen as the dependent one (Z) is not orthogonal to the gradient of F (with respect to r); that is, the derivative of r with respect to Z cannot be tangent to the surface!

Hurkyl
Exactly right!

To put it differently, the surface F(x, y, z) = 0 can be written locally as Z = f(X, Y) if and only if at each point on the surface, the vector in the Z direction is not tangent to the surface.

dg
Now, Hurkyl, let me see if I can get an implicit definition of a surface from a parametric one:
Originally posted by Hurkyl
Suppose you have a surface parametrized by
(x, y) = (σ(t), φ(t)) (the dimension of x is the same as that of t)

Aside: again we're separating the variables into independent and dependent groups.

Suppose also that the Jacobian
J(t) = |∂σ(t)/∂t| is nonzero in a neighborhood of t₀

Then the inverse function theorem guarantees that σ is locally invertible, and we can locally rewrite the parametrization by:

t = σ⁻¹(s)
(x, y) = (s, φ(σ⁻¹(s)))

Which, near (x₀, y₀) = (σ(t₀), φ(t₀)), we can write as the implicit function
0 = T(x, y) = φ(σ⁻¹(x)) - y

The key to both theorems is that the underlying mappings have to be nondegenerate, which is checked via the Jacobian. Heuristically, for any nondegenerate mapping, you can find a suitable subspace onto which the projection of your mapping remains nondegenerate, use that subspace as your independent variables, and apply the appropriate theorem to convert to the other representation.
I have a surface parametrized by

(X,Y;Z) = (σ(u,v); φ(u,v))

The jacobian

J(u,v) = det(∂σ/∂(u,v))

is nonzero in a neighborhood of (u₀,v₀).

Then the inverse function theorem guarantees that σ is locally invertible, and we can locally rewrite the parametrization by:

(u,v) = σ⁻¹(s)
(X,Y;Z) = (s; φ(σ⁻¹(s)))

Which, near (X₀,Y₀;Z₀) = (σ(u₀,v₀); φ(u₀,v₀)), we can write as the implicit function

0 = T(X,Y;Z) = φ(σ⁻¹(X,Y)) - Z

Once again I want to get to a more general form of things: in particular I will have a starting parametrization given as

(x,y,z) = Σ(u,v)

Then I will use a smooth change of variables

(X,Y;Z) = Λ(x,y,z) = (Λ_XY(x,y,z); Λ_Z(x,y,z))

so that the surface in the new coordinates will be

(X,Y;Z) = (Λ_XY(Σ(u,v)); Λ_Z(Σ(u,v)))

which, compared with our initial form of the parametrization, gives

Λ_XY ∘ Σ = σ

for the condition on the non-zero Jacobian we'll have

J(u,v) = det(∂σ/∂(u,v)) = det( ∂Λ_XY/∂(x,y,z) · ∂Σ/∂(u,v) )

(the columns of ∂Σ/∂(u,v) are the coordinate basis vectors r_u and r_v, which ∂Λ_XY/∂(x,y,z) projects onto the new X and Y directions)

which can be read as a requirement that the projections of the X and Y directions of the new coordinates on the surface are linearly independent.

We get to a general form of the implicit equation of the surface:

0 = T(X,Y,Z) = (T ∘ Λ)(x,y,z) =
= φ(σ⁻¹(Λ_XY(r))) - Λ_Z(r) =
= (Λ_Z ∘ Σ) ∘ (Λ_XY ∘ Σ)⁻¹ ∘ Λ_XY(r) - Λ_Z(r)

where (I believe) the invertibility of Λ does not guarantee the invertibility of Λ_XY, but our requirement (non-zero Jacobian) guarantees the invertibility of Λ_XY ∘ Σ.

Hurkyl
I'm a little tired ATM, so I didn't dare pore carefully over your derivations for correctness, for fear I would make a mistake one way or another.

However, I believe your conclusions are correct.

which can be read as a requirement that the projections of the X and Y directions of the new coordinates on the surface are linearly independent.
That sounds right. Since, intuitively, we're reparametrizing the surface by X and Y, we need to guarantee that (X, Y) space maps locally 1-to-1 onto the surface, and that linear independence gives us the guarantee.

Your final equation seems correct as well; one direction of the iff condition can be checked by substituting r = Σ(u, v). The smoothness of Λ in concert with the original parametrization seems to guarantee the other direction.

You are correct that the invertibility of Λ does not guarantee the invertibility of Λ_XY... examples can be made by considering cases where the quoted conditions are not met.

dg
Let us resume the discussion...

Today's subject will be curvatures and related matrices/operator/tensors...

For now the most fascinating object I have found in the study of differential geometry is the so-called shape operator S
(for its definition I refer you to mathworld.wolfram.com).

What I have tried to clarify so far is its matrix representation:
it is pretty clear how to deal with it once you have a parametric representation of the surface but not as much when we have a surface given in implicit form.

Heuristically, the shape operator collects the information about the derivatives of the normal along directions tangent to the surface. If we think of the theory of (plane) curves it is clear how this connects to curvature: we differentiate the normal along the curve, and what we obtain is a vector whose length equals the local curvature (not so in three dimensions, by the way...).

In terms of the so-called fundamental forms, the intrinsic 2x2 matrix representation of S is easily obtained by:

S = IFF⁻¹ · IIFF

which yields the so-called Weingarten equations.

The first fundamental form (IFF) is the metric tensor of the surface, and is simply related to the change-of-coordinates matrix

Λ = (Λ_kk') = (∂r/∂(u,v))^T = (x_k,k')
(that is, the Cartesian components of the coordinate basis {r_u, r_v} written along the rows)

IFF = G = ΛΛ^T

while the second fundamental form is represented by the components of the derivatives of the coordinate basis {r_u, r_v} along the normal:

IIFF = (r_i'j' · n)

Now the shape operator has the beautiful property that its eigenvalues are the principal (normal) curvatures of the surface, so that its determinant is the Gaussian curvature and its trace is twice the mean curvature.
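Here is how that computation looks in sympy for a sphere of radius 2 (my own worked example; for the sphere the outward unit normal is simply r/R, which spares us normalizing r_u × r_v). With this orientation the principal curvatures come out as −1/2, so K = 1/4 and H = −1/2; flipping the normal flips the sign of H but not of K:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
R = 2
r = R * sp.Matrix([sp.cos(u)*sp.sin(v), sp.sin(u)*sp.sin(v), sp.cos(v)])

ru, rv = r.diff(u), r.diff(v)
n = r / R                                   # outward unit normal (sphere-specific shortcut)

# First and second fundamental forms as 2x2 matrices
IFF  = sp.Matrix([[ru.dot(ru), ru.dot(rv)], [rv.dot(ru), rv.dot(rv)]])
IIFF = sp.Matrix([[r.diff(u, u).dot(n), r.diff(u, v).dot(n)],
                  [r.diff(v, u).dot(n), r.diff(v, v).dot(n)]])

S = sp.simplify(IFF.inv() * IIFF)           # shape operator S = IFF^-1 . IIFF
K = sp.simplify(S.det())                    # Gaussian curvature = product of eigenvalues
H = sp.simplify(S.trace() / 2)              # mean curvature = half the trace
print(K, H)                                 # 1/4 -1/2
```

For a general parametrized surface one would take n = (r_u × r_v)/|r_u × r_v| instead of the shortcut above; everything else stays the same.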

What if I want to retrieve this information from a 3x3 representation of this operator, (-grad n)^T? Can I go straight ahead and calculate the trace and determinant of this matrix, and expect that their restriction to the surface points will return the mean and Gaussian curvatures?

I guess so, but I have neither checked it nor been able to prove it yet. Ideas, anybody??
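For what it's worth, a quick symbolic check on the unit sphere (my example, with the normal field n = r/|r| extended off the surface) suggests the trace does give twice the mean curvature, but the determinant of the full 3x3 matrix cannot give the Gaussian curvature: since |n| = 1 everywhere, every directional derivative of n is orthogonal to n, so the matrix always has a nontrivial kernel and its determinant vanishes identically. At least in this example, K is recovered instead by the second invariant (the sum of the 2x2 principal minors):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.Matrix([x, y, z])
rho = sp.sqrt(x**2 + y**2 + z**2)
n = r / rho                          # unit normal field of the unit sphere

M = -n.jacobian([x, y, z])           # the 3x3 candidate matrix (-grad n)
P = M.subs({x: 0, y: 0, z: 1})       # restrict to the surface point (0, 0, 1)

print(P.trace())                     # -2 : twice the mean curvature (outward normal)
print(P.det())                       # 0  : NOT the Gaussian curvature K = 1
# The second invariant (sum of the 2x2 principal minors) does recover K:
I2 = sum(P.minor(i, i) for i in range(3))
print(I2)                            # 1
```

The signs here match the outward-normal convention (unit sphere: principal curvatures −1, H = −1, K = 1); I have not tried to prove this works for a general surface.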