# Intro to differential forms

a couple of weeks ago there was a thread over there about stokes' theorem. one of the prerequisites for learning how to calculate with stokes' theorem is a knowledge of differential forms. maxwell's equations can then be formulated in a coordinate-independent way using differential forms, elucidating their dependence on geometry. differential forms are also heavily used in describing yang-mills gauge theory.

i'm hoping to keep the prerequisites for this thread to a minimum. you definitely need to know some calculus to follow: you have to know what a derivative is, and you should probably know a little multivariable calculus, basically what a partial derivative is.

also, some familiarity with vector spaces is useful, although everything we need to know about vector spaces is included somewhere. if you are familiar with vectors in $$\mathbb{R}^3$$, and the vector cross product, that will probably suffice. oh and i assume that you know what it means for vectors to be linearly independent, and what a basis of a vector space is. those concepts could certainly be explained here, if anyone wishes.

i do make quite a few references to the concept of a manifold throughout. i don't expect you to know the technical definition of a manifold. we won't need it here, and it is not really taught in any undergraduate math or physics curriculum, as far as i know. so let me just describe generally what i mean when i say manifold.

a manifold is basically a generalization of $$\mathbb{R}^n$$. for example, $$\mathbb{R}^n$$ is itself a manifold, albeit a flat one, but we want to extend our idea of a space to include curved spaces, so lemme just give a few examples: a parabola is a curved 1-dimensional manifold that extends to infinity. a circle is a 1-dimensional manifold that closes back on itself. a sphere is a 2-dimensional manifold; in fact, any surface you would care to think of is a manifold. a manifold is just a space that is not necessarily flat. that is about all we need to know about them.

i encourage anyone who's interested in learning this stuff to ask questions about any parts of this thread that are unclear. or, if you already know this stuff, correct me if you find any mistakes.

Euclidean Vectors

i am going to assume that you are a little familiar with euclidean vectors. a euclidean vector is an arrow between two points. it has direction and magnitude[1]. mathematically, we can specify a vector in euclidean space with a pair of points, and let the vector be the arrow directed from one point to the other. or you can assume that the first point is always the origin, and specify the vector with just a single point. by doing this, you are essentially moving the vector from its basepoint to the origin. this is possible because euclidean space is both a manifold and a vector space.

this won't be true when we move to non-euclidean manifolds. for example, there is no sensible way to make the points on a sphere into a vector space: there is no sensible way to define addition on those points.

[1] well, the vectors don't have magnitude or direction until we endow the space with a metric. almost everything we are going to talk about here is independent of the metric, and we will not need to specify one on our space. roughly, working with metric-dependent quantities is differential geometry, while working with the more general metric-independent quantities is differential topology. if you don't know what any of this means, ignore it.

now, hopefully we are all pretty comfortable with what a normal euclidean vector is. it's basically just an arrow between two points. it has a magnitude and a direction, right? euclidean space also comes endowed with a way to calculate the length of a vector: the pythagorean theorem. hopefully we're all familiar with this concept. if anyone wants to hear a little more about euclidean vectors, just holler, and we'll go over it.

This should be interesting! Keep it up!

Tom Mattson
Prerequisite Review Sheet

Originally posted by lethe
oh and i assume that you know what it means for vectors to be linearly independent,
Vectors $$\mathbf{v}_i$$ (i=1,2,3,...) in $$\mathbb{R}^n$$ are linearly independent iff

$$a_1\mathbf{v}_1+a_2\mathbf{v}_2+a_3\mathbf{v}_3+\cdots=\mathbf{0}$$

implies that

$$a_1=a_2=a_3=\cdots=0$$

and what a basis of a vector space is.
A set of vectors $$\{\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3,\ldots\}$$ is a basis for a vector space V iff

1. $$\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3,\ldots$$ span* V.
2. $$\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3,\ldots$$ are linearly independent.

*$$\mathrm{span}(\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3,\ldots)=\{a_1\mathbf{v}_1+a_2\mathbf{v}_2+a_3\mathbf{v}_3+\cdots \mid a_i\in\mathbb{R}\}$$
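These definitions can be checked mechanically: a set of vectors is linearly independent iff the matrix having them as rows has rank equal to the number of vectors. Here is a rough Python sketch (my own illustration, not part of the original thread), using exact rational arithmetic to avoid round-off:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) by Gaussian elimination
    over the rationals, so there is no floating-point fuzz."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        # find a pivot at or below row r in this column
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # eliminate everything below the pivot
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# three vectors in R^3: independent iff the rank is 3
print(rank([[1, 0, 0], [1, 1, 0], [1, 1, 1]]))  # 3 -> independent
print(rank([[1, 2, 3], [2, 4, 6], [0, 0, 1]]))  # 2 -> dependent
```

Since n independent vectors in an n-dimensional space automatically span it, a rank-n result here also certifies a basis.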

Tom Mattson
A note on Euclidean vectors.

Originally posted by lethe
now, hopefully we are all pretty comfortable with what a normal euclidean vector is. it s basically just an arrow between two points. it has a magnitude and a direction. right?
I don't know if this is going to be an issue with what you are going to bring up later, but when I teach special relativity I try to get the students to stop thinking of vectors in this way, because the "magnitude and direction" definition of a vector is only good for Euclidean space.

When something is said to be a "vector", one has to specify a set of transformations with respect to which that object is a vector. In the case of Euclidean 3-space, that set of transformations is rotations and parity.

Definition: A vector in Euclidean 3-space (E3) is a mathematical object that transforms under rotations R and parity $$\Pi$$ as follows.

$$\mathbf{x}\to\mathbf{x}'=R\mathbf{x}$$
$$\mathbf{x}\to\mathbf{x}'=\Pi\mathbf{x}=-\mathbf{x}$$

where R is an orthogonal matrix ($$R^TR=1$$). Orthogonality is important because the norm of the vector must be preserved under the rotation.

Explicitly, we must have:

$$\mathbf{v}'\cdot\mathbf{v}'=\mathbf{v}\cdot\mathbf{v}$$

In terms of row and column vectors ($$\mathbf{v}^T$$ and $$\mathbf{v}$$, respectively):

$$\mathbf{v}'^T\mathbf{v}'=\mathbf{v}^TR^TR\mathbf{v}$$

For the equality of the inner products to hold, we can see that we must have $$R^TR=1$$.
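As a concrete illustration of the orthogonality condition (my own sketch, not from the original post), one can build a 2D rotation matrix and check both that R transposed times R is the identity and that the norm of a vector is unchanged:

```python
import math

def rot(theta):
    """2x2 rotation matrix, as nested lists."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

R = rot(0.7)
RtR = matmul(transpose(R), R)   # should be the 2x2 identity
v = [3.0, -4.0]
vp = apply(R, v)                # the rotated vector

# the norm (and hence every inner product) is preserved
print(sum(x * x for x in v), sum(x * x for x in vp))  # both ~25.0
```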

IMO, when vectors are defined in terms of transformations, the extension to other vector spaces and to higher rank tensors in the same vector space is most natural.

Lethe, if you don't mind, could you wait to post the next section for another day? I would like to pick a few exercises out of my linear algebra to reinforce this stuff.


Originally posted by Tom
I don't know if this is going to be an issue with what you are going to bring up later, but when I teach special relativity I try to get the students to stop thinking of vectors in this way, because the "magnitude and direction" definition of a vector is only good for Euclidean space.
yes, i do want to de-emphasize the notion of a vector as an arrow with a direction. in the next post, i will write down the definition of an abstract vector space, and ask the reader to abandon any preconceptions about vectors as arrows, and to think of a vector as a mathematical object obeying certain algebraic rules.

When something is said to be a "vector", one has to specify a set of transformations with respect to which that object is a vector. [...]

IMO, when vectors are defined in terms of transformations, the extension to other vector spaces and to higher rank tensors in the same vector space is most natural.
well, i'm not sure that i want to emphasize a definition that relies on transformations; we want to delay any introduction of coordinates and metrics/inner products as much as possible. orthogonality relies on the metric, so i don't want to talk about it. transformation rules for vectors rely on the introduction of coordinates on the manifold, so i don't want to talk about that either, at least to start. i want to define, e.g., tangent vectors to a manifold without any reference to local coordinates, and then derive the transformation rule under coordinate transformations, including rotations.

the definition of a (contravariant) vector as an object which transforms one way, and of a covariant vector as one that transforms another way, is what i was taught when i first learned GR. i don't like it and am not going to use it. those rules are derivable once you choose local coordinates, and should not be considered fundamental to the definition of the vector.

also, there is confusion about the terms contravariant and covariant, and to add to it, i am going to define them oppositely to most textbooks. then, to alleviate some of the confusion, i will agree to never use those terms again.

Lethe, if you don't mind, could you wait to post the next section for another day? I would like to pick a few exercises out of my linear algebra to reinforce this stuff.
of course.

Hurkyl
I don't know if this is going to be an issue with what you are going to bring up later, but when I teach special relativity I try to get the students to stop thinking of vectors in this way, because the "magnitude and direction" definition of a vector is only good for Euclidean space.
Interesting; I was going to suggest exactly the opposite: that students think of vectors as bound to a point in space and having a direction and magnitude, to quell the notion of a vector as a displacement, and to emphasize that we cannot slide the arrows around as we can in Euclidean space.

IMHO, the magnitude and direction interpretation helps with the understanding of a tangent space. At least it helped me understand a tangent space when I was trying to figure out what it was.

Tom Mattson
Examples and Exercises

Here are two worked examples and one exercise to reinforce the prerequisites that were touched on earlier. This is not meant to be comprehensive; it is only meant to show you how the definitions I posted earlier ("Prerequisite Review Sheet") are used.

Linear Independence
Example:
Determine whether the set {v1,v2,v3} is linearly independent or linearly dependent, where

$$\mathbf{v}_1^T=[1\;\;2\;\;3]$$
$$\mathbf{v}_2^T=[2\;\;{-1}\;\;4]$$
$$\mathbf{v}_3^T=[0\;\;5\;\;2]$$

Solution:
We must determine whether the vector equation:

a1v1+a2v2+a3v3=0

has a nontrivial solution (where 0 is the zero vector).

Note that the above is equivalent to Va=0, where V is the 3x3 matrix [v1,v2,v3]. The augmented matrix [V|0] for the system is:

Code:
        [1  2  0 | 0]
[V|0] = [2 -1  5 | 0]
        [3  4  2 | 0]
This reduces to:

Code:
        [1  2  0 | 0]
[V|0] = [0 -5  5 | 0]
        [0  0  0 | 0]
Backsolving, we get:

a1 = -2a3
a2 = a3
a3 is arbitrary.

Let a3=1, which gives a1=-2 and a2=1, and we have:

-2v1+v2+v3=0

Thus, the set {v1,v2,v3} is linearly dependent.
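A quick way to sanity-check the result (a sketch of my own, not part of the original example) is to plug the coefficients back into the linear combination:

```python
v1 = [1, 2, 3]
v2 = [2, -1, 4]
v3 = [0, 5, 2]

# the dependence relation found above: -2*v1 + v2 + v3 = 0
combo = [-2 * a + b + c for a, b, c in zip(v1, v2, v3)]
print(combo)  # [0, 0, 0]
```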

Spanning Sets
Example:
In R3 (regular Euclidean 3-space) let S={u1,u2,u3}, where

$$\mathbf{u}_1^T=[1\;\;{-1}\;\;0]$$
$$\mathbf{u}_2^T=[{-2}\;\;3\;\;1]$$
$$\mathbf{u}_3^T=[1\;\;2\;\;4]$$

Determine whether S is a spanning set for R3.

Solution:
We must determine whether an arbitrary vector v in R3 can be constructed as a linear combination of the ui. In other words, we must determine whether the equation

a1u1+a2u2+a3u3=v

always has a solution. Note that the above is equivalent to the system Ax=v, where A is the 3x3 matrix [u1,u2,u3]. The augmented matrix [A|v] is:

Code:
        [ 1 -2  1 | a]
[A|v] = [-1  3  2 | b]
        [ 0  1  4 | c]
Solving this system yields:

Code:
        [1  0  0 | 10a+9b-7c]
[A|v] = [0  1  0 | 4a+4b-3c ]
        [0  0  1 | -a-b+c   ]
Thus, our original vector equation indeed always has a solution, and so S is a spanning set for R3.
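The coefficient formulas in the last column can be sanity-checked (my own addition, not part of the original example) by plugging them back into the linear combination and confirming that an arbitrary v is recovered:

```python
u1 = [1, -1, 0]
u2 = [-2, 3, 1]
u3 = [1, 2, 4]

def combine(a, b, c):
    """Rebuild v = (a, b, c) from the coefficients read off the
    reduced augmented matrix."""
    a1 = 10 * a + 9 * b - 7 * c
    a2 = 4 * a + 4 * b - 3 * c
    a3 = -a - b + c
    return [a1 * x + a2 * y + a3 * z for x, y, z in zip(u1, u2, u3)]

print(combine(5, -3, 2))   # [5, -3, 2]
print(combine(1, 0, 0))    # [1, 0, 0]
```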

Basis
Exercise
Use the definition of a basis to determine whether S in the above Example is a basis of R3.

Lethe, that's all I wanted to say. The floor is yours.


Originally posted by Tom

Lethe, that's all I wanted to say. The floor is yours.
should we let someone solve your exercise before we move on?

Tom Mattson
All that remains to be done on that is to check for linear independence, and I gave an example of that already. I don't think it's necessary to have the solution posted, but this is your show, so you can decide.

Abstract Vector Spaces

The notion of vector arithmetic and linear spaces turns out to be very useful in many different areas of mathematics and physics. so let's write down those mathematical properties of vector spaces that make them useful, and forget any notion of vectors as arrows with direction and magnitude. let v and w be vectors, and here i mean it in the abstract sense.

in other words, they are just elements of a set whose members i am going to call vectors. they are not necessarily arrows. let a and b be real numbers[2]. these are the properties that the vectors must satisfy to be called a vector space.

Abelian Group Properties

1. v+w = w+v, i.e. vector addition is commutative.
2. (v+w)+u = v+(w+u), i.e. vector addition is associative.
3. there is a vector 0 in my set such that v+0 = v. in other words, there is a zero vector.
4. for any vector v, there is a vector -v such that v+(-v) = 0.

Distributivity and Associativity

1. (a+b)v = av+bv
2. a(v+w) = av+aw
3. a(bv) = (ab)v
4. 1v = v

this all seems a little abstract, but i assure you, the abstraction will pay off, when we can use all the theorems we know for vectors for all kinds of things that look nothing like our euclidean arrows with magnitude and direction.

any set of objects which satisfy these axioms, together with the set of numbers, is called a vector space over those numbers.

[2] to be abstract and general, i do not need to require that a and b be real numbers: they can be members of any field. in fact, most, but not all, of the useful properties of a vector space also hold if i use a ring instead of a field; the resulting structure is called a module instead of a vector space. if you don't know what any of this means, ignore it.
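to see that the axioms really do cover things that are not arrows, here is a little python sketch (my own illustration; the function names are made up) treating polynomials as vectors: addition and scaling act on coefficient lists, and the axioms can be spot-checked:

```python
# a polynomial is represented by its coefficient list [a0, a1, a2, ...]
def padd(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pscale(c, p):
    return [c * a for a in p]

v = [1, 2]       # the polynomial 1 + 2x
w = [0, 0, 3]    # the polynomial 3x^2
a, b = 2, 5

# spot-check a few of the axioms
print(padd(v, w) == padd(w, v))                              # v+w = w+v
print(pscale(a + b, v) == padd(pscale(a, v), pscale(b, v)))  # (a+b)v = av+bv
print(pscale(a, pscale(b, v)) == pscale(a * b, v))           # a(bv) = (ab)v
```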

Yikes, lethe, is this thread intended for those of us who haven't taken linear algebra yet, or is that another prerequisite that you neglected to mention? I would like to try to follow it, but it looks like it will require a fair amount of "outside" reading.

Please try to clarify the concept of a manifold. You said that "a manifold is just a space that is not necessarily flat." I guess that doesn't really help me until I fully understand the concept of a flat space. I mean, it's clear enough that a plane is a 2 dimensional flat space (I hope), but what is a flat 3-space, a flat 4-space, etc.?

The examples you gave of curved lines and curved surfaces don't really convey the essence of "manifoldness" (whatever that is). Presumably a space is not necessarily flat. So what distinguishes a manifold from a space?

Next problem:
what course would cover rotations, parity and orthogonality? I haven't come across any of these terms before (at least not in this context). Should I be trying to read about them now, or is that unnecessary?

Also, your summary of the properties that a set of vectors must have to be called a vector space is clear enough, but can you define or explain the concept of a vector space in words?

Aha, finally a rather clear explanation of what a module is. Thanks. This is proving to be rather refreshing, going over some principles but also learning new stuff!

Originally posted by gnome
Yikes, lethe, is this thread intended for those of us who haven't taken linear algebra yet, or is that another prerequisite that you neglected to mention? I would like to try to follow it, but it looks like it will require a fair amount of "outside" reading.
well, this is supposed to be a suicidal crash course in linear algebra, covering all the prerequisites. i don't know if that is too ambitious a hope, but at least you're trying. that is promising.

Please try to clarify the concept of a manifold. You said that "a manifold is just a space that is not necessarily flat." I guess that doesn't really help me until I fully understand the concept of a flat space. I mean, it's clear enough that a plane is a 2 dimensional flat space (I hope), but what is a flat 3-space, a flat 4-space, etc.?
OK, yes, a plane is a flat 2-dimensional space. an example of a non-flat 2-dimensional space would be the surface of a sphere, or a torus (doughnut).

also, a flat 1-dimensional space is just a straight line, whereas a non-flat 1-dimensional space could basically be any curve, like a circle.

flat 3-dimensional space would just be R3: a big straight volume. draw the x-, y-, and z-axes; if they go straight off to infinity in each direction, then the space is flat. so it doesn't mean flat in the sense of flat like a pancake. it still has volume. i just mean flat as in not curvy. flat means that the pythagorean theorem still holds (the pythagorean theorem is not true on the surface of a sphere, in case you didn't know).
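to put a number on that parenthetical: on the unit sphere, a right triangle with legs a and b obeys the spherical pythagorean theorem cos c = cos a cos b rather than c² = a² + b². a quick numerical check (my own sketch, not part of the original discussion):

```python
import math

# legs of a right triangle on the unit sphere (lengths in radians)
a, b = 0.8, 0.6

# right-angle vertex at (1,0,0); one leg runs toward y, the other toward z
B = (math.cos(a), math.sin(a), 0.0)
C = (math.cos(b), 0.0, math.sin(b))

# hypotenuse = great-circle distance between B and C
c = math.acos(sum(x * y for x, y in zip(B, C)))

print(c)                 # spherical hypotenuse, about 0.96
print(math.hypot(a, b))  # flat-space prediction: exactly 1.0
# the spherical pythagorean theorem holds instead: cos c = cos a cos b
```

for very small triangles the two answers nearly agree, which is exactly the "looks flat up close" idea.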

but what do i mean by curvy (non-flat)? what is an example of a curvy 3-dimensional space? well, this is a bit tricky to explain. recall that my examples of non-flat 2-dimensional spaces were surfaces drawn in R3. to draw your space, you always need some Rn where n is more than the dimension of the space, because the space needs extra room to bend. when you bend a line into a circle, you need the second dimension to bend into, even though your space (the circle) is only 1-dimensional. when you bend a plane into a sphere, you need the third dimension to bend around, even though the space (the sphere) is only 2-dimensional. so a non-flat 3-dimensional space can only fit in some Rn if n is 4 or more.

problem is, i don't know how to draw that kind of thing. i can't draw R4; i can't even imagine R4. it's very hard to imagine what is meant by a non-flat 3-dimensional space, so the best advice i can give you is to use your 2-dimensional analogies and generalize them in your mind. just think of a 3-dimensional space, with 3 coordinate axes that bend and all meet back on themselves. this is the 3-sphere, the 3-dimensional version of the sphere.

if you can swallow that story for 3 dimensional spaces, it is no harder to go to higher dimensions.

The examples you gave of curved lines and curved surfaces don't really convey the essence of "manifoldness" (whatever that is). Presumably a space is not necessarily flat. So what distinguishes a manifold from a space?
not much. for example, flat space is itself a manifold: the line and the plane are both manifolds. but a manifold is not necessarily flat, so the circle and the sphere are also manifolds, even though they are not flat spaces. what makes them manifolds is this: if you are standing very, very close to the sphere, and you forgot your glasses, and you don't look around you but just at the one point right on top of your nose, then it will look like flat space, and you might not realize that it is actually a sphere.

kind of like planet earth. she is actually a sphere, but from where i'm standing, she looks pretty flat.

Next problem:
what course would cover rotations, parity and orthogonality? I haven't come across any of these terms before (at least not in this context). Should I be trying to read about them now, or is that unnecessary?
hmmm... well, i might introduce orthogonality at some point in this thread; i'm not sure. rotations and parity, probably not. oh, i see, you're referring to the stuff tom was saying to explain what a vector is? i don't think that stuff is too important for now, but there is some disagreement as to the best way to introduce vectors.

so one way to understand what a vector is, is to understand its geometric properties. that is why tom is talking about rotations and orthogonality.

by the way, these concepts are not hard. don't be put off by tom's rather advanced notation of matrices and transposes and such.

rotation means exactly what you think it means: rotation. if i have a vector that points east and i rotate it 90 degrees clockwise, it will be pointing south. a parity transformation means reflection through a mirror. if my mirror is along the x-axis and my vector is pointing south, i'll end up with one pointing north, right? in neither case does the magnitude of the vector change, only the direction.

the set of all such transformations is called the orthogonal group. never mind for now what a group is, or why this group is called orthogonal. i will just mention that orthogonal is another word for perpendicular.

another way to understand vectors is to understand their algebraic properties. that is what i was talking about with the axioms and such. this approach is more general (it includes a broader class of vectors), but it is less intuitive.

more on this below.

Also, your summary of the properties that a set of vectors must have to be called a vector space is clear enough, but can you define or explain the concept of a vector space in words?
OK, so let me take another stab at it. do you know what a vector is? the starting place for understanding vectors is arrows in R3. they have magnitude and direction. you can rotate them and reflect them. you can also add them and scale them.

well, you know what? that's about all there is to a vector space. a vector space is just a bunch of guys that you can add, scale, and transform. those algebraic axioms tell you exactly how to do arithmetic with these guys, and they should look like just your regular old rules of arithmetic.

there is one type of arithmetic on vectors that i never mentioned, and this is important, so take note: nowhere does a vector space allow you to multiply 2 vectors. you can add vectors, or scale them (scaling is just multiplication by a number, a scalar: multiply a vector by 2 and you get a new vector that is twice as long). we will eventually get to some notions that are like multiplications of two vectors, but they will be kinda funny. we'll get there.

OK, so i hope this was helpful. if it was not, feel free to let me know what needs to be explained a little better. after all, this thread is here for you, not for the people who already know this stuff!! thanks for the interest.

Originally posted by On Radioactive Waves
lethe, will this basically be the same as that other thread, or have you revised it at all?
i am copy-pasting from the other forum, so yes, it will be identical. when i write new entries, i will put them in both threads, but obviously the discussions and question-answers will not be the same.

Thanks, lethe. I understand your explanation of flat vs. curvy spaces, but I'm still hung up on this manifold concept. If a manifold is simply a space that either is actually flat or is curvy but up close looks flat, then a Lilliputian and a Brobdingnagian might disagree as to whether a particular space is a manifold. That's not a problem?

As to vectors, you said "nowhere does a vector space allow you to multiply 2 vectors". So, the vector cross-product doesn't apply to this discussion?

Originally posted by gnome
If a manifold is simply a space that either is actually flat or is curvy but up close looks flat, then a Lilliputian and a Brobdingnagian might disagree as to whether a particular space is a manifold. That's not a problem?
not a problem. it doesn't have to look flat to everyone to be a manifold; it only has to look flat to someone who is really, really close. and how close you would have to be might depend on how curvy the space is: a sharply turning curve only looks flat if you're super close, whereas for a broadly turning curve you don't have to be so close.

this all sounds rather vague, but i assure you, these notions can be made completely precise.

As to vectors, you said "nowhere does a vector space allow you to multiply 2 vectors". So, the vector cross-product doesn't apply to this discussion?
that's right. nor do vector dot products. however, we will meet those beasties eventually. a vector dot product is an example of something called an inner product, and a vector space with this additional structure is called an inner product space. once you have an inner product, you can use words like orthogonal to describe two perpendicular vectors (two vectors are orthogonal iff their inner product is zero), but not before then.

the vector cross product is an example of something called a Lie bracket. vector spaces with vector products like this are called algebras, and if the vector product is a Lie bracket, the algebra is a Lie algebra. there is an active discussion of this in the group theory for dummies thread, so i won't say any more about it here.

the point of the story is: the definition of a vector space does not include any way to multiply vectors, but it includes basically everything else you want to do with a vector. that does not mean a vector space can't additionally have a product; it just means we're not talking about it.
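for the curious, here is a small python check (my own sketch, not part of the original discussion) that the cross product really does behave like a Lie bracket: it is antisymmetric and satisfies the jacobi identity:

```python
def cross(u, v):
    """The vector cross product in R^3."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def neg(u):
    return [-a for a in u]

u, v, w = [1, 2, 3], [4, 0, -1], [2, 5, 1]

# antisymmetry: u x v = -(v x u)
print(cross(u, v) == neg(cross(v, u)))  # True

# jacobi identity: u x (v x w) + v x (w x u) + w x (u x v) = 0
jacobi = add(add(cross(u, cross(v, w)),
                 cross(v, cross(w, u))),
             cross(w, cross(u, v)))
print(jacobi)  # [0, 0, 0]
```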

The Dual Space

a linear functional is simply a function that takes vectors as input and spits out numbers as output. it should also be linear; that is to say, σ is a linear functional on a vector space if and only if
$$\sigma (a\mathbf{v}+b\mathbf{w}) = a\sigma(\mathbf{v})+b\sigma(\mathbf{w})$$
where a and b are numbers, and v and w are members of the (abstract, i.e. not necessarily arrows) vector space.

now, the reason i talked at length about abstract vectors is that i want you to be comfortable with the fact that something doesn't have to be an arrow for me to want to call it a vector. polynomials are vectors in the abstract sense, and so are linear functionals. this fact is of paramount importance for our purposes in this thread.

i will ask for a volunteer to show that the set of linear functionals on a given vector space is itself a vector space. it's not too hard; just check the vector space axioms given above.

the set of linear functionals on a vector space V is called the dual space to that vector space, and it is denoted with the symbol V*.

OK, so if you believe me that the dual space of a vector space is itself a vector space, then you know that the dual space must have a basis. in fact, there is a special basis for the dual space, called the dual basis, that i want to look at now.

suppose we are given a basis $$\{\mathbf{e}_\mu\}$$ for our original vector space V. this induces a natural choice of basis $$\{\sigma^\nu\}$$ for the dual space V*, determined by
$$\sigma^\nu(\mathbf{e}_\mu) = \delta^\nu_\mu \qquad (1)$$
in other words, for each basis vector, there exists exactly one linear functional that takes that basis vector to the number 1, and takes every other basis vector to the number 0. these linear functionals form a basis of the dual space, that we will have occasion to use.

the μ in $$\{\mathbf{e}_\mu\}$$ is just a label that runs from 1 to n, where n is the dimension of the vector space, so that is a set of n independent vectors. likewise, the ν in $$\{\sigma^\nu\}$$ is an index that runs from 1 to n, where n is the dimension of the dual space, which is the same as the dimension of the vector space. $$\delta^\nu_\mu$$ is the kronecker delta: it is 1 when μ = ν, and 0 when μ ≠ ν. so this is just a mathematical symbol for what i said in words: the νth basis linear functional has the value 1 when it acts on the νth basis vector, and the value 0 when it acts on any other basis vector.

i invite anyone to try, as an exercise, to show that these linear functionals are unique and independent, and span the dual space, i.e. that they form a basis as i claim. it's not hard; everything follows from linearity.

some examples: if your vector space is a set of column vectors, then the dual space is the set of row vectors: a row vector acts on a column vector linearly and yields a number. if your vector space is some quantum mechanical hilbert space, then the dual to the space of kets is the space of bras.
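the row-vector example can be made completely concrete (my own sketch, with made-up names): take the standard basis of column vectors in R3; the dual basis consists of the standard row vectors, and equation (1) is just row-times-column multiplication:

```python
n = 3
# standard basis column vectors e_mu of R^3
e = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
# dual basis: the standard row vectors (same components in this basis)
sigma = [row[:] for row in e]

def act(row, col):
    """A row vector acting on a column vector is a linear functional."""
    return sum(r * c for r, c in zip(row, col))

# check equation (1): sigma^nu(e_mu) = 1 if nu == mu else 0
delta = [[act(sigma[nu], e[mu]) for mu in range(n)] for nu in range(n)]
print(delta)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# sigma^0 just picks out the first component of any vector
print(act(sigma[0], [2, 3, 0]))  # 2
```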

jeff
I think it's worthwhile describing manifolds in a tiny bit more detail. An n-dimensional manifold is a (topological) space such that every point has a neighbourhood homeomorphic to $$\mathbb{R}^n$$. This means that the points in such neighbourhoods may be coordinatized as if the neighbourhoods were open subsets of $$\mathbb{R}^n$$. It also means that manifolds can't in general be covered by a single coordinate system: coordinate systems stretched too far become singular. In general, the coordinate systems that together coordinatize every point on a manifold overlap each other, and manifolds come equipped with functions that allow one to change coordinate systems in these regions of overlap. If these functions are differentiable, we have a differentiable manifold.
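For the simplest concrete example (my own addition, not part of the post above): the circle can be covered by two stereographic charts, one projecting from each pole. Each chart misses one point, and on the overlap the transition function is v = 1/u, which is differentiable wherever both charts are defined:

```python
import math

def chart_north(x, y):
    """Stereographic coordinate from the north pole (0, 1);
    defined on the unit circle minus that pole."""
    return x / (1 - y)

def chart_south(x, y):
    """Stereographic coordinate from the south pole (0, -1)."""
    return x / (1 + y)

# sample points on the unit circle away from both poles
for t in [0.3, 1.0, 2.5, 4.0]:
    x, y = math.cos(t), math.sin(t)
    u, v = chart_north(x, y), chart_south(x, y)
    print(abs(v - 1.0 / u) < 1e-9)  # transition map v = 1/u holds
```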

Originally posted by lethe
what do i mean by curvy (non-flat)? what is an example of a curvy 3-dimensional space? [...] so non-flat 3-dimensional space can only fit in some Rn if n is 4 or more.
You're talking about "extrinsic" curvature which describes how a surface is embedded in a higher dimensional space. It is the type of curvature in terms of which we ordinarily perceive and describe shapes. However, it's really a surface's "intrinsic" curvature that's of interest here.

Intrinsic curvature is defined using the fairly easy to understand idea of "parallel transport". Imagine some closed curve on a flat surface with the tail of a vector placed at a point of this curve. Now push the tail around the curve in such a way that in moving it between infinitesimally separated points on the curve, the vector is kept parallel to itself. When the tail returns to the starting point, the vector will be pointing in the same direction as it was initially. However, performing the same exercise on a curved surface, the final and initial orientations of the vector will in general differ.

We can use this process of parallel transport to define curvature at any given point x in a space. We simply let x be the initial point on some closed curve in the space and observe the change due to parallel transport in orientation of the vector in the limit that the loop shrinks down to x.

The extra mathematical structure needed on manifolds to define parallel transport is known as the "connection".
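For a surface embedded in R3, parallel transport can even be simulated crudely: step along the curve, and at each step project the vector onto the new tangent plane and restore its length. Around the circle of colatitude θ on the unit sphere this reproduces the known holonomy angle 2π(1 − cos θ). A rough sketch of my own (function names are made up, with no claim to efficiency):

```python
import math

def transport_around_latitude(theta, steps=20000):
    """Discrete parallel transport of a tangent vector around the
    colatitude-theta circle on the unit sphere.  At each step the
    vector is projected onto the new tangent plane and renormalized.
    Returns the cosine of the angle between start and end vectors."""
    def point(phi):
        return [math.sin(theta) * math.cos(phi),
                math.sin(theta) * math.sin(phi),
                math.cos(theta)]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    p = point(0.0)
    # start with a unit vector tangent to the sphere at p
    v = [0.0, 0.0, 1.0]
    d = dot(v, p)
    v = [a - d * b for a, b in zip(v, p)]
    norm = math.sqrt(dot(v, v))
    v = [a / norm for a in v]
    v0 = v[:]

    for k in range(1, steps + 1):
        q = point(2.0 * math.pi * k / steps)
        d = dot(v, q)
        v = [a - d * b for a, b in zip(v, q)]
        norm = math.sqrt(dot(v, v))
        v = [a / norm for a in v]

    return dot(v0, v)

# colatitude 60 degrees: holonomy angle 2*pi*(1 - cos 60) = pi,
# so the vector comes back reversed (cosine close to -1)
print(transport_around_latitude(math.pi / 3))
# around the equator (a geodesic) the vector returns unchanged
print(transport_around_latitude(math.pi / 2))
```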

Originally posted by jeff

You're talking about "extrinsic" curvature which describes how a surface is embedded in a higher dimensional space. It is the type of curvature in terms of which we ordinarily perceive and describe shapes. However, it's really a surface's "intrinsic" curvature that's of interest here.
well, intrinsic curvature is certainly more important when you're doing geometry. but so far in this thread we have not introduced any metric, so curvature, either intrinsic or extrinsic, is not defined. really, i was just trying to give a layman's description of what it means for a higher-dimensional space to be flat or not flat; just an intuitive picture.

for right now, restricting ourselves to intrinsic geometry would not be useful. we are going to talk about tangent vectors and 1-forms, and these would look different, for example, written down for a cylinder embedded in R3 than for a plane, even though both have the same intrinsic geometry.

however, your comments are useful. a discussion of riemannian geometry would be a nice addition to the math forum here, and this thread could supplement it nicely, so if you want to talk about geometry, i encourage you to start a thread about it.

jeff
Originally posted by lethe
i was just trying to give a layman description of what it means for a higher dimensional space to be flat or not flat. just an intuitive picture.
It's simply wrong - whatever the theme of this thread - to describe "what it means for a higher dimensional space to be flat or not flat" in terms of its embedding in a higher dimensional space. As you pointed out, surfaces can have the same intrinsic curvature but different extrinsic curvatures.

Originally posted by lethe
...we have not introduced any metric, so curvature, either intrinsic or extrinsic, is not defined.
A metric is not needed to distinguish between surfaces that are flat and curved. All one needs is the idea of parallel transport which requires only a connection to define.

In fact, a metric isn't needed even to define the (intrinsic) curvature.

Originally posted by jeff
It's simply wrong - whatever the theme of this thread - to describe "what it means for a higher dimensional space to be flat or not flat" in terms of its embedding in a higher dimensional space. As you pointed out, surfaces can have the same intrinsic curvature but different extrinsic curvatures.
yes, perhaps. but you know, i think the notion of intrinsic geometry takes a little while to develop, whereas anyone can picture a "curvy" embedding. and since i'm not going to do anything with metrics or connections in this thread (except possibly the hodge star operator), i didn't think this was the right place for that.

True, but one doesn't need the metric to distinguish between surfaces that are flat and curved. All one needs is the idea of parallel transport which requires only a connection to define.
yeah, well, you know what? just as there is no metric yet, there is also no connection yet.

the point that you're missing here is that differential forms are specifically designed to be metric independent (they are also connection independent). there are a lot of things you can do on a differentiable manifold even without a connection, even without a metric. most importantly, integration.

a lot of times, your metric (or your connection) is a dynamical object. perhaps you want to be able to work on some space before you know what the metric is. there are still things you can do without knowing the geometry, and when you do introduce the metric, it is nice to know explicitly which objects depend on it.