What is a tensor

  • #51
mathwonk
Science Advisor
Homework Helper
10,780
954
Since this line of thought has proved so wildly popular I will push it further:

Why would one take the previous point of view? The answer may lie in the fact that many geometric surfaces of interest are not manifolds, i.e. are not actually smooth objects locally parametrizable by Euclidean space, hence local coordinates are not available.

I.e. in that event, even the familiar tangent and cotangent bundles are actually not products. In fact one definition of a manifold, after defining the intrinsic cotangent bundle, is that the (possibly singular) variety is a manifold if and only if the cotangent bundle is locally a product.


This means one cannot define even tangent and cotangent vectors in the simple intuitive way that has been used for manifolds. I.e. we usually think first of "what is a vector at a point?" Then we think of a field of vectors on an open coordinate set, and finally we introduce changes of coordinates for different but overlapping open sets.

Then for tensors, we use the same procedure, passing from pointwise, to local, to global. I.e. we go back to a point and define a tensor at a point, then take products of that Euclidean tensor space with an open coordinate set, and finally ask how the coefficients or components of the tensor change as we change coordinates.

But what if there is no possibility of introducing local coordinates near a certain "bad" point, i.e. a singular point, such as the origin of a cone? Then the solution in algebraic and analytic geometry is to do the local version first (taking account of the fact that "local" does not mean "locally trivial"), and after that the pointwise version. And tensor products play a crucial role, even in the definition of vectors and covectors.

Here is what comes out:
From a certain strange point of view, the tangent bundle to a manifold X, is the same as the normal bundle to the diagonal in the product XxX. (This is familiar in differential topology, where the Euler characteristic of a manifold is sometimes defined as the self-intersection of the diagonal, i.e. via Hopf's theorem, as the number of zeroes of a general vector field.) Thus the conormal bundle to the diagonal is the cotangent bundle to X.

Now the advantage of this point of view, since we have not defined cotangents or tangents yet, is that conormal bundles are more basic than tangent bundles! I.e. the conormal bundle to the diagonal in XxX, is just the family of functions vanishing on the diagonal, modulo those vanishing twice!

To see what this has to do with derivatives, note the usual difference quotient defining a derivative, i.e. [f(x) - f(a)]/(x-a). See there, that denominator is a function of two variables, a and x, hence a function on the product XxX. Moreover note that when x=a, the numerator is zero, so "deltaf" is a function on the product XxX which vanishes on the diagonal.

However as a derivative, or a differential, df is not considered zero unless it vanishes twice on the diagonal, i.e. unless after dividing out by the first order zero, i.e. by x-a, we still get zero. Now in algebra we just divide by x-a straightaway, and we can define the derivative of f at a, as the value of the actual algebraic quotient

[f(x)-f(a)]/(x-a), at a. That is how Fermat and Descartes took derivatives, or found tangents. But in analysis we must take a limit to evaluate this, the usual Newton definition of the derivative.
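
To make the algebraic recipe concrete, here is a minimal Python/sympy sketch (assuming sympy is available; the sample polynomial is arbitrary). The exact cancellation of (x - a) replaces any limit-taking:

[code]
import sympy as sp

x, a = sp.symbols('x a')
f = x**3 + 2*x                            # any polynomial will do

# "delta f" over "delta x": cancel the factor (x - a) exactly, no limit needed
quotient = sp.cancel((f - f.subs(x, a)) / (x - a))

# evaluating the honest algebraic quotient at x = a gives the derivative at a
print(sp.expand(quotient.subs(x, a)))     # 3*a**2 + 2
print(sp.diff(f, x).subs(x, a))           # 3*a**2 + 2, same answer from the calculus rule
[/code]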


So to sum up, the cotangent bundle of X is by definition, locally the quotient of the ideal I of functions on XxX which vanish on the diagonal, modulo I^2, those vanishing twice.

If we let C(X) be the ring of functions on X, then it turns out that the ring of functions on XxX is locally C(X)[tensor]C(X), and I is the kernel of the multiplication map C(X)[tensor]C(X)-->C(X). Then the cotangent bundle of X is locally I/I^2. This is true even at singular, i.e. non manifold, points.

Now this is all more or less true in algebraic and analytic geometry (plus or minus my inherent inaccuracy and ignorance), but I have not checked it for C infinity functions, as my notation suggests here. Maybe Hurkyl would be interested in trying this out along with his investigation of general schemes and their smooth analogues.

To pass back to the pointwise situation, one defines the pointwise cotangent space as the (pointwise) localization of the module I/I^2, at the point p, and this is done, guess what? by tensoring I/I^2 with the field of constants at the point.

I.e. T*(p) = (I/I^2)[tensor(C(X))]R, where R say is the field of real numbers, (and the tacitly assumed homomorphism from C(X) to R, is just evaluation at p).

So tensors have a huge variety of uses.

peace,

I hope this does not kill the interest of this thread for good. Just ignore what I said here if you like.
 
Last edited:
  • #52
2,946
0
mathwonk said:
Why would one take the previous point of view? The answer may be in the fact that many geometric surfaces of interest are not manifolds, i.e. are not actually smooth objects which are locally parametrizable by euclidean space, hence local coordinates are not available.
Please clarify. Why do you think non-smooth surfaces are necessarily not locally parametrizable?
I.e. in that event, even the familiar tangent and cotangent bundles are actually not products.
I don't see that those bundles are even defined for anything but a manifold.
[quote]
In fact one definition of a manifold, after defining the intrinsic cotangent bundle, is that the (possibly singular) variety is a manifold if and only if the cotangent bundle is locally a product.
[/quote]
Sorry dude but you lost me.

Pete
 
  • #53
mathwonk
Science Advisor
Homework Helper
10,780
954
The definition of the cotangent bundle of a space that is not necessarily a manifold is as I gave it above:

" If we let C(X) be the ring of functions on X, then it turns out that the ring of functions on XxX is locally C(X)[tensor]C(X), and I is the kernel of the multiplication map C(X)[tensor]C(X)-->C(X).

Then the cotangent bundle of X is locally I/I^2. This is true even at singular, i.e. non manifold, points."


As a simple example of trying to define tangent spaces geometrically for a non manifold, consider a cone, the zero set of x^2 + y^2 - z^2 = 0, and suppose that is our space.

Then near the origin it is not locally like any open set in Euclidean space; rather it looks like the union of two discs with their centers identified.

So in this simple case we could locally parametrize each disc separately with the proviso that the two centers are the same point.

But there are much more complicated non manifolds imaginable, such as the common zeroes of any collection of polynomials in several variables.

Even in the simple case of the cone above we must decide what we mean by a vector tangent to the cone at the origin.

Now if tangency means order of contact more than one, then every line through the origin is tangent to the cone at the origin, since the equation of the cone has second order vanishing when restricted to any such line, because it begins with terms of order 2.

But such vectors are not "tangent" to the cone in the sense that they occur as velocity vectors for any curve in the cone. So the velocity vector definition would give a different concept of tangent vector for a cone than would the order of vanishing definition.

In fact for this cone the order of vanishing definition gives a three dimensional tangent space at the origin of an essentially two dimensional object, the cone. This is unfamiliar from manifold theory.
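
One standard way to see this dimension jump, sketched in Python with sympy (assuming sympy is available; the smooth point (1, 0, 1) is chosen only for illustration): the order-of-contact tangent space at a point is cut out by the linear part of the defining equation there, i.e. by its gradient.

[code]
import sympy as sp

x, y, z = sp.symbols('x y z')
F = x**2 + y**2 - z**2                      # the cone

grad_at_origin = [sp.diff(F, v).subs({x: 0, y: 0, z: 0}) for v in (x, y, z)]
print(grad_at_origin)     # [0, 0, 0]: no linear condition, so the tangent space at 0 is all of 3-space

grad_at_smooth_pt = [sp.diff(F, v).subs({x: 1, y: 0, z: 1}) for v in (x, y, z)]
print(grad_at_smooth_pt)  # [2, 0, -2]: one independent linear condition, so a 2-dimensional tangent plane
[/code]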

On the other hand the velocity vector to a curve definition, gives not a vector space at all, but another copy of the cone itself, a two dimensional object but not a linear object. So which one to use?

I.e. for a manifold essentially all different definitons one can think of for tangent vectors or cotangents vectors agree, but not for non manifolds.

When one uses the "order of contact" definition of a tangent space them one gets this phenomenon that the dimension of the tangent space jumps up at a non manifold point, and that causes the family or bundle of tangents not to be a product near there.

When one uses the velocity vector definition, one gets a non linear object at a non manifold point and that again causes the tangent spaces not to be locally trivial near there. There is no way out, if we stick to the desire to have a locally trivial family of objects, i.e. one parametrizable by an open subset of euclidean space.

So we use both concepts: the order of contact definition gives the best linear approximation to our space, but possibly of larger dimension; the velocity vector definition gives the right dimension, and a good approximation to the space, but possibly not a linear one.

Then a point is a manifold point if these two agree, i.e. if the linear object has the right dimension, or equivalently if the correct dimensional object is linear.

The definition above, i.e. I/I^2, is the bundle dual to the order of contact definition of tangent bundle, because locally it gives m/m^2, the maximal ideal of functions vanishing at the point, modulo those vanishing twice.

Note that at the origin of the plane, this gives the vector space of linear polynomials, i.e. the space of all polynomials beginning with terms of degree at least one, modulo those beginning with terms of degree at least two, i.e. linear polynomials, i.e. the cotangent space at the origin.
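
A tiny sympy sketch of that last statement (assuming sympy; the sample polynomial is arbitrary): the class of a function vanishing at the origin, taken modulo functions vanishing twice, is just its linear part.

[code]
import sympy as sp

x, y = sp.symbols('x y')
f = 5*x - 2*y + x*y + y**3            # vanishes at the origin, so f lies in m

# modulo m^2, only the degree-one terms survive
linear_class = sum(sp.diff(f, v).subs({x: 0, y: 0}) * v for v in (x, y))
print(linear_class)                    # 5*x - 2*y, the cotangent vector "df at the origin"
[/code]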
 
Last edited:
  • #54
sal
78
2
pmb_phy said:
mathwonk said:
I.e. "tensor" is a verb, not a noun.
That would be a re-definition/different use of the term "tensor"
Actually, in both colloquial and technical English, verbing a noun is perfectly acceptable.

"I wolfed down dinner and rushed out to shoe my horse so I could cart some things I'd already boxed up into town, but I was caught speeding and they booked me for it."

Nouning a verb is possible too but most verbs were nouned so long ago that it's hard to find recognizable examples. "Going for a run", "joining a sing around the campfire", "making a good throw" come to mind.

For an example of a noun which was verbed and then subsequently adjectived, consider "fish": I see a fish. I fish for it... I install a phone wire. I need to fish it through the wall. I do a neat, fast job of fishing the wire, and my boss says, "That was a good fish job".

In conclusion, if tensor is a noun, then it's surely a verb too, and conversely.
 
  • #55
mathwonk
Science Advisor
Homework Helper
10,780
954
to be a little more precise, I/I^2 is the local C(X) module of covector fields, rather than the "bundle" of cotangent spaces.

The covector fields are the reason for defining the cotangent spaces in the first place, so the fields are more important than the spaces, but if one wants to recover the actual points in the family of tangent spaces say, there is an algebraic way to do this. As a set of points, I believe (over the complex numbers) the tangent "bundle" would be (locally) the maximal ideal spectrum of the symmetric tensor algebra on the C(X) module I/I^2, whatever that means.
 
  • #56
mathwonk
Science Advisor
Homework Helper
10,780
954
can we make adjectives too? like "tensor than thou?"

or at least "tensor than necessary?"

Sometimes, to paraphrase Ghostbusters, I feel like "I've been tensored!"
 
Last edited:
  • #57
2,946
0
sal said:
Actually, in both colloquial and technical English, verbing a noun is perfectly acceptable.
And when you do so you change the meaning of the word.

mathwonk - interesting stuff, but a lot of it I don't follow. The language I either don't recognize or have forgotten from non-use. When I'm allowed to sit here for extended times we'll have to chat more.

Glad to have you aboard.

Pete
 
  • #58
mathwonk
Science Advisor
Homework Helper
10,780
954
Thank you Pete! it is very friendly and fun here.

I want to see if I understand enough of the mumbo jumbo I was parroting to give an example.

Let's try to reproduce the algebraic differential one forms, i.e. (0,1) tensors?, on the line (dual vector fields, things you integrate over paths).

The ring of algebraic functions on the line is C[t], polynomials in one variable. The ring of functions on the plane is C[X,Y], which happens to be the tensor product, as algebras, of the two polynomial rings C[X] and C[Y].

Then the diagonal embedding of the line into the plane, and restriction of polynomials from the plane to the diagonal, corresponds to the map C(X)tensorC(Y) = C[X,Y]-->C[t] taking f(X,Y) to f(t,t), I guess, what else? Then the kernel I of this map contains things like f(X)-f(Y), which in tensor notation would be represented by
f(X)(tensor)1 - 1(tensor)f(Y), but so what.

Notice this object f(X)-f(Y) looks a lot like "deltaf", the numerator of a derivative.

I.e. this is a function vanishing on the diagonal. Now to get "df" out of this, we just consider it as an element of the quotient object I/I^2, i.e. just decree that such a thing is zero if it is a product of two such things. Now this is a little esoteric, but I beg to be given the benefit of the doubt since a derivative is indeed the second order value of a function, so it is zero if the function vanishes "doubly".

To prove algebraically that this gadget is what it should be, we define a more plebeian version, by simply taking all symbols df, for all f in C[t], then we take all linear combinations of products of form g df, for various g's and f's, and we call such a thing a differential one form.

But of course we have to have some relations, so we mod out by (i.e. consider to be zero) all such linear combinations of form d(f+g) - df - dg, and d(cf) - cdf, and d(fg) - fdg - gdf.

Then we really do have the space of differential one forms on the line, i.e. sums of things like g df, with the usual relations. I suppose also we can show in this case that they can all be written uniquely as just g(t) dt, for some g. For more general, uglier spaces, especially non manifolds, this is not true.

Now I claim on good authority that I/I^2 is isomorphic to this module of differential one forms. To show it we have to have a map between them and show the map is an isomorphism.

Well just send df in the module of one forms, to f(X)-f(Y), in I, i.e. to delta f, or rather go ahead and send it further to the equivalence class of f(X)-f(Y) in I/I^2, i.e. send df to "df"!

This defines a map from the differential one forms to the space I/I^2.

It can be shown by someone with better algebra skills than mine that this is an isomorphism of C[t] modules.

Well, that was pretty wimpy, but I claim it is a sketch of an example showing that I/I^2, in the case of the line embedded as the diagonal of the plane, really gives the expressions of the form summation gj dfj, i.e. fields of covariant tensors, i.e. differential one forms.
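
Here is a small Python/sympy sketch of that claim (assuming sympy; the sample polynomials are arbitrary): representing df by the class of f(X) - f(Y) modulo I^2 = (X - Y)^2 really does reproduce the coefficient f'(Y), and the Leibniz rule comes for free.

[code]
import sympy as sp

X, Y, h = sp.symbols('X Y h')
f = lambda u: u**3 + 2*u
g = lambda u: u**2 - 1

# the class of f(X) - f(Y) modulo (X - Y)^2: substitute X = Y + h and drop h^2 and higher
delta_f = sp.expand((f(X) - f(Y)).subs(X, Y + h))
print(delta_f.coeff(h, 1))      # 3*Y**2 + 2, i.e. f'(Y); so "df" = f'(Y)*(X - Y) mod I^2

# the Leibniz defect d(fg) - f dg - g df is literally a product of two elements of I
defect = (f(X)*g(X) - f(Y)*g(Y)) - f(Y)*(g(X) - g(Y)) - g(Y)*(f(X) - f(Y))
print(sp.expand(defect - (f(X) - f(Y))*(g(X) - g(Y))))   # 0, so the defect lies in I^2
[/code]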

Now on objects that are not manifolds, i.e. that have singularities, these modules of fields are not locally trivial, hence are not sections of local product bundles. I suppose you can still take higher order tensor powers of them but to me it becomes a little hard to understand what you are getting.

You can see I run out of gas pretty quick after basic rank one tensors.

best,

roy
 
  • #59
Hurkyl
Staff Emeritus
Science Advisor
Gold Member
14,916
17
I didn't understand it until I looked at a nice simple case.

On S, the set of real valued functions on R^n differentiable around the origin, we have the operator d(.)(0) that takes a function to a cotangent vector at the origin.

Additive constants don't matter, so we can strip off the zeroeth order terms, leaving us with the subset I of functions in S that are zero at the origin.

(Note that I is an ideal of the ring S!)

Two functions of I evaluate to the same vector under d(.)(0) if and only if they have the same linear terms. In other words, if their difference consists only of second order terms and higher.

This set is precisely I^2: the set of sums of things of the form p q where p and q are in I.

For example,

f(x, y) = ax^2 + bxy + cy^2 + ...
= x (ax + by + ...) + y (cy + ...)


So, if we take the set, I, of all functions in S zero at the origin, and we mod out by I^2, the set of everything with a double zero at the origin, then d is a bijection between I/I^2 and R*^n.


As an example, let's take S to be the set of all polynomials in x and y.
Let f(x, y) = x^3 + 3xy^2 + 7xy - 3x - 7y

f is an element of I, since f(0, 0) = 0
Now, df = [3x^2 + 3y^2 + 7y - 3] dx + [6xy + 7x - 7] dy
so df(0, 0) = -3 dx - 7 dy = <-3, -7>

Also, consider g(x, y) = -3x - 7y. Then dg(0, 0) = <-3, -7> also.

Now, take (f-g)(x, y) = x^3 + 3xy^2 + 7xy. This can be written as
(f-g)(x, y) = x (x^2 + 3y^2 + 7y)

So, f - g is a product of two functions in I (and thus a sum of things that are a product of two functions in I), thus f - g is in I^2.

This confirms the earlier observation that if two functions have the same image under d(.)(0), then they differ by something in I^2.
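
A quick sympy check of the example above (a minimal sketch, assuming sympy is available):

[code]
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 + 3*x*y**2 + 7*x*y - 3*x - 7*y
g = -3*x - 7*y

df0 = [sp.diff(f, v).subs({x: 0, y: 0}) for v in (x, y)]
dg0 = [sp.diff(g, v).subs({x: 0, y: 0}) for v in (x, y)]
print(df0, dg0)              # [-3, -7] [-3, -7]: the same cotangent vector at the origin

print(sp.factor(f - g))      # x*(x**2 + 3*y**2 + 7*y): a product of two functions vanishing at 0, so f - g is in I^2
[/code]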


So, the result is that the map, d(.)(0) : I/I^2 --> R*^n : f --> df(0) is an isomorphism!

This means, considering the vector space I/I^2 is just as good as considering R*^n of traditional differential forms.


I/I^2 has an advantage of being a purely algebraic construction, thus it can be used to define "differential forms" on things where we can't ordinarily talk about differentiation.
 
  • #60
Hurkyl
Staff Emeritus
Science Advisor
Gold Member
14,916
17
Wee, writing that up has helped me understand more.


Setting S to be some nice space of functions, like real functions analytic at the origin, we have:

S corresponds to all functions analytic at the origin. In other words, it consists of all functions that are given by a power series about the origin.

I is the ideal of all functions zero at the origin. It is all functions of S given by power series with no constant terms.

I^2 is the ideal of things that are sums of things of the form i*j where i and j are both in I. In this case, it is all power series without any constant or linear terms.

...

I^n is the ideal of things that are sums of things of the form i1*i2*...*in where all of the i_m are in I. It is all power series with terms only of degree n or more.


Now, because I is the set of all power series with no constant terms, if we mod out by I, we eliminate all terms with degree 1 or more. In particular,
f = g (mod I)
iff f(0) = g(0).

Similarly, because I^2 is the set of all power series with no constant or linear terms, if we mod out by I^2 we eliminate all terms with degree 2 or more. Thus,
f = g (mod I^2)
iff f(0) = g(0) and f'(0) = g'(0)

And so on.

In particular, if we take just the ideal I^n and we mod out by I^(n+1), we're left with terms of degree exactly n: terms of lesser degree don't exist in I^n, and terms of greater degree are in I^(n+1) and thus equivalent to zero.

(Here, I'm setting I^0 = S)

So, if we interpret I^n / I^(n+1) as a vector space over R, then we get a nice thing. S/I is isomorphic to R. I/I^2 is isomorphic to the space of linear forms. I^2/I^3 is isomorphic to the space of all (homogeneous) quadratic forms. I^3/I^4 to cubic forms, et cetera.
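
For instance (a minimal sympy sketch, assuming sympy is available; exp(x) - 1 - x is just a convenient element of I^2): the class of an element of I^n modulo I^(n+1) is exactly its degree-n term.

[code]
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) - 1 - x                  # no constant or linear term, so f lies in I^2
print(sp.series(f, x, 0, 3))           # x**2/2 + O(x**3): the class of f in I^2/I^3 is the quadratic form x**2/2
[/code]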


As an example of using these for fun and profit, let's compute the Maclaurin series for 1/(2+x):

In the case where S is simply analytic functions in x, we have that I = (x); the set of all multiples of x. (such as x e^x), I^2 = (x^2), ..., I^n = (x^n)

The constant term lives in (a space isomorphic to) S/I. So, we have:
1/(2+x) = f(x) (mod x)
1 = f(x) (2+x) (mod x)
1 = 2 f(x) (mod x)
1/2 = f(x) (mod x)

So, the constant term is 1/2. (notice that 1/2 + x, or 1/2 + x e^x, or anything similar is fine; we are only finding the constant term, we don't care about the higher order terms)

Now that we've chosen 1/2 as the constant term, we can find the linear term. Actually, for funsies, let's use 1/2 + x as the constant term and see what happens:

1/(2+x) = [1/2 + x] + f(x) (mod x^2)
1 = [1/2 + x](2+x) + f(x)(2+x) (mod x^2)
1 = 1 + (5/2)x + f(x) (2 + x) (mod x^2)
-(5/2)x = f(x) (2 + x) (mod x^2)
We can see that f(x) = -(5/4)x satisfies this equation. (As expected, f(x) is an element of I) So, if we add our constant and linear terms, we get
1/(2+x) = [1/2 + x] - (5/4)x (mod x^2)
= 1/2 - (1/4)x (mod x^2)

Which is exactly what we expect the first two terms to be.

We can apply everyone's favorite algebraic trick to this too:

(2+x) (1/2 - x/4 + x^2/8 - x^3/16 + x^4/32)
= 1 - x/2 + x^2/4 - x^3 / 8 + x^4 / 16
+ x/2 - x^2/4 + x^3/8 - x^4/16 + x^5/32
= 1 + x^5/32
= 1 (mod x^5)

Giving us that 1/(2+x) = 1/2 - x/4 + x^2/8 - x^3/16 + x^4/32 (mod x^5)
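
One can let sympy do the bookkeeping (a sketch assuming sympy is available):

[code]
import sympy as sp

x = sp.symbols('x')
approx = sp.Rational(1, 2) - x/4 + x**2/8 - x**3/16 + x**4/32

# multiplying back by (2 + x) gives 1 modulo x^5, confirming the candidate expansion
print(sp.expand((2 + x) * approx))        # 1 + x**5/32, which is 1 (mod x^5)
print(sp.series(1/(2 + x), x, 0, 5))      # 1/2 - x/4 + x**2/8 - x**3/16 + x**4/32 + O(x**5)
[/code]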



As Mathwonk was pointing out, the key is that none of this involves any analysis whatsoever. We don't need derivatives, or even a topology!


As an example, let's take S to be the set of all functions continuous at the origin.

Then I is the set of all functions continuous and zero at the origin.

Then I^2 is... *drumroll*... the set of all functions continuous and zero at the origin!

Proof: let f(x) be continuous and zero at the origin. Take g(x) = |f(x)|^(1/2) and h(x) = sign(f(x)) g(x)

It's easy to see that both g and h are continuous at the origin, and that f = g*h. Therefore, any continuous function zero at the origin is also a product of two continuous functions zero at the origin, so that I^2 = I.

So, when we try to take a look at all linear forms, by looking at the space I/I^2, we find that the only linear form is zero!

And this is exactly what it should be, since our knowledge of ugly continuous functions tells us that the only reliable approximations of continuous functions in general are their evaluations!
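
A quick numerical illustration of that factorization (a sketch assuming numpy; the particular test function is arbitrary):

[code]
import numpy as np

f = lambda t: t * np.sin(1.0 / t) if t != 0 else 0.0   # continuous, zero at the origin
g = lambda t: np.sqrt(abs(f(t)))                        # also continuous and zero at 0
h = lambda t: np.sign(f(t)) * g(t)                      # likewise

for t in (0.3, -0.01, 1e-5):
    print(abs(f(t) - g(t) * h(t)))                      # ~0: f really is the product g*h
[/code]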
 
  • #61
mathwonk
Science Advisor
Homework Helper
10,780
954
Hi, I'm back after the first day of school. Fortunately the other kids liked me enough not to take my lunch money.

Thus I am emboldened again to define "A TENSOR". I notice some dork with my same handle has maintained there is no such thing as a tensor, since "to tensor" is a verb.

But to paraphrase Bill Murray again, "I have been tensored therefore I am a tensor".

I.e. one can perhaps accept both uses of the word, properly restricted.


Thus:

Basic object: manifold X with a differentiable structure.

derived structure: tangent bundle T= T(X),
(family of tangent spaces Tp, at points p of X)

second derived structure: cotangent bundle T*
(family of dual tangent spaces T*p).

Operation: tensor product of bundles, yielding new bundles:

T(tensor)T(tensor)T(.....)T(tensor)T*(tensor)T*(.....)T*.

with r factors of T and s factors of T*.

Then a section of this bundle (drumroll), i.e. a function with domain X and value at p an element of Tp(tensor)Tp(....)T*p(tensor)T*p(....)T*p,

is called a tensor of type (r,s).
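
For the record, on a coordinate chart this is the familiar component description (summation over repeated indices understood); the component functions are what classical treatments call "the tensor":

[tex]T \;=\; T^{i_1 \dots i_r}{}_{j_1 \dots j_s}\;
\frac{\partial}{\partial x^{i_1}}\otimes\cdots\otimes\frac{\partial}{\partial x^{i_r}}
\otimes\, dx^{j_1}\otimes\cdots\otimes dx^{j_s}[/tex]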

how are them peaches?
 
  • #62
Hurkyl
Staff Emeritus
Science Advisor
Gold Member
14,916
17
Notice this object f(X)-f(Y) looks a lot like "deltaf", the numerator of a derivative.
I checked out Hartshorne again to look at this stuff again; it introduces a construction of relative differential forms just as you have here... but it is missing this important sentence which explains what's happening. :smile:


I have a question though; I'm happy enough with polynomial rings, because we can just lump all of the generators together as you describe, but I want to make sure I have it right in the general case, since I can't find a definition of the tensor product of algebras anywhere.


When B is an algebra over A, I (mostly) understand the B-module BxB (where I'm using x for the tensor product over A)... to make it into an algebra, do we just define fxg * pxq as (fp)x(gq)?
 
Last edited:
  • #63
mathwonk
Science Advisor
Homework Helper
10,780
954
Sure, why not? Consult e.g.:

Zariski & Samuel, p. 179;
Atiyah & Macdonald, p. 30;
Lang, Algebra, second edition, pp. 576, 582.

also notes from my 1997 course math 845:
(where my fonts did not reproduce well.)

Categorical Sums of Commutative Rings and Algebras
As an extension of the ideas of the section above on base change, consider what happens if both modules in a tensor product are rings, hence R-algebras, rather than just R-modules. Let S, T be R-algebras, i.e. let ring maps f:R-->S, g:R-->T be given, and form the R-module S.tens(R).T.

(which denotes the tensor product of the R modules S and T.)


This is both an S-module and a T-module, but we claim it is also a ring, and an R-algebra. The multiplication is the obvious one, i.e. (aºb)(sºt) = asºbt.

(where the little circle denotes the tensor product of two elements.)


Claim: This gives an associative, distributive operation, with identity 1º1. First we check it gives a well defined R-bilinear operation:
The function (SxT)x(SxT)-->StensT, taking ((a,b),(s,t)) to asºbt gives, for each fixed value of (s,t), a bilinear map on SxT, hence induces a linear map (StensT)x{(s,t)}-->StensT. The resulting pairing (StensT)x(SxT)-->StensT is also bilinear in the second variable for each fixed element of StensT, hence induces a map (StensT)x(StensT)-->StensT, which is linear in each variable. Hence our proposed multiplication is well defined and R-bilinear.
Since (1º1)(sºt) = sºt, the element 1º1 acts as an identity on a set of generators, hence also everywhere. Similarly, (s0ºt0)(s1s2ºt1t2) = (s0s1s2ºt0t1t2) = (s0s1ºt0t1)(s2ºt2), so the product is associative on generators. Since these expressions are linear in each quantity siºti, associativity holds for all elements.
Since the R-module structures on S,T are by means of the maps f:R-->S, and g:R-->T, the following elements of StensT = Stens(R)T are equal: r(xºy) = (rxºy) = (f(r)x)ºy = xº(g(r)y) = (xºry). Thus there is a unique R-algebra structure on StensT defined by the map R-->StensT, taking r to r(1º1) = r1º1 = f(r)º1 = 1ºg(r) = 1ºr1. Since for a,b in R, ab(1º1) = f(a)ºg(b) = (f(a)º1)(1ºg(b)) = (a(1º1))(b(1º1)), and (a+b)(1º1) = a(1º1) + b(1º1), and 1(1º1) = 1º1, this is indeed a ring map.
Remark: With the understanding given above of the notation, we may write simply rº1 for r(1º1) = f(r)º1 = 1ºg(r).

This simple construction yields a nice conclusion:
Theorem: Any two R-algebras R-->S, R-->T, have a direct sum in the category of R-algebras, namely: Stens(R)T.

etc....
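
As a sanity check on the multiplication rule Hurkyl asked about, here is a small Python/sympy sketch (assuming sympy), using the identification of C[x] tensor C[y] with C[x,y], where the elementary tensor f º g corresponds to the product f(x)g(y); the sample polynomials are arbitrary.

[code]
import sympy as sp

x, y = sp.symbols('x y')

def tens(f, g):
    # identify f(x) º g(y) in C[x] (tensor) C[y] with f(x)*g(y) in C[x, y]
    return sp.expand(f * g)

a, b = x**2 + 1, y - 3
s, t = 2*x, y**2

lhs = sp.expand(tens(a, b) * tens(s, t))   # multiply the two elements in C[x, y]
rhs = tens(a*s, b*t)                       # the rule (a º b)(s º t) = (as) º (bt)
print(sp.expand(lhs - rhs))                # 0: the two agree
[/code]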
 
Last edited:
  • #64
mathwonk
Science Advisor
Homework Helper
10,780
954
summary of my notes content:
Graduate Algebra, Main results

843: Main idea: Counting principle for a group acting on a set: the order of the group equals the product of the order of the subgroup fixing a point, times the order of the orbit of that point.
Main theorems:
1) Sylow theorems on existence of p-subgroups of finite groups,
2) simplicity of An,
3) Jordan Holder theorem on existence and uniqueness of set of simple quotients for a finite group,
4) classification theorem: all finite abelian groups are products of cyclic groups.
5) Galois' theorem that a solvable polynomial has a "solvable" Galois group (i.e. the Galois group has an abelian normal tower), and an example of a polynomial whose Galois group is A5, hence has no abelian normal tower, thus an example of a polynomial with no solution formula by radicals.

844: Main idea: The Galois group of relative automorphisms of a simple field extension, is determined by the way the minimal polynomial of the generator factors in successive partial extensions.
1) Gauss's theorem that a polynomial ring over a UFD is again a UFD,
2) existence of root fields for polynomials,
3) Hilbert's basis theorem that a polynomial ring over a noetherian ring is noetherian,
4) the theorem of the primitive element (a finite separable extension is simple),
5) the fundamental theorem of Galois theory (in a finite normal separable field extension, intermediate fields correspond one-to-one with subgroups of the Galois group, and the order of the Galois group equals the degree of the extension),
6) the converse of Galois' theorem, i.e. (over a field of characteristic zero) a polynomial is solvable if its Galois group has an abelian normal tower,
7) Cardano's formulas for explicitly solving cubics and quartics using the structure of an abelian normal tower for the Galois group.

845: Main idea: Diagonalizing a matrix.
1) Theorem on existence of decomposition of a finitely generated module over a pid into a product of cyclic modules, and a procedure for finding it over a Euclidean ring (from a presentation).
2) Application to existence and uniqueness of rational canonical form for any matrix over a field, (a special representative for the conjugacy class of an element of a matrix group), [i.e. an endomorphism T of a k vector space V is equivalent to a structure of k[X] module on V, and the rational canonical form of T is equivalent to a decomposition of V as sum of standard cyclic submodules], and
3) of Jordan form for any matrix over a field in which the characteristic polynomial factors into linear factors.
4) Spectral theorems, (sufficient conditions for a matrix to be diagonalizable, especially into mutually "orthogonal" components),
5) multilinear algebra including tensor products (construction of a universal bilinear map out of AxB),
6) exterior products, duality, and the formula for the exterior powers of a direct sum.
 
Last edited:
  • #65
mathwonk
Science Advisor
Homework Helper
10,780
954
Interestingly, I did not receive a single request for copies of my notes on algebra.
 
  • #66
sal
78
2
mathwonk said:
Interestingly, I did not receive a single request for copies of my notes on algebra.
I suspect the tiny handful of group members who have a clue what you've been talking about are already up to their eyebrows in algebra texts. :wink:

Are your notes online somewhere? I'm not sure I could follow them, but I'd be interested in seeing what you said about exterior products, at the least.
 
  • #67
mathwonk
Science Advisor
Homework Helper
10,780
954
you encourage me to make my notes web ready!
 
  • #68
35
1
dang

pmb_phy said:
No. None of that has anything to do with tensors. At least not according to any definition that I've seen. For a definition see

http://www.geocities.com/physics_world/ma/intro_tensor.htm

Pete


pmb_phy.....

Ummm....look at the original post. The user asked for a basic definition of tensors, not some horrid conflagration such as that on that website. I thought mathwonk did a fine job of getting the idea off to a good start.....and I thought a 'good job' to mathwonk was in order. He/she tried and even qualified his/her own offering. You tried too, but instead of posting to the original question you posted to mathwonk, and went overboard in my opinion. I certainly respect your post tho. You probably know far more about tensors than I do, but I know an appropriate post when I see one.

Kindly, and respectfully....
fiz~ :smile:
 
  • #69
35
1
good explanations

mathwonk.....I enjoy your explanations. I know so little about this topic. You're humble, but you are confident in what you know, and you have good arguments to back up what you say.

pmb..... hope your back gets better. You have some good thoughts as well.
 
