Electric and magnetic constants are tensors

In summary, a tensor is a multilinear map (a map that is linear in each variable) from copies of a vector space and its dual to the real numbers: it takes in some number of vectors and covectors and produces a scalar. Vectors, covectors, matrices, and the dot product are all examples. The components of a tensor transform in a specific way when you change coordinate systems.
  • #1
dervast
What is a tensor? I found a passage in my book that says the electric and magnetic constants are tensors. Can anyone explain?
Thanks a lot
 
  • #2
A tensor is simply a multilinear map (a map that's linear in each variable) from a vector space and the dual of the vector space to the reals.
A very simple example is the dot product. It takes in two vectors and gives a real number. It is linear in both variables. Thus the dot product is a (0, 2) tensor.
 
  • #3
I am not sure I have completely understood.
 
  • #4
Tzar said:
A tensor is simply a multilinear map (a map that's linear in each variable) from a vector space and the dual of the vector space to the reals.
A very simple example is the dot product. It takes in two vectors and gives a real number. It is linear in both variables. Thus the dot product is a (0, 2) tensor.
No, that is not right. The dot product is not a tensor, nor is the result of a dot product a (0,2) tensor; it is a (0,0) tensor, a.k.a. a scalar.

dervast said:
What is a tensor? I found a passage in my book that says the electric and magnetic constants are tensors. Can anyone explain?
Thanks a lot
You know that we can take several numbers and form a vector. Similarly, we can take N vectors of length N and produce an N by N matrix. One way to do this is the following:
We have a vector, [itex]\vec{A}[/itex], whose components in some coordinate system are [itex]A_1, A_2, A_3...A_N[/itex]. When I write [itex]A_i[/itex], I mean some particular component of [itex]\vec{A}[/itex]. We also have the vector [itex]\vec{B}[/itex] with components [itex]B_1, B_2, B_3...B_N[/itex]. Now we can form the matrix D by saying that the element in the ith row and the jth column is:
[tex]D_{ij}=A_iB_j[/tex]
Similarly, I could create an object with three indices:
[tex]E_{ijk}=A_iB_jC_k[/tex]
And so on. Are these tensors? Not necessarily. Notice that the components of the vectors were defined with respect to some coordinate system. We have said nothing about how these components would change if we were to change the coordinate system. What makes a tensor a tensor is the way its components change when you change coordinate systems. The rank of a tensor is the number of indices required to specify its components.

If we are talking about rectangular coordinates, then the components transform in the obvious way: first the components are written as the projections of the vector on each of the basis vectors. When you change coordinate systems you are changing basis vectors, so you just find the new components along the new basis vectors. This transformation can be represented as a matrix. If the components of this matrix are [itex]a_{ij}[/itex] and the components of [itex]\vec{A}[/itex] in the new coordinate system are written as [itex]\hat{A}_1, \hat{A}_2, \hat{A}_3...\hat{A}_N[/itex], then the new components are related to the old by:
[tex]\hat{A}_i=\sum_{j=1}^N a_{ij}A_j[/tex]
For a tensor of rank greater than one we have:
[tex]\hat{D}_{ik}=\sum_{j=1}^N \sum_{m=1}^N a_{ij}a_{km}D_{jm}[/tex]
This defines the transformation laws tensors must satisfy in rectangular geometry. The law is generalized to tensors of higher rank in the obvious way.
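As a concrete check, here is a minimal sketch in Python with NumPy (the rotation matrix and vectors are chosen just for illustration): it verifies that the outer product [itex]D_{ij}=A_iB_j[/itex] obeys the rank-2 law above whenever the vectors obey the rank-1 law.
[code]
import numpy as np

# An illustrative change of rectangular coordinates: rotation by 30 degrees
theta = np.pi / 6
a = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

A = np.array([1.0, 2.0])
B = np.array([3.0, -1.0])
D = np.outer(A, B)                            # D_ij = A_i B_j, old coordinates

A_hat = a @ A                                 # rank-1 law applied to each vector
B_hat = a @ B
D_hat = np.einsum('ij,km,jm->ik', a, a, D)    # rank-2 law applied to D

print(np.allclose(D_hat, np.outer(A_hat, B_hat)))  # True
[/code]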

Now when we move to curvilinear coordinates the situation becomes more complicated. There end up being two transformation laws that are useful at various times. One is the contravariant transformation law, and the other is the covariant transformation law. Tensors can be 'mixed' in the sense that some of their indices follow one transformation law and some follow the other. The situation is complicated by the fact that the basis vectors vary from point to point in curvilinear coordinates. If you want to know how these concepts are generalized, I will explain, but it will take a while.
 
  • #5
I think you are getting confused between what a tensor is and what the COMPONENTS of a tensor are. Things like [tex]E_{ijk}[/tex] notationally refer to the components of a (0, 3) tensor E, and not the tensor itself. The dot product is a FUNCTION that is linear in each of its two vector arguments, and hence IS a (0, 2) tensor. Tensors are multilinear FUNCTIONS; that's it.
 
  • #6
Let's start with a linear algebra review.

You know about vectors. Hopefully you're comfortable with abstract vector spaces, but I'm just going to work with n-tuples for now.

For this entire post, I'll assume we're working up from an n-dimensional vector space.


So, a vector is simply an nx1 matrix. It has n rows and 1 column.

Then, you have covectors (a.k.a. dual vectors). In this setting, a covector is simply a 1xn matrix.

The important feature of a covector is

(covector) * (vector) = (scalar)

Then, we have nxn matrices. The important feature of nxn matrices is that

(matrix) * (vector) = (vector)

We also have some side benefits, though:

(covector) * (matrix) = (covector)

and

(covector) * (matrix) * (vector) = (scalar)



We also have another interesting feature:

(vector) * (covector) = (matrix)


This is your first nontrivial example of a tensor product.



You might wonder about other sorts of combinations such as:

(?) * (vector) = (covector)

Or even products with more than one term, like:

(?) * (vector , vector) = (scalar)

You've actually seen an example of this last thing: the dot product. We traditionally write the dot between its two arguments:

(vector) (dot) (vector) = (scalar)

though someone fond of indices would actually write it like this:

[tex]g_{ij} v^i w^j = s[/tex]

where g is the dot, v and w are the vectors, and s is the scalar. (In this notation, you could actually write the three terms in any order you choose -- the indices specify how they're "glued together" for the product.)
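As an aside, here is a minimal sketch of that index expression in Python with NumPy, taking g to be the identity matrix (i.e. the ordinary dot product in an orthonormal basis; the vectors are just examples):
[code]
import numpy as np

g = np.eye(3)                        # components g_ij of the dot product
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

s = np.einsum('ij,i,j->', g, v, w)   # s = g_ij v^i w^j
print(s, np.dot(v, w))               # 32.0 32.0 -- the same scalar
[/code]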



Anyways, in general, these more complicated things exist, and we call them tensors. A rank (p, q) tensor is something that takes p covectors and q vectors, and gives you a scalar.

For example, an nxn matrix is a rank (1,1) tensor, since we could do:

(covector) * (matrix) * (vector) = (scalar)

to produce a scalar.

A vector is a rank (1,0) tensor, since we can do:

(covector) * (vector) = (scalar)

And a covector is a rank (0,1) tensor for the same reason.

Of course, just like matrices, we can put things together in all sorts of interesting ways. A rank (1,1) tensor can operate on a rank (1,0) tensor and produce a rank (1,0) tensor, and all sorts of other stuff.


We can build higher-rank tensors out of lower-rank tensors. For example, remember the tensor product I mentioned before:

(vector) * (covector) = (matrix)

We've taken a (1,0) tensor and a (0,1) tensor and produced a (1,1) tensor!

In general, given a rank (p,q) tensor and a rank (r,s) tensor, we can take their tensor product which is a rank (p+r,q+s) tensor.


These things aren't all that fun to write out in full, but I'll give a simple example of a tensor product ([itex]\otimes[/itex] is the symbol for tensor product):

[tex]
\left[
\begin{array}{ccc} 2 & 3 & 5 \end{array}
\right]
\otimes
\left[
\begin{array}{ccc} 7 & 11 & 13 \end{array}
\right]
=
\left[
\begin{array}{ccc|ccc|ccc}
14 & 22 & 26 & 21 & 33 & 39 & 35 & 55 & 65
\end{array}
\right]
[/tex]

That last thing is supposed to be read as a "partitioned" matrix -- it is a 1x3 matrix whose entries are 1x3 matrices.

(actually, I may have the terms of the product backwards -- I always forget what convention people like to use when writing these things)
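(If you want to check such a product without writing it out by hand, NumPy's kron computes exactly this kind of partitioned product; a sketch, with the same caveat about argument order:)
[code]
import numpy as np

print(np.kron([2, 3, 5], [7, 11, 13]))
# [14 22 26 21 33 39 35 55 65] -- matching the partitioned matrix above
[/code]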




Michael_McGovern talked a lot about transformation laws. Contrary to what he says, such things are not of fundamental importance to the notion of a tensor -- they are merely a consistency check for a particular method of using tensors.

As you might recall from talk about abstract vector spaces, you cannot talk about the components of a vector until you've selected a basis for that vector space. Then, you talked about comparing two different bases, and worked out a change of basis transformation.

Most of what Michael_McGovern talked about is simply the change of basis transformations for tensors.

To be fair, these things are important, because oftentimes physicists will construct the components of a tensor in some apparently basis-dependent manner -- but bases aren't supposed to matter to physics! So they have to carefully prove that their construction properly respects the change of basis transformations before they can use their tensors.



Also, in physics, one is interested in tensor fields, which are yet another layer of complexity! You've probably heard about vector fields (such as the [itex]\vec{E}[/itex] field) -- to each point of space you associate a vector. Well, you can do the same sort of thing with tensors, and you have to worry about that extra structure.

Since Michael_McGovern likes to do things in index notation (i.e. using coordinates), he has to select a basis. However, for technical reasons, just like a vector field assigns a vector to each point in space, you must also specify a basis for every point in space!

Of course, if you are comfortable doing your linear algebra "abstractly" (i.e. you're happy manipulating the vector [itex]\vec{v}[/itex] as opposed to insisting on picking a basis and manipulating the n-tuple [itex](v_1, \ldots, v_n)[/itex]), you avoid just about all of the issues Michael_McGovern discussed in his post.

(Yes, I am a big fan of doing it "abstractly", why do you ask? :wink:)


But the point I wanted to make in these closing remarks is that all of these additional concerns are only concerns about how people use tensors (especially physicists) -- they are not concerns inherent to the tensor concept itself.
 
  • #7
A tensor is no more than a symbol, with rules for its use, that we can use to express many useful things.
 
  • #8
That's interesting, Hurkyl. That is a much better definition of a tensor than the one I learned. I was trying to learn this stuff on my own, and I got one book that was more physics-oriented and one that was more pure math. They both defined tensors in terms of the way their components transformed. I do see the advantage of your approach.
 
  • #9
What are Tensors?

Hi, I have been hearing/reading the word "tensor" a lot lately, but I have no idea what it is or what it is used for. I also googled for it, but I get bogged down by so much complicated mathematics that I am unable to make any sense of it. All I know is that tensors have something to do with matrices and special relativity, no more, no less. Could someone please just give me the gist of what tensors are?
 
  • #10
Swapnil said:
Hi, I have been hearing/reading the word "tensor" a lot lately, but I have no idea what it is or what it is used for. I also googled for it, but I get bogged down by so much complicated mathematics that I am unable to make any sense of it. All I know is that tensors have something to do with matrices and special relativity, no more, no less. Could someone please just give me the gist of what tensors are?

Yes, I implied this question in my vector calc thread. I didn't even bother to look it up, because I'm afraid I'll draw misconceptions from layman explanations (as I have done in the past with relativity and quantum mechanics).
 
  • #11
i think if you search here over the last few years you will find thousands of words written on this question. maybe one thread was called "what is a tensor?"

i myself have answered this question uncountably many times.
 
  • #12
you might look in the tensor forum.
 
  • #14
Hey you guys, I am trying to learn about tensors on my own this summer, and I would be really glad if someone would recommend a good book (or books) on them. Preferably a book which gives you a physicist's or engineer's perspective on tensors (not a mathematician's).

Thanks in advance.
 
  • #15
think of a Taylor series expanded at each point of a space. the constant terms are the values of the function at each point. the linear terms are the differentials of the function at each point. these are first order tensors. then the second order Taylor polynomials are second order approximations to the function at each point. these are second order symmetric tensors. etc...

there are also antisymmetric tensors, like the 1-forms that one integrates over parametrized curves, and the 2-forms that one integrates over parametrized surfaces.

and there are more complicated ones. in general they are multilinear combinations of tangent vectors and cotangent vectors.
 
  • #16
mathwonk said:
think of a taylor series expanded at each point of a space.
I can't even begin to express the difficulty I have imagining this.

To me, a Taylor series is a bunch of equations that 'zoom in' on a slope, but my only practical application of Taylor series was using a Runge-Kutta technique to reduce error in a computational physics class...

Other than that, it was a completely abstract equation that came at a tough time in math for me, when blatant memorization was my relief from the 'fire hose'.*


*I have a physics professor who says that Physics 211/212 are like asking a student to take a drink from a fire hose. That fits my experience. Maybe it's a local thing, but having had no physics background before 211/212, it was a crazy year; it flew through all kinds of different branches of physics while I was learning calc 2 and 3 as well. I totally lacked soak time. A lot of my mathematical concepts are severely underdeveloped.
 
  • #17
well, start with one point.

i.e. think of a polynomial in two variables and collect terms of the same total degree, like 3 + (x - 4y) + (x^2 + 6xy - y^2) + (x^3 + xy^2 - y^3).

the constant term, 3, is the zeroth order approximation. the linear terms, (x - 4y), are the first order approximation, i.e. approximation by a first order symmetric tensor. (all a symmetric tensor is, is a homogeneous polynomial.)

the next terms, (x^2 + 6xy - y^2), give the second order approximation, by a second order tensor.

this polynomial is expanded in powers of x, y.

the hard part is when we try to expand it in powers of x-a, y-b, for every point (a,b) in the plane. the result is that we get a whole lot of constants, a whole lot of linear polynomials, a whole lot of quadratic polynomials, etc... people who do not know what tensors are think that the complicated notation used to express these latter families of objects is the tensors, and talk about families of coefficients with upper and lower indices as "being" tensors.

from another point of view, the simplest tensors are real valued functions, the next simplest are vector valued functions, and the next simplest are matrix valued functions. etc...
 
  • #18
let's get systematic. first, take the local point of view. we discuss only covariant tensors in the classical sense, and symmetric ones at that. and let us stick to 2 variables.

a covariant 0 tensor is a number. a field of these is a family of numbers, one at each point, i.e. a real valued function. ok?

now a 1 cotensor is a linear polynomial like ax+by. so a field of 1 cotensors is a family of these, one at each point. this is harder to imagine, so just imagine ax+by, but where a and b are functions,

i.e. a(p)x + b(p)y. so in a sense it is represented by the 2 coefficient functions, a(p) and b(p).

now a (symmetric) 2-cotensor is a homogeneous quadratic polynomial like ax^2 + bxy + cy^2, and thus a field of them is the same thing where a, b, c are functions. so in a sense a symmetric 2-cotensor is represented by the three coefficient functions, a, b, c.

now a 2-cotensor that may not be symmetric will also have a yx term that cannot be combined with the xy term, so it will look like ax^2 + bxy + cyx + dy^2, and a field of those is such a thing again where the coefficients are functions.

that's about it. so the big deal I gave you at first was to imagine all these degrees at once. i.e. a symmetric tensor is just a (NOT NECESSARILY HOMOGENEOUS, oops sorry) polynomial.

and a field of them is the same polynomial but with function coefficients. so a not necessarily symmetric cotensor is a sort of noncommutative polynomial, and so on for a field of them, which then might be represented by a huge family of coefficient functions.


thus for each n, a smooth function f defines a field of symmetric tensors with top degree n, namely the field of nth order Taylor polynomials at each point. the associated family of coefficient functions is just the family of partial derivatives of f up to order n.
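here is a minimal sketch of that statement in Python with SymPy (the particular f is chosen only for illustration): the coefficient functions of the field of 2nd order Taylor polynomials are exactly the partials of f, evaluated at the base point (a, b).
[code]
import sympy as sp

x, y, a, b, X, Y = sp.symbols('x y a b X Y')
f = sp.sin(x) * sp.exp(y)     # any smooth function will do

at_p = {x: a, y: b}           # evaluate the coefficients at the point p = (a, b)

order0 = f.subs(at_p)                                  # 0-tensor field: f itself
order1 = (sp.diff(f, x).subs(at_p) * X                 # 1-cotensor field: the differential
          + sp.diff(f, y).subs(at_p) * Y)
order2 = (sp.diff(f, x, 2).subs(at_p) * X**2           # symmetric 2-cotensor field
          + 2 * sp.diff(f, x, y).subs(at_p) * X * Y
          + sp.diff(f, y, 2).subs(at_p) * Y**2) / 2

print(order0 + order1 + order2)   # 2nd order Taylor polynomial at p, in powers of X, Y
[/code]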


if we stick to a single homogeneous degree for our (still co-) tensors, we see there is one coefficient for each noncommutative monomial, i.e. in 2 variables there are the following eight degree 3 monomials xxx, xxy, xyy, yyy, yyx, yxx, xyx, yxy.


now i will probably screw this up due to my allergy to coordinates, but never fear, many people love those best and will leap right in with help.

but anyway, coordinate junkies will prefer to write something like (1,1,1) instead of xxx, and (1,1,2) instead of xxy, and so on, so they will describe an order three cotensor by giving only the coefficients of each term in the form {a(1,1,1), a(1,1,2),...etc...}, and since they always think of fields of cotensors, they will say each a(i,j,k) represents a function.

hence they will say that an order 3 cotensor is a family of functions of form {a(i,j,k)} and they will also give you rules for how to change coordinates. Indeed they must do this since they have not told you what the symbols mean, so you would have no way of figuring out for yourself how they transform.

ok, now i withdraw to a safe bunker to await comments from a classical tensor perspective :biggrin:.
 
  • #19
really, there are three stages in understanding tensors: tensors at a point, then families of tensors in R^n, then patching together such families to get families on a manifold. i have discussed only the first two stages.

the part many physicists leave out is stage one. they jump right into stages two and three, and hence do not understand what the objects are that they are globalizing, so they are obliged to memorize the rules for changing coordinates instead of deriving them from conceptual definitions.

this is not their fault, of course, since the books they read do not explain what is going on. my near hopeless lack of grasp of physics is also due to the books i read leaving that aspect out of the math.
 
  • #20
Hey Hurkyl,

conventionally, what is the result of

[tex]\left[ \begin{array}{c} a \\ b \end{array} \right] \otimes \left[ \begin{array}{cc} c & d \end{array} \right][/tex]

??
 
  • #21
The usual definition is through what I think is called the Kronecker product. I can never remember if it's

[tex]
\left[ \begin{array}{c|c}
c \left[ \begin{array}{c} a \\ b \end{array} \right]
&
d \left[ \begin{array}{c} a \\ b \end{array} \right]
\end{array} \right]
[/tex]

or

[tex]
\left[
\begin{array}{c}
a \left[ \begin{array}{c c} c & d \end{array} \right] \\
\hline
b \left[ \begin{array}{c c} c & d \end{array} \right]
\end{array}
\right]
[/tex]

Fortunately, it doesn't matter in this case -- they are both equal to

[tex]
\left[
\begin{array}{c c}
ac & ad \\
bc & bd
\end{array}
\right]
[/tex]
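A quick numeric check of that claim (a sketch in Python with NumPy, treating the column and row as 2x1 and 1x2 arrays with illustrative entries):
[code]
import numpy as np

col = np.array([[1.0], [2.0]])   # stand-in for the column (a, b)
row = np.array([[3.0, 4.0]])     # stand-in for the row (c, d)

print(np.kron(col, row))         # [[ac, ad], [bc, bd]]
print(np.kron(row, col))         # the other convention -- the same 2x2 matrix here
[/code]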
 
  • #22
By the way, does anyone understand Penrose's tensor notation that looks like multi-legged bugs? :grumpy:
 
  • #23
It's been a while since I looked at it, but I think...

The inputs to a tensor are represented as legs on the top side of the bug.
The outputs of a tensor are represented as legs on the bottom side of the bug.
Contraction is represented by connecting the wires.
Raising/lowering indices with a metric is done by attaching a U-shape to change which way (up/down) the wire is travelling.

Oh, and a tensor product is done by placing two tensors side by side.
 
  • #24
The simplest way I always thought you could learn what a tensor is, is by thinking of the indices, or the relationship

Vector -> Matrix -> Tensor
[N] -> [N,M] -> [N,M,L...] ...I learned it from an analytical mechanics/programming book.

But like Hurkyl said... the vector and matrix are also tensors. Thus I think you can consider a tensor as a data structure accessed by indices/an array operator.
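A minimal sketch of that data-structure picture in Python with NumPy (bearing in mind, as the next reply points out, that such an array only holds the components of a tensor in some chosen basis):
[code]
import numpy as np

v = np.zeros(3)           # [N]       -- components of a vector
m = np.zeros((3, 4))      # [N, M]    -- components of a matrix
t = np.zeros((3, 4, 5))   # [N, M, L] -- a three-index component array

t[0, 2, 4] = 1.0          # one component, accessed by three indices
print(v.ndim, m.ndim, t.ndim)   # 1 2 3 -- the number of indices of each
[/code]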
 
  • #25
I think that's very misleading! A matrix is not a vector nor a tensor: a matrix is just one way of representing vectors or transformations. (And most tensors cannot be written in terms of matrices.)
 
  • #26
I noted threads asking "what is a tensor" in both the General Math and Calculus sections. I am merging both threads and moving the combined thread to the "Tensor" section.
 
  • #27
thank you, Halls. could you also please put a little "moved" sign in the calculus section to guide the OP here?
 
  • #28
Mathwonk, could you please explain to me:

1) what is a "cotensor" as opposed to a tensor?
2) How does that view of tensors as polynomials relate to the view of tensors as functions of vectors and covectors?
 
  • #29
HallsofIvy: you wrote
"For example, an nxn matrix is a rank (1,1) tensor, since we could do"
So does this imply that not all nxm matrices are tensors, or that no nxm matrices are tensors?

http://mathworld.wolfram.com/Tensor.html mentions N indices, each running over M dimensions. Does this mean that there is a single M for all indices, or is M just a variable, like M1...MN?
 
  • #30
cotensors refer to various scalar valued functions on vectors.

they are called covariant tensors in classical language. they are the ones that pull back under mappings.


the other kind are called contravariant tensors in classical language.


let's just look at a manifold and ask what we can build from it.

1) the most fundamental object we can construct is the family of tangent spaces at each point.

a choice of a tangent vector at each point is an example of what i believe is classically called a contravariant tensor field.

2) a higher order (intellectually, that is) object is the dual tangent space at each point, i.e. the space of linear functions on tangent vectors.

a choice of such a linear function on tangent vectors at each point is i believe an example of something classically called a covariant tensor field.


note that a linear function is nothing but (in coordinates) a linear polynomial, i.e. the linear term of a Taylor series. hence assigning, at each point, the linear term of the Taylor series of a smooth function is one way to define a covariant tensor field.


3) generalizing and raising the ante from linear to bilinear, we could consider at each point the space of bilinear real valued functions on ordered pairs of vectors.

choosing such a bilinear function at each point is another example, i believe, of a covariant tensor field. an example is a symmetric bilinear function at each point, i.e. in coordinates, merely a quadratic polynomial such as the second term of the Taylor series of a function.

there are also noncommutative quadratic polynomials, also called covariant tensors (of second order?)

4) now to be consistent we should also define a contravariant tensor field of second order, something dual to a noncommutative quadratic polynomial. i skipped this because it is harder to define than the covariant version.

namely, we have to define some kind of second order tangent vectors, but the definition is a little mathematical looking and less natural than the quadratic polynomials above. to be semi-precise, we want a single gadget such that evaluating a quadratic polynomial on a pair of tangent vectors is equivalent to evaluating the 2-tensor defined by the quadratic polynomial on this one gadget.

precisely but probably unhelpfully: the tensor product of a vector space V with itself is another vector space VtensorV, plus a bilinear map

m: VxV ---> VtensorV, such that every bilinear map VxV ---> W, for any vector space W, can be factored as a composition

VxV--->VtensorV--->W, where the second map VtensorV--->W is linear.


it is easy to show how to write such things -- just take linear combinations of symbols like vtensorv' -- but that does not explain what they are.



basically, just as the first order covectors, or dual vectors, are linear functions on tangent vectors, it would be nice if the second order cotensors, the polynomials, were also actually linear maps, not bilinear maps, on some vector space. VtensorV is that vector space.


i.e. second order cotensors are bilinear maps VxV--->R, but we also want to write them as (VtensorV)* = linear maps on VtensorV.

one unnatural way to do this is to simply define VtensorV = {Bil(VxV,R)}*, i.e. the dual of the bilinear maps.

since in finite dimensions, dual of a dual is the original space back, we get then that (VtensorV)* = {Bil(VxV,R)}** ={Bil(VxV,R)}.


there is another way to do it mathematically but it is so complicated i do not blame anyone for not learning this stuff abstractly.


i guess the most intuitive way to learn is in fact the way physicists do! i.e. just learn how to write them down, and not worry about the definitions.

however, since i am about to find myself agreeing with a point of view i have fought for years here, i will take a step back and say this:

there is a huge difference between knowing the abstract properties a gadget should have, and knowing the picky technical definition and construction of the gadget that has those properties.

i.e. although i do not advocate struggling through the mathematical construction of tensors, i do advocate knowing the characterizing properties they have.


from a utilitarian point of view, a tensor is anything you can get by starting from a tangent space and iterating or combining the constructions of multilinear functions on a space already given.

i.e. the dual space V* is the linear functions on V. then one can construct the bilinear maps on VxV*. this is essentially V*tensorV.

then one can take the dual of that, getting VtensorV*.

then one can take the trilinear maps on VxVxV*, getting essentially

V*tensorV*tensorV. and so on...


cotensors are the ones with the stars on them. :bugeye:
 
  • #31
i.e. briefly,

V*tensorV* is the space of noncommutative second order polynomials on V,

i.e. V*tensorV* = bilinear functions on VxV. VtensorV is the dual space of V*tensorV*. thus, by mere definition,

V*tensorV* is the space of linear functions on VtensorV. that's all there is to it.
 
  • #32
but anything can be made to look different. take a bilinear map from VxV to W.

by fixing one entry, we get a linear map from V to W.

i.e. if <x,y> is bilinear in both entries, then fixing x, we get a map
y --> <x,y> which is linear in y.

and the map from x to linear maps in y is itself linear in x!

thus we can regard Bil(VxV-->R) = Lin(V-->V*),

thus also Bil(VxV*) = Lin(V-->V).

this says that V*tensorV and Hom(V,V) are essentially the same thing!

i.e. certain (1,1) tensors are equivalent to matrices. people can discuss at length whether these are or are not "the same", but this is largely just language.
 
  • #33
let me go out on a limb here and guess that anything that depends multiplicatively on tangent vectors is a tensor.

e.g. kinetic energy, i.e. (1/2)mv^2, depends multiplicatively and quadratically on velocity, so it should be expressible as a second order tensor. tensors are nothing but an algebraic way of expressing multiplication of things that originally only belonged to a vector space and hence could not be multiplied. suppose v and w are elements of a vector space V; then we write vtensorw to be their product in VtensorV.

we see that vtensorw is determined by the pair <v,w> and yet is different from that pair, because in the space VxV we add <v,w> to <v',w> and get

<v+v', w+w> = <v+v', 2w>, but in VtensorV we add vtensorw and v'tensorw and get (v+v')tensorw.

see the difference? this is what changes bilinear functions on VxV into linear functions on VtensorV.
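that difference is easy to see numerically; here is a sketch in Python with NumPy, using the outer product of coordinate vectors as vtensorw:
[code]
import numpy as np

v  = np.array([1.0, 2.0])
v2 = np.array([3.0, -1.0])
w  = np.array([5.0, 4.0])

# in VtensorV: vtensorw + v'tensorw = (v+v')tensorw
print(np.allclose(np.outer(v, w) + np.outer(v2, w), np.outer(v + v2, w)))  # True

# whereas in VxV, adding <v,w> and <v',w> componentwise gives <v+v', 2w>,
# which is a different pair from <v+v', w>
[/code]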
 
  • #34
i went back and read Hurkyl's explanation, and it sounded so much clearer and easier than mine. he said a tensor is just a multilinear function on a sequence of vectors and covectors. that's right.

here let me relate one of my statements to that:

I said, in trying to define second order contravariant tensors:

"one unnatural way to do this is to simply define VtensorV =
{Bil(VxV,R)}*, i.e. the dual of the bilinear maps."

Hurkyl's version would define VtensorV instead as {Bil(V*xV*,R)}, i.e. as bilinear maps on pairs of covectors. this is essentially the same thing, i.e. there is an isomorphism of vector spaces between

{Bil(VxV,R)}* and {Bil(V*xV*,R)} that takes, let's see... ok, i think i got it. remember Auslander's dictum: basically, anything you can think of is correct.

so first note there is a simple map from pairs of linear functions to bilinear ones, namely multiply. i.e. if f, g are elements of V*, then ftensorg, which takes <v,w> to f(v)g(w), is an element of Bil(VxV,R).

so here we go:

we want to define a map {Bil(VxV,R)}* --> {Bil(V*xV*,R)}

so let H be an element of {Bil(VxV,R)}*, i.e. if H sees a bilinear map on VxV, it spits out a number. now we want that to give us an element of
{Bil(V*xV*,R)}, which is a gadget that spits out numbers when it sees two linear maps on V. well, easy: let f, g be two linear maps on V, and apply H to ftensorg.

i.e. the composition of a linear and a bilinear map is bilinear, so
H(ftensorg) is bilinear in f and g, hence gives an element of

{Bil(V*xV*,R)}.

since this map from {Bil(VxV,R)}* to {Bil(V*xV*,R)} is the only one i can think of, by Auslander's dictum it is an isomorphism. you can check this yourself by finding an inverse. i.e. suppose K is an element of {Bil(V*xV*,R)}, i.e. something that spits out a number when it sees a pair of linear functions f, g. now we want it to define an element of {Bil(VxV,R)}*, i.e. to spit out a number when it sees a bilinear map.

so let m be a bilinear map on VxV. uh oh, i have to produce a pair of linear maps on V, but there is no nice way to do this. i.e. we are using the finite dimensionality here, and as remarked above, that is why the isomorphism depends on a certain map being injective, hence an isomorphism, whereas in infinite dimensions that can fail. so i suspect there is no natural definition of the inverse here independent of coordinates, since if there were, it would work in infinite dimensions too, where the result is false. pooh.

well, i overextended Auslander's dictum; i.e. since there is nothing i can think of in the reverse direction, it may not always be an isomorphism. he really said the only thing you can think of is the right thing, not that it is an isomorphism, since his dictum requires you to think of two inverse things for that to hold. my apologies to his memory. he was always very clear that an isomorphism is a map with an inverse.
 
  • #35
nonetheless, here there is an isomorphism, since the two spaces have the same dimension, so all we have to check is injectivity, which i will do in case it happens to be false, which it won't be.

so let H be any linear map on bilinear maps, and let f,g be any two linear maps. then i claim, if H(ftensorg) is zero for all f,g, then H is zero. I.e. I have to show that bilinear maps of form ftensorg span the space of all bilinear maps on VxV.

Hmmm, this is the same problem as before, but now I am allowed to use coordinates to check a coordinate free statement. I.e. I claim every bilinear map on VxV is a linear combination of ones of the form ftensorg, where f, g are linear.

this is easy (i hope -- i always say that as cover): choose a basis v1,...,vn, and then a bilinear map is determined by its values on pairs like <vi,vj>.

but i can get any numbers i want from such a pair using the dual basis f1,...,fn, where fi(vj) = Kronecker delta(i,j), times a constant.

i.e. (fi)tensor(fj) has value 1 on <vi,vj>.

so the special bilinear maps (fi)tensor(fj) give a basis for all of them.

so any bilinear map can be expressed as a linear combination of the special ones of the form (fi)tensor(fj). and since H kills this basis, it is zero.

whew! i am getting some idea of why this is so hard for learners, who do not have a good grasp of all these natural isomorphisms between different ways of saying the same thing, which I take pretty much for granted.
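here is a minimal sketch of that spanning argument in Python with NumPy, using the standard basis (so each (fi)tensor(fj) has a component matrix with a single 1 in position (i,j)):
[code]
import numpy as np

n = 3
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))    # components of an arbitrary bilinear map on VxV

e = np.eye(n)                      # rows play both the basis vi and the dual basis fi
reconstructed = sum(B[i, j] * np.outer(e[i], e[j])   # B = sum B_ij (fi)tensor(fj)
                    for i in range(n) for j in range(n))

print(np.allclose(B, reconstructed))   # True: the (fi)tensor(fj) span all bilinear maps
[/code]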
 

What are electric and magnetic constants?

Electric and magnetic constants are physical quantities that describe the fundamental properties of electric and magnetic fields. They are used to calculate the strength and behavior of these fields in various situations.

Why are electric and magnetic constants considered tensors?

Electric and magnetic constants are considered tensors because, in a general (anisotropic) material, they relate field vectors that need not point in the same direction, and their components change in a definite way when the coordinate system changes. This makes them more complex than simple scalar quantities.

What are the differences between electric and magnetic constants?

The main difference between electric and magnetic constants is that they describe different properties of the electromagnetic field. Electric constants, such as permittivity, describe how a material polarizes in response to an electric field, while magnetic constants, such as permeability, describe the ability of a material to support a magnetic field.

How are electric and magnetic constants related?

Electric and magnetic constants are related through the speed of light, a fundamental constant in physics: in vacuum, [itex]c = 1/\sqrt{\epsilon_0 \mu_0}[/itex]. The relationship between electric and magnetic constants is described by Maxwell's equations, which govern the behavior of electromagnetic fields.

Why are electric and magnetic constants important in science?

Electric and magnetic constants are important in science because they are used in a wide range of fields, including electromagnetism, electronics, and telecommunications. They are also crucial in understanding the behavior of light, which is an electromagnetic wave.
