Math Newb Wants to know what a Tensor is

  • Thread starter StonedPanda
  • Start date
  • Tags
    Tensor
  • #1
StonedPanda
Math "Newb" Wants to know what a Tensor is

Hey, I'll be entering my senior year of High School next year and this summer I'm taking Multivariable Calculus at UCLA. In September I'll start AP Stats.

Anyway, what is a Tensor? I've always wondered about this. From what I've gathered, it's a vector with two bits of information.

What are they useful for? How do you "play" with them?

Thanks, guys!
 
  • #2
A "tensor" is a generalization of a "vector". The crucial point about tensors (as well as vectors) is that they change "homogeneously" under a change of coordinates. Specifically, if a tensor is all zeroes in one coordinate system, then it is zero in all possible coordinate systems. That means that if the equation A = B (where A and B are tensors) is true in one coordinate system (A = B is the same as A - B = 0), then it is true in any coordinate system. Since (obviously) a physical law does not depend upon an (arbitrary) choice of coordinate system, it follows that physical laws must be expressed in terms of tensors.
 
  • #3
HallsofIvy nicely captured the main idea. As he could also tell you, a vector is a special case of tensor, namely a vector is a tensor of rank 1. A scalar can be viewed as a tensor of rank zero.
 
  • #4
For a mathematical object to be a tensor it must obey certain relationships when changing between coordinate systems (as HallsofIvy says, tensor analysis essentially revolves around the idea that physical laws should be independent of coordinate system). The exact relationship it obeys depends on its covariant and contravariant orders.

As Janitor says, scalars can be considered tensors of rank 0 and vectors are tensors of rank 1. As you know, a scalar is just a single number, whereas an n-dimensional vector can be fully described with n numbers, for example:

A = 2i - 3j + 5k

fully describes the three-vector A (in terms of its rectangular component vectors).

The number of components of a tensor of N dimensions and rank p (i.e. the numbers needed to describe the tensor) is equal to [itex]N^p[/itex] (though of course there's nothing to stop some or all of these components from being zero).
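That counting rule is a one-liner in code (my illustration, not from the thread; `tensor_components` is a made-up name):

```python
# A tensor of rank p in N-dimensional space has N**p components.
def tensor_components(N, p):
    """Return the number of components of a rank-p tensor in N dimensions."""
    return N ** p

print(tensor_components(3, 0))  # 1  (scalar)
print(tensor_components(3, 1))  # 3  (vector)
print(tensor_components(3, 2))  # 9  (rank-2 tensor, e.g. a 3x3 matrix)
```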
 
Last edited:
  • #5
I spoke with my aunt's husband, who has a degree in engineering, and read up on this in a physics book. My understanding is that a tensor is sort of like a vector whose magnitude is a function of the direction.

The example he gave me was the force acting on a surface at some angle theta to the surface. The component of the force that acts against the surface (the perpendicular component) is a vector, but its magnitude depends on the direction (the angle): for example, on an incline, the component of gravity that works against the ground is the object's weight times the cosine of the angle.
 
  • #6
Even more generally, a tensor is a sort of mathematical machine that takes one or more vectors and produces another vector or number.

A tensor of rank (0,2), often just called rank 2, is a machine that chews up two vectors and spits out a number.

A tensor of rank (0,3) takes three vectors and produces a number.

A tensor of rank (1,2) takes two vectors and produces another vector.

Hopefully you see the pattern.

You actually already know what a (1,1) tensor is -- it's nothing more than a good ol' matrix. It accepts one vector and produces another vector.

If you're working in three dimensions, a (1,1) tensor can be represented by its nine components. Here's a simple (1,1) tensor.

[tex]
T = \left(
\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right)
[/tex]

You already know what this guy does -- it takes a vector and gives you back the exact same vector. It's the identity matrix, of course. You would use it as such:

[tex]\vec v = T \vec v[/tex]
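A minimal NumPy sketch of the identity matrix acting as a (1,1) tensor (my addition, not part of the original post):

```python
import numpy as np

# The identity matrix as a (1,1) tensor: it accepts a vector
# and returns the very same vector.
T = np.eye(3)
v = np.array([2.0, -3.0, 5.0])

w = T @ v  # "feeding" the vector to the tensor
print(w)   # [ 2. -3.  5.]
```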

If the world were full of nothing but (1,1) tensors, it'd be pretty easy to remember what T means. However, there are many different kinds of tensors, so we need a notation that will help us remember what kind of tensor T is. We normally use something called "abstract index notation," which sounds more difficult than it is. Here's our (1,1) tensor, our identity matrix, laid out in all its regalia:

[tex]T^a_b[/tex]

The a and b are referred to as indices. The one on the bottom indicates the tensor takes one vector as "input." The one on the top indicates it produces one vector as "output."

Tensors don't have to accept just vectors or produce just vectors -- vectors are themselves just a type of tensor. Vectors are tensors of type (1,0). In full generality, tensors can accept other tensors, and produce new tensors. Here are some complicated tensors:

[tex]
R^a{}_{bcd}\ \ \ \ G_{ab}
[/tex]

The second one, [itex]G_{ab}[/itex] is a neat one to understand. You should already understand from its indices that it is a type (0,2) tensor, which means it accepts two vectors as input and produces a number as output. It's called the metric tensor, and represents an operation you already know very well -- the dot product of two vectors!

In normal Euclidean 3-space, [itex]G_{ab}[/itex] is just the identity matrix. You can easily demonstrate the following statement is true by doing the matrix multiplication by hand:

[tex]\vec u \cdot \vec v = G_{ab} u^a v^b[/tex]
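Here is a small sketch of that hand check (my addition; it assumes Euclidean 3-space, where G is the identity):

```python
import numpy as np

G = np.eye(3)                    # metric tensor of Euclidean 3-space
u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 0.5])

# G_ab u^a v^b: einsum sums over the repeated indices a and b
dot_via_metric = np.einsum('ab,a,b->', G, u, v)

print(dot_via_metric)   # 3.5
print(np.dot(u, v))     # 3.5 -- the ordinary dot product agrees
```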

The metric tensor is more complicated in different spaces. For example, in curved space, it's certainly not the identity matrix anymore -- which means the vector dot product is no longer what you're used to either when you're near a black hole. Tensors are used extensively in a subject called differential geometry, which deals with, among other topics, curved spaces. General relativity, Einstein's very successful theory which explains gravity as the curvature of space, is cast in the language of differential geometry.

So there you have it: tensors are the generalization of vectors and matrices and even scalars. (Scalars, by the way, are considered to be type (0,0) tensors.)

I should mention that not all mathematical objects with indices are tensors -- a tensor is a specific sort of object that has the transformation properties described by others in this thread. To be called a tensor, an object must transform like a tensor. Don't worry, though; you won't run into such objects very often.

- Warren
 
  • #7
wow! this is the simplest clearest explanation of tensors i have seen. thank you.
 
  • #9
chroot said:
Even more generally, a tensor is a sort of mathematical machine that takes one or more vectors and produces another vector or number.
That's the definition of a general tensor. You neglected to add that the machine must be linear. There are also general tensors which accept one-forms as arguments.

An (M,N) tensor is a linear function of M one-forms and N vectors into the real numbers.

An affine tensor is another kind of tensor. Examples of affine tensors are Cartesian tensors and Lorentz tensors. These are defined according to how their components transform under a restricted transformation in a given space. E.g. the transformation relevant to a Cartesian tensor is the orthogonal transformation. See -- http://www.geocities.com/physics_world/ma/orthog_trans.htm

The transformation relevant to a Lorentz tensor is the Lorentz transformation.

For details see

Tensors, Differential Forms, and Variational Principles, Lovelock & Rund, Dover Pub., (1989)

Pete
 
  • #10
Of course I left out a bit pmb, I wasn't trying to get into the concepts of one-forms versus vector fields just yet. :smile: Thanks for the additions, though.

- Warren
 
  • #11
chroot said:
Of course I left out a bit pmb, I wasn't trying to get into the concepts of one-forms versus vector fields just yet. :smile: Thanks for the additions, though.

- Warren
You're welcome, chroot. I had a feeling you were being sparse intentionally but wanted to give StonedPanda more of the details just in case he was overly stoned and needed that much more. :biggrin:

The part about affine tensors I find to be an important addition, because not knowing the use of the different kinds of tensors can be confusing. For instance, in D'Inverno's text Introducing Einstein's Relativity, even the author goofed up on this subtle point. I.e. D'Inverno defines tensors in a manner which is equivalent to how you have defined them, i.e. as general tensors (by "equivalent" I refer to the definition of a tensor as that whose components transform in a particular way). D'Inverno then goes on to define the angular momentum tensor (an important tensor in SR) on page 118:

[tex]L^{\mu\nu} = x^{\mu}p^{\nu} - x^{\nu}p^{\mu} [/tex]

This is certainly not a general tensor since [itex]x^{\mu}[/itex] isn't a general tensor. However, it is a Lorentz tensor, and therefore [itex]L^{\mu\nu}[/itex] is a Lorentz tensor and not a general tensor.

Pete
 
Last edited:
  • #13
You're welcome chroot. I had a feeling you were being sparse intentionaly but wanted to give the StonedPanda more of the details just in case he was overly stoned and needed that much more.
I definitely picked the wrong name for this forum!

But thanks!

At what level of mathematics does one encounter tensors?
 
  • #14
pmb_phy said:
[tex]L^{\mu\nu} = x^{\mu}p^{\nu} - x^{\nu}p^{\mu} [/tex]

This is certainly not a general tensor since [itex]x^{\mu}[/itex] isn't a general tensor. However, it is a Lorentz tensor, and therefore [itex]L^{\mu\nu}[/itex] is a Lorentz tensor and not a general tensor.

Is there any chance you could enlighten me here - I follow what was said earlier regarding sub/superscripts representing the vectors that are input and output. But in the equation quoted there appears to be no input vector, only an output - what is going on there?

I'm basically asking for a clarification of the meaning of the notation when you just have superscripts, not for any particularly detailed mathematical explanation, as some of these maths threads are way over my head.

The other thing is I'm a bit unclear on the exact meanings of covariant and contravariant. Does anyone know a website, or can give an explanation where these terms are explained geometrically rather than in mathspeak. (To put it in context here - I love physics, but I'm not particularly good at maths when it is explained to me in terms of maths... I guess I just need to see what is happening physically when these terms are defined.)
 
  • #15
A tensor with just a single superscript is nothing more than an ordinary vector.

[tex]x^\mu = \left(
\begin{array}{c}
x^0\\
x^1\\
x^2\\
x^3
\end{array}
\right)[/tex]

where [itex]x^0[/itex] and so on are just numbers (scalars).

The contravariant vector is the normal vector you're used to working with. Covariant vectors are dual to contravariant vectors. Contravariant vectors are column vectors, while covariant vectors are row vectors. If you multiply a covariant vector by a contravariant vector, you get a number out:

[tex]x^\mu x_\mu = a[/tex]

- Warren
 
Last edited:
  • #16
chroot said:
A tensor with just a single superscript is nothing more than an ordinary vector.

[tex]x^\mu = \left(
\begin{array}{c}
x^0\\
x^1\\
x^2\\
x^3
\end{array}
\right)[/tex]

where [itex]x^0[/itex] and so on are just numbers (scalars).
Note: Scalars are defined as tensors of rank zero. All scalars are numbers, but not all numbers are scalars.

Pete
 
Last edited by a moderator:
  • #17
I'm actually interested in MoonUnit's last paragraph:
MoonUnit said:
The other thing is I'm a bit unclear on the exact meanings of covariant and contravariant. Does anyone know a website, or can give an explanation where these terms are explained geometrically rather than in mathspeak.
I'm not too clear on the geometrical meaning myself, though I can go through the motions and manipulate tensor expressions just fine.

A contravariant vector exists in the tangent space of a specific point in the manifold being considered. In other words, if you have a basketball (the surface of which is a 2D manifold), and you glue little toothpicks tangent to it, each toothpick is a contravariant vector defined at the point it is glued to the basketball. This much makes intuitive sense to me -- I can just as easily think "tangent" whenever someone says "contravariant." If you take any point on the basketball, the set of all tangent vectors you could glue to it there forms a (2-dimensional) tangent space at that point. Contravariant vectors at that point belong to that tangent space.

Now, I know covariant vectors live in cotangent spaces, but I'm not really clear on how to visualize a cotangent space. I have read most of John Baez' book "Gauge Fields, Knots, and Quantum Gravity," in which he makes an earnest attempt to help the reader visualize a covariant vector -- but it falls flat on me. I just can't understand what he's trying to say.

Does anyone have a clear explanation of how to "visualize" a covariant vector? Is it really even possible to visualize it?

- Warren
 
  • #18
chroot said:
Does anyone have a clear explanation of how to "visualize" a covariant vector? Is it really even possible to visualize it?

A covector can be visualized as a stack of parallel planes. (Generally, a pair of parallel planes is sufficient. Visualize "twin blades".)
The spacing between planes is inversely proportional to the size of the covector.
Think: local approximations of "equipotential surfaces".

The contraction of a vector and a covector can be interpreted in terms of the number of piercings (MTW's "bongs of a bell") of the stack by the vector.
 
Last edited:
  • #19
robphy,

That's basically how Baez tried to explain it, and I basically still don't get it. Where are these parallel (hyper)planes supposed to be? Equipotential of what?

Are the hyperplanes of the covariant vector perpendicular to the contravariant vector?

- Warren
 
  • #20
chroot said:
robphy,

That's basically how Baez tried to explain it, and I basically still don't get it. Where are these parallel (hyper)planes supposed to be? Equipotential of what?

Are the hyperplanes of the covariant vector perpendicular to the contravariant vector?

- Warren

Take the electrostatic field as a physical application.

The electric potential [itex]\phi [/itex] is a scalar field.
The object [itex]-\nabla_a \phi [/itex] is a covector field (a field of one-forms).
Visually, the covectors are these twin-blades tangent to the equipotentials of [itex]\phi [/itex].

Consider a displacement vector [itex]d^a[/itex].
Suppose that [itex]d^a[/itex] is parallel to an equipotential. The contraction [itex]-\nabla_a \phi d^a[/itex] (that is [itex]\vec E\cdot \vec d[/itex]) is zero. Thus, the potential difference from tail to tip is zero.

Suppose that [itex]d^a[/itex] is perpendicular to an equipotential (i.e. parallel to the gradient vector). The contraction [itex]-\nabla_a \phi d^a[/itex] (that is [itex]\vec E\cdot \vec d[/itex]) is nonzero because [itex]d^a[/itex] pierces some planes in the stack. Thus, the potential difference from tail to tip is nonzero.

With [itex]d^a[/itex] fixed, suppose that electric potential [itex]\phi[/itex] is doubled in strength. We expect the potential difference from tail to tip to double. To obtain twice the number of piercings for the same [itex]d^a[/itex], the spacing between the twin-blades must be halved.
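The piercing picture above can be checked numerically (a sketch I'm adding; the linear potential and the displacement vectors are made up for illustration):

```python
import numpy as np

# Take the linear potential phi(x, y) = 3x. Its equipotentials are the
# vertical lines x = const, and the covector field -grad(phi) has
# constant components (-3, 0).
E_covector = np.array([-3.0, 0.0])

d_parallel = np.array([0.0, 2.0])  # displacement along an equipotential
d_perp     = np.array([1.0, 0.0])  # displacement across equipotentials

# Contraction of covector with vector = (minus) potential change tail-to-tip
print(E_covector @ d_parallel)    # 0.0: no planes pierced
print(E_covector @ d_perp)        # -3.0: planes pierced, nonzero change

# Doubling phi doubles the piercings for the same displacement
print((2 * E_covector) @ d_perp)  # -6.0
```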

Here is a nice article by Jancewicz:
http://arxiv.org/abs/gr-qc/9807044

Other tensors (e.g., differential forms) can be visualized similarly.
 
Last edited:
  • #21
Thanks robphy,

I'll try to chew on this a little more. I feel odd that I have a good grasp of the mathematical difference between covariant and contravariant vectors, but am not able to describe them in any other way... :redface:

- Warren
 
  • #22
chroot said:
Does anyone have a clear explanation of how to "visualize" a covariant vector? Is it really even possible to visualize it?
robphy's description is a specific one and, of course, is correct as far as that description goes. However, such a description tends to let one confuse the tangent space with the dual space. It can give the false impression that covariant and contravariant vectors are the same kind of animal. E.g. in Euclidean geometry the covariant basis vectors are typically defined as

[tex]\bold e^i = \nabla x^i[/tex]

where the [itex]x^i[/itex] are coordinates. The contravariant basis vectors are typically defined as

[tex]\bold e_i = \frac{\partial \bold r}{\partial x^i}[/tex]

When you visualize these two objects, it gives you the impression that you can represent one as a sum of the others, since each appears to be a vector in the same space. But that is not a true statement in general. In general, covariant and contravariant vectors lie in different spaces altogether (e.g. such an addition would be like trying to represent a bra as a linear sum of kets). It requires a metric just to relate the covariant vectors to the contravariant vectors. With no metric, no such relationship exists.

I wish there were a general way to visualize 1-forms but I haven't seen one yet.

Pete
 
Last edited:
  • #23
chroot said:
A tensor with just a single superscript is nothing more than an ordinary vector.
Thank you Chroot!

I've been baffled by that for a while now - and I'm no longer baffled.

Yet again, the curse of learning yet another notation... I really wish we could get some linguists in to give mathematical language (symbolic and descriptive) an overhaul and redefine the way things are written, so as to make a more precise language for maths which makes intuitive sense without requiring quite as much raw indoctrination as we have to endure now :)

And thanks to everyone for the discussion on the geometrical meaning - it does make it somewhat clearer, although I'm still working my way through robphy's post.

I do have another query though, and that's just a clarification of expressions of the type "(stuff) exists in the (other stuff) space" - like "a contravariant vector exists in the tangent space", as chroot said. What IS the tangent space - is it a (2D) surface which basically follows the surface of the basketball? How do I generalise it so that when I come across a new one, I might be able to picture it? :) What steps do you go through in your head for visualising (or just translating) "in a (whatever) space"? What advantage does thinking of a tangent space have over just thinking of all the tangential vectors on the surface? There must be an application of it, or some subtlety I'm missing, that explains the need for tangent spaces and so on. In QM we touched on momentum space, but all that seemed to be was just a graph in which psi(x) vs. x became phi(p) vs. p, and momentum was the x-axis, yet it really didn't explain what "momentum space" was. I almost get it but not quite...

Finally definitions again - contravariant and covariant... they are contra/covariant with respect to what?

Thanks for your input - it is helpful! Sorry if I'm asking really dense questions, but most of my friends don't really care to discuss this kind of thing...
 
  • #24
If you take my basketball again, and consider some specific point p on it, you may glue an infinite number of different toothpicks to that point, all pointing in different directions. That's a 2D vector space -- a plane.

The term "tangent space" is just used to mean the 2D vector space tangent to the basketball at some point p.

The reason we refer to vectors living in different spaces is simple. If you imagine my basketball again, there's one tangent space at a specific point p, e.g. on the equator, and a different tangent space at e.g. a point q like the north pole. A vector at p and a vector at q are quite literally in totally different vector spaces. You can't add them, or otherwise relate them, because they literally exist in different spaces. Every point on the basketball's surface has its very own tangent space.

You can relate the tangent spaces at two distinct points with a special mechanism called "parallel transport." In parallel transport, you effectively "push" the vector along the surface of the manifold from one point p to another point q. The vector is naturally changed by this process; a vector tangent to the basketball's surface in one place remains tangent to its surface at all points when you push it around the surface, and may point in a totally different direction after being pushed around. Parallel transport provides a rigorous definition of a relationship between vectors in the tangent space at p and vectors in the tangent space at q.

- Warren
 
  • #25
pmb_phy said:
In general, covariant and contravariant vectors lie in different spaces altogether (e.g. such an addition would be like trying to represent a bra as a linear sum of kets). It requires a metric just to relate the covariant vectors to the contravariant vectors. With no metric no such relationship exists.

I wish there were a general way to visualize 1-forms but I haven't seen one yet.

Pete

Yes, it is true that covectors and vectors lie in different spaces and, as you say, cannot be added together. (That is why these two directional quantities have different representations.) However, they can be contracted, since a covector is a linear map from the space of vectors to the reals. No metric is required for this operation.

Although my example with the electrostatic field was a little loose (where I snuck in the Euclidean metric in a few places), I can make it more precise if requested.

As you suggest, given a metric, one can map a vector to a covector. If that metric is invertible, one can map a covector to a vector. (In index-speak, this is lowering and raising of an index.) In fact, there is a geometric construction that can take a vector (represented by an arrow) and a Euclidean metric (represented by an ellipse [generally, an ellipsoid]) to produce the covector (represented by the twin-blades) obtained by lowering the index of the vector. That same construction can be done in reverse to raise the index of the covector. Thus, this construction shows that these representations are duals of each other.

In addition, for covectors (represented as twin-blades), there is an analogue of the parallelogram rule for adding vectors.

Some references for this visualization of a 1-form are, in addition to MTW 2.5,
http://www.ucolick.org/~burke/forms/draw.ps (need ghostview)
http://www.ucolick.org/~burke/forms/tdf.ps
http://content.aip.org/JMAPAQ/v24/i1/65_1.html [Broken]
http://www.ee.byu.edu/ee/forms/forms-home.html [Broken]
which I believe all derive from Schouten's Tensor Analysis for Physicists.
 
Last edited by a moderator:
  • #26
Ohmigosh, I worked for over an hour on how to picture a covector and a differential form, and the web connection just shut down and lost it all.

Well, that spares you a long post.

Here is just the last sentence from it: Professor Bott (a famous engineer-turned-topologist) used to say a cocycle, i.e. differential form, is something that "hovers over the space, looking for a cycle [i.e. a path], and when it sees one, it pounces on it, gobbles it up, and spits out a number".

What is more visual than that!? Sort of like a hungry bird of prey.

Most of my post was devoted to recalling how the familiar coordinate functions x and y on the euclidean plane are covectors, and the grid of parallel lines on a graph paper are the corresponding parallel families of "level sets" for those covectors, i.e. sets where the coordinate takes the same value.

This lets us picture at least the level sets of covectors within the vector space itself.
 
  • #27
Okay I've read this post and the other referring to tensors, and followed the links, and I'm still lost.

Could someone please tell me (without technical words and maths jargon :biggrin: ) what a tensor is, what a one-form is, and how they relate to each other and to a vector. Also, what is dual space?

Sorry to say this (and I'm quite embarrassed really), but somehow I've managed to get through 2 years of a physics degree and the maths involved (though linear algebra was never my strong suit) without understanding the meaning of words like homogeneous, covector etc. Not quite sure how that happened. I can somehow understand what's happening until someone uses technical mathematical terms.

So is it possible to get a qualitative conceptual picture of these terms (i.e. not referring to them as functions that do such and such just yet, as that never seems to work for me)? Perhaps I'm just an idiot :yuck: , or have a weird brain. Maybe someone could explain why we need these terms and what they describe in relation to GR and other physical situations. That seems to have worked with other things (though I could go through the motions, I never really understood vector calculus, line integrals etc. until I studied Maxwell's laws and EM field theory). One of my texts says that one-forms and dual space are like bra-kets in QM, so is a one-form like a complex vector (does it exist in another form of space, as a complex wavefunction exists in imaginary space)?

Sorry if it seems like I'm asking a lot, but I would really like a deeper insight into GR and cosmology and seem to be getting nowhere. Anyway, I'll leave it there because I'm feeling rather stupid right now. :wink:
 
  • #28
These aren't definitions... just ways one can think about things. (What I am saying is nothing new... you'll probably find these in the various threads on this topic.)

Hopefully, you remember matrix multiplication. We'll stick to three dimensions.

Think of a vector [itex]\overline v[/itex] as a "column vector" [itex]\left( \begin{array}{c} v_1 \\ v_2 \\ v_3 \end{array}\right)[/itex] with components [itex]v_1, v_2, v_3 [/itex]. Of course, you can add such column vectors together and do scalar-multiplication on them.

Think of a dual-vector (or covector) [itex]\underline u [/itex] as a "row vector" [itex]\left( u_1\quad u_2\quad u_3 \right)[/itex] with components [itex]u_1, u_2, u_3 [/itex]. Of course, you can add such row vectors together and do scalar-multiplication on them.

What makes the "space of dual-vectors" the dual to the "space of vectors" is that there is a rule that: given a vector [itex]\overline v[/itex],
  • there is a dual-vector [itex]\underline u[/itex] that can produce a scalar, denoted as [itex]\underline u \overline v[/itex], which you calculate by matrix multiplication: [itex]\underline u \overline v=\left( u_1\quad u_2\quad u_3 \right)\left( \begin{array}{c} v_1 \\ v_2 \\ v_3 \end{array}\right)=s[/itex]
  • such that a linear combination of dual-vectors [itex] a \underline t+ b\underline u[/itex] applied to [itex]\overline v[/itex] results in [itex]( a \underline t+ b\underline u)\overline v=(a \underline t)\overline v+ (b\underline u)\overline v=a(\underline t \overline v) + b(\underline u \overline v)[/itex]
    that is, a linear combination of the scalars that you get from each dual-vector applied to [itex]\overline v[/itex] separately.

In QM, the bra [itex]<\phi|[/itex] is a dual-vector and the ket [itex]|\psi>[/itex] is a vector.

Hopefully, this helps get you started. If so, maybe the other posts in these various threads are more digestible.
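The row-vector/column-vector picture above can be sketched in NumPy (an illustration I'm adding; the particular components are arbitrary):

```python
import numpy as np

v = np.array([[1.0], [2.0], [3.0]])   # vector as a column
u = np.array([[4.0, 0.0, -1.0]])      # dual-vector (covector) as a row
t = np.array([[1.0, 1.0, 1.0]])       # another dual-vector

# Contraction: row times column gives a 1x1 matrix, i.e. a scalar
s = (u @ v).item()
print(s)  # 4*1 + 0*2 + (-1)*3 = 1.0

# Linearity: (a t + b u) applied to v equals a (t v) + b (u v)
a, b = 2.0, 5.0
lhs = ((a * t + b * u) @ v).item()
rhs = a * (t @ v).item() + b * (u @ v).item()
print(lhs, rhs)  # 17.0 17.0
```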
 
  • #29
I'm a bit confused now about the commutativity (if such a thing exists) of tensors. I think part of my confusion stems from my lack of familiarity with the summation notation.

For matrices, I know that AB != BA but what about for tensors?

More specifically, I'm curious about when indices don't necessarily repeat throughout the whole expression.

Does [tex] \Lambda^{\mu}_{\ \rho} H^{\rho\sigma}a_{\nu}\Lambda^{\nu}_{\sigma} [/tex] give the same thing as [tex] \Lambda^{\mu}_{\ \rho}\Lambda^{\nu}_{\sigma} H^{\rho\sigma}a_{\nu} [/tex] ?
 
  • #30
Yes! The summation convention requires you to multiply elements with the same (dummy) index (one sub- and one superscripted) and sum over all allowed values of the dummy index. So it does not matter in what order the tensors appear, simply because you are multiplying ordinary numbers, for which ab = ba!

Example: [tex]A_\nu B^\nu = A_0 B^0 + A_1 B^1 +A_2 B^2 +A_3 B^3 = B^\nu A_\nu [/tex]

where the last equality comes from the fact that, e.g., [itex]A_0 B^0 = B^0 A_0 [/itex], for they are just numbers.
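This can be verified numerically with `numpy.einsum` (the random tensors here are illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal(4)    # components A_nu
B = rng.standard_normal(4)    # components B^nu

# A_nu B^nu is a plain sum of products, so factor order is irrelevant
left = np.einsum('n,n->', A, B)
right = np.einsum('n,n->', B, A)
print(np.isclose(left, right))     # True

# Likewise for the two orderings of Lambda^mu_rho H^{rho sigma} a_nu Lambda^nu_sigma
L = rng.standard_normal((4, 4))
H = rng.standard_normal((4, 4))
a = rng.standard_normal(4)

first  = np.einsum('mr,rs,n,ns->m', L, H, a, L)
second = np.einsum('mr,ns,rs,n->m', L, L, H, a)
print(np.allclose(first, second))  # True
```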
 
Last edited:
  • #31
Ohhh... excellent. Thanks for the explanation, it's all starting to make sense now. :smile:

EDIT:

Oops, I had just one more question.

What the heck is [tex]T^{\mu\nu} [/tex]? How exactly is that different from [tex]T^\mu_{\ \nu}[/tex]? Are they basically the same thing or are we talking about differences in terms of contravariant and covariant?
 
Last edited:
  • #32
These are different; the difference is that the latter has one covariant and one contravariant component, while the first has two contravariant components. You can relate the two via the metric: [itex]T^a_b=g_{bc} T^{ac}[/itex] (this is called 'lowering an index'). In Euclidean space they coincide because the metric is the identity matrix, but in a general Riemannian space the components can be different.
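A short sketch of index lowering (my addition; the metric and components are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
T_upper = rng.standard_normal((3, 3))   # components T^{ac}

# Lowering an index: T^a_b = g_bc T^{ac}.
# With the Euclidean metric (identity), nothing changes:
g_euclid = np.eye(3)
T_mixed = np.einsum('bc,ac->ab', g_euclid, T_upper)
print(np.allclose(T_mixed, T_upper))        # True

# With a non-trivial (e.g. Minkowski-like) metric, the mixed components differ:
g_other = np.diag([1.0, -1.0, -1.0])
T_mixed_other = np.einsum('bc,ac->ab', g_other, T_upper)
print(np.allclose(T_mixed_other, T_upper))  # False
```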
 
  • #33
chroot, are you actually a cigar?
 
  • #34
chroot said:
I'm not too clear on the geometrical meaning myself, though I can go through the motions and manipulate tensor expressions just fine.

A contravariant vector exists in the tangent space of a specific point in the manifold being considered. In other words, if you have a basketball (the surface of which is a 2D manifold), and you glue little toothpicks tangent to it, each toothpick is a contravariant vector defined at the point it is glued to the basketball. This much makes intuitive sense to me -- I can just as easily think "tangent" whenever someone says "contravariant." If you take any point on the basketball, the set of all tangent vectors you could glue to it there forms a (2-dimensional) tangent space at that point. Contravariant vectors at that point belong to that tangent space.

Now, I know covariant vectors live in cotangent spaces, but I'm not really clear on how to visualize a cotangent space. I have read most of John Baez' book "Gauge Fields, Knots, and Quantum Gravity," in which he makes an earnest attempt to help the reader visualize a covariant vector -- but it falls flat on me. I just can't understand what he's trying to say.

Does anyone have a clear explanation of how to "visualize" a covariant vector? Is it really even possible to visualize it?

- Warren
First of all, covariant and contravariant vectors are not different vectors. They represent ONE VECTOR (an arrow :-) in two different coordinate systems (dual, or reciprocal, or skew, or... coordinates). The reciprocal system is equally satisfactory for representing vectors, but the 'contravariant' vector looks exactly the same as the 'covariant' one. So "visualize" them as ONE tangent arrow (toothpick) if you wish. The two parallel blades probably represent direct and reciprocal coordinate planes, which may have complementary scale or orientation but, of course, should be parallel (no less, no more).
Secondly, any quantity that we wish to define, be it scalar, vector, or tensor, must be independent of the particular coordinate system. We shall adopt this as a fundamental principle. However, its representation will depend on the particular system.
 
  • #35
The difference between a vector and a dual vector is not just a change of coordinate system, gvk. Vectors and dual vectors live in entirely different spaces, and are certainly not the same vector.

I know what you're trying to say: a vector is related to its dual via a one-to-one mapping.

- Warren
 

What is a Tensor?

A tensor is a mathematical object that represents the relationships between different sets of data. It is a generalization of vectors and matrices, and can be used to describe physical quantities such as force, velocity, and stress.

How is a Tensor different from a Vector or Matrix?

A vector is a one-dimensional array of numbers, while a matrix is a two-dimensional array. Tensors can have any number of dimensions, making them more versatile for representing complex relationships between data.

What are some real-world applications of Tensors?

Tensors are used in various fields such as physics, engineering, and computer science. They are particularly useful in machine learning and artificial intelligence for tasks such as image recognition, natural language processing, and recommendation systems.

How do Tensors relate to Einstein's theory of relativity?

In Einstein's theory of relativity, tensors are used to describe the curvature of spacetime and the relationships between mass, energy, and gravity. This allows for a more accurate understanding of the universe and its physical laws.

Are there different types of Tensors?

Yes, there are different types of tensors such as scalars, vectors, matrices, and higher-order tensors. Each type has its own properties and rules for mathematical operations, making them useful for different applications.
