Is the matrix of a second order symmetric tensor always symmetric (ie. expressed in any coordinate system, and in any basis of the coordinate system)?
Please help!
~Bee
Yes. A 2nd order symmetric tensor is, e.g., a bilinear form B(x,y)=B(y,x), where x, y are arbitrary vectors. This property holds regardless of any basis or coordinate system. The matrix of components [tex]B_{\mu\nu}[/tex] is defined by [tex]B_{\mu\nu}=B(e_\mu,e_\nu)[/tex], where [tex]\{e_\mu\}[/tex] is a basis, so the matrix will be symmetric in any basis.
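A quick numerical sketch of why this holds (the matrices here are made-up examples): under a change of basis with matrix P, the component matrix transforms by the congruence B → PᵀBP, which preserves symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric bilinear form, given as its component matrix in some basis.
B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, -1.0],
              [0.0, -1.0, 1.0]])

# An arbitrary invertible change-of-basis matrix P: the new basis vectors
# are the columns of P, expressed in the old basis.
P = rng.standard_normal((3, 3))

# Components transform as B'_{mu nu} = B(e'_mu, e'_nu), i.e. B' = P^T B P.
B_new = P.T @ B @ P

# B' is symmetric whenever B is, since (P^T B P)^T = P^T B^T P = P^T B P.
print(np.allclose(B_new, B_new.T))  # True
```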
Just as I thought, you are imagining that there exist covariant/contravariant components of the same vector. Yes, some books do say that, and it is a perfectly fine way of thinking as long as you keep working in components and as long as the metric is always the same (which is normally the case in physics). But it is this thinking that got you confused in the first place, so it might be worthwhile to change your picture of vectors once you have mastered calculations with components.

In the new picture, there are two kinds of vectors, covariant and contravariant, and only one kind of components for anything. Namely, if a "thing" [tex]\mathbf{X}[/tex] is decomposed in a basis of the same kind of "things" [tex]\mathbf{E}_\nu[/tex], where [tex]\nu=1,2,3,...[/tex], then the components of [tex]\mathbf{X}[/tex] in this basis are the numbers [tex]X^\nu[/tex] defined by the identity [tex]\mathbf{X}=\sum_\nu X^\nu \mathbf{E}_\nu[/tex]. Here a "thing" might be a vector, a tensor, etc. This is the only logical definition of components: there is a "thing" which is an element of a vector space, and a basis of similar "things" (a linearly independent set); you can then define components with respect to one basis or another, and easily see how the components transform when you change the basis.

Then you say that ordinary vectors (also called "contravariant" vectors) make up a vector space V, and you define dual vectors (also called "covariant" vectors). Dual vectors come from the dual space, a totally different and separate vector space: its elements are linear functions on vectors of the original space V. I have no time right now to say everything in detail (so if you don't understand, it's my fault), but you can look it up in books on linear algebra.
This construction takes a while to get used to, but it might be worthwhile.

Thank you, I'm more assured now, even though I don't understand the idea of "contravariant vectors and covariant vectors as things coming from two different vector spaces". As far as I know, a vector is a vector, and one can only talk of contravariant or covariant components of the same vector (the reason we distinguish between contravariant and covariant basis vectors is just a question of nomenclature; both sets are no more than specific sets of vectors in the same vector space). But maybe that is another issue.
~Bee
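For what it's worth, the definition of components given in the reply above can be checked numerically. Here is a minimal sketch (the 2-D basis and vector are made-up examples): decomposing X in a basis {E_ν} amounts to solving a linear system.

```python
import numpy as np

# A basis {E_1, E_2} (the columns of E), linearly independent
# but deliberately not orthonormal.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # E_1 = (1, 0), E_2 = (1, 1)

X = np.array([3.0, 2.0])     # the "thing" to decompose

# The components X^nu are defined by X = sum_nu X^nu E_nu,
# i.e. by the linear system E @ comps = X.
comps = np.linalg.solve(E, X)

# Reconstructing from the components must give back X.
print(np.allclose(E @ comps, X))  # True
```

Changing the basis E changes the numbers in `comps`, but the reconstruction identity always holds, which is exactly how the component transformation law arises.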
Have you seen the difference between a "vector" and an "axial vector" / "pseudovector"? That is very closely related to the difference between a "contravariant vector" and a "covariant vector".
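The axial-vector behaviour mentioned above can be seen numerically. A small sketch (with made-up vectors), assuming an orthogonal reflection R with det R = −1: the cross product of two transformed vectors picks up an extra factor of det R, which is the hallmark of a pseudovector.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])

# A reflection: an improper orthogonal transformation with det R = -1.
R = np.diag([-1.0, 1.0, 1.0])

# Ordinary (polar) vectors transform with R directly.
Ra, Rb = R @ a, R @ b

# The cross product of the transformed vectors is NOT R @ (a x b);
# it acquires an extra sign det(R) -- the axial-vector transformation law.
lhs = np.cross(Ra, Rb)
rhs = np.linalg.det(R) * (R @ np.cross(a, b))

print(np.allclose(lhs, rhs))  # True
```

Under proper rotations (det R = +1) the extra factor is invisible, which is why the distinction only shows up when reflections are involved.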
Yes, as I said, some books tell you that there is one vector and two "types" of components. The problem with this thinking is that it is okay at one level (i.e. you can do correct calculations if you follow the rules) but confusing at another. If you only need engineering applications, like computing tensors in flat space, it's okay; but if you are going to study things like differential forms, it's probably not okay.

I wonder if you can recommend a text that explains this in detail, although I suppose that as an engineering student I shall have little need to delve further into it. Still, I am curious about this distinction between contravariant and covariant vectors. (I actually read in some maths text that, though the terms "contravariant vector" and "covariant vector" were being used, strictly speaking these terms should be applied to the components of a vector, so I took it for granted that this was the case in all texts that refer to vectors as covariant or contravariant.) It's hard for me not to think of a vector, a geometrical entity that one can represent by an arrow (e.g. force, velocity), as the same physical entity regardless of what type of components are used to describe it (i.e. regardless of the basis it is expressed in).

~Bee
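To connect the two pictures numerically, here is a hedged sketch (the basis is a made-up 2-D example): the same geometric arrow has contravariant components v^μ and lowered (covariant) components v_μ = g_{μν} v^ν, and their contraction v^μ v_μ does not depend on the basis.

```python
import numpy as np

# A non-orthonormal basis for the plane (columns are the basis vectors
# written in Cartesian coordinates) -- an arbitrary example.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Metric in this basis: g_{mu nu} = E_mu . E_nu.
g = E.T @ E

# One geometric arrow, given in Cartesian coordinates.
v_cart = np.array([2.0, 3.0])

# Contravariant components: v = v^mu E_mu.
v_contra = np.linalg.solve(E, v_cart)

# Covariant components: v_mu = g_{mu nu} v^nu (index lowered by the metric).
v_cov = g @ v_contra

# The contraction v^mu v_mu is basis-independent: it equals |v|^2.
print(np.allclose(v_contra @ v_cov, v_cart @ v_cart))  # True
```

In the "one vector, two kinds of components" picture, `v_contra` and `v_cov` describe the same arrow; in the dual-space picture, `v_cov` would instead be read as the components of a separate object, the dual vector g(v, ·).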