Symmetric Tensors and Matrices

In summary, a second order symmetric tensor is symmetric regardless of any coordinate system or basis: the matrix of its covariant components, like the matrix of its contravariant components, is symmetric in every basis. The matrix of its mixed components, however, need not be symmetric.
  • #1
BobbyFluffyPric
Is the matrix of a second order symmetric tensor always symmetric (ie. expressed in any coordinate system, and in any basis of the coordinate system)?
Please help! :blushing:
~Bee
 
  • #2
BobbyFluffyPric said:
Is the matrix of a second order symmetric tensor always symmetric (ie. expressed in any coordinate system, and in any basis of the coordinate system)?
Please help! :blushing:
~Bee

Yes. A 2nd order symmetric tensor is e.g. a bilinear form B(x,y)=B(y,x), where x,y are arbitrary vectors. This is a property that holds regardless of any basis or coordinate system. The matrix of components [tex]B_{\mu\nu}[/tex] is defined as [tex]B_{\mu\nu}=B(e_\mu,e_\nu)[/tex], where [tex]\{e_\mu\}[/tex] is a basis. So the matrix will be symmetric in any basis.
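
Here is a quick numerical illustration of that statement, as a minimal NumPy sketch (the bilinear form and the basis below are made-up example values, not taken from the thread): whatever basis you pick, the component matrix [tex]B_{\mu\nu}=B(e_\mu,e_\nu)[/tex] comes out symmetric.

[code]
import numpy as np

rng = np.random.default_rng(0)

# A symmetric bilinear form on R^3, given by a symmetric matrix in the standard basis.
B_std = rng.normal(size=(3, 3))
B_std = B_std + B_std.T            # enforce B(x, y) = B(y, x)

def B(x, y):
    """Evaluate the bilinear form on two vectors."""
    return x @ B_std @ y

# An arbitrary (non-orthogonal) basis: the columns of E are the basis vectors e_mu.
E = rng.normal(size=(3, 3))

# Component matrix B_{mu nu} = B(e_mu, e_nu).
B_comp = np.array([[B(E[:, m], E[:, n]) for n in range(3)] for m in range(3)])

print(np.allclose(B_comp, B_comp.T))   # True, in any basis
[/code]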
 
  • #3
I was hoping you'd say no!

Explain, thanks for your post - however, I was hoping you'd say the contrary, as I seem to have come to a conclusion of sorts as to why the matrix of a second order symmetric tensor is not necessarily symmetric. I'll try to explain my reasoning:

From what I believe, a second order tensor A is said to be symmetric if it is equal to its transpose. The transpose of tensor A is defined as the tensor B such that u . A = B . u for all vectors u, where the dot indicates the dot product (perhaps B is also known as the conjugate of A). Thus, A is symmetric iff u . A = A . u for all u. I think this is the same definition you gave, as it implies that if A is symmetric then x . A . y = y . A . x for all vectors x, y, only you wrote it as A(x,y)=A(y,x) (I don't know whether to use boldface for the A in this case as obviously A(x,y) is a scalar).

Translating the above into component notation, it can be seen that affirming that A is symmetric is the same as any of the following statements:
1) [tex]A_{ij}=A_{ji}[/tex], which is the same as saying [tex]\mathbf{A}=A_{ij}\,\mathbf{e}^i \otimes \mathbf{e}^j = A_{ij}\,\mathbf{e}^j \otimes \mathbf{e}^i[/tex], where [tex]A_{ij}[/tex] are the covariant components of tensor A in a certain coordinate system, [tex]\mathbf{e}^i[/tex] are the (contravariant) basis vectors of the coordinate system, and [tex]\otimes[/tex] is the direct (or outer, or tensor - whatever you call it) product. Thus the matrix [tex][A_{ij}][/tex] (i.e. the matrix made up of the covariant components of A in the given coordinate system) is symmetric.
2) Exactly the same for the contravariant components of A. Thus, the matrix made up of the contravariant components of A in a given coordinate system is symmetric.
3) [tex]A^i{}_j = A_j{}^i[/tex], so that [tex]\mathbf{A}=A^i{}_j\,\mathbf{E}_i \otimes \mathbf{e}^j[/tex], where [tex]\mathbf{E}_i[/tex] are the covariant basis vectors. However, this does not imply that [tex]A^i{}_r = A^r{}_i[/tex]; thus, the matrix formed from the mixed components [tex]A^i{}_j[/tex] of A does not have to be symmetric.

Thus, when expressing a symmetric tensor via its mixed components, the matrix made up of these components does not have to be symmetric. I was working with a stress tensor in Cartesian coordinates (a symmetric tensor), and was surprised that when I expressed the tensor in an oblique coordinate system, the matrix was symmetric in terms of the contravariant components, but not so in terms of the mixed components. After looking for flaws in my calculations and finding none, I was motivated to present this question. Perhaps you were not considering the matrix formed from the mixed components when you posted your answer. Please let me know! ~Bee
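
For what it's worth, this observation is easy to reproduce numerically. Below is a minimal NumPy sketch (the stress tensor and the oblique basis are made-up example values, not the poster's actual numbers): the covariant and contravariant component matrices of a symmetric Cartesian tensor stay symmetric in an oblique basis, while the mixed component matrix generally does not.

[code]
import numpy as np

# A symmetric (Cartesian) stress tensor in an orthonormal basis.
sigma = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 0.5],
                  [0.0, 0.5, 1.0]])

# An oblique basis: the columns of E are the (non-orthogonal) basis vectors.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.3, 1.0]])

g = E.T @ E                    # metric g_ij = E_i . E_j
E_inv = np.linalg.inv(E)

cov    = E.T @ sigma @ E            # covariant components sigma_ij
contra = E_inv @ sigma @ E_inv.T    # contravariant components sigma^ij (via the dual basis)
mixed  = contra @ g                 # mixed components sigma^i_j = sigma^ik g_kj

print(np.allclose(cov, cov.T))        # True  -> covariant matrix is symmetric
print(np.allclose(contra, contra.T))  # True  -> contravariant matrix is symmetric
print(np.allclose(mixed, mixed.T))    # False -> mixed matrix is generally not
[/code]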
 
  • #4
Okay, simpler: you said that [tex]B_{\mu\nu}=B(e_\mu,e_\nu)[/tex]. So it's what I said before: if the mixed components are [tex]B^\mu{}_\nu = B(e^\mu, E_\nu)[/tex], where [tex]E_\nu[/tex] are the covariant basis vectors ([tex]e^\mu[/tex] are the contravariant basis vectors), then, if B is symmetric, [tex]B^\mu{}_\nu = B(e^\mu, E_\nu) = B(E_\nu, e^\mu) = B_\nu{}^\mu[/tex], but this is not in general equal to [tex]B^\nu{}_\mu[/tex], so the matrix is not symmetric!
 
  • #5
Well, of course if you are using "mixed components", the matrix does not have to be symmetric. This is easy to understand if you think about contravariant vectors and covariant vectors as things coming from two different vector spaces. Then A(x,y) is a function of two different types of arguments, and there is no obvious symmetry A(x,y)=A(y,x) as there was before.
 
  • #6
Thanks

Explain, thank you, I'm more assured now :smile: , even though I don't understand the thing about thinking of "contravariant vectors and covariant vectors as things coming from two different vector spaces" :confused: - as far as I know, a vector is a vector, and one can only talk of contravariant or covariant components of the same (the reason we distinguish between the contravariant and covariant basis vectors is just a question of nomenclature, but both sets are no more than specific sets of vectors in the same vector space). But maybe that is another issue :blushing: .

~Bee
 
  • #7
BobbyFluffyPric said:
Explain, thank you, I'm more assured now :smile: , even though I don't understand the thing about thinking of "contravariant vectors and covariant vectors as things coming from two different vector spaces" :confused: - as far as I know, a vector is a vector, and one can only talk of contravariant or covariant components of the same (the reason we distinguish between the contravariant and covariant basis vectors is just a question of nomenclature, but both sets are no more than specific sets of vectors in the same vector space). But maybe that is another issue :blushing: .

~Bee

Just as I thought, you are imagining that there exist covariant/contravariant components of the same vector... Yes, indeed some books say that, and this is a perfectly fine way of thinking as long as you keep working in components and as long as the metric is always the same (which is normally the case in physics). But it is this thinking that got you confused in the first place. It might be worthwhile to change your thinking about vectors, once you have mastered calculations with components.

In the new thinking, you imagine that there are two kinds of vectors: covariant and contravariant vectors, and that there exists only one kind of components for anything. Namely, if a "thing" [tex]\mathbf{X}[/tex] is decomposed into a basis of the same kind of "things" [tex]\mathbf{E}_\nu[/tex], where [tex]\nu=1,2,3,...[/tex], then the components of [tex]\mathbf{X}[/tex] in this basis are numbers [tex]X^\nu[/tex] defined by the identity [tex]\mathbf{X}=\sum_\nu X^\nu \mathbf{E}_\nu[/tex]. Here a "thing" might be a vector, a tensor, etc. This is the only logical definition of components, and it assumes that there is a "thing" which is an element of a vector space, and a basis of similar "things", which is a linearly independent set; then you can define components with respect to one basis or components with respect to another basis, and then you can easily see how these components are transformed when you change the basis.

Then you say that there are ordinary vectors (also called "contravariant"), which make up a vector space V, and then you define dual vectors (also called "covariant" vectors). Dual vectors come from the dual space, a totally different and separate vector space. Elements of the dual space are linear functions on vectors of the original space V. I have no time right now to say everything in detail (so if you don't understand, it's my fault), but you can look it up in books on linear algebra. This construction takes a while to get used to, but it might be worthwhile.
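
To make that definition of components concrete, here is a minimal NumPy sketch (the basis and the vector are arbitrary example values): the components [tex]X^\nu[/tex] are simply the coefficients obtained by solving the linear system [tex]\mathbf{X}=\sum_\nu X^\nu \mathbf{E}_\nu[/tex].

[code]
import numpy as np

# A basis of R^3: the columns of E are the basis vectors E_nu.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.3, 1.0]])

X = np.array([2.0, 1.0, 0.5])      # the vector, as a "thing" independent of any basis

# Components X^nu are defined by X = sum_nu X^nu E_nu, i.e. E @ comps = X.
comps = np.linalg.solve(E, X)

print(comps)
print(np.allclose(E @ comps, X))   # True: the components reconstruct X in this basis
[/code]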
 
  • #8
Explain, I wonder if you can recommend me some text that explains this in detail, although I suppose that as an engineering student I shall have little need to delve further into it. However, I am curious as to this distinction between contravariant and covariant vectors (I actually read in some maths text that, though the terms contravariant vector and covariant vector were being used, strictly speaking these terms should be applied to the components of a vector, and thus I took it for granted that this was the case in all texts that referred to vectors as covariant or contravariant). It's hard for me not to think of a vector, which is a geometrical entity that one can represent via an arrow (eg force, velocity etc), as being the same physical entity regardless of what type of components are used to describe it (ie regardless of the basis it is expressed in).
 
  • #9
BobbyFluffyPric said:
It's hard for me not to think of a vector, which is a geometrical entity that one can represent via an arrow (eg force, velocity etc)
Have you seen the difference between a "vector" and an "axial vector" / "pseudovector"? That is very closely related to the difference between a "contravariant vector" and a "covariant vector".

I admit I strongly dislike the contravariant/covariant terminology. I much prefer "vector" / "tangent vector" and "covector" / "cotangent vector" / "dual vector". I forget which is contravariant and which is covariant.

It takes a while to get used to it, because traditionally, we make heavy use of two kinds of dualities so that we can turn "anything" into a scalar or a vector. The notions of "pseudoscalar" and "pseudovector" are an attempt to try and mitigate the damage done by this way of studying things, but it all makes more sense once you're used to the differential geometric viewpoint.
 
  • #10
BobbyFluffyPric said:
Explain, I wonder if you can recommend me some text that explains this in detail, although I suppose that as an engineering student I shall have little need to delve further into it. However, I am curious as to this distinction between contravariant and covariant vectors (I actually read in some maths text that, though the terms contravariant vector and covariant vector were being used, strictly speaking these terms should be applied to the components of a vector, and thus I took it for granted that this was the case in all texts that referred to vectors as covariant or contravariant). It's hard for me not to think of a vector, which is a geometrical entity that one can represent via an arrow (eg force, velocity etc), as being the same physical entity regardless of what type of components are used to describe it (ie regardless of the basis it is expressed in).

Yes, as I said, some books tell you that there is one vector and two "types" of components. The problem with this thinking is that it is okay at some level (i.e. you can do correct calculations if you follow the rules), but confusing at another level. If you only need engineering applications, like computing tensors in flat space, it's okay; but if you are going to study things like differential forms, it's probably not okay.

Let me try to avoid talking about different vector spaces and just concentrate on a consistent definition of "components". For example, you just replied that vector [is] ... the same physical entity regardless of what type of components are used to describe it (ie regardless of the basis it is expressed in). Yes, I also prefer to view vectors primarily as physical entities ("directed magnitudes" or "directed velocities" or simply just "vectors"), defined regardless of any components or a basis. I prefer to think that components of a vector are secondary and are defined with regard to a basis. Now, what is a basis? A basis is a set of vectors, i.e. a set of physical entities, right? So there cannot be a "covariant" or a "contravariant" basis; there is only one kind of basis: a linearly independent set of vectors through which all other vectors can be expressed as linear combinations. A vector X has components [tex]X^\mu[/tex] in the basis [tex]\mathbf{E}_\mu[/tex] if [tex]\mathbf{X}=\sum_\mu X^\mu \mathbf{E}_\mu[/tex], and that's it. The components [tex]X^\mu[/tex] are those called "contravariant" in your terminology. These are the only kind of components one can define using a basis.

There is no consistent way to define "covariant" components using just a basis, because there is no such thing as a "covariant" basis. Covariant components are instead defined using a metric (i.e. a scalar product) and a basis. (I use the words "metric" and "scalar product" interchangeably.) Here is a definition: if [tex]g(\mathbf{X},\mathbf{Y})[/tex] is the scalar product of vectors X and Y, then the covariant components of a vector X are defined as [tex]X_\mu = g(\mathbf{X},\mathbf{E}_\mu)[/tex]. You see that these "covariant components" are quite another beast. They express the relationship between a vector, a basis, and a scalar product. No scalar product - no covariant components. I prefer to think that "covariant components" is really something auxiliary that helps when you need lots of calculations with scalar product. On the other hand, "contravariant components" are honest components of a vector in a basis.
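
A minimal numerical sketch of this definition (my own example values, taking the ordinary Euclidean dot product as the metric g): the contravariant components need only the basis, while the covariant components need both the basis and the scalar product, and in a non-orthogonal basis the two sets of numbers differ.

[code]
import numpy as np

# A non-orthogonal basis: the columns of E are the basis vectors E_mu.
E = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.5, 1.0]])

X = np.array([1.0, 2.0, 3.0])                        # the vector itself

# Contravariant components: defined by the basis alone, X = X^mu E_mu.
X_contra = np.linalg.solve(E, X)

# Covariant components: need a metric as well; here g(x, y) = x . y.
X_cov = np.array([X @ E[:, mu] for mu in range(3)])  # X_mu = g(X, E_mu)

print(X_contra)                    # not equal to X_cov in a non-orthogonal basis
print(X_cov)

# Consistency check: X_mu = g_{mu nu} X^nu with g_{mu nu} = E_mu . E_nu.
g = E.T @ E
print(np.allclose(X_cov, g @ X_contra))   # True
[/code]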

In Euclidean space, there is a standard scalar product [tex]g[/tex], and an orthonormal basis where the metric [tex]g[/tex] is given by a unit matrix, [tex]g(\mathbf{E}_\mu, \mathbf{E}_\nu)=\delta_{\mu\nu}[/tex]. Then the covariant components of any vector are numerically equal to its contravariant components. In books where the only conceivable case is Euclidean, the metric [tex]g[/tex] is always present and always the same, so it is perhaps okay to say that every vector has a set of contravariant components and a set of covariant components. But even then, if you choose a non-orthogonal basis (which you are certainly allowed to do), the metric [tex]g[/tex] will not be given by a unit matrix and the covariant components of vectors will not be equal to the contravariant components. Also, if you define a different metric, the covariant components will be completely different (because they depend also on the metric). For instance, in special relativity the metric in the standard basis is g=diag(1,-1,-1,-1) and covariant components always have some signs flipped compared with contravariant components. This makes things more confusing than it's worth. So eventually one finds that these different components don't help with calculations, and one abandons components. http://www.theorie.physik.uni-muenchen.de/~serge/T7/ [Broken] is a book-in-progress on advanced general relativity that does all vector and tensor calculations without using any components at all. (I already posted this link in a different thread.)
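
The special-relativity remark is easy to see numerically as well; here is a short sketch (using the g = diag(1,-1,-1,-1) convention mentioned above, with made-up components):

[code]
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])        # Minkowski metric in the standard basis
X_contra = np.array([5.0, 1.0, 2.0, 3.0])   # contravariant components X^mu

X_cov = g @ X_contra                        # covariant components X_mu = g_{mu nu} X^nu
print(X_cov)                                # [ 5. -1. -2. -3.] -> spatial signs flipped
[/code]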

As for books - I don't really know what to suggest. You can look here:
http://www.cs.wisc.edu/~deboor/ma443.html
This page also lists some suggested books.
 
  • #11
Guys, thanks for all the insight, but I'll need a little time to digest this stuff! I don't want to start asking questions off the cuff!
 

1. What is the difference between a symmetric tensor and a symmetric matrix?

A symmetric tensor is a tensor (a multidimensional array of components, once a basis is chosen) that remains unchanged when the order of its indices is permuted; for a second order tensor this means Aij = Aji. A symmetric matrix, on the other hand, is a square matrix that is equal to its own transpose, so the elements above the main diagonal mirror the elements below it.

2. How are symmetric tensors and matrices used in physics?

Symmetric tensors and matrices are commonly used in physics to represent physical quantities that do not depend on the coordinate system used to describe them. For example, the stress tensor of continuum mechanics is symmetric (a consequence of the balance of angular momentum), and the infinitesimal strain tensor is symmetric by construction. Symmetric tensors and matrices also play a crucial role in describing the symmetries of physical systems, such as in the study of crystal structures.

3. Can symmetric tensors and matrices be used in machine learning?

Yes, symmetric matrices appear throughout machine learning and signal processing: covariance matrices, kernel (Gram) matrices, and Hessians of loss functions are all symmetric. Exploiting that symmetry allows more efficient and numerically stable algorithms, for example eigendecomposition-based methods such as principal component analysis.

4. How can one determine if a given tensor or matrix is symmetric?

A tensor is considered symmetric if it remains unchanged under a permutation of its indices. This means that if the order of the indices is changed, the values of the tensor remain the same. For a matrix to be symmetric, it must be equal to its own transpose, meaning that the elements above the main diagonal are the same as the elements below the main diagonal.
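
These checks translate directly into code; here is a small NumPy sketch (the helper function names are my own, purely illustrative):

[code]
import numpy as np
from itertools import permutations

def is_symmetric_matrix(A, tol=1e-12):
    """A square matrix is symmetric if it equals its own transpose."""
    return np.allclose(A, A.T, atol=tol)

def is_fully_symmetric_tensor(T, tol=1e-12):
    """A tensor is (fully) symmetric if every permutation of its indices leaves it unchanged."""
    return all(np.allclose(T, np.transpose(T, p), atol=tol)
               for p in permutations(range(T.ndim)))

A = np.array([[1.0, 2.0],
              [2.0, 3.0]])
print(is_symmetric_matrix(A))                 # True

T = np.zeros((2, 2, 2))
T[0, 1, 1] = T[1, 0, 1] = T[1, 1, 0] = 4.0    # entries placed symmetrically
print(is_fully_symmetric_tensor(T))           # True
[/code]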

5. Are there any applications of symmetric tensors and matrices in engineering?

Yes, symmetric tensors and matrices have many applications in engineering, particularly in mechanics and structural engineering. They are used to represent physical quantities, such as stress, strain, and moments of inertia, in structures and materials. In finite element analysis, the stiffness and mass matrices are symmetric, and solvers exploit this symmetry to analyze structures more efficiently.
