Show that the following tensors have the same principal values

  • Thread starter: TheFerruccio
  • Tags: Tensors
In summary, the problem is to show that if T is a second-order tensor with a non-zero determinant, then ##\textbf{T}^\top \textbf{T}## and ##\textbf{T}\textbf{T}^\top## have the same principal values. The attempted solution uses the three scalar invariants of a second-order tensor and expands the cubic equation that determines the principal values. However, this approach involves a lot of index notation, and the asker is looking for a simpler way to solve the problem without getting lost in it. They provide an example of their attempt at expanding the second scalar invariant, ##I_2##, and ask for help simplifying it.
  • #1
TheFerruccio
I apologize for the sheer volume of questions I am asking. I have never had this much trouble with an assignment before: I get 90% of the way there and then spend 8 hours on the last 10%. This is inefficient.

Problem Statement
If ##\textbf{T}## is a second-order tensor with a non-zero determinant, show that ##\textbf{T}^\top \textbf{T}## and ##\textbf{T}\textbf{T}^\top## have the same principal values.

Attempt at Solution

To determine the principal values ##\lambda##, the following equation must hold for the tensor ##\textbf{A}##:

##\det(\textbf{A}-\lambda \textbf{1})=0##

For a second-order tensor, this means solving the following cubic equation:

##-\lambda^3+I_1(\textbf{A})\lambda^2-I_2(\textbf{A})\lambda+I_3(\textbf{A})=0##

where ##I_1##, ##I_2##, and ##I_3## are the three scalar invariants of a second-order tensor.
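For reference, the invariants I am working with are (assuming the usual convention; the signs may differ in other texts):

$$I_1(\textbf{A}) = \operatorname{tr}\textbf{A} = A_{ii}, \qquad I_2(\textbf{A}) = \tfrac{1}{2}\left[(\operatorname{tr}\textbf{A})^2 - \operatorname{tr}(\textbf{A}^2)\right] = \tfrac{1}{2}\left(A_{ii}A_{jj} - A_{ij}A_{ji}\right), \qquad I_3(\textbf{A}) = \det\textbf{A}.$$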


Since the tensor of interest is ##\textbf{T}^\top \textbf{T}##, I will redefine ##A## and define ##B## as follows:
##A_{ij}=(T)_{ij}##
##B_{ij}=(T^\top)_{ij}##

I am assuming anyone here who knows how to assist knows what the scalar invariants are. Typing up this problem alone has been a huge headache, and I haven't even gotten to the part where I have to express the invariant values in index notation.

So, my short question is: Is there a simpler way to solve this?

My current method is: brute-force expand the entire cubic equation, write out each of the three scalar invariants in index notation (16 indices show up, so I have to make sure I match and contract the right ones while still staying "legal" with the index notation), then show that the scalar invariants are equal in both cases. If they are equal, then the principal values will be equal, since the coefficients of the cubic equation are equal.
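As a quick numerical sanity check that the claim itself is true (not a substitute for the index-notation proof the assignment asks for), here is a minimal sketch assuming NumPy is available, with a random 3×3 matrix standing in for ##\textbf{T}##:

```python
import numpy as np

# Random 3x3 matrix standing in for the second-order tensor T.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))

A = T.T @ T   # T^T T
B = T @ T.T   # T T^T

def invariants(M):
    """Return the three principal scalar invariants of a 3x3 matrix."""
    I1 = np.trace(M)
    I2 = 0.5 * (np.trace(M) ** 2 - np.trace(M @ M))
    I3 = np.linalg.det(M)
    return I1, I2, I3

print(invariants(A))                       # same three numbers...
print(invariants(B))                       # ...as these
print(np.sort(np.linalg.eigvals(A).real))  # same principal values...
print(np.sort(np.linalg.eigvals(B).real))  # ...as these
```

Both pairs of printed lines agree to machine precision, which is reassuring, but the assignment still wants the index-notation argument.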

Is there any faster way to do this without getting lost in a huge sea of index notation? It is driving me up the wall right now. I have spent hours on this and I keep messing up the numerous indices that I have to match up. I do not want to waste a day on a single problem like I did last time.

[edit] If it is indeed the case that I have to expand out the scalar invariants in index notation, how the heck do I do that while still preserving and collapsing the right indices? I'm just not seeing it. I fully expanded it out, and I am ending up with a mess that I cannot seem to resolve.

For instance, in the case of ##I_2##, I expanded everything as follows:

I_2:
in the case of ##\textbf{T}^{\top} \textbf{T}##...
##\frac{1}{2}((B_{kj}A_{jk})(B_{ab}A_{ba})-(B_{ij}A_{jk})(B_{ka}A_{ai}))##

in the case of
##\textbf{T}\textbf{T}^{\top}##...
##\frac{1}{2}((A_{kj}B_{jk})(A_{ab}B_{ba})-(A_{ij}B_{jk})(A_{ka}B_{ai}))##

That is what I have so far for ##I_2##, and I see absolutely no way to contract it. Even then, I am pretty sure I used too many matching indices in each term, but I have no idea how to write the indices in such a way that the summations are all preserved, as well as the relationships between the terms.
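In direct (trace) notation, what I am trying to express is, I believe,

$$I_2(\textbf{T}^\top\textbf{T}) = \tfrac{1}{2}\left[\big(\operatorname{tr}(\textbf{T}^\top\textbf{T})\big)^2 - \operatorname{tr}\big((\textbf{T}^\top\textbf{T})^2\big)\right], \qquad I_2(\textbf{T}\textbf{T}^\top) = \tfrac{1}{2}\left[\big(\operatorname{tr}(\textbf{T}\textbf{T}^\top)\big)^2 - \operatorname{tr}\big((\textbf{T}\textbf{T}^\top)^2\big)\right],$$

so each parenthesized pair in my index expressions should be one of these traces.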
 
  • #2
Forget the index notation. That's no help. Can you show that if v is an eigenvector of ##T^T T## with eigenvalue λ (i.e., ##T^T Tv=\lambda v##), then ##Tv## is an eigenvector of ##T T^T## with eigenvalue λ?
 
  • #3
I am so glad that I do not need to do this mess with index notation.

However, wouldn't your suggestion lead me right back to using the characteristic polynomials and getting the coefficients (scalar invariants) to match up? I don't quite see how I would proceed beyond doing that.
 
  • #4
TheFerruccio said:
I am so glad that I do not need to do this mess with index notation.

However, wouldn't your suggestion lead me right back to using the characteristic polynomials and getting the coefficients (scalar invariants) to match up? I don't quite see how I would proceed beyond doing that.

Just think abstractly. If ##T^T Tv=\lambda v##, then what is ##T T^T Tv##? If you can show every principal value (or eigenvalue) of ##T^T T## corresponds to a principal value of ##T T^T## and vice versa, then you are done. You know a principal value (or eigenvalue) of a matrix A is just a value of λ such that ##A v=\lambda v## for some vector v, right?
 
  • #5
You call these "tensors" but there is no use of general "tensor" properties. You are really doing a "matrix" problem. (Every tensor can be represented by a matrix in a given coordinate system.)
 
  • #6
HallsofIvy said:
You call these "tensors" but there is no use of general "tensor" properties. You are really doing a "matrix" problem. (Every tensor can be represented by a matrix in a given coordinate system.)

I am just using the vocabulary taught during class. "Show that the tensors have the same principal values." That is precisely what the assignment states.
 
  • #7
Dick said:
Just think abstractly. If ##T^T Tv=\lambda v##, then what is ##T T^T Tv##? If you can show every principal value (or eigenvalue) of ##T^T T## corresponds to a principal value of ##T T^T## and vice versa, then you are done. You know a principal value (or eigenvalue) of a matrix A is just a value of λ such that ##A v=\lambda v## for some vector v, right?

Let's see...
if
##\textbf{T}^{\top}\textbf{T}\textbf{v}=\lambda\textbf{v}##
then
##\textbf{T}\textbf{T}^{\top}\textbf{T}\textbf{v}=\textbf{T} \lambda \textbf{v}##
then (this is where I am unsure)
##\textbf{T}\textbf{T}^\top\textbf{T}\textbf{T}^{-1}\textbf{v}=\textbf{T}\textbf{T}^{-1}\lambda\textbf{v}##
then
##\textbf{T}\textbf{T}^\top\textbf{v}=\lambda\textbf{v}##

So, both have the same principal values via this method. Am I doing this right?
 
  • #8
Actually, scratch all of that. I have to use index notation to solve this problem. That is the point of the exercise. So, I am not allowed to do it the short way. Thus, I still have the dilemma I wrote in the first post.
 
  • #9
Even though you're not supposed to solve the problem this way, you may be interested in seeing what Dick was getting at.
TheFerruccio said:
Let's see...
if
##\textbf{T}^{\top}\textbf{T}\textbf{v}=\lambda\textbf{v}##
then
##\textbf{T}\textbf{T}^{\top}\textbf{T}\textbf{v}=\textbf{T} \lambda \textbf{v}##
The idea here is to now rewrite it slightly as
$$\textbf{T}\textbf{T}^{\top}(\textbf{T}\textbf{v}) = \lambda (\textbf{T}\textbf{v}).$$ Can you see how this says that ##\lambda## is an eigenvalue of ##\textbf{T}\textbf{T}^{\top}##?
 
  • #10
vela said:
Even though you're not supposed to solve the problem this way, you may be interested in seeing what Dick was getting at.

The idea here is to now rewrite it slightly as
$$\textbf{T}\textbf{T}^{\top}(\textbf{T}\textbf{v}) = \lambda (\textbf{T}\textbf{v}).$$ Can you see how this says that ##\lambda## is an eigenvalue of ##\textbf{T}\textbf{T}^{\top}##?

Yep! That makes sense. ##\textbf{T}\textbf{v}## is just another vector (and a non-zero one, since ##\textbf{T}## has a non-zero determinant), so that shows ##\textbf{T}\textbf{v}## is the corresponding eigenvector of ##\textbf{T}\textbf{T}^\top## for the given eigenvalue.
 
  • #11
Hey everyone. I ended up figuring it out. When I put it all in indicial notation, I can regroup the terms. I just had to be extremely careful with assigning the indices for each tensor. I had to first establish the relationship between "T-T-transpose" and "T-transpose-T".
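For anyone curious, the key regroupings look like this (a sketch of the trace terms only, not necessarily the exact steps I wrote up). Renaming dummy indices gives

$$\operatorname{tr}(\textbf{T}^\top\textbf{T}) = T_{ji}T_{ji} = T_{ij}T_{ij} = \operatorname{tr}(\textbf{T}\textbf{T}^\top),$$

and grouping the four-factor term two different ways gives

$$\operatorname{tr}\big((\textbf{T}^\top\textbf{T})^2\big) = T_{ji}T_{jk}\,T_{lk}T_{li} = \big(T_{jk}T_{lk}\big)\big(T_{li}T_{ji}\big) = (\textbf{T}\textbf{T}^\top)_{jl}(\textbf{T}\textbf{T}^\top)_{lj} = \operatorname{tr}\big((\textbf{T}\textbf{T}^\top)^2\big),$$

while ##\det(\textbf{T}^\top\textbf{T}) = \det(\textbf{T}^\top)\det(\textbf{T}) = \det(\textbf{T}\textbf{T}^\top)##. So all three invariants match, and hence so do the principal values.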
 

What does it mean for tensors to have the same principal values?

When tensors have the same principal values, it means that they share the same set of eigenvalues. The eigenvalues describe how a tensor acts along its principal directions, and they do not change when the tensor is expressed in a different coordinate system.

Why is it important to show that tensors have the same principal values?

Showing that tensors have the same principal values is important because it helps us determine the similarity or equivalence of different tensors. This is useful in applications such as physics and engineering, where tensors are used to describe the behavior of physical systems.

What are some methods for showing that tensors have the same principal values?

There are several methods for showing that tensors have the same principal values. One common method is to calculate the eigenvalues of the tensors and compare them. Another method is to use the properties of tensors, such as symmetry or orthogonality, to show that their eigenvalues must be the same.

Can two different tensors have the same principal values?

Yes. The principal values are determined by a tensor's components, so two distinct second-order tensors can share the same set of principal values; ##\textbf{T}^\top\textbf{T}## and ##\textbf{T}\textbf{T}^\top## are exactly such a pair. The number of principal values is fixed by the dimension of the space on which the tensor acts.

What is the significance of principal values in tensor analysis?

Principal values play a crucial role in tensor analysis as they provide insight into the behavior of a tensor under different coordinate systems. They also help us understand the symmetry and properties of a tensor, which are important in many applications. Furthermore, principal values allow us to compare and classify different tensors, making them a fundamental concept in tensor analysis.
