Time Derivative of Rank 2 Tensor Determinant

Marcus95

Homework Statement



Show that for a second-order Cartesian tensor ##A##, assumed invertible and dependent on ##t##, the following holds:
## \frac{d}{dt} \det(A) = \det(A) \operatorname{Tr}\left(A^{-1}\frac{dA}{dt}\right) ##

Homework Equations


## \det(A) = \frac{1}{6} \epsilon_{ijk} \epsilon_{lmn} A_{il}A_{jm}A_{kn} ##

The Attempt at a Solution


My approach is the following:

We know that ## \det(A) = \frac{1}{6} \epsilon_{ijk} \epsilon_{lmn} A_{il}A_{jm}A_{kn} ##.

Applying the time derivative to this:
## \frac{d}{dt} \det(A) = \frac{1}{6} \epsilon_{ijk} \epsilon_{lmn} \left(A_{il}'A_{jm}A_{kn} + A_{il}A_{jm}'A_{kn} + A_{il}A_{jm}A_{kn}'\right) ##
## = \det(A) \left(\frac{1}{A_{il}} A_{il}' + \frac{1}{A_{jm}} A_{jm}' + \frac{1}{A_{kn}} A_{kn}'\right) ##

This is starting to look somewhat like the expression we are looking for, but from here on I am stuck. Any ideas on how to continue? Many thanks!
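For what it's worth, here is a quick numerical sanity check that the identity itself holds (a minimal NumPy sketch; the matrix family ##A(t)## below is just an arbitrary invertible test case):

```python
import numpy as np

# An arbitrary smooth, invertible test family A(t).
def A(t):
    return np.array([[2.0 + t,  np.sin(t), 0.1],
                     [0.3,      1.0 + t*t, 0.2],
                     [0.0,      0.5,       3.0 + np.cos(t)]])

t, h = 0.7, 1e-6
# Left side: central finite difference of det(A(t)).
lhs = (np.linalg.det(A(t + h)) - np.linalg.det(A(t - h))) / (2 * h)
# Right side: det(A) * Tr(A^{-1} dA/dt), with dA/dt also by finite differences.
dA = (A(t + h) - A(t - h)) / (2 * h)
rhs = np.linalg.det(A(t)) * np.trace(np.linalg.inv(A(t)) @ dA)
print(lhs, rhs)  # the two values should agree to roughly 1e-8
```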
 
I suggest that you do not use index expressions. They will just clutter things up, you gain no real insight here, and you are making your proof specific to the 3x3 case.

Instead, I suggest that you use the relation ##\operatorname{det}(e^B) = e^{\operatorname{tr}(B)}##.
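As a quick sanity check, that identity is easy to verify numerically (a minimal sketch; the matrix ##B## below is just an arbitrary test case):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))  # arbitrary square test matrix

# det(e^B) should equal e^{tr(B)}.
print(np.linalg.det(expm(B)), np.exp(np.trace(B)))  # agree to floating-point accuracy
```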
 
I think trying to work it out in terms of the definition of the determinant is the hard way. Instead, I think that the quickest way to the answer, assuming that you've already learned these things, uses the following facts:
  • If ##A## is invertible, then there is an invertible matrix ##Q## and a diagonal matrix ##\Lambda## such that ##A = Q \Lambda Q^{-1}##
  • The trace of a matrix is the sum of its diagonal entries.
  • The determinant of a diagonal matrix is the product of its diagonal entries.
  • Both trace and determinant are invariant under cyclic permutations of matrices: ##\det(XYZ) = \det(YZX)## and ##\operatorname{tr}(XYZ) = \operatorname{tr}(YZX)##
I think that those facts are enough to get you to the answer.
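As a quick spot check of the last fact, here is a minimal numerical sketch (the random ##3\times 3## matrices are arbitrary test cases, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

# Cyclic invariance for square matrices:
print(np.linalg.det(X @ Y @ Z), np.linalg.det(Y @ Z @ X))  # equal
print(np.trace(X @ Y @ Z), np.trace(Y @ Z @ X))            # equal
```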
 
I presume that scalars are in ##\mathbb C## for this post.

stevendaryl said:
If ##A## is invertible, then there is an invertible matrix ##Q## and a diagonal matrix ##\Lambda## such that ##A = Q \Lambda Q^{-1}##

This is news to me. In general the rank of ##\mathbf A##'s collection of eigenvectors determines diagonalizability. The rank of ##\mathbf A## itself really has no bearing on diagonalizability (ignoring nits like how to think about rank zero matrices).

Note that ##\mathbf A## can always be triangularized i.e. ##\mathbf A = \mathbf{STS}^{-1}## where the triangularization is either Jordan Form or Schur Form, depending on needs and tastes.
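For a concrete illustration, here is a minimal sketch of the (complex) Schur form using SciPy; the random test matrix is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import schur  # Schur decomposition

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# Complex Schur form: A = S T S^{-1} with T upper triangular and S unitary.
T, S = schur(A, output='complex')
print(np.allclose(A, S @ T @ S.conj().T))  # True: S^{-1} = S^H since S is unitary
print(np.allclose(T, np.triu(T)))          # True: T is upper triangular
```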

stevendaryl said:
Both trace and determinant are invariant under cyclic permutations of matrices: ##\det(XYZ) = \det(YZX)## and ##\operatorname{tr}(XYZ) = \operatorname{tr}(YZX)##

I think this holds for this problem, but the statement strikes me as dangerous. For convenience I'll define ##\mathbf B := \mathbf{YZ}##.

Technically, trace has a provable cyclic property over square matrices, but determinants don't; they have a multiplicative property. The trace identity is in fact valid whenever ##\mathbf B## is ##n \times m## and ##\mathbf X## is ##m \times n##.

i.e. ##\operatorname{trace}\big(\mathbf {BX}\big) = \operatorname{trace}\big(\mathbf {XB}\big)##, but we can only be sure that ##\det\big(\mathbf {BX}\big) = \det\big(\mathbf {XB}\big)## if ##m = n## (or if we know there is a rank deficiency for both, in which case both determinants are zero). Another way to think about it: if ##m = n## we can write ##\det\big(\mathbf {XB}\big) = \det\big(\mathbf X\big) \det\big(\mathbf B\big) = \det\big(\mathbf {BX}\big)##, i.e. we get commutativity, which gives you a cyclic property for free.

A better statement in my view is that ##\mathbf {XB}## and ##\mathbf {BX}## have the same non-zero eigenvalues, or equivalently the same characteristic polynomials after factoring out zero roots. Trace gives the sum of eigenvalues (equivalently, the sum of the non-zero eigenvalues). Determinant is the product of eigenvalues, and this in general is affected by the presence of extra zero roots (which must occur if ##m \neq n##).
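To make this concrete, here is a minimal numerical sketch with an arbitrary ##3 \times 2## / ##2 \times 3## pair:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 2))  # n x m with n = 3, m = 2
X = rng.standard_normal((2, 3))  # m x n

# The trace identity survives non-square factors:
print(np.trace(B @ X), np.trace(X @ B))  # equal

# BX is 3x3 but has rank at most 2, so it picks up an extra zero eigenvalue;
# the non-zero eigenvalues of BX and XB coincide, yet det(BX) = 0.
print(np.sort(np.linalg.eigvals(B @ X)))
print(np.sort(np.linalg.eigvals(X @ B)))
print(np.linalg.det(B @ X), np.linalg.det(X @ B))
```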
 
StoneTemplePython said:
I presume that scalars are in ##\mathbb C## for this post

This is news to me. In general the rank of ##\mathbf A##'s collection of eigenvectors determines diagonalizability. The rank of ##\mathbf A## itself really has no bearing on diagonalizability (ignoring nits like how to think about rank zero matrices).

My mistake. I leapt to the assumption that the matrix ##A## was square and diagonalizable.
 
stevendaryl said:
My mistake. I leapt to the assumption that the matrix ##A## was square and diagonalizable.
Since the OP mentions "Cartesian tensor", I think it is safe to assume that we are talking about a square matrix.
 
Orodruin said:
Since the OP mentions "Cartesian tensor", I think it is safe to assume that we are talking about a square matrix.
Yes, I think it is safe to assume that, even if the previous posts are interesting.

I have now seemingly solved the problem with help of the exponential form:
## \det(A) = \det(e^{\ln A}) = e^{\operatorname{Tr}(\ln A)} ##
Applying the time derivative with the chain rule, and assuming it can be taken inside the trace:
## \frac{d}{dt}\det(A) = e^{\operatorname{Tr}(\ln A)} \frac{d}{dt} \operatorname{Tr}(\ln A) = \det(A) \operatorname{Tr}\left(\frac{1}{A} \frac{dA}{dt}\right) ##
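Numerically this seems to check out, at least on a test case (a minimal sketch using SciPy's matrix logarithm; the family ##A(t)## is an arbitrary choice with positive eigenvalues so ##\ln A## is well behaved):

```python
import numpy as np
from scipy.linalg import logm  # matrix logarithm

# Arbitrary test family with positive real eigenvalues.
def A(t):
    return np.array([[2.0 + t, 0.5],
                     [0.1,     1.0 + np.sin(t)]])

t, h = 0.4, 1e-6
# d/dt Tr(ln A) by central differences ...
d_tr_log = np.real(np.trace(logm(A(t + h))) - np.trace(logm(A(t - h)))) / (2 * h)
# ... compared with Tr(A^{-1} dA/dt):
dA = (A(t + h) - A(t - h)) / (2 * h)
print(d_tr_log, np.trace(np.linalg.inv(A(t)) @ dA))  # should agree
```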

However, this creates some questions for me. Are the standard differentiation rules really applicable to matrices like this? Can the inverse really be taken to be 1/A for the matrix? Isn't "matrix division" sort of undefined or does the negative exponent have some more general meaning here?

Moreover, while I have seen the exponential form used in certain "extra questions", it is not strictly part of my maths course. Do you have any suggestions on how to solve the problem using the index expression? It might not be very general, but previous parts of this question used it, so I think I might be expected to do the same here.

Many thanks to you all this far! :)
 