MHB How Do You Calculate the Determinant of a Matrix Using Index Notation?

Summary
The determinant of a matrix can be calculated using index notation, with the formulas for 3x3 and 4x4 matrices expressed as sums of products of matrix elements weighted by the Levi-Civita symbol. Conjugating each term gives the conjugate of the determinant, and $\det(A^\dagger) = \overline{\det(A)}$. The transpose operation, however, does not apply to $|A|$ itself, since determinants are scalars and cannot be transposed. The discussion also covers the Einstein summation convention, which drops explicit summation signs over repeated indices, and the mathematician's permutation-based formula for the determinant, which has $n!$ terms rather than the $n^n$ of the physicist's sum.
ognik
Making sure I have this right, $ |A| = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}a_{1i}a_{2j}a_{3k} $ (for a 3 X 3)

and a 4 X 4 would be $ |A| = \sum_{i}\sum_{j}\sum_{k} \sum_{l} \epsilon_{ijkl} a_{1i} a_{2j} a_{3k} a_{4l} $ ?

Is there any special algebra for these terms? (they could be anything from scalars to complex functions)

Especially for $|A|^*$ may I just conjugate each term? Is that the same for adjoint, $\dagger$ each term?
Less clear is what to do with $|A|^T$?

Finally I see the n X n formula done without any summation signs, why is that?
Thanks
 
ognik said:
Making sure I have this right, $ |A| = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}a_{1i}a_{2j}a_{3k} $ (for a 3 X 3)

and a 4 X 4 would be $ |A| = \sum_{i}\sum_{j}\sum_{k} \sum_{l} \epsilon_{ijkl} a_{1i} a_{2j} a_{3k} a_{4l} $ ?
Yes.
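As a sanity check, the 3x3 formula above can be evaluated directly by brute force over all $n^n$ index triples and compared against a library determinant. This is a minimal sketch using NumPy; the matrix A below is just an arbitrary example:

```python
import itertools
import numpy as np

def levi_civita(*indices):
    """Sign of the permutation given by `indices`; 0 if any index repeats."""
    sign = 1
    idx = list(indices)
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] == idx[j]:
                return 0
            if idx[i] > idx[j]:
                sign = -sign
    return sign

def det_index_notation(a):
    """|A| = sum_{ijk...} eps_{ijk...} a_{1i} a_{2j} a_{3k} ... (0-based here)."""
    n = a.shape[0]
    total = 0.0
    # Brute-force sum over all n^n index combinations, exactly as in the formula.
    for idx in itertools.product(range(n), repeat=n):
        eps = levi_civita(*idx)
        if eps:
            term = eps
            for row, col in enumerate(idx):
                term *= a[row, col]
            total += term
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_index_notation(A))   # agrees with np.linalg.det(A) up to floating point
print(np.linalg.det(A))
```

Because the entries enter only through ordinary multiplication and addition, the same loop works unchanged for complex entries.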

ognik said:
Is there any special algebra for these terms? (they could be anything from scalars to complex functions)
No clue.

ognik said:
Especially for $|A|^*$ may I just conjugate each term?
Yes.

ognik said:
Is that the same for adjoint, $\dagger$ each term?
Less clear is what to do with $|A|^T$?
"T" is the transpose of a matrix. |A| and the components of A are scalars. How can you transpose a number?

ognik said:
Finally I see the n X n formula done without any summation signs, why is that?
Thanks
I don't know who actually came up with it, but we usually use the "Einstein summation" convention: if an index appears twice in a term, summation over it is assumed. So in place of [math]\sum_i a_i~b_i[/math] we just write [math]a_i~b_i[/math]. Warning: strictly speaking, the convention applies to a repeated index that appears once "upper" and once "lower": [math]\sum_i a_i~b^i = a_i~b^i[/math]. In Euclidean space [math]b^i = b_i[/math], so we can ignore this detail in the present case.
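The summation convention has a direct counterpart in NumPy's `einsum`, where each letter in the subscript string is an index and repeated letters are summed. A minimal sketch (the vectors and matrix are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# a_i b_i with the summation convention: the repeated index i is summed.
print(np.einsum('i,i->', a, b))   # 1*4 + 2*5 + 3*6 = 32.0

# The 3x3 determinant in the same shorthand: eps_{ijk} a_{1i} a_{2j} a_{3k}.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[k, j, i] = -1.0  # odd permutations (swap first and last index)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2]))  # det(A) = 8.0
```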

-Dan
 
ognik said:
Making sure I have this right, $ |A| = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}a_{1i}a_{2j}a_{3k} $ (for a 3 X 3)

and a 4 X 4 would be $ |A| = \sum_{i}\sum_{j}\sum_{k} \sum_{l} \epsilon_{ijkl} a_{1i} a_{2j} a_{3k} a_{4l} $ ?

Is there any special algebra for these terms? (they could be anything from scalars to complex functions)

Especially for $|A|^*$ may I just conjugate each term? Is that the same for adjoint, $\dagger$ each term?
Less clear is what to do with $|A|^T$?

Finally I see the n X n formula done without any summation signs, why is that?
Thanks
Mathematicians tend to prefer the following notation:

$\det(A) = \sum\limits_{\sigma \in S_n} \text{sgn}(\sigma) a_{1\sigma(1)}\cdots a_{n\sigma(n)}$

Note that for $i \neq j \neq k \neq i$, $\epsilon_{ijk} = \text{sgn}(\sigma)$ where:

$\sigma(1) = i$
$\sigma(2) = j$
$\sigma(3) = k$

For all other values of $i,j,k$ (that is, when two indices are equal), $\epsilon_{ijk} = 0$, so the mathematician's sum has fewer terms to write down ($n!$) than the physicist's sum ($n^n$), but "not really": the nonzero terms are the same in both.
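The permutation form translates directly into code: iterate over $S_n$, attach the sign of each permutation, and skip the zero terms entirely. A minimal sketch (the matrix A is an arbitrary example):

```python
import itertools
import numpy as np

def sgn(perm):
    """Sign of a permutation of 0..n-1, via inversion count."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_permutation_sum(a):
    """det(A) = sum over sigma in S_n of sgn(sigma) * a_{1,sigma(1)} ... a_{n,sigma(n)}."""
    n = a.shape[0]
    return sum(sgn(p) * np.prod([a[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(det_permutation_sum(A))   # 8.0, reached in n! = 6 terms rather than n^n = 27
```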

If what you meant was:

$\det(A^{\dagger})$, it is not hard to show that this equals $\overline{\det(A)}$, and in the real case, this becomes:

$\det(A^T) = \det(A)$
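Both identities are easy to check numerically. A minimal sketch using NumPy with randomly generated matrices (the seed and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

lhs = np.linalg.det(A.conj().T)   # det(A^dagger)
rhs = np.conj(np.linalg.det(A))   # conjugate of det(A)
print(np.allclose(lhs, rhs))      # True

B = rng.standard_normal((3, 3))   # real case: det(B^T) = det(B)
print(np.isclose(np.linalg.det(B.T), np.linalg.det(B)))  # True
```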

The elements of a matrix can be quite general, but generally we require they be elements of a commutative ring, in order to take determinants.

To get "more" general (than matrices with elements in a commutative ring), one has to start talking about $R$-module homomorphisms (basically $R$-modules are abelian groups acted on by a ring $R$, and $R$-module homomorphisms are the "structure-preserving maps" - linear operators are a subset of these), and when $R$ is no longer commutative, one can no longer speak of "components" or "basis elements" so meaningfully (these concepts still exist, but aren't as useful in the more general setting).

Einstein summation is due to (surprise!) Albert Einstein, who used it in his 1916 paper "The Foundation of the General Theory of Relativity" (Annalen der Physik).
 