How Do You Calculate the Determinant of a Matrix Using Index Notation?

SUMMARY

The calculation of the determinant of a matrix using index notation involves the use of the Levi-Civita symbol, represented as $\epsilon_{ijk}$ for a 3x3 matrix and $\epsilon_{ijkl}$ for a 4x4 matrix. The determinant can also be expressed using the Einstein summation convention, which simplifies notation by assuming summation over repeated indices. For complex matrices, the relationship $\det(A^{\dagger}) = \overline{\det(A)}$ holds, and for real matrices, $\det(A^T) = \det(A)$. The discussion highlights the differences between physicist and mathematician approaches to determinant calculation.

PREREQUISITES
  • Understanding of matrix algebra and determinants
  • Familiarity with the Levi-Civita symbol and its properties
  • Knowledge of the Einstein summation convention
  • Basic concepts of complex conjugation and adjoint operations
NEXT STEPS
  • Study the properties of the Levi-Civita symbol in detail
  • Learn about the Einstein summation convention and its applications in physics
  • Explore the relationship between determinants and matrix operations such as transposition and adjoint
  • Investigate the generalization of determinants in the context of $R$-modules
USEFUL FOR

Mathematicians, physicists, and students studying linear algebra who seek to deepen their understanding of determinants and their applications in various mathematical contexts.

ognik
Making sure I have this right, $ |A| = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}a_{1i}a_{2j}a_{3k} $ (for a 3 X 3)

and a 4 X 4 would be $ |A| = \sum_{i}\sum_{j}\sum_{k} \sum_{l} \epsilon_{ijkl} a_{1i} a_{2j} a_{3k} a_{4l} $ ?

Is there any special algebra for these terms? (they could be anything from scalars to complex functions)

Especially for $|A|^*$ may I just conjugate each term? Is that the same for adjoint, $\dagger$ each term?
Less clear is what to do with $|A|^T$?

Finally I see the n X n formula done without any summation signs, why is that?
Thanks
 
ognik said:
Making sure I have this right, $ |A| = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}a_{1i}a_{2j}a_{3k} $ (for a 3 X 3)

and a 4 X 4 would be $ |A| = \sum_{i}\sum_{j}\sum_{k} \sum_{l} \epsilon_{ijkl} a_{1i} a_{2j} a_{3k} a_{4l} $ ?
Yes.
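As a quick cross-check (my own expansion, not part of the reply above), writing out the six nonzero terms of the 3 X 3 sum recovers the familiar expansion:

$|A| = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31}$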

ognik said:
Is there any special algebra for these terms? (they could be anything from scalars to complex functions)
No clue.

ognik said:
Especially for $|A|^*$ may I just conjugate each term?
Yes.
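One way to see why (a note of mine, not in the original reply): $\epsilon_{ijk}$ is real and $|A|$ is a finite sum of products of the entries, so complex conjugation distributes over every term:

$\overline{|A|} = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}\, \overline{a_{1i}}\, \overline{a_{2j}}\, \overline{a_{3k}}$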

ognik said:
Is that the same for adjoint, $\dagger$ each term?
Less clear is what to do with $|A|^T$?
"T" is the transpose of a matrix. |A| and the components of A are scalars. How can you transpose a number?

ognik said:
Finally I see the n X n formula done without any summation signs, why is that?
Thanks
I don't know who actually came up with it, but we usually use the "Einstein summation" convention: whenever an index is repeated, summation over that index is assumed. So in place of $\sum_i a_i\,b_i$ we just write $a_i\,b_i$. One warning: strictly speaking, the convention applies to a repeated index that appears once "upper" and once "lower", as in $\sum_i a_i\,b^i = a_i\,b^i$. In Euclidean space $b^i = b_i$, so we can ignore that distinction in the present case.
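Applied to the formulas above (my restatement), the same determinants in the summation convention read:

$|A| = \epsilon_{ijk}\, a_{1i}\, a_{2j}\, a_{3k}, \qquad |A| = \epsilon_{ijkl}\, a_{1i}\, a_{2j}\, a_{3k}\, a_{4l}$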

-Dan
 
ognik said:
Making sure I have this right, $ |A| = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}a_{1i}a_{2j}a_{3k} $ (for a 3 X 3)

and a 4 X 4 would be $ |A| = \sum_{i}\sum_{j}\sum_{k} \sum_{l} \epsilon_{ijkl} a_{1i} a_{2j} a_{3k} a_{4l} $ ?

Is there any special algebra for these terms? (they could be anything from scalars to complex functions)

Especially for $|A|^*$ may I just conjugate each term? Is that the same for adjoint, $\dagger$ each term?
Less clear is what to do with $|A|^T$?

Finally I see the n X n formula done without any summation signs, why is that?
Thanks
Mathematicians tend to prefer the following notation:

$\det(A) = \sum\limits_{\sigma \in S_n} \text{sgn}(\sigma) a_{1\sigma(1)}\cdots a_{n\sigma(n)}$

Note that for $i \neq j \neq k \neq i$, $\epsilon_{ijk} = \text{sgn}(\sigma)$ where:

$\sigma(1) = i$
$\sigma(2) = j$
$\sigma(3) = k$

For all "other" values of $i,j,k$ (that is, when any two are equal), $\epsilon_{ijk} = 0$, so the "mathematician's" sum has fewer terms to compute ($n!$) than the "physicist's" sum ($n^n$); but since the extra terms in the latter are all zero, the two sums are really the same.
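As an illustrative sketch only (my own code, not from the thread; the function names are made up), here is a direct numerical check that the $n^n$-term Levi-Civita sum, the $n!$-term permutation sum, and a library routine all agree:

[code]
import itertools

import numpy as np


def perm_sign(indices):
    """Return 0 if any index repeats, else (-1)^(number of inversions)."""
    if len(set(indices)) != len(indices):
        return 0
    sign = 1
    for i in range(len(indices)):
        for j in range(i + 1, len(indices)):
            if indices[i] > indices[j]:
                sign = -sign
    return sign


def det_levi_civita(A):
    """'Physicist' form: sum over all n^n index tuples, weighted by the Levi-Civita symbol."""
    n = A.shape[0]
    total = 0.0
    for idx in itertools.product(range(n), repeat=n):
        eps = perm_sign(idx)          # Levi-Civita symbol (0-based indices)
        if eps == 0:
            continue                  # tuples with a repeated index contribute nothing
        term = eps
        for row, col in enumerate(idx):
            term *= A[row, col]       # a_{1i} a_{2j} a_{3k} ...
        total += term
    return total


def det_permutations(A):
    """'Mathematician' form: sum over the n! permutations sigma, weighted by sgn(sigma)."""
    n = A.shape[0]
    total = 0.0
    for sigma in itertools.permutations(range(n)):
        term = perm_sign(sigma)       # sgn(sigma); never zero for a genuine permutation
        for row, col in enumerate(sigma):
            term *= A[row, col]
        total += term
    return total


A = np.random.rand(4, 4)
print(det_levi_civita(A), det_permutations(A), np.linalg.det(A))  # all three agree
[/code]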

If what you meant was:

$\det(A^{\dagger})$, it is not hard to show that this equals $\overline{\det(A)}$, and in the real case, this becomes:

$\det(A^T) = \det(A)$
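A sketch of the complex case (my filling-in of the "not hard to show"), using $A^{\dagger} = \overline{A}^{\,T}$, $\det(M^T) = \det(M)$, and the fact that conjugation distributes over sums and products:

$\det(A^{\dagger}) = \det\big(\overline{A}^{\,T}\big) = \det\big(\overline{A}\big) = \sum\limits_{\sigma \in S_n} \text{sgn}(\sigma)\, \overline{a_{1\sigma(1)}}\cdots \overline{a_{n\sigma(n)}} = \overline{\det(A)}$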

The elements of a matrix can be quite general, but to take determinants we generally require that they be elements of a commutative ring.
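To see why commutativity matters (an illustration of mine, not in the original post): over the quaternions, take a diagonal $2 \times 2$ matrix with $a_{11} = i$ and $a_{22} = j$. Then $a_{11}a_{22} = ij = k$ while $a_{22}a_{11} = ji = -k$, so even a single Leibniz term depends on the order of its factors and the permutation sum above is not well defined.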

To get more general than matrices with elements in a commutative ring, one has to start talking about $R$-module homomorphisms: an $R$-module is basically an abelian group acted on by a ring $R$, and $R$-module homomorphisms are the corresponding structure-preserving maps (linear operators are a special case). When $R$ is no longer commutative, one can no longer speak of "components" or "basis elements" so meaningfully; these concepts still exist, but they are not as useful in the more general setting.

Einstein summation is due to (surprise!) Albert Einstein, who used it in his 1916 paper "The Foundation of the General Theory of Relativity" (Annalen der Physik).
 
