How Do You Calculate the Determinant of a Matrix Using Index Notation?

ognik
Making sure I have this right: $ |A| = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}a_{1i}a_{2j}a_{3k} $ (for a 3 × 3),

and a 4 × 4 would be $ |A| = \sum_{i}\sum_{j}\sum_{k}\sum_{l} \epsilon_{ijkl} a_{1i} a_{2j} a_{3k} a_{4l} $?

Is there any special algebra for these terms? (They could be anything from scalars to complex functions.)

In particular, for $|A|^*$ may I just conjugate each term? Is it the same for the adjoint, i.e. apply $\dagger$ to each term?
Less clear to me is what to do with $|A|^T$.

Finally, I see the n × n formula written without any summation signs; why is that?
Thanks
 
ognik said:
Making sure I have this right: $ |A| = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}a_{1i}a_{2j}a_{3k} $ (for a 3 × 3),

and a 4 × 4 would be $ |A| = \sum_{i}\sum_{j}\sum_{k}\sum_{l} \epsilon_{ijkl} a_{1i} a_{2j} a_{3k} a_{4l} $?
Yes.
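As a quick numerical check, here is a minimal Python sketch of the 3 × 3 version (the helper names `levi_civita` and `det_epsilon` are my own, and indices run from 0 rather than 1), compared against numpy:

```python
import itertools
import numpy as np

def levi_civita(idx):
    """Sign of the index tuple: 0 if any index repeats, else the
    parity of the permutation, counted via inversions."""
    if len(set(idx)) != len(idx):
        return 0
    inversions = sum(1 for i in range(len(idx))
                       for j in range(i + 1, len(idx))
                       if idx[i] > idx[j])
    return -1 if inversions % 2 else 1

def det_epsilon(a):
    """|A| = sum over all index tuples (i, j, k, ...) of
    epsilon_{ijk...} * a[0, i] * a[1, j] * a[2, k] * ..."""
    n = a.shape[0]
    return sum(levi_civita(idx) * np.prod([a[row, idx[row]] for row in range(n)])
               for idx in itertools.product(range(n), repeat=n))

a = np.random.rand(3, 3)
print(det_epsilon(a), np.linalg.det(a))  # the two should agree to rounding
```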

ognik said:
Is there any special algebra for these terms? (They could be anything from scalars to complex functions.)
No clue.

ognik said:
In particular, for $|A|^*$ may I just conjugate each term?
Yes.

ognik said:
Is it the same for the adjoint, i.e. apply $\dagger$ to each term?
Less clear to me is what to do with $|A|^T$.
"T" is the transpose of a matrix. |A| and the components of A are scalars. How can you transpose a number?

ognik said:
Finally, I see the n × n formula written without any summation signs; why is that?
Thanks
I don't know who actually came up with it, but we usually use the Einstein summation convention: whenever an index is repeated, summation over it is assumed. So in place of $\sum_i a_i b_i$ we just write $a_i b_i$. Warning: strictly speaking, the convention applies to a repeated index that appears once as an "upper" index and once as a "lower" one: $\sum_i a_i b^i = a_i b^i$. In Euclidean space $b^i = b_i$, so we can ignore this detail in the present case.
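For what it's worth, numpy's `einsum` takes subscript strings written in exactly this convention; a minimal sketch with two arbitrary vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# The repeated index i is summed over, Einstein-style: a_i b_i
print(np.einsum('i,i->', a, b))  # 32.0
print(np.dot(a, b))              # same result
```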

-Dan
 
ognik said:
Making sure I have this right: $ |A| = \sum_{i}\sum_{j}\sum_{k} \epsilon_{ijk}a_{1i}a_{2j}a_{3k} $ (for a 3 × 3),

and a 4 × 4 would be $ |A| = \sum_{i}\sum_{j}\sum_{k}\sum_{l} \epsilon_{ijkl} a_{1i} a_{2j} a_{3k} a_{4l} $?

Is there any special algebra for these terms? (They could be anything from scalars to complex functions.)

In particular, for $|A|^*$ may I just conjugate each term? Is it the same for the adjoint, i.e. apply $\dagger$ to each term?
Less clear to me is what to do with $|A|^T$.

Finally, I see the n × n formula written without any summation signs; why is that?
Thanks
Mathematicians tend to prefer the following notation:

$\det(A) = \sum\limits_{\sigma \in S_n} \text{sgn}(\sigma) a_{1\sigma(1)}\cdots a_{n\sigma(n)}$

Note that for $i \neq j \neq k \neq i$, $\epsilon_{ijk} = \text{sgn}(\sigma)$ where:

$\sigma(1) = i$
$\sigma(2) = j$
$\sigma(3) = k$

For all other values of $i,j,k$ (that is, when at least two indices are equal), $\epsilon_{ijk} = 0$. So the mathematician's sum has fewer terms to compute ($n!$) than the physicist's ($n^n$), but the nonzero terms are exactly the same, so in practice the two formulas cost the same.
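To make the $n!$ versus $n^n$ point concrete, here is a minimal Python sketch of the permutation form (the helper names are my own); unlike an epsilon sum over all $n^n$ index tuples, it only ever visits the $n!$ potentially nonzero terms:

```python
import itertools
import math
import numpy as np

def sgn(sigma):
    """sgn(sigma): +1 for an even permutation, -1 for an odd one,
    counted via the number of inversions."""
    inversions = sum(1 for i in range(len(sigma))
                       for j in range(i + 1, len(sigma))
                       if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def det_permutations(a):
    """det(A) = sum over sigma in S_n of sgn(sigma) * a[0, sigma(0)] * ... (0-indexed)."""
    n = a.shape[0]
    return sum(sgn(sigma) * math.prod(a[i, sigma[i]] for i in range(n))
               for sigma in itertools.permutations(range(n)))

a = np.random.rand(4, 4)
print(det_permutations(a), np.linalg.det(a))  # should agree to rounding
```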

If what you meant was $\det(A^{\dagger})$, it is not hard to show that this equals $\overline{\det(A)}$, and in the real case this becomes:

$\det(A^T) = \det(A)$

(in fact $\det(A^T) = \det(A)$ holds for complex entries as well; only the conjugation is special to $\dagger$).
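A quick numerical sanity check of both identities, as a minimal numpy sketch with a random complex matrix:

```python
import numpy as np

a = np.random.rand(3, 3) + 1j * np.random.rand(3, 3)

# det(A^dagger) equals the complex conjugate of det(A)
print(np.linalg.det(a.conj().T), np.conj(np.linalg.det(a)))

# det(A^T) = det(A), even for complex entries
print(np.linalg.det(a.T), np.linalg.det(a))
```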

The elements of a matrix can be quite general, but to take determinants we generally require that they be elements of a commutative ring.
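For instance, SymPy will happily take the determinant of a matrix whose entries are polynomials, i.e. elements of the commutative ring $\mathbb{Q}[x, y]$ (a minimal sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Entries from the polynomial ring Q[x, y]; the determinant is again a polynomial
m = sp.Matrix([[x, y], [y**2, x + 1]])
print(sp.expand(m.det()))  # -> x**2 + x - y**3
```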

To get "more" general (than matrices with elements in a commutative ring), one has to start talking about $R$-module homomorphisms (basically $R$-modules are abelian groups acted on by a ring $R$, and $R$-module homomorphisms are the "structure-preserving maps" - linear operators are a subset of these), and when $R$ is no longer commutative, one can no longer speak of "components" or "basis elements" so meaningfully (these concepts still exist, but aren't as useful in the more general setting).

Einstein summation is due to (surprise!) Albert Einstein, who used it in his 1916 paper "The Foundation of the General Theory of Relativity" (Annalen der Physik).
 