Determinant of 3x3 matrix equal to scalar triple product?

Erithacus
The determinant of a 3x3 matrix can be interpreted as the volume of the parallelepiped spanned by its column vectors (it could equally be the row vectors, but here I am using the columns), which is also the scalar triple product.
I want to show that:
##det A \overset{!}{=} a_1 \cdot (a_2 \times a_3 ) ##
with ## A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} ##

I started by expanding det A along the first row:
## det A = a_{11} (a_{22}a_{33}-a_{23}a_{32})-a_{12} (a_{21}a_{33}-a_{23}a_{31})+a_{13} (a_{21}a_{32}-a_{22}a_{31}) ##

With ## e_i ## as unit vectors, this should equal:
## det A = a_1e_1\cdot (a_2 \times a_3)e_1-a_2e_1\cdot (a_1 \times a_3)e_1+a_3e_1\cdot (a_1 \times a_2)e_1 = a_1\cdot (a_2 \times a_3)-a_2\cdot (a_1 \times a_3)+a_3\cdot (a_1 \times a_2) ##

With the rules for cross products I get:
## det A = a_1\cdot (a_2 \times a_3)+a_1\cdot (a_2 \times a_3)+a_1\cdot (a_2 \times a_3) = 3 \cdot a_1 (a_2 \times a_3) ##

Shouldn't I get only ## a_1 \cdot (a_2 \times a_3) ##, not three times that? Please, can someone say what I am doing wrong? Or is it right? If so, how should I interpret it?
 
Erithacus said:
With ## e_i ## as unit vectors, this should equal:
## det A = a_1e_1\cdot (a_2 \times a_3)e_1-a_2e_1\cdot (a_1 \times a_3)e_1+a_3e_1\cdot (a_1 \times a_2)e_1 = a_1\cdot (a_2 \times a_3)-a_2\cdot (a_1 \times a_3)+a_3\cdot (a_1 \times a_2) ##
Your notation here is not very clear, and that is likely the reason you are not getting the correct result. What do you mean by ##a_1 e_1 \cdot (a_2\times a_3)e_1##? The vector structure here makes no sense.
 
what do you mean by ##a_1 e_1 \cdot (a_2\times a_3)e_1##? The vector structure here makes no sense.

Thank you for answering!
My thought was that to get ##a_{11}## I could take the dot product
## a_1 \cdot e_1 = \begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \end{pmatrix} \cdot \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = a_{11} ##

And for the first component of the cross product I do the same thing:
## (a_2\times a_3) \cdot e_1= \left( \begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} \end{pmatrix} \times \begin{pmatrix} a_{13} \\ a_{23} \\ a_{33} \end{pmatrix} \right)\cdot \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} = a_{22}a_{33}-a_{23}a_{32}##

Or doesn't it work like that?
 
Erithacus said:
Or doesn't it work like that?
It does, but you must be careful because you are mixing different types of multiplications. The quantity you call ##a_1e_1 \cdot (a_2 \times a_3)e_1## should really be ##(a_1\cdot e_1)((a_2\times a_3)\cdot e_1)## and is not a triple vector product. It is the product of two numbers.
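A quick numerical check makes this distinction concrete (a sketch in Python/NumPy, added here for illustration and not part of the original exchange): keeping the ##e_1## projections, the three terms are products of scalars and sum to det A, whereas dropping the ##e_1##'s turns each term into a full triple product and gives three times the determinant.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
a1, a2, a3 = A[:, 0], A[:, 1], A[:, 2]  # columns of A
e1 = np.array([1.0, 0.0, 0.0])

# Correct reading: each term is a product of two scalars, e1 projections kept.
with_e1 = (a1 @ e1) * (np.cross(a2, a3) @ e1) \
        - (a2 @ e1) * (np.cross(a1, a3) @ e1) \
        + (a3 @ e1) * (np.cross(a1, a2) @ e1)

# Dropping the e1's turns every term into a full scalar triple product.
without_e1 = a1 @ np.cross(a2, a3) \
           - a2 @ np.cross(a1, a3) \
           + a3 @ np.cross(a1, a2)

print(np.isclose(with_e1, np.linalg.det(A)))         # True
print(np.isclose(without_e1, 3 * np.linalg.det(A)))  # True
```

By the cyclic property of the triple product, each of the three terms in `without_e1` equals ##a_1\cdot(a_2\times a_3)##, which is exactly where the factor of three comes from.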
 
The quantity you call ##a_1e_1 \cdot (a_2 \times a_3)e_1## should really be ##(a_1\cdot e_1)((a_2\times a_3)\cdot e_1)## and is not a triple vector product.
Okay, I see. Can I get around that or do I have to use a completely different approach to show what I want to show? Could I use the Levi-Civita symbol and do it all in components?
 
You can write ##\vec a_i = a_{i1} \vec e_1 + a_{i2}\vec e_2 + a_{i3}\vec e_3## and just perform the triple product. Of course, this is equivalent to doing it in components using the Levi-Civita symbol.
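For what it's worth, the identity can also be verified symbolically, e.g. with SymPy (a sketch of my own, not from the thread; the variable names are arbitrary):

```python
import sympy as sp

# Fully symbolic 3x3 matrix A with entries a_ij.
A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"a{i+1}{j+1}"))
a1, a2, a3 = (A.col(j) for j in range(3))  # columns as 3x1 vectors

# Scalar triple product of the columns: a1 . (a2 x a3).
triple = a1.dot(a2.cross(a3))

# Expanding the difference shows det A and the triple product
# agree term by term.
print(sp.expand(A.det() - triple))  # 0
```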
 