Finding determinant through Gaussian elimination

purplecows
If I switch 2 rows, do I have to multiply by -1 each time?

For example, I have

##\begin{vmatrix} 2 & 0 & -1 \\ 3 & 1 & 1 \\ 0 & -1 & -1 \end{vmatrix}##


If I switch row 2 and 3, will it become this:
##-\begin{vmatrix} 2 & 0 & -1 \\ 0 & -1 & -1 \\ 3 & 1 & 1 \end{vmatrix}##

Or this?
##\begin{vmatrix} 2 & 0 & -1 \\ 0 & -1 & -1 \\ 3 & 1 & 1 \end{vmatrix}##


Each time I make a switch, do I have to also put a negative sign?

Edit: Not really related to Gaussian elimination, but this is from a Gaussian elimination problem; I just wanted to know whether I have to put a negative each time I make the switch.
 
It's not clear what switching rows does for you in terms of finding the determinant of this matrix.

Using the original matrix, you would want to eliminate element ##a_{21} = 3## first. After that, you can work on eliminating ##a_{32}##, leaving you with an upper triangular matrix. The determinant of an upper triangular matrix is easy to calculate: it is the product of the diagonal entries.
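To make the procedure concrete, here is a minimal sketch of computing a determinant by elimination, tracking the sign flip for each row swap. The function name `det_by_elimination` is my own; the thread's matrix is used as the test case:

```python
def det_by_elimination(m):
    """Determinant via Gaussian elimination.

    The running sign flips each time two rows are swapped, because a
    row swap multiplies the determinant by -1.
    """
    a = [row[:] for row in m]   # work on a copy
    n = len(a)
    sign = 1.0
    for col in range(n):
        # find a nonzero pivot in this column, swapping rows if needed
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0.0          # zero column => determinant is 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign        # record the row swap
        # eliminate entries below the pivot
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    # determinant of a triangular matrix = product of its diagonal
    for i in range(n):
        sign *= a[i][i]
    return sign

print(det_by_elimination([[2, 0, -1], [3, 1, 1], [0, -1, -1]]))  # 3.0
```

For this particular matrix no swap is ever needed: eliminating ##a_{21}## and then ##a_{32}## already yields an upper triangular matrix with diagonal product 3.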

FYI, the elementary row operations affect the determinant as discussed in the following article:

http://en.wikipedia.org/wiki/Gaussian_elimination

See the section 'Computing determinants' under Applications.
 
Actually, it is "clear what switching rows does for you in terms of finding the determinant of this matrix."

Anytime you swap two rows in a determinant, you multiply the determinant by -1.
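This is easy to verify numerically. The sketch below, using NumPy, swaps rows 2 and 3 of the thread's matrix and shows the determinant changing sign:

```python
import numpy as np

A = np.array([[2.0, 0.0, -1.0],
              [3.0, 1.0, 1.0],
              [0.0, -1.0, -1.0]])

# swap rows 2 and 3 (zero-based indices 1 and 2)
B = A[[0, 2, 1], :]

print(np.linalg.det(A))   # approximately  3.0
print(np.linalg.det(B))   # approximately -3.0
```

So the determinant of the swapped matrix is -3, which is why you must multiply by -1 after the swap to recover the original value.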

In this case, the original determinant is ##\left|\begin{array}{ccc}2 & 0 & -1 \\ 3 & 1 & 1 \\ 0 & -1 & -1 \end{array}\right|##. If you "expand by minors" on the first row, you get ##2\left|\begin{array}{cc}1 & 1 \\ -1 & -1\end{array}\right| - 1\left|\begin{array}{cc} 3 & 1 \\ 0 & -1\end{array}\right| = 2(-1 + 1) - (-3) = 3##.

Switching the first two rows you get ##\left|\begin{array}{ccc}3 & 1 & 1 \\ 2 & 0 & -1 \\ 0 & -1 & -1 \end{array}\right|##. If you "expand by minors" on the second row, you get exactly the same thing except that, because your leading coefficients are from the second row, their sign is changed: ##-2\left|\begin{array}{cc}1 & 1 \\ -1 & -1\end{array}\right| + 1\left|\begin{array}{cc} 3 & 1 \\ 0 & -1\end{array}\right| = -2(-1 + 1) + (-3) = -3##.

Switching the first and third rows and expanding on the third row will give you almost exactly the same thing. This time the leading coefficients will be the same as the first time but the two rows in the sub-determinants are reversed, reversing the sign there:
##\left|\begin{array}{ccc}0 & -1 & -1 \\ 3 & 1 & 1 \\ 2 & 0 & -1 \end{array}\right|##. If you "expand by minors" on the third row, you get ##2\left|\begin{array}{cc}-1 & -1 \\ 1 & 1\end{array}\right| - 1\left|\begin{array}{cc} 0 & -1 \\ 3 & 1\end{array}\right| = 2(-1 + 1) - (3) = -3##.
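The expansions above can be checked with a short sketch. The helper names `det2` and `det3_first_row` are my own; the code expands a 3x3 determinant by minors along the first row and confirms that a single row swap negates the result:

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def det3_first_row(m):
    """Expand a 3x3 determinant by minors along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a * det2(e, f, h, i)
            - b * det2(d, f, g, i)
            + c * det2(d, e, g, h))

original = [[2, 0, -1], [3, 1, 1], [0, -1, -1]]
swapped  = [[3, 1, 1], [2, 0, -1], [0, -1, -1]]   # rows 1 and 2 exchanged

print(det3_first_row(original))  # 3
print(det3_first_row(swapped))   # -3
```

Expanding the swapped matrix along its own first row, rather than tracking where the old first row went, gives -3 directly, in agreement with the hand calculation.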
 
HallsofIvy said:
Actually, it is "clear what switching rows does for you in terms of finding the determinant of this matrix."

That's fine, but the OP wanted to find the determinant of the matrix by elimination, not by expanding by minors. Switching rows as he did originally did not bring his transformed matrix closer to upper triangular form; it only complicated the elimination by adding extra steps.

Finding the determinant of a 3x3 matrix is a trivial exercise anyway; one can always compute it directly with the rule of Sarrus.
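For reference, the rule of Sarrus sums the three "down" diagonals and subtracts the three "up" diagonals of a 3x3 matrix. A minimal sketch (the function name `det3_sarrus` is my own):

```python
def det3_sarrus(m):
    """Rule of Sarrus for a 3x3 matrix: sum of the three 'down'
    diagonals minus the sum of the three 'up' diagonals."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return (a*e*i + b*f*g + c*d*h) - (c*e*g + a*f*h + b*d*i)

print(det3_sarrus([[2, 0, -1], [3, 1, 1], [0, -1, -1]]))  # 3
```

Applied to the thread's matrix this gives 3, matching both the cofactor expansion and the elimination approach.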
 