Computing determinants: Allowed shortcuts?

MrMultiMedia
I had a question about what is allowed when computing determinants. I know that for an n x n matrix you can go across a row, taking each matrix element as the coefficient of an (n-1) x (n-1) determinant, until you have used the whole row. I also know that this process is recursive: you keep expanding until you get down to 2 x 2 determinants, and then you work your way back out of the nested maze of determinants you've created for yourself to find the answer.

I also know that you can go down a column and use its matrix elements as coefficients instead of going across a row. This is useful when there are more zeros down the column than across the row, since it simplifies the calculation.

I am aware that the sign of the coefficient depends on its location in the matrix. The top left element is positive and the rest of the signs are arranged like a checkerboard: no two like signs share an edge.

What I wanted to know is: instead of using the rows or columns for the coefficients of your lower-order determinants, could you use a diagonal? If you got the signs correct, struck out the row and column each entry sits in, and used the remaining matrix elements for your lower-order matrix, could you find the determinant that way? I do realize that if you used the main diagonal, all the coefficients would have the same sign. But does this matter? Is this against the rules? As long as every row or every column is represented, it shouldn't matter, right? Or is that incorrect?
 
Hey, you can compute determinants by expanding along any row or any column. What I think you are suggesting is generalized by Laplace's formula.
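For concreteness, here is a minimal sketch (my own illustration, not from the thread) of Laplace/cofactor expansion along an arbitrary row; expanding along any row, or symmetrically any column, yields the same value:

```python
def det_expand_row(m, i=0):
    """Laplace (cofactor) expansion of det(m) along row i (0-indexed).

    m is a square matrix given as a list of lists.
    """
    n = len(m)
    if n == 1:
        return m[0][0]
    # delete row i; each minor then also deletes one column j
    rest = [row for k, row in enumerate(m) if k != i]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in rest]
        # checkerboard sign (-1)^(i+j), recurse on the (n-1) x (n-1) minor
        total += (-1) ** (i + j) * m[i][j] * det_expand_row(minor)
    return total


m = [[3, 1], [2, 3]]
print(det_expand_row(m, 0), det_expand_row(m, 1))  # 7 7
```

The function name and structure are mine; the point is simply that the choice of expansion row (or column) does not change the result.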
 
No, you cannot use diagonals. That should have been clear if you had looked at some simple examples.
For example, the determinant
\left|\begin{array}{cc}3 & 1 \\ 2 & 3\end{array}\right|= 3(3)- 1(2)= 7

If you "expand by the first column" you get 3(3)- 2(1)= 7. If you "expand by the second column" you get -1(2)+ 3(3)= 7. If you "expand by the first row" you get 3(3)- 1(2)= 7. If you "expand by the second row" you get -2(1)+ 3(3)= 7.

But if you "expand by the diagonal" you get 3(3)+ 3(3)= 18 with the checkerboard signs (both diagonal positions are "+"), or 3(3)- 3(3)= 0 if you alternate the signs anyway. Neither is 7.
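A quick sanity check (an illustrative snippet, not part of the original reply) makes the failure concrete: neither sign convention for the diagonal entries reproduces the determinant.

```python
# The 2x2 example |3 1; 2 3|. For a diagonal "expansion", each diagonal
# entry would be multiplied by its minor (the opposite diagonal entry).
a, b, c, d = 3, 1, 2, 3

true_det = a * d - b * c        # expansion by any row or column
diag_plus = a * d + d * a       # checkerboard signs: both diagonal spots are +
diag_alt = a * d - d * a        # alternating signs instead

print(true_det, diag_plus, diag_alt)  # 7 18 0
```

Either way the off-diagonal entries 1 and 2 never enter the sum, so no sign choice can recover their contribution.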

One good method of finding a determinant is to "row reduce" to a triangular matrix. The determinant of a triangular matrix is, of course, the product of the numbers on the diagonal.

There are three row operations:
1) Swap two rows. That will multiply the determinant by -1.
2) Add a multiple of one row to another. That does not change the determinant.
3) Multiply a row by a number. That will multiply the determinant by that number.
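The three rules above can be sketched in code (my own illustration, using exact `Fraction` arithmetic so rule 3 is never needed): eliminate below each pivot with rule 2, track a sign flip for each swap under rule 1, and multiply the diagonal at the end.

```python
from fractions import Fraction

def det_by_row_reduction(m):
    """Determinant via row reduction to upper triangular form."""
    a = [[Fraction(x) for x in row] for row in m]
    n = len(a)
    sign = 1
    for col in range(n):
        # find a row with a nonzero entry in this column at or below the pivot
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # no pivot: the determinant is 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign                # rule 1: a swap flips the sign
        for r in range(col + 1, n):
            # rule 2: adding a multiple of one row to another changes nothing
            factor = a[r][col] / a[col][col]
            a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    product = Fraction(1)
    for i in range(n):                  # triangular: product of the diagonal
        product *= a[i][i]
    return sign * product

print(det_by_row_reduction([[3, 1], [2, 3]]))  # 7
```

This is essentially the standard LU-style elimination; for floating-point matrices one would also pick the largest pivot for numerical stability, which the exact-arithmetic sketch above skips.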

For example, I can easily reduce the matrix
\begin{bmatrix}3 & 1 \\ 2 & 3\end{bmatrix}
to a triangular matrix by adding -2/3 times the first row to the second:
\begin{bmatrix}3 & 1 \\ 2- (2/3)(3) & 3- (2/3)(1)\end{bmatrix}= \begin{bmatrix} 3 & 1 \\0 & 7/3 \end{bmatrix}
which is easily seen to have determinant 3(7/3)= 7.

A slightly more difficult example is
\begin{bmatrix} 2 & 1 & 3 \\ 1 & 1 & 2 \\ 0 & 1 & 1\end{bmatrix}
There is already a 0 in the first column, third row, so we can get the first column into the form we want by adding -1/2 times the first row to the second row:
\begin{bmatrix} 2 & 1 & 3 \\ 1- 1 & 1- 1/2 & 2- 3/2 \\ 0 & 1 & 1\end{bmatrix}= \begin{bmatrix}2 & 1 & 3 \\ 0 & 1/2 & 1/2 \\ 0 & 1 & 1\end{bmatrix}
and then add -2 times the second row to the third row to get
\begin{bmatrix}2 & 1 & 3 \\ 0 & 1/2 & 1/2 \\ 0 & 1- 2(1/2) & 1- 2(1/2)\end{bmatrix}= \begin{bmatrix}2 & 1 & 3 \\ 0 & 1/2 & 1/2 \\ 0 & 0 & 0\end{bmatrix}

Since the only row operations used were "add a multiple of one row to another", the determinant of the original matrix is exactly the same as the determinant of this "upper triangular matrix" which is obviously 2(1/2)(0)= 0.
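The zero determinant is no accident: the rows of this matrix are linearly dependent (row 3 = 2·row 2 - row 1), which a one-line check confirms. This is a side observation of mine, not in the original reply.

```python
r1, r2, r3 = (2, 1, 3), (1, 1, 2), (0, 1, 1)
# row 3 equals 2*row 2 - row 1, so the rows are linearly dependent
# and the determinant must be 0
assert all(2 * b - a == c for a, b, c in zip(r1, r2, r3))
```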

If, in the row operations, you "swap rows" an odd number of times, you multiply the determinant of the final triangular matrix by -1. If you multiply a row by a number, you have to divide the determinant of the final triangular matrix by that number.
 