Linear Algebra, Find the Determinant

kuahji
Find the determinant of C by first row reducing it to a matrix with first column 1,0,0,0. Show the row operations and explain how all this tells you the value of the determinant of C when you are done.

C = \begin{pmatrix}2 & 0 & -6 & 8 \\ 3 & 1 & 0 & 3 \\ -5 & 1 & 7 & -8 \\ 0 & 0 & 5 & 1\end{pmatrix}
We're supposed to use the theorem ##\det(E_k)\det(E_{k-1})\cdots\det(E_1)\det(A) = \det(C)##, where the ##E_i## are the elementary matrices of the row operations.

The problem I'm having is that I know the determinant of C is -244 (calculator). But when I use the theorem I get (1/2)(1)(1)(-122) for the determinant. It really appears to be the first row operation: if it were 2 instead of 1/2, it would work. I can't figure out how to resolve this; my work is below.

The row operations I did were ##\tfrac{1}{2}R_1 \to R_1##, then ##-3R_1 + R_2 \to R_2##, and finally ##5R_1 + R_3 \to R_3##. This left me with the matrix
A = \begin{pmatrix}1 & 0 & -3 & 4 \\ 0 & 1 & 9 & -9 \\ 0 & 1 & -8 & 12 \\ 0 & 0 & 5 & 1\end{pmatrix}
Hence the (1/2)(1)(1)(-122) for the determinant.
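
A quick numerical check (a sketch using numpy; this code is an illustration, not part of the original post) confirms both determinants and shows where the factor of 2 has to go:

```python
import numpy as np

# The original matrix C and the row-reduced matrix A from the post above.
C = np.array([[ 2, 0, -6,  8],
              [ 3, 1,  0,  3],
              [-5, 1,  7, -8],
              [ 0, 0,  5,  1]], dtype=float)

A = np.array([[1, 0, -3,  4],
              [0, 1,  9, -9],
              [0, 1, -8, 12],
              [0, 0,  5,  1]], dtype=float)

print(np.linalg.det(C))      # ~ -244.0
print(np.linalg.det(A))      # ~ -122.0, i.e. (1/2) * det(C)
print(2 * np.linalg.det(A))  # ~ -244.0 = det(C)
```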
 
Multiplying a row of the matrix by 1/2 gives you 1/2 of the original determinant, so to get the same number back at the end, you want to multiply by 2 to cancel the 1/2:

det(C) = 2*(1/2)*det(C) = 2*det(C with its first row multiplied by 1/2) = 2*det(A) = 2*(-122) = -244
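
Put differently (a sketch, again with numpy, not from the original reply): the elementary matrix ##E_1## for ##\tfrac{1}{2}R_1 \to R_1## has determinant 1/2, so ##\det(E_1 C) = \tfrac{1}{2}\det(C)##, and that 1/2 is exactly the factor that has to be cancelled:

```python
import numpy as np

C = np.array([[ 2, 0, -6,  8],
              [ 3, 1,  0,  3],
              [-5, 1,  7, -8],
              [ 0, 0,  5,  1]], dtype=float)

# Elementary matrix for the operation (1/2)R1 -> R1.
E1 = np.diag([0.5, 1.0, 1.0, 1.0])

print(np.linalg.det(E1))      # 0.5
print(np.linalg.det(E1 @ C))  # ~ -122.0 = det(E1) * det(C)
```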
 
When finding the determinant, it is better NOT to multiply or divide a row by anything. Use only the row operations of "swap two rows" and "add a multiple of one row to another". Adding a multiple of one row to another does not change the determinant at all, and swapping two rows only changes its sign.

Here you have
\left|\begin{array}{cccc}2 & 0 & -6 & 8 \\ 3 & 1 & 0 & 3 \\ -5 & 1 & 7 & -8\\ 0 & 0 & 5 & 1\end{array}\right|
Subtract 3/2 times the first row from the second (that is, add -3/2 times the first row) and add 5/2 times the first row to the third to get
\left|\begin{array}{cccc}2 & 0 & -6 & 8 \\ 0 & 1 & 9 & -9 \\ 0 & 1 & -8 & 12\\ 0 & 0 & 5 & 1\end{array}\right|
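
Finishing that reduction (a sketch using Python's exact Fraction arithmetic; this code is an illustration, not part of the original reply): using only "add a multiple of one row to another" operations, the determinant never changes, so once the matrix is upper triangular the determinant is just the product of the diagonal:

```python
from fractions import Fraction

# Matrix C, with exact rational entries.
M = [[Fraction(x) for x in row] for row in
     [[ 2, 0, -6,  8],
      [ 3, 1,  0,  3],
      [-5, 1,  7, -8],
      [ 0, 0,  5,  1]]]

n = len(M)
det = Fraction(1)
for col in range(n):
    # (A row swap would be needed here if the pivot were 0; it would flip the sign.)
    pivot = M[col][col]
    for r in range(col + 1, n):
        factor = M[r][col] / pivot
        # "Add a multiple of one row to another" -- leaves the determinant unchanged.
        M[r] = [M[r][j] - factor * M[col][j] for j in range(n)]
    det *= pivot  # determinant of a triangular matrix = product of its diagonal

print(det)  # -244
```

For this matrix no row swaps are needed, so there are no sign flips, and the product of the pivots 2, 1, -17, 122/17 comes out to -244 directly.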
 
HallsofIvy said:
When finding the determinant, it is better NOT to multiply or divide a row by anything. Use only the row operations of "swap two rows" and "add a multiple of one row to another".

Thanks, this works out just great.
 