Row/column operation on matrices and determinants

  • #1
Why can we not apply row and column operations simultaneously on a matrix when finding its inverse by elementary transformations, yet we can when evaluating a determinant?
I think the kernel and image get disturbed in a matrix, though I don't know what that actually means.
Why does this not happen in the determinant case?
 

Answers and Replies

  • #2
Svein
Science Advisor
Insights Author
Why can we not apply row and column operations simultaneously on a matrix when finding its inverse by elementary transformations
You can - but only if you do the same operations on the right-hand side simultaneously. Assume you want to find [itex]B[/itex] such that [itex]A \cdot B = I[/itex]. You do the same row operations on [itex]A[/itex] and [itex]I[/itex]. After a while (excluding singular cases etc.) you end up with [itex]U \cdot B = R[/itex], where [itex]U[/itex] is upper triangular. Now back-substitute until you end up with [itex]I \cdot B = H[/itex], and you are done: [itex]B = H[/itex] is the inverse.
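For concreteness, here is a minimal NumPy sketch of this idea, with the caveat that it eliminates above and below each pivot in a single pass (full Gauss-Jordan) rather than reducing to upper triangular form and then back-substituting as described above; the test matrix is an arbitrary choice.

```python
import numpy as np

def invert_by_row_ops(A):
    """Invert A by applying the same row operations to A and to I,
    i.e. Gauss-Jordan elimination on the augmented matrix [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot (a row operation).
        pivot = np.argmax(np.abs(M[col:, col])) + col
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # scale the pivot row to get a leading 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]   # clear the rest of the column
    return M[:, n:]                              # right half is now A^{-1}

A = np.array([[2.0, 1.0], [1.0, 1.0]])
print(invert_by_row_ops(A))   # [[ 1. -1.] [-1.  2.]]
print(np.linalg.inv(A))       # same result
```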
 
  • #3
But don't the kernel or image get disturbed?
 
  • #4
Svein
Science Advisor
Insights Author
But don't the kernel or image get disturbed?
I am not an expert on matrices, so I cannot answer that. What I do know is how to find the inverse (as detailed above). Useful facts:
  • A matrix can be inverted if and only if its determinant is different from 0.
  • The determinant of the inverse matrix is the inverse of the determinant of the original matrix.
See also http://en.wikipedia.org/wiki/Invertible_matrix.
 
  • #5
Hawkeye18
Why can we not apply row and column operations simultaneously on a matrix when finding its inverse by elementary transformations, yet we can when evaluating a determinant?
I think the kernel and image get disturbed in a matrix, though I don't know what that actually means.
Why does this not happen in the determinant case?
Row operations are equivalent to left multiplication by the corresponding elementary matrices; column operations are equivalent to right multiplication. So, when you perform row operations reducing a (square) matrix ##A## to ##I##, you get a matrix ##E## (the product of the corresponding elementary matrices) such that ##EA=I##. For square matrices that means ##E=A^{-1}##; performing the same row operations on the right-hand side ##I##, you get ##EI=E=A^{-1}##.

Similarly, you can find the inverse by performing only column operations: you get a matrix ##E## (the product of the corresponding elementary matrices) such that ##AE=I##, which again means ##E=A^{-1}##. Performing the same column operations on the right-hand side ##I##, you get ##IE=E=A^{-1}##.
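A quick NumPy illustration of both equivalences; the matrix and the particular operations below are arbitrary choices, picked only to make the check concrete.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])

# Row operation R2 -> R2 - 2*R1 as LEFT multiplication by an elementary matrix.
E_row = np.array([[1.0, 0.0],
                  [-2.0, 1.0]])
A_row = A.copy()
A_row[1] -= 2 * A_row[0]                 # the operation performed directly
print(np.allclose(E_row @ A, A_row))     # True: E @ A does the same thing

# Column operation C2 -> C2 - 0.5*C1 as RIGHT multiplication.
E_col = np.array([[1.0, -0.5],
                  [0.0, 1.0]])
A_col = A.copy()
A_col[:, 1] -= 0.5 * A_col[:, 0]
print(np.allclose(A @ E_col, A_col))     # True: A @ E does the same thing
```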

If you perform both row and column operations, you get two matrices ##E_1## and ##E_2## such that ##E_1AE_2=I##, so on the right-hand side you get ##E_1IE_2=E_1E_2##. For ##E_1E_2## to be the inverse of ##A## you need ##AE_1E_2=I## (or equivalently ##E_1E_2A =I##), but what you have is ##E_1AE_2=I##. Matrix multiplication is not commutative, so there is no reason for the latter identity to imply the former.

"No reason" is not a formal proof that the method does not work, for the formal proof you need a counterexample. You can find it just by playing with applying row and column operations to ##2\times 2## matrices. There are also more scientific method of constructing a counterexample.

The reason that applying both row and column operations works for determinants is that while ##E_1AE_2=I## does not imply ##E_1E_2=A^{-1}##, it does imply ##\operatorname{det} (E_1E_2)=\operatorname{det} A^{-1}##. Namely, $$1=\operatorname{det}(E_1AE_2) = \operatorname{det} E_1 \operatorname{det} A \operatorname{det}E_2,$$ so $$\operatorname{det}(E_1E_2) = \operatorname{det}E_1\operatorname{det}E_2= (\operatorname{det}A)^{-1} =\operatorname{det}A^{-1}.$$
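Here is one such counterexample worked out in NumPy, a sketch under arbitrary choices of ##A## and of the operations (any similar mix will do); it also verifies the determinant identity above.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])                 # det A = 2

# Row operation R2 -> R2 - 2*R1 (left multiplication):
E1 = np.array([[1.0, 0.0],
               [-2.0, 1.0]])
# Column operations C1 -> C1/2, then C2 -> C2 - C1 (right multiplication):
E2 = np.array([[0.5, 0.0],
               [0.0, 1.0]]) @ np.array([[1.0, -1.0],
                                        [0.0, 1.0]])

print(np.allclose(E1 @ A @ E2, np.eye(2)))           # True: the mix reduces A to I
print(np.allclose(E1 @ E2, np.linalg.inv(A)))        # False: the right-hand side is NOT A^{-1}
print(np.linalg.det(E1 @ E2), 1 / np.linalg.det(A))  # 0.5 and 0.5: the determinants agree
```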

Finally, for invertible matrices the image is always all of ##\mathbb R^n## and the kernel is always trivial (i.e. ##\{\mathbf 0\}##); they cannot be "disturbed" by row/column operations and will remain the same.
 
  • #6
Mark44
But don't the kernel or image get disturbed?
If you're talking about a matrix you get by applying a row operation to another matrix, the answer is no. If you start with a matrix ##A## and apply one of the three row operations to it to get ##A_1##, the matrices ##A## and ##A_1## are equivalent. They have exactly the same kernel and image.
Edit: The image can change. See my later post in this thread.

If you're talking about applying column operations, I don't know -- I have never needed to apply column operations to reduce a matrix. However, if you swap the columns of a matrix, you are swapping the roles of the variables these columns represent.
 
Last edited:
  • #7
Hawkeye18
If you're talking about a matrix you get by applying a row operation to another matrix, the answer is no. If you start with a matrix ##A## and apply one of the three row operations to it to get ##A_1##, the matrices ##A## and ##A_1## are equivalent. They have exactly the same kernel and image.
That is not true. Row operations preserve the kernel; column operations preserve the image (column space).

In the case of invertible matrices, however, the image is always all of ##\mathbb R^n## and the kernel is always ##\{\mathbf 0\}##, so we can say that in this case the image and kernel are "preserved" under row and column operations.
 
  • #8
Hawkeye18
A bit of clarification: row operations preserve the kernel (but generally not the image), while column operations preserve the image (but generally not the kernel).
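A quick NumPy check of both statements, using an arbitrarily chosen rank-1 matrix so that the kernel and image are proper subspaces and the changes are visible.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1: kernel = span{(-2, 1)}, image = span{(1, 2)}

B = A.copy(); B[1] -= 2 * B[0]        # row operation R2 -> R2 - 2*R1
C = A.copy(); C[:, 1] -= 2 * C[:, 0]  # column operation C2 -> C2 - 2*C1

k = np.array([-2.0, 1.0])             # a kernel vector of A
print(np.allclose(B @ k, 0))          # True:  the row operation preserves the kernel
print(np.allclose(C @ k, 0))          # False: the column operation does not

def in_column_space(M, v):
    # v lies in col(M) iff appending it does not raise the rank
    return np.linalg.matrix_rank(np.column_stack([M, v])) == np.linalg.matrix_rank(M)

v = np.array([1.0, 2.0])              # a vector in the image of A
print(in_column_space(C, v))          # True:  the column operation preserves the image
print(in_column_space(B, v))          # False: the row operation does not
```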
 
  • #9
Mark44
That is not true. Row operations preserve the kernel; column operations preserve the image (column space).
A bit of clarification: row operations preserve the kernel (but generally not the image), while column operations preserve the image (but generally not the kernel).
I misspoke. Row operations don't necessarily preserve the range, contrary to what I said. A simple example shows this:
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1\end{bmatrix}$$
Using row reduction, we get a row-equivalent matrix:
$$B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0\end{bmatrix}$$
Although the dimensions of the column spaces of A and B are equal (2), they span different subspaces of ##\mathbb R^3##.

The columns of A span a plane in ##\mathbb R^3## that is perpendicular to ##\langle -1, -1, 1\rangle##. The columns of B span a different plane in ##\mathbb R^3##, the one perpendicular to the z-axis.
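A small NumPy check of this example: the vector ##(1, 1, 2)## (the sum of A's columns) lies in the column space of A but not in that of B, and the two normal vectors come out as stated.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

def in_column_space(M, v):
    # v is in col(M) iff appending it does not increase the rank
    return np.linalg.matrix_rank(np.column_stack([M, v])) == np.linalg.matrix_rank(M)

v = A @ np.array([1.0, 1.0])          # (1, 1, 2), a combination of A's columns
print(in_column_space(A, v))          # True
print(in_column_space(B, v))          # False: B's column space is the xy-plane

# The normals mentioned above:
print(np.cross(A[:, 0], A[:, 1]))     # [-1. -1.  1.]
print(np.cross(B[:, 0], B[:, 1]))     # [0. 0. 1.]  (the z-axis)
```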
 
  • #10
I misspoke. Row operations don't necessarily preserve the range, contrary to what I said. A simple example shows this:
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1\end{bmatrix}$$
Using row reduction, we get a row-equivalent matrix:
$$B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0\end{bmatrix}$$
I see you have applied ##R_3 \to R_3 - R_1##.
But shouldn't there be a 1 in row 3, column 2? More simply, shouldn't ##B_{32}## equal 1?
 
  • #11
Mark44
I see you have applied ##R_3 \to R_3 - R_1##.
But shouldn't there be a 1 in row 3, column 2? More simply, shouldn't ##B_{32}## equal 1?
No. Here's what I did: ##-R_1 + R_3 \to R_3## and then ##-R_2 + R_3 \to R_3##.

In other words, add ##-R_1## to ##R_3##, and then add ##-R_2## to ##R_3##.
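Replaying the two steps numerically makes the point: after the first operation the (3, 2) entry is indeed 1, exactly as expected above, and it is the second operation that clears it. A minimal sketch:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

A[2] -= A[0]   # -R1 + R3 -> R3: the third row is now (0, 1), so entry (3,2) is 1
print(A[2])    # [0. 1.]
A[2] -= A[1]   # -R2 + R3 -> R3: this second step clears that 1
print(A[2])    # [0. 0.]
```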
 
  • #12
No. Here's what I did: ##-R_1 + R_3 \to R_3## and then ##-R_2 + R_3 \to R_3##.

In other words, add ##-R_1## to ##R_3##, and then add ##-R_2## to ##R_3##.
Got it. Thanks to all of you - HawkEye 18, Mark44 and Svein.
 
