Row/column operations on matrices and determinants

Summary
Applying row and column operations simultaneously breaks the standard procedure for finding a matrix inverse by elementary transformations: if the row operations correspond to a matrix ##E_1## and the column operations to ##E_2##, with ##E_1AE_2=I##, the product ##E_1E_2## need not equal ##A^{-1}##. The determinant, however, is multiplicative, so ##\operatorname{det}(E_1E_2)=\operatorname{det}A^{-1}## still holds, which is why mixed operations remain legitimate when evaluating determinants. Row operations preserve the kernel but generally not the image; column operations preserve the image but generally not the kernel. For invertible matrices the kernel is trivial and the image is all of ##\mathbb R^n##, so in that case both are trivially preserved. Careful choice of operations is therefore essential when computing an inverse.
Raghav Gupta
Why can't we apply row and column operations simultaneously on a matrix when finding its inverse by elementary transformations, yet we can when evaluating a determinant?
I think the kernel and image get disturbed in a matrix, though I don't know what they actually are.
Why not in the determinant case?
 
Raghav Gupta said:
Why can't we apply row and column operations simultaneously on a matrix when finding its inverse by elementary transformations
You can, but only if you perform the same operations on the right-hand side simultaneously. Suppose you want to find ##B## such that ##A \cdot B = I##. You apply the same row operations to ##A## and to ##I##. After a while (excluding singularities etc.) you end up with ##U \cdot B = R##, where ##U## is upper triangular. Now substitute back until you end up with ##I \cdot B = H##, and you are done: ##B = H##.
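A minimal numpy sketch of this procedure, assuming ##A## is square and invertible. The function name `invert_by_row_ops` and the sample matrix are illustrative choices, and the sketch folds the back-substitution into further row operations (Gauss-Jordan style) rather than doing it separately:

```python
import numpy as np

def invert_by_row_ops(A):
    """Invert A by applying the same row operations to A and to I,
    working on the augmented matrix [A | I]."""
    A = A.astype(float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])            # the augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot entry.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]              # scale the pivot row to 1
        for row in range(n):                   # eliminate the column elsewhere
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                          # [I | A^-1]  ->  A^-1

A = np.array([[2.0, 1.0], [1.0, 1.0]])
print(invert_by_row_ops(A))                    # the inverse [[1, -1], [-1, 2]]
```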
 
But doesn't the kernel or image get disturbed?
 
Raghav Gupta said:
But doesn't the kernel or image get disturbed?

I am not an expert on matrices, so I cannot answer that. What I know is how to find the inverse (as detailed above). Useful facts:
  • The matrix can be inverted if and only if its determinant is different from 0.
  • The determinant of the inverse matrix is the reciprocal of the determinant of the original matrix.
See also http://en.wikipedia.org/wiki/Invertible_matrix.
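A quick numerical illustration of both facts, using numpy with an invertible matrix of my own choosing:

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 1.0]])

print(np.linalg.det(A))                  # 2.0 -- nonzero, so A is invertible
print(np.linalg.det(np.linalg.inv(A)))   # 0.5 -- the reciprocal of det(A)
```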
 
Raghav Gupta said:
Why can't we apply row and column operations simultaneously on a matrix when finding its inverse by elementary transformations, yet we can when evaluating a determinant?
I think the kernel and image get disturbed in a matrix, though I don't know what they actually are.
Why not in the determinant case?

Row operations are equivalent to left multiplication by the corresponding elementary matrices; column operations are equivalent to right multiplication. So, when you perform row operations reducing a (square) matrix ##A## to ##I##, you get a matrix ##E## (the product of the corresponding elementary matrices) such that ##EA=I##. For square matrices that means ##E=A^{-1}##; performing the same row operations on the right-hand side ##I##, you get ##EI=E=A^{-1}##.

Again, you can find the inverse by performing only column operations: you get a matrix ##E## (the product of the corresponding elementary matrices) such that ##AE=I##, which again means that ##E=A^{-1}##. Performing the same column operations on the right-hand side ##I##, you get ##IE=E=A^{-1}##.
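A quick numpy check of this correspondence; the particular matrix and operations are illustrative choices. Each elementary matrix is the identity with the same operation applied to it:

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 1.0]])

# Row operation R1 <-> R2, as the elementary matrix obtained by
# applying that same operation to the identity.
E_row = np.array([[0.0, 1.0], [1.0, 0.0]])
print(E_row @ A)        # [[1, 1], [0, 1]]: same as swapping A's rows directly

# Column operation C2 -> C2 - C1, again as I with that column operation applied.
E_col = np.array([[1.0, -1.0], [0.0, 1.0]])
print(A @ E_col)        # [[0, 1], [1, 0]]: same as performing the column op on A
```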

If you perform both row and column operations, you get two matrices ##E_1## and ##E_2## such that ##E_1AE_2=I##, so on the right-hand side you get ##E_1IE_2=E_1E_2##. For ##E_1E_2## to be the inverse of ##A## you need ##AE_1E_2=I## (or equivalently ##E_1E_2A =I##), but what you have is ##E_1AE_2=I##. Matrix multiplication is not commutative, so there is no reason for the latter identity to imply the former.

"No reason" is not a formal proof that the method does not work, for the formal proof you need a counterexample. You can find it just by playing with applying row and column operations to ##2\times 2## matrices. There are also more scientific method of constructing a counterexample.

The reason that applying both row and column operations is legitimate for determinants is that while ##E_1AE_2=I## does not imply ##E_1E_2=A^{-1}##, it does imply ##\operatorname{det} (E_1E_2)=\operatorname{det} A^{-1}##. Namely, $$1=\operatorname{det}(E_1AE_2) = \operatorname{det} E_1 \operatorname{det} A \operatorname{det}E_2,$$ so $$\operatorname{det}(E_1E_2) = \operatorname{det}E_1\operatorname{det}E_2 = (\operatorname{det}A)^{-1} =\operatorname{det}A^{-1}.$$
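Here is one such ##2\times 2## counterexample, checked with numpy; the specific matrices are my own illustrative choice. The mixed operations reduce ##A## to ##I##, yet ##E_1E_2## differs from ##A^{-1}##, while their determinants agree, just as the computation above shows:

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 1.0]])

E1 = np.array([[0.0, 1.0], [1.0, 0.0]])   # row op: R1 <-> R2
E2 = np.array([[1.0, -1.0], [0.0, 1.0]])  # column op: C2 -> C2 - C1

print(E1 @ A @ E2)                  # the identity: mixed ops reduce A to I
print(E1 @ E2)                      # [[0, 1], [1, -1]]
print(np.linalg.inv(A))             # [[-1, 1], [1, 0]] -- not the same matrix!
print(np.linalg.det(E1 @ E2))       # -1.0 (up to rounding)
print(np.linalg.det(np.linalg.inv(A)))  # -1.0 -- the determinants do agree
```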

Finally, for invertible matrices the image is always all of ##\mathbb R^n## and the kernel is always trivial (i.e. ##\{\mathbf 0\}##); they cannot be "disturbed" by row/column operations and will remain the same.
 
Raghav Gupta said:
But doesn't the kernel or image get disturbed?
If you're talking about a matrix you get by applying a row operation to another matrix, the answer is no. If you start with a matrix ##A## and apply one of the three row operations to it to get ##A_1##, the matrices ##A## and ##A_1## are equivalent. They have exactly the same kernel and image.
Edit: The image can change. See my later post in this thread.

If you're talking about applying column operations, I don't know; I have never needed to apply column operations to reduce a matrix. However, if you swap the columns of a matrix, you are swapping the roles of the variables those columns represent.
 
Mark44 said:
If you're talking about a matrix you get by applying a row operation to another matrix, the answer is no. If you start with a matrix ##A## and apply one of the three row operations to it to get ##A_1##, the matrices ##A## and ##A_1## are equivalent. They have exactly the same kernel and image.

That is not true. Row operations preserve kernel, column operations preserve image (column space).

In the case of invertible matrices, however, the image is always all of ##\mathbb R^n## and the kernel is always ##\{\mathbf 0\}##, so we can say that in this case the image and kernel are "preserved" under row and column operations.
 
To be more specific: row operations preserve the kernel (but generally not the image); column operations preserve the image (but generally not the kernel).
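A symbolic check of both statements, sketched with sympy; the singular matrix and the helper functions `same_kernel`/`same_image` are my own illustrative constructions. They rely on the fact that the rref of a matrix is uniquely determined by its row space, which in turn determines the kernel; the column space of ##M## is the row space of ##M^T##:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])          # singular, so kernel and image are nontrivial

E = eye(3); E[1, 0] = -2         # elementary matrix for R2 -> R2 - 2*R1
F = eye(3); F[0, 1] = -2         # elementary matrix for C2 -> C2 - 2*C1

A_row = E * A                    # row operation = left multiplication
A_col = A * F                    # column operation = right multiplication

def same_kernel(M, N):
    # Same kernel <=> same row space <=> same rref.
    return M.rref()[0] == N.rref()[0]

def same_image(M, N):
    # Same column space <=> same row space of the transpose.
    return M.T.rref()[0] == N.T.rref()[0]

print(same_kernel(A, A_row))   # True:  the row op preserves the kernel
print(same_image(A, A_col))    # True:  the column op preserves the image
print(same_image(A, A_row))    # False: the row op changed the image
print(same_kernel(A, A_col))   # False: the column op changed the kernel
```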
 
Hawkeye18 said:
That is not true. Row operations preserve kernel, column operations preserve image (column space).
Hawkeye18 said:
To be more specific: row operations preserve the kernel (but generally not the image); column operations preserve the image (but generally not the kernel).
I misspoke. Row operations don't necessarily preserve the range (image). A simple example shows this:
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1\end{bmatrix}$$
Using row reduction, we get an equivalent matrix.
$$B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0\end{bmatrix}$$
Although the dimensions of the column spaces of ##A## and ##B## are equal (both 2), they span different subspaces of ##\mathbb R^3##.

The columns of ##A## span a plane in ##\mathbb R^3## that is perpendicular to ##\langle -1, -1, 1\rangle##. The columns of ##B## span a different plane in ##\mathbb R^3##, one perpendicular to the z-axis.
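This can be verified symbolically; a short sympy sketch, computing each plane's normal as the kernel of the transpose:

```python
from sympy import Matrix

A = Matrix([[1, 0], [0, 1], [1, 1]])
B = Matrix([[1, 0], [0, 1], [0, 0]])   # A after the two row operations

print(A.rank(), B.rank())     # 2 2 -- both column spaces are planes
print(A.T.nullspace())        # basis vector (-1, -1, 1): normal to A's column space
print(B.T.nullspace())        # basis vector (0, 0, 1):   normal to B's column space
```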
 
Mark44 said:
I misspoke. Row operations don't necessarily preserve the range (image). A simple example shows this:
$$A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1\end{bmatrix}$$
Using row reduction, we get an equivalent matrix.
$$B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0\end{bmatrix}$$
I see you have applied ##R_3 \to R_3 - R_1##.
But shouldn't there be a 1 in row 3, column 2? More simply, shouldn't ##B_{32}## equal 1?
 
Raghav Gupta said:
I see you have applied ##R_3 \to R_3 - R_1##.
But shouldn't there be a 1 in row 3, column 2? More simply, shouldn't ##B_{32}## equal 1?
No. Here's what I did: ##-R_1 + R_3 \to R_3## and then ##-R_2 + R_3 \to R_3##.

In other words, add ##-R_1## to ##R_3##, and then add ##-R_2## to ##R_3##.
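A one-line check with numpy (the array literal mirrors the matrix ##A## above):

```python
import numpy as np

A = np.array([[1, 0], [0, 1], [1, 1]])
A[2] -= A[0]          # -R1 + R3 --> R3
A[2] -= A[1]          # -R2 + R3 --> R3
print(A)              # [[1 0] [0 1] [0 0]] -- the matrix B above
```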
 
Mark44 said:
No. Here's what I did: ##-R_1 + R_3 \to R_3## and then ##-R_2 + R_3 \to R_3##.

In other words, add ##-R_1## to ##R_3##, and then add ##-R_2## to ##R_3##.

Got it. Thanks to all of you: Hawkeye18, Mark44, and Svein.
 
