# Kernels and Images of a Matrix

Consider a matrix A, and let B = rref(A).

Is ker(A) necessarily equal to ker(B), and is im(A) necessarily equal to im(B)?

I want to say that the answer to both questions is yes, because A and B are essentially the same matrix: there is a finite sequence of elementary row operations that changes A into B, and vice versa. So if they are the same matrix, they necessarily have the same kernel and image.

Is my reasoning correct?


Hurkyl:
Nope; they aren't the same matrix: they differ by the application of elementary operations.

On the other hand, you do know that there exists an invertible matrix E such that B = EA...
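To make that hint concrete, here is a small sketch of how such an E can be found. The tool (SymPy) and the particular matrix are my own choices, not from the thread: row-reducing the augmented matrix [A | I] applies the same row operations to I, and the right-hand block that results is an invertible E with EA = rref(A).

```python
from sympy import Matrix, eye

# A singular 3x3 matrix (rank 2), chosen purely for illustration.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

# Row-reduce [A | I]. The right block records the row operations
# performed, i.e. an invertible E with E*A = rref(A).
aug, _ = A.row_join(eye(3)).rref()
B, E = aug[:, :3], aug[:, 3:]

assert E * A == B           # B = EA
assert E.det() != 0         # E is invertible
assert B == A.rref()[0]     # and B is exactly rref(A)
```

Since E is invertible, Ax = 0 exactly when (EA)x = 0, so ker(B) = ker(A); but the columns of B = EA are E applied to A's columns, which can move the image.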

So, I thought about it, and I decided that the two images will NOT be equal, but the two kernels will be equal.

Reasoning: the image of a matrix is the span of its column vectors. Since the two matrices are not equal, their column vectors, and consequently their images, will not necessarily be equal.

And I'm not really sure why the two kernels are equal, I just thought that intuitively it should be. Am I right on both counts? Thanks.

No. Different sets of vectors can span the same vector space without being equal.

Anyway, why are we told in elementary linear algebra that we are "allowed" to do elementary row operations? Answering that should answer your questions.

> Consider a matrix A, and let B = rref(A).
>
> Is ker(A) necessarily equal to ker(B), and is im(A) necessarily equal to im(B)?
>
> I want to say that the answer to both questions is yes, because A and B are essentially the same matrix: there is a finite sequence of elementary row operations that changes A into B, and vice versa. So if they are the same matrix, they necessarily have the same kernel and image.
>
> Is my reasoning correct?

EA = B, where E represents a sequence of elementary row operations.

Elementary row operations leave the row space and the null space of A unchanged.
But they don't preserve the column space of A.
You might also try to verify these two facts in general.
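These two claims are easy to experiment with. Here is a minimal check (SymPy, with a matrix I made up for illustration):

```python
from sympy import Matrix

# A rank-1 matrix whose rref has a visibly different column space.
A = Matrix([[1, 2],
            [2, 4]])
B = A.rref()[0]          # B = [[1, 2], [0, 0]]

# Same null space: both are spanned by (-2, 1).
assert A.nullspace() == B.nullspace()

# Same row space: both are spanned by (1, 2).
assert A.rowspace() == B.rowspace()

# Different column spaces: A's columns are multiples of (1, 2),
# B's columns are multiples of (1, 0).
assert A.columnspace() != B.columnspace()
```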

You mentioned A and B being the "same".
Well, I think it'll depend on what you mean by "same".

At the risk of introducing some confusion, here's one way to look at it. There are several important equivalence relations on matrices.
In your example A and B are "left associate" (thanks to E.Nering for the name).
The name is nonstandard (but that's really irrelevant).
What's important is that it's an equivalence relation. (Verification is not difficult.)
So from this point of view, I suppose you could say that A and B are the "same" (they are members of
the "same" equivalence class under the equivalence relation, left associate).

> EA = B, where E represents a sequence of elementary row operations.
>
> Elementary row operations leave the row space and the null space of A unchanged. But they don't preserve the column space of A. You might also try to verify these two facts in general.
The difference between a matrix and its rref is a sequence of elementary row operations. Since those operations leave the null space unchanged, the kernels of the two matrices are the same; and since they don't preserve the column space, the images of A and B are not necessarily the same. These are the results I arrived at in post 3, but someone said they were wrong. Why?

I simply meant that your reasoning about the column space was wrong, or at least wrongly stated.

"These are the results I arrived at in post 3"

OK. I took a look at post 3.
To be honest, I didn't see too much there.

As I said before, it might be a good idea to verify the two critical statements (i.e., the null space remains unchanged, the column space does not).

In order to convince yourself about the null space claim, you might ask yourself: why are we doing elimination in the first place? Well, if I had to answer that, I think I'd say it's because we're trying to simplify a system of linear equations *without changing any of the solutions*.
So the system Ax=0 is reduced to say Ux=0, *and this process is reversible*.
Therefore the null space of A must be the same as the null space of U.
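This reversibility can be seen explicitly: each elementary row operation is left-multiplication by an invertible elementary matrix. A small sketch, with example matrices of my own choosing:

```python
from sympy import Matrix, eye

# The three types of elementary row operations, written as invertible
# matrices acting on the left (illustrative 3x3 examples):
swap  = Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # swap rows 1 and 2
scale = Matrix([[2, 0, 0], [0, 1, 0], [0, 0, 1]])  # multiply row 1 by 2
shear = Matrix([[1, 0, 0], [3, 1, 0], [0, 0, 1]])  # add 3*(row 1) to row 2

for E in (swap, scale, shear):
    # Each operation is invertible, so Ax = 0 and (EA)x = 0 have
    # exactly the same solutions.
    assert E.inv() * E == eye(3)

A = Matrix([[1, 2, 3], [2, 4, 6], [1, 0, 1]])
for E in (swap, scale, shear):
    assert (E * A).nullspace() == A.nullspace()
```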

Thanks a lot for your replies fopc and DeadWolfe. I think I may have come across as a little rude in post 6, but that wasn't my intention - I was just wondering.

I'll go ahead and attempt to verify the statements about column space and null space. Thanks again.

I take it that by kernel and image, you mean nullspace and columnspace?

If so, then the kernel is the same for both matrices, but the image is not.

The nullspace is orthogonal to the rowspace of the matrix. When you perform the rref operation, the resulting matrix still has the same rowspace as the old one, because every step replaces rows with invertible combinations of the existing rows. So whatever nullspace rref(A) has, A has that same nullspace.
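That orthogonality is easy to check directly. A quick sketch (SymPy, example matrix my own):

```python
from sympy import Matrix

# Every null space vector is orthogonal to every row space vector.
A = Matrix([[1, 2, 3],
            [4, 5, 6]])

for n_vec in A.nullspace():        # 3x1 columns spanning ker(A)
    for r_vec in A.rowspace():     # 1x3 rows spanning the row space
        # (1x3) * (3x1) is a 1x1 matrix holding the dot product.
        assert (r_vec * n_vec)[0, 0] == 0
```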

The same does not hold true for the columnspace. Finding the reduced row echelon form does not preserve the columnspace, as was mentioned above.

Actually, getting the rref of a matrix is a method by which a basis for the nullspace, or kernel, can be found, if I remember correctly.
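Indeed. A quick sketch of that method (SymPy, illustrative matrix my own): each free (non-pivot) column of the rref yields one kernel basis vector, obtained by setting that free variable to 1, the others to 0, and back-substituting.

```python
from sympy import Matrix, zeros

A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 1]])
R, pivots = A.rref()     # pivots lists the pivot columns; the rest are free

# SymPy's nullspace returns one basis vector per free column.
basis = A.nullspace()

for v in basis:
    assert A * v == zeros(2, 1)              # each basis vector lies in ker(A)
assert len(basis) == A.cols - len(pivots)    # rank-nullity: dim ker = n - rank
```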