# Null space of A = null space of A'A?

Hihi

I've been working on this problem for some time: if A is an (m x n) matrix, and A' denotes its transpose, then the null space of A is equal to the null space of A'A. Is this always true? I thought of a proof for the special case where A is given in reduced row echelon form, but I fail to see how the general case can be proved or disproved. My proof runs like this:

1) Suppose A is in reduced row echelon form. Then every column of A either i) is a pivot column, containing a single 1 with all other entries 0, or ii) has its nonzero entries confined to its first i rows, where i is the number of pivot columns to its left.

2) Now take A' and multiply it by A. Then we have i) every nonzero row of A appears as a row of A'A (if column k of A is a pivot column with its 1 in row r, then row k of A'A equals (column k of A)'A, which is row r of A), and ii) every row of A'A is a linear combination of rows of A.

3) Thus if A'A(X) = 0, then A(X) = 0. Similarly, if A(X) = 0, then A'A(X) = 0. Hence the null spaces of A and of A'A are the same.

There is also the weaker general result: the null space of A is a subset of the null space of A'A, because if A(X) = 0, then A'A(X) = A'(0) = 0.
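This direction is easy to spot-check numerically. A minimal sketch using NumPy (the particular matrix and null vector here are just an illustrative construction, not part of the argument):

```python
import numpy as np

# If A x = 0 then A'A x = A'(A x) = A' 0 = 0, so nul(A) is contained
# in nul(A'A).  Quick numerical check on a rank-deficient matrix:
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
A[:, 2] = A[:, 0] + A[:, 1]          # force a dependent third column

x = np.array([1.0, 1.0, -1.0])       # then A x = col1 + col2 - col3 = 0
assert np.allclose(A @ x, 0)         # x lies in the null space of A...
assert np.allclose(A.T @ A @ x, 0)   # ...hence also in the null space of A'A
```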

I think the solution to the problem lies in the column space of A, but I'm not sure exactly how. I will be really grateful to anyone who can share his/her insight on this problem. Thanks. :)

OK, it seems that I've answered the question myself, but I'm still not quite satisfied with the solution.

My proof that null space of A = null space of A'A runs roughly as follows:

1) Suppose we have reduced A to its reduced row echelon form. Call this reduced row echelon form matrix E. Then, every row of A can be expressed as a linear combination of the rows of E.

2) Since the rows of A'A are in fact linear combinations of the rows of A, they are also linear combinations of the rows of E. Further, since the rows of E are independent, they form a basis for the row space of A'A. Thus the reduced row echelon form of A'A is equal to E. Hence if A'A(X) = 0, it must necessarily follow that A(X) = 0.

I sort of think that's the way to prove the result, yet it doesn't seem quite elegant.

I was hoping to demonstrate the result that the null space of A'A is a subset of the null space of A by writing out the equations for A'A(X) = 0, equating each row to zero, and performing row operations on the equations to obtain the result AX = 0. It's pretty difficult for me.

0rthodontist
Well, there is a simpler way. What you are trying to show (the second direction of the subset inclusion) is the same as showing that A^T never maps Ax to 0 when Ax is nonzero.

Use the fact that
$$(\operatorname{col} A)^\perp = \operatorname{nul}(A^T)$$
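This identity can itself be spot-checked numerically; a sketch with NumPy, using an arbitrary random matrix and the SVD as one convenient way to get a null-space basis:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

# Basis for nul(A^T): right-singular vectors of A^T whose singular
# values are (numerically) zero, read off from the SVD.
_, s, Vt = np.linalg.svd(A.T)
rank = int(np.sum(s > 1e-10))
null_AT = Vt[rank:]                  # shape (5 - rank, 5)

# (col A)^perp = nul(A^T): each basis vector v of nul(A^T) is
# orthogonal to every column of A, since v . (j-th column) = (A^T v)_j = 0.
assert np.allclose(null_AT @ A, 0)
```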

Edit: I see a problem with your proof.
2) Since the rows of A'A are in fact linear combinations of the rows of A, they are also linear combinations of the rows of E. Further, since the rows of E are independent, they form a basis for the row space of A'A. Thus the reduced row echelon form of A'A is equal to E. Hence if A'A(X) = 0, it must necessarily follow that A(X) = 0.
The rows of E are not necessarily independent -- for example, what if A is square but not invertible? Then E will have a zero row. Anyway, to be a basis of the row space of A^T A the rows of E have to be more than independent; they have to span a space of the same dimension as the row space of A^T A, which you haven't shown.

Hurkyl
Further, since the rows of E are independent, they form a basis for the row space of A'A.
0rthodontist already pointed out that the rows of E don't have to be independent.

There's another problem -- the rows of E might not even be in the row space of A'A. (At least, you haven't yet given a reason to think they would be)

Your argument is sufficient to show that the row space of A'A is a subspace of the row space of A, but you certainly have not shown that they are equal.

Now... if A'Ax = 0, then what is x'A'Ax? Does that give you any ideas?

hi

Thanks for taking time off to look at my question.

I'll just try to fill in the gaps of my original proof and correct the errors:

1) E is not necessarily independent. Yes, it was a misstatement - I actually meant to say that the non-zero rows of E are independent, unless every row of E is zero.

2) Hence the rows of A'A are linear combinations of the non-zero rows of E. I didn't find the dimension of A'A, because every row of A'A is already a linear combination of the independent vectors in the rows of E, so that would seem to imply that the dimension of the row space of A'A is equal to the dimension of the row space of E.

3) In sum, I tried to deduce from 1) and 2) that the row space of A'A is equal to the row space of E; this would mean that the null spaces of A'A and of A are also the same, or that the reduced row echelon form of A'A is equal to the reduced row echelon form of A.

Also, I showed that the null space of A is a subspace of the null space of A'A in my first post; my second post was done in a bit of a rush. ;)

As to Hurkyl's suggestion, many thanks.
I don't know if I interpreted your hint correctly... but does it rely on the assumption that the elements of the original matrix A, and of the solution vectors X, are all real numbers?
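Here is how I read the hint, assuming real entries throughout (which is exactly the assumption I am asking about):

$$A^T A X = 0 \;\Rightarrow\; X^T A^T A X = 0 \;\Rightarrow\; (AX)^T(AX) = \lVert AX\rVert^2 = 0 \;\Rightarrow\; AX = 0,$$

since a sum of squares of real numbers is zero only if every term is zero.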

thanks once again

0rthodontist
2) Hence the rows of A'A are linear combinations of the non-zero rows of E. I didn't find the dimension of A'A, because every row of A'A is already a linear combination of the independent vectors in the rows of E, so that would seem to imply that the dimension of the row space of A'A is equal to the dimension of the row space of E.
Not necessarily. The rows of the zero matrix are linear combinations of the rows of the identity matrix. The only thing you can say from this is that dim row(A^T A) is less than or equal to dim row E.

Using $$(\operatorname{col} A)^\perp = \operatorname{nul}(A^T)$$, what can you say about whether A^T maps a nonzero vector in col A to 0?

Hi... it's been quite a while since I last posted.

To 0rthodontist...hope I got your hint right. As I've only just started studying orthogonality, I'm not totally familiar with the concept. Here's how I reasoned:

1) Since null space of A' = space of vectors orthogonal to column space
of A, AX must be orthogonal to every column of A, and every vector formed by linearly combining the columns of A.

2) But AX lies in the column space of A, i.e. it is a linear combination of the columns of A.

3) Hence AX must be orthogonal to itself, which is impossible unless AX = 0.

4) Thus X must lie in the null space of A.

QED?
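For what it's worth, the full statement also checks out numerically; a sketch with NumPy (null spaces computed via the SVD, with an arbitrary tolerance and an arbitrary rank-deficient test matrix):

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis for the null space of M, via the SVD."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))
A[:, 2] = A[:, 0] - A[:, 1]          # make A rank-deficient

N1 = null_space(A)
N2 = null_space(A.T @ A)

# The two null spaces coincide: same dimension, and each basis of one
# space is annihilated by the other matrix.
assert N1.shape == N2.shape
assert np.allclose(A @ N2, 0)
assert np.allclose(A.T @ A @ N1, 0)
```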

Anyhow, I looked through the 'proof' I presented in the second post and realised my mistake. Is it the (unwarranted) assumption that all of the non-zero rows of E will occur at least once in A'A? If so, I hope the following steps will correct it:

1) If every non-zero row of E occurs at least once in the rows of A'A, then the row spaces of E and of A'A are spanned by the same set
of independent vectors, and therefore have a common basis.

2) Suppose A is a (m x n) matrix.

3) Call the non-zero rows of E {E1, E2, ..., Ei}, in order of their row number.

4) Let the column numbers of the columns of E in which a single leading 1 occurs be k1, k2, ..., ki, where k1 < k2 < ... < ki.

5) Represent the element of A in the ith row and jth column as a(i)(j).

6) Then the coefficient of E1 in the first row of A'A is equal to

a(1)(1)a(1)(k1) + a(2)(1)a(2)(k1) + ... + a(m)(1)a(m)(k1)

In general, the coefficient of E1 in the rth row of A'A is given by

a(1)(r)a(1)(k1) + a(2)(r)a(2)(k1) + ... + a(m)(r)a(m)(k1).

Consider the (k1)th row of A'A:

The coefficient of E1 is then given by

C1 = [a(1)(k1)]^2 + [a(2)(k1)]^2 + ... + [a(m)(k1)]^2

a(1)(k1), a(2)(k1), ..., a(m)(k1) cannot all be zero, for otherwise all the elements of the (k1)th column of A, and hence of E, would be zero, which contradicts (4).

Thus a sufficient condition for E1 to occur at least once in the rows of A'A is that, k1 being the column number of the first leading '1' in the reduced row echelon form of A, the entries a(1)(k1), a(2)(k1), ..., a(m)(k1) are all real.

7) Repeating this argument for E2, E3, ..., Ei, we conclude that a sufficient condition for the null space of A to be the same as that of A'A is that, given that the leading '1' entries in the reduced row echelon form of A are in columns k1, k2, ..., ki, where k1 < k2 < ... < ki, the entries a(1)(kr), a(2)(kr), ..., a(m)(kr), where 1 <= r <= i, are all real.
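Written out, the key quantity in step 6) is a sum of squares, which is where the assumption of real entries does its work:

$$C_1 = \sum_{j=1}^{m} \big[a(j)(k_1)\big]^2 > 0 \quad \text{whenever the } a(j)(k_1) \text{ are real and not all zero,}$$

and similarly for the coefficients C2, ..., Ci arising from E2, ..., Ei.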

thanks for helping so far...hope there are no errors this time...

0rthodontist
1) Since null space of A' = space of vectors orthogonal to column space
of A, AX must be orthogonal to every column of A, and every vector formed by linearly combining the columns of A.

2) But AX lies in the column space of A, i.e. it is a linear combination of the columns of A.

3) Hence AX must be orthogonal to itself, which is impossible unless AX = 0.

4) Thus X must lie in the null space of A.

QED?
Yes, that's correct. I think you should explicitly state in the first step the assumption that A^T A x = 0.

I'm not sure about your other proof. What do you mean by "If every non-zero row of E occurs at least once in the rows of A'A"?

0rthodontist said:
Yes, that's correct. I think you should explicitly state in the first step the assumption that A^T A x = 0.

I'm not sure about your other proof. What do you mean by "If every non-zero row of E occurs at least once in the rows of A'A"?

i meant it like this:
Since every row of A can be expressed as a linear combination of the non-zero rows of E, and every row of A'A can be expressed as a linear combination of the rows of A, every row of A'A can also be expressed as a linear combination of the non-zero rows of E. By a non-zero row of E, say Ei, 'occurring' at least once in the rows of A'A I mean there is at least one row of A'A where the coefficient of Ei is not zero, after expressing that row as a linear combination of all the non-zero rows of E.
