# Proof concerning the Four Fundamental Spaces

Hello all,

I am currently working on the four fundamental spaces of a matrix. I have a question about the orthogonality of the
1. row space to the null space
2. column space to the left null space
------------------------------------------------

In G. Strang's book there is a nice picture of the matrices that clarifies the orthogonality. This is what I mean: with the help of these expressions, the orthogonality is then argued:

$$Ax = \begin{pmatrix}\text{row } 1 \\ \vdots \\ \text{row } m\end{pmatrix}\begin{pmatrix} \vdots \\ x \\ \vdots \end{pmatrix} = 0$$
$$A^Ty = \begin{pmatrix}\text{transposed column 1 of } A \\ \vdots \\ \text{transposed column } n \text{ of } A\end{pmatrix}\begin{pmatrix} \vdots \\ y \\ \vdots \end{pmatrix} = 0$$
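As a quick numerical sanity check of these two identities, here is a minimal NumPy sketch (my own toy example, not from the book): a rank-1 matrix whose null space and left null space vectors can be written down by hand.

```python
import numpy as np

# Toy 2x3 example: row 2 = 2 * (row 1), so rank(A) = 1.
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])

x = np.array([-2., 1., 0.])   # in N(A):   1*(-2) + 2*1 + 3*0 = 0
y = np.array([-2., 1.])       # in N(A^T): -2*(row 1) + (row 2) = 0

# Each row of A dotted with x is zero, i.e. Ax = 0:
print(A @ x)      # [0. 0.]
# Each column of A (= row of A^T) dotted with y is zero, i.e. A^T y = 0:
print(A.T @ y)    # [0. 0. 0.]
```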

------------------------------------------------

There is also a matrix proof, and that is what I am most interested in. You can read my thoughts about it in the explanations below.

I have illustrated the whole thing in a picture. Using this picture, I try to carry out the matrix proof.

1. Proof of the orthogonality of the row space to the null space:
• The row space vectors are combinations ##A^Ty## of the rows; therefore the dot product of ##A^Ty## with any ##x## in the null space must be zero:

$$\langle\,A^Ty,x\rangle = x^T(A^Ty) = (Ax)^Ty = 0^Ty = 0$$

2. Proof of the orthogonality of the column space to the left null space:
• The column space vectors are combinations ##Ax## of the columns; therefore the dot product of ##Ax## with any ##y## in the left null space must be zero:

$$\langle\, Ax, y\rangle = (Ax)^Ty = x^T(A^Ty) = x^T0 = 0$$

I think this is correct so far, but I am not quite sure. Therefore I would be very happy about a confirmation or a counterargument :)
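The two proofs can also be checked numerically. A minimal NumPy sketch (my own, not from the book): the transpose identity ##x^T(A^Ty) = (Ax)^Ty## holds for arbitrary ##x## and ##y##, and the orthogonality follows as soon as one of the factors vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # arbitrary matrix
x = rng.standard_normal(3)        # arbitrary x
y = rng.standard_normal(4)        # arbitrary y

# The identity used in both proofs holds for ALL x and y:
lhs = x @ (A.T @ y)    # x^T (A^T y)
rhs = (A @ x) @ y      # (Ax)^T y
assert np.isclose(lhs, rhs)

# If A^T y = 0 (y in the left null space), both sides equal x^T 0 = 0,
# so y is orthogonal to every Ax; symmetrically, if Ax = 0,
# x is orthogonal to every A^T y.
```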

Note 1: I would like to be honest enough to say that I have also asked this question HERE. I did not receive an answer there, but I wanted to mention that fact. If someone here gives an answer, I would like to refer to it there. I hope that's ok; I can no longer delete my question in the other forum, but here I expect more adequate help (at least it has always been that way in the past :) )...

Note 2: This is not a homework assignment. I'm just reading the book in parallel and am interested in this question.

Office_Shredder
Staff Emeritus
Gold Member
I think the easiest way to see that the null space is orthogonal to the row space is to just notice that if ##Ax = 0##, then the first entry of ##Ax## is the dot product of ##x## with the first row of ##A##, and it must be zero. Similarly for the other rows of ##A##.

Hello, and thank you for your answer. Yes, it is similar to the argument in Strang's book (see the matrix part between the dashed lines in my first post). ##Ax = 0## is almost the definition from which this follows. But how do you argue for the left null space and the column space?

I am especially interested in the proof with the help of the dot product.

Office_Shredder
What is your definition of left null space?

It is ##y \in N(A^T)##, while ##Ax## is in the column space of the matrix ##A##. The defining condition for the left null space is ##A^Ty = 0##.

fresh_42
Mentor
The picture looks like a graphical formulation of the fact that every short exact sequence in ##\operatorname{Vec}## splits. In other words, we can find a basis of the kernel (or the image) and extend it to a basis of the entire space. Matrices require a choice of bases anyway, so why not choose one that makes the matrices especially easy?

Office_Shredder
Peter, ##A^Ty = 0## looks a lot like the ##Ax = 0## formula we used to show the null space is orthogonal to the row space. What are the rows of ##A^T##?

@Office_Shredder, the rows of ##A^T## are the columns of ##A##, I guess. With this you probably want to point out that ##A^Ty = 0## behaves almost like ##Ax = 0## (which we know expresses orthogonality), only "rotated"?
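That fact is also easy to confirm numerically; a tiny check on a toy example of my own:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# Row i of A^T is exactly column i of A.
for i in range(A.shape[1]):
    assert np.array_equal(A.T[i], A[:, i])
```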

@fresh_42, the picture more or less summarizes the "action" of a matrix. This is in my opinion a nice way to visualize it (Strang's idea :) )
-----
What I want to point out:

This ##Ax = \begin{pmatrix}\text{row } 1 \\ \vdots \\ \text{row } m\end{pmatrix}\begin{pmatrix} \vdots \\ x \\ \vdots \end{pmatrix} = 0## is clear to me. I understand that a zero dot product means the vectors are orthogonal.

This ##A^Ty = \begin{pmatrix}\text{transposed column 1 of } A \\ \vdots \\ \text{transposed column } n \text{ of } A\end{pmatrix}\begin{pmatrix} \vdots \\ y \\ \vdots \end{pmatrix} = 0## is also understandable.

BUT:

I am interested in whether this is right:
Let ##y \in N(A^T)## and let ##Ax## be in the column space of the matrix ##A##. I want to prove that every such ##y## is orthogonal to ##Ax##. Then ##\langle\, Ax, y\rangle = (Ax)^Ty = x^T(A^Ty) = x^T0 = 0##

Office_Shredder
I think the proof at the end of your post is correct.

@Office_Shredder, thanks a lot!

I have one last question:

I have seen another proof (which does not come from me but is also interesting :) ) that derives this fact by starting from the statement ##A^Ty = 0##. I want to quote the steps used to derive the orthogonality ##y^T Ax = 0##, to show what is unclear to me...

Step 1: ##A^Ty = 0##
Step 2: ##A^Ty = 0|x## multiply equation from left
Step 3: ##x^TA^Ty = 0|^T## transpose equation
Step 4: ##y^TAx = 0##

In this proof, the step from 2 to 3 is unclear to me. What does "multiply equation from left" mean? Personally I like my proof more than this one, but in meaning this proof is the same as mine; only the derivation is different.
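For what it's worth, the four steps can be traced numerically. In this sketch (my own toy example) ##y## is taken as the cross product of the two columns of a ##3\times 2## matrix, which places it in the left null space:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))
x = rng.standard_normal(2)
# The cross product of the two columns is orthogonal to both columns,
# so it lies in the left null space N(A^T).
y = np.cross(A[:, 0], A[:, 1])

assert np.allclose(A.T @ y, 0)   # Step 1: A^T y = 0
step3 = x @ (A.T @ y)            # Step 3: x^T A^T y = 0
step4 = y @ (A @ x)              # Step 4: y^T A x = 0 (transpose of step 3)
assert np.isclose(step3, 0)
assert np.isclose(step4, 0)
```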

Office_Shredder
I think it's saying to multiply both sides on the left by ##x^T##, and then obviously ##x^T 0 = 0##, so I think they don't even write the ##x^T 0## on the right-hand side.

Thanks for your answer! I also see it that way; written out correctly, step 2 should then read as follows:
Step 2: ##A^Ty = 0\quad|\cdot x^T##, multiply by ##x^T## from the left, (##x^TA^Ty = x^T0 = 0##)

fresh_42
Thanks for your answer! I also see it that way; written out correctly, step 2 should then read as follows:
Step 2: ##A^Ty = 0\quad|\cdot x^T##, multiply by ##x^T## from the left, (##x^TA^Ty = x^T0 = 0##)
If you are so picky, then you must write: ##A^Ty = 0\quad| x^T\cdot ##

mathwonk
Homework Helper
2020 Award

If A:V-->W is a linear map from one vector space V to another W, and f:W-->k is a linear "functional" on W, i.e. a linear map to the scalars, then their composition fA = A*(f):V-->k is a linear functional on V. Thus a linear map A from V to W yields by composition a linear map A* in the other direction, i.e. from functionals on W to functionals on V.

The obvious subspaces associated to A are its kernel in V and its image in W. Then we can also ask for the kernel and image of A*. As to the kernel, i.e. what functionals f map to zero under A*, we ask when is fA = 0? This happens when f vanishes on the image of A, so the kernel of A* is orthogonal to the image of A.

As to the image of A*, we ask what functionals g on V have the form g = fA for some f on W? I.e. which functionals on V factor through A? This happens exactly when g vanishes on the kernel of A, (note at least if Ax=0, then certainly gx = fAx = 0). Thus the image of A* is orthogonal to the kernel of A.

The connection with your version is that the matrix of A* is the transpose of the matrix of A. Note e.g. that the ith row of A defines the scalar-valued map f_i A on V, which equals the composition of A followed by the scalar-valued map f_i on W, which is projection on the ith "axis" in W defined by the given basis of W. Thus the rows of A span the image of A*, just as the columns of A span the image of A.

This may seem lengthy when written out, but it is all summarized in the diagram fA:V-->W-->k, preferably drawn as a triangle with sides A:V-->W, f:W-->k, fA:V-->k. Then you just look at it and ask yourself, given A, for which f is fA = 0, and which maps V-->k can occur as fA for some f? The answers almost force themselves on you.
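The dictionary above ("the matrix of A* is the transpose of the matrix of A") can also be checked in coordinates: representing a functional f on W by a vector, composition with A corresponds to multiplication by A^T. A minimal NumPy sketch (arbitrary made-up data, just illustrating the identity):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))   # A : V = R^4 -> W = R^3
f = rng.standard_normal(3)        # functional f on W:  f(w) = f . w
v = rng.standard_normal(4)

# (A* f)(v) = f(Av); in coordinates, A* f is the vector A^T f.
lhs = f @ (A @ v)       # f(Av)
rhs = (A.T @ f) @ v     # (A^T f)(v)
assert np.isclose(lhs, rhs)
```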
