FrogPad

Find: [tex] R(A^T),\; N(A),\; R(A),\; N(A^T) [/tex]

I'm having trouble finding [tex]N(A^T)[/tex]. Here is how I'm doing it.

[tex]A = \left[ \begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 \\
1 & 1 & 2 & 2
\end{array} \right]
[/tex]

thus:

[tex]
A^T = \left[ \begin{array}{cccc}
1 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 \\
0 & 1 & 1 & 2 \\
0 & 1 & 1 & 2
\end{array} \right]
[/tex]

so...

[tex]
rref(A^T) = \left[ \begin{array}{cccc}
1 & 0 & 0 & 1 \\
0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0
\end{array} \right]
[/tex]
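The row reduction can be double-checked in code. The sketch below (pure Python with exact `Fraction` arithmetic; not part of the original problem) implements Gauss-Jordan elimination and reproduces rref(A^T):

```python
from fractions import Fraction

def rref(M):
    """Gauss-Jordan elimination to reduced row echelon form."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column
        pivot = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[pivot_row], M[pivot] = M[pivot], M[pivot_row]
        # Scale the pivot row so the pivot entry becomes 1
        M[pivot_row] = [x / M[pivot_row][col] for x in M[pivot_row]]
        # Eliminate this column from every other row
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[pivot_row])]
        pivot_row += 1
    # Entries happen to be integers for this matrix
    return [[int(x) for x in row] for row in M]

# A^T as written above
AT = [
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [0, 1, 1, 2],
    [0, 1, 1, 2],
]

for row in rref(AT):
    print(row)
# [1, 0, 0, 1]
# [0, 1, 0, 1]
# [0, 0, 1, 1]
# [0, 0, 0, 0]
```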

then, taking [tex]x_4 = \alpha[/tex] as the free variable, we are left with...

[tex]
\begin{array}{c}
x_1 + \alpha = 0 \\
x_2 + \alpha = 0 \\
x_3 + \alpha = 0 \\
x_4 = \alpha
\end{array}
[/tex]

which gives:

[tex]
\alpha \left[ \begin{array}{c}
-1 \\
-1 \\
-1 \\
1
\end{array} \right]
[/tex]

The book gives:

[tex]
\left[ \begin{array}{c}
1 \\
1 \\
1 \\
-1
\end{array} \right]
[/tex]

as a basis for [tex]N(A^T)[/tex].

Is this the same? And why?

I mean, [tex] \alpha [/tex] can be anything, so if [tex] \alpha = -1 [/tex] then I get exactly the same vector as the book. Spanning the set with either "my" vector or the book's accomplishes the same thing. It's just confusing to me why the book wouldn't follow the algorithm to get the answer. Thanks in advance.
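For what it's worth, both vectors can be checked directly in code. This quick sketch (plain Python; the names `mine` and `book` are just labels I've chosen) confirms that [tex]A^T v = 0[/tex] holds for both, and that they differ only by the scalar [tex]\alpha = -1[/tex], so they span the same one-dimensional subspace:

```python
# The matrix A from the problem statement
A = [
    [1, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 2, 2],
]

# Transpose of A
AT = [list(row) for row in zip(*A)]

def matvec(M, v):
    """Multiply matrix M by column vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

mine = [-1, -1, -1, 1]   # the vector derived above
book = [1, 1, 1, -1]     # the vector the book gives

print(matvec(AT, mine))  # [0, 0, 0, 0] -> mine is in N(A^T)
print(matvec(AT, book))  # [0, 0, 0, 0] -> so is the book's vector
print(book == [-x for x in mine])  # True: same line, alpha = -1
```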