I'm not entirely clear what you are asking. But here are a few (non-mathematician's) comments...
DumpmeAdrenaline said:
\begin{pmatrix}
2 & 4 & 6 \\
3 & 5 & 8 \\
1 & 2 & 3
\end{pmatrix}
To get a matrix to display using LaTeX (which is what is used on this site) don't use ICODE. Use double hash-tags as delimiters to produce this:
##\begin{pmatrix}
2 & 4 & 6 \\
3 & 5 & 8 \\
1 & 2 & 3
\end{pmatrix}##
If you want the matrix to be enclosed in square brackets, use 'bmatrix' rather than 'pmatrix' to get this:
##\begin{bmatrix}
2 & 4 & 6 \\
3 & 5 & 8 \\
1 & 2 & 3
\end{bmatrix}##
The LaTeX guide is here:
https://www.physicsforums.com/help/latexhelp/
Use the preview icon (top right of editing window) to check formatting is correct before posting.
DumpmeAdrenaline said:
Using the row operations, R2<-- R2-3R1 R3<-- R3-R1 we find the row echelon form of the matrix.
It's more usual to use right-pointing arrows. Also, you missed out the first operation: ##\frac{R_1}{2} \rightarrow R_1##.
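If you want to check the arithmetic, here's a quick numerical sketch using Python/NumPy (assuming you have it installed) — it applies the three row operations in order:

```python
import numpy as np

# The original matrix from the thread.
A = np.array([[2.0, 4.0, 6.0],
              [3.0, 5.0, 8.0],
              [1.0, 2.0, 3.0]])

A[0] = A[0] / 2          # R1/2 -> R1
A[1] = A[1] - 3 * A[0]   # R2 - 3*R1 -> R2
A[2] = A[2] - A[0]       # R3 - R1 -> R3

print(A)  # rows become [1, 2, 3], [0, -1, -1], [0, 0, 0]
```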
DumpmeAdrenaline said:
\begin{pmatrix}
1 & 2 & 3 \\
0 & -1 & -1 \\
0 & 0 & 0
\end{pmatrix}
Row echelon form requires that (for rows which do not contain all zeroes) the first non-zero entry is 1 (not -1). So (using ##-R_2 \rightarrow R_2##) the row echelon form is:
##\begin{bmatrix}
1 & 2 & 3 \\
0 & 1 & 1 \\
0 & 0 & 0
\end{bmatrix}##
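As a sanity check (again a NumPy sketch, not part of the hand calculation): the rank of the original matrix equals the number of non-zero rows in its row echelon form, so it should come out as 2:

```python
import numpy as np

A = np.array([[2.0, 4.0, 6.0],
              [3.0, 5.0, 8.0],
              [1.0, 2.0, 3.0]])

# Two non-zero rows in the row echelon form -> rank 2.
rank_A = np.linalg.matrix_rank(A)
print(rank_A)  # 2
```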
DumpmeAdrenaline said:
Based on the definition of row space in the book I am studying from, the row space is a subspace that comprises an infinite collection of linearly independent rows of X.
What you mean is '...the infinite set of linear combinations of the linearly independent rows'.
DumpmeAdrenaline said:
To check if the row vectors [1,2,3] and [0,-1,-1] are linearly independent we write
$$ \beta_{1} [1,2,3]+\beta_{2} [0,-1,-1]=[\beta_{1}, 2\beta_{1}-\beta_{2}, 3\beta_{1}-\beta_{2}]=[0,0,0] $$
where β1 and β2 are scalars that belong to the field of real numbers.
If we consider the above, the only scalars (solution) that yields the 0 row vector are β1=β2=0.
The 2 non-zero rows can immediately be seen to be linearly independent. One is not a scalar multiple of the other.
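If you want to confirm this numerically, a NumPy sketch: ##\beta_1 v_1 + \beta_2 v_2 = 0## has only the trivial solution exactly when stacking the two rows gives a rank-2 matrix:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, -1.0, -1.0])

# Linearly independent iff the stacked 2x3 matrix has rank 2.
rank = np.linalg.matrix_rank(np.vstack([v1, v2]))
print(rank)  # 2 -> independent
```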
DumpmeAdrenaline said:
Therefore, the row vectors are independent. How to determine if the independent row vectors span the subspace and they form the basis vector for that subspace. Is the subspace considered here the infinite collection of 3*1 row vectors?
It helps to think geometrically. In 3D space, [1 2 3] and [0 1 1] are 2 vectors pointing in different directions; they lie in some (2D) plane. Linear combinations of these 2 vectors can produce any vector in this plane. So this 2D plane is a subspace of 3D space; the 2 vectors span this 2D subspace and hence are basis vectors.
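To see the spanning property concretely, here's a NumPy sketch (the vector w and the coefficients 2 and 3 are just an illustrative choice): any linear combination of the 2 basis vectors lies in the plane, and least squares recovers the coefficients exactly:

```python
import numpy as np

# The two basis vectors of the 2D subspace (row space).
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 1.0])

# A vector built from them lies in the plane.
w = 2.0 * v1 + 3.0 * v2           # [2, 7, 9]

# Solve B @ coeffs = w; full column rank, so the solution is exact.
B = np.column_stack([v1, v2])     # 3x2 matrix with v1, v2 as columns
coeffs, *_ = np.linalg.lstsq(B, w, rcond=None)
print(coeffs)  # approximately [2, 3]
```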
DumpmeAdrenaline said:
If they are basis vectors for subspace does this imply if we add a new row vector to the given matrix we can write it in terms of the identified LI vectors or do we have to go through a new LU decomposition?
The 2 vectors are basis vectors (because they span the subspace, as noted above). Note that these 2 basis vectors are not orthogonal and not normalised.
It is not clear what you mean by 'add a new row vector to the given matrix'. If you replace the [0 0 0] row with a row which is linearly independent of the other 2 rows, this new row vector does not lie in the 2D subspace discussed above. In this case all 3 rows can be considered as a set of basis vectors which will span the whole 3D space.
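To illustrate (a NumPy sketch; the replacement row [0, 0, 1] is just one hypothetical choice of a row independent of the other two):

```python
import numpy as np

# Replace the zero row with [0, 0, 1], which does not lie in the
# plane spanned by [1, 2, 3] and [0, 1, 1].
rows = np.array([[1.0, 2.0, 3.0],
                 [0.0, 1.0, 1.0],
                 [0.0, 0.0, 1.0]])

r3 = np.linalg.matrix_rank(rows)
print(r3)  # 3 -> the rows span the whole 3D space
```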