Is simultaneous row operation allowed in reducing a matrix to row echelon form?

  • Thread starter: MathewsMD
  • Tags: Matrix

Summary
Only elementary row operations—such as row swapping, scaling a row, and adding a multiple of one row to another—are allowed when reducing a matrix to row echelon form because they preserve the solution set of the system. Operations like 3R1 + 2R2 can be achieved through a combination of these elementary steps. Simultaneous operations, such as R1 + R2 and R2 + R1, are not permitted as they can lead to different solution sets, particularly in cases where the rows become linearly dependent. However, independent operations that do not affect each other can be performed simultaneously. Ultimately, maintaining the integrity of the solution set is crucial in matrix reduction.
MathewsMD
For any given 2 x 3 matrix,

Why are only elementary steps allowed (i.e. aR1, R1 +/- R2, R1 <--> R2) and not any other operation (e.g. 3R1 + 2R2) when reducing the matrix to row echelon form?

Also, is the operation R1 + R2 and R2 + R1 allowed simultaneously? I realize this would just output the exact same equation, which isn't incredibly useful, but is it allowed?
 
The operations plus and times are closed over linear spaces that are mapped by matrices.
 
MathewsMD said:
For any given 2 x 3 matrix,

Why are only elementary steps allowed (i.e. aR1, R1 +/- R2, R1 <--> R2) and not any other operation (e.g. 3R1 + 2R2) when reducing the matrix to row echelon form?
The way I've usually seen these elementary row operations is like so (http://en.wikipedia.org/wiki/Elementary_matrix#Operations):
Ri <--> Rj : switch two rows
Ri <-- kRi (k a nonzero scalar) : replace a row with a nonzero multiple of itself
Ri <-- Ri + kRj (k a nonzero scalar) : replace a row by itself plus a nonzero multiple of another row

3R1 + 2R2 could be effected by replacing R1 by itself plus 2/3 R2, followed by replacing R1 by 3 times itself.
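This two-step decomposition is easy to check numerically. A minimal NumPy sketch (the matrix A is my own made-up example):

```python
import numpy as np

# Hypothetical 2x3 example matrix; any values work.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Direct (non-elementary) operation: R1 <- 3*R1 + 2*R2
direct = A.copy()
direct[0] = 3 * A[0] + 2 * A[1]

# Same result via two elementary operations:
B = A.copy()
B[0] = B[0] + (2.0 / 3.0) * B[1]   # R1 <- R1 + (2/3) R2
B[0] = 3 * B[0]                    # R1 <- 3 R1

print(np.allclose(direct, B))  # True
```

Algebraically, 3(R1 + (2/3)R2) = 3R1 + 2R2, so the two elementary steps compose to the combined operation.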
MathewsMD said:
Also, is the operation R1 + R2 and R2 + R1 allowed simultaneously?
I don't understand what your notation means. Which row gets changed?
MathewsMD said:
I realize this would just output the exact same equation, which isn't incredibly useful, but is it allowed?
 
If you apply the "row operations", "multiply a row by a constant", "swap two rows", and "add a multiple of one row to another" to the identity matrix, you get an "elementary matrix". Applying that row operation to any matrix, A, is the same as multiplying the corresponding elementary matrix by A. For example, if you "add 4 times the first row to the third row" of the identity matrix you get
\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 4 & 0 & 1\end{bmatrix}
and adding four times the first row to the third row of
\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
gives
\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31}+ 4a_{11} & a_{32}+ 4a_{12} & a_{33}+ 4a_{13} \end{bmatrix}

That is exactly the same as the product
\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 4 & 0 & 1\end{bmatrix}\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}

In other words, those row operations are exactly the same as multiplying by matrices. If a sequence of row operations reduces a matrix A to the identity matrix, then the product of the corresponding elementary matrices is the inverse of A; equivalently, applying that same sequence of row operations to the identity matrix produces the inverse matrix.
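The correspondence between a row operation and left-multiplication by an elementary matrix can be verified directly. A short NumPy sketch using the "add 4 times row 1 to row 3" example above (the test matrix A is a hypothetical example of my own):

```python
import numpy as np

# Elementary matrix: add 4 times row 1 to row 3 of the 3x3 identity.
E = np.eye(3)
E[2, 0] = 4.0

# Hypothetical test matrix.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [2.0, 0.0, 1.0]])

# The row operation applied directly...
rowop = A.copy()
rowop[2] = rowop[2] + 4 * rowop[0]

# ...matches left-multiplication by the elementary matrix.
print(np.allclose(E @ A, rowop))  # True
```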
 
MathewsMD said:
For any given 2 x 3 matrix,

Why are only elementary steps allowed (i.e. aR1, R1 +/- R2, R1 <--> R2) and not any other operation (e.g. 3R1 + 2R2) when reducing the matrix to row echelon form?

Also, is the operation R1 + R2 and R2 + R1 allowed simultaneously? I realize this would just output the exact same equation, which isn't incredibly useful, but is it allowed?

Because these operations are the only ones guaranteed to preserve the solution set of the system. This is obvious for swaps and scaling, but a bit harder to see for adding a multiple of one row to another. It is also hard (at least for me; I can't think of a way of doing it) to show that these are the only operations with that property.
Still, you can get ##3R_1+2R_2## as a combination of elementary operations.
 
MathewsMD said:
Also, is the operation R1 + R2 and R2 + R1 allowed simultaneously? I realize this would just output the exact same equation, which isn't incredibly useful, but is it allowed?
No, the operations
1. R1 <-- R1 + R2
2. R2 <-- R2 + R1
(I assume that this was what you meant)
cannot be performed simultaneously.

This is easily seen for the matrix

\begin{bmatrix}1 & 1 \\ -1 & -1 \end{bmatrix}
which is then transformed into the zero matrix

\begin{bmatrix}0 & 0 \\ 0 & 0 \end{bmatrix}
and these two matrices are clearly not row equivalent: the corresponding homogeneous systems have different solution sets; the latter is satisfied by any pair (x, y), while the first one is not.

So, in principle, row operations must be performed sequentially. But when operations do not depend on rows changed by the operations immediately before them, they can safely be performed simultaneously, for example:

1. R2 <-- R2 + R1
2. R3 <-- R3 + R1

Since operation 1 does not change R1, operations 1 and 2 can be performed simultaneously, unlike in the previous case.
 
