
Matrix operation questions

  1. Sep 29, 2014 #1
    For any given 2 x 3 matrix,

    Why are only elementary steps allowed (i.e. aR1, R1 +/- R2, R1 <--> R2) and not any other operation (e.g. 3R1 + 2R2) when reducing the matrix to row echelon form?

    Also, are the operations R1 + R2 and R2 + R1 allowed simultaneously? I realize this would just output the exact same equation, which isn't incredibly useful, but is it allowed?
     
  3. Sep 29, 2014 #2

    Doug Huffman

    User Avatar
    Gold Member

    The operations plus and times are closed over linear spaces that are mapped by matrices.
     
  4. Sep 29, 2014 #3

    Mark44

    Staff: Mentor

    The way I've usually seen these elementary row operations stated is like so (http://en.wikipedia.org/wiki/Elementary_matrix#Operations):
    Ri <--> Rj: switch two rows
    Ri <-- kRi (k a nonzero scalar): replace a row with a nonzero multiple of itself
    Ri <-- Ri + kRj (k a nonzero scalar): replace a row by itself plus a nonzero multiple of another row

    3R1 + 2R2 could be effected by replacing R1 with itself plus (2/3)R2, followed by replacing R1 with 3 times itself.
    As for R1 + R2: I don't understand what that notation means on its own. Which row gets changed?
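    The two-step decomposition above can be checked numerically. A minimal sketch in Python (the 2x3 matrix is a made-up example, not one from this thread; Fraction is used so the 2/3 stays exact):

    ```python
    from fractions import Fraction

    # Hypothetical 2x3 example matrix, with exact rational arithmetic
    A = [[Fraction(1), Fraction(2), Fraction(3)],
         [Fraction(4), Fraction(5), Fraction(6)]]
    orig_R1, orig_R2 = A[0][:], A[1][:]

    # Step 1: R1 <- R1 + (2/3)*R2  (elementary: add a multiple of another row)
    A[0] = [a + Fraction(2, 3) * b for a, b in zip(A[0], A[1])]
    # Step 2: R1 <- 3*R1           (elementary: scale a row by a nonzero constant)
    A[0] = [3 * a for a in A[0]]

    # Compare with computing 3*R1 + 2*R2 directly on the original rows
    direct = [3 * a + 2 * b for a, b in zip(orig_R1, orig_R2)]
    assert A[0] == direct
    print(A[0])  # [Fraction(11, 1), Fraction(16, 1), Fraction(21, 1)]
    ```

    So the compound operation is reachable, it just takes two elementary steps.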
     
    Last edited: Sep 29, 2014
  5. Sep 29, 2014 #4

    HallsofIvy

    User Avatar
    Staff Emeritus
    Science Advisor

    If you apply the "row operations", "multiply a row by a constant", "swap two rows", and "add a multiple of one row to another" to the identity matrix, you get an "elementary matrix". Applying that row operation to any matrix, A, is the same as multiplying the corresponding elementary matrix by A. For example, if you "add 4 times the first row to the third row" of the identity matrix you get
    [tex]\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 4 & 0 & 1\end{bmatrix}[/tex]
    and adding four times the first row to the second row of
    [tex]\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}[/tex]
    gives
    [tex]\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31}+ 4a_{11} & a_{32}+ 4a_{12} & a_{33}+ 4a_{13} \end{bmatrix}[/tex]

    That is exactly the same as
    [tex]\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 4 & 0 & 1\end{bmatrix}\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}[/tex]

    In other words, those row operations are exactly the same as multiplying by matrices. If a sequence of row operations reduces a matrix A to the identity matrix, then the product of the corresponding elementary matrices is the inverse of A. Equivalently, applying that same sequence of row operations to the identity matrix produces the inverse matrix.
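    The elementary-matrix identity above is easy to verify by direct computation. A sketch in Python (the 3x3 matrix A is an arbitrary numeric example, not taken from the post):

    ```python
    def matmul(X, Y):
        """Naive matrix product of two lists-of-lists."""
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    # Identity matrix with "add 4 times row 1 to row 3" applied to it
    E = [[1, 0, 0],
         [0, 1, 0],
         [4, 0, 1]]

    A = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]

    # Apply the row operation directly to A
    B = [row[:] for row in A]
    B[2] = [b + 4 * a for a, b in zip(A[0], A[2])]

    # Left-multiplying by the elementary matrix gives the same result
    assert matmul(E, A) == B
    print(B[2])  # [11, 16, 21]
    ```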
     
    Last edited: Sep 29, 2014
  6. Sep 29, 2014 #5

    WWGD

    User Avatar
    Science Advisor
    Gold Member

    Because these operations are the only ones that preserve the solution set of the system. This is obvious for swaps and scaling, but a bit harder to see for adding a multiple of one row to another. It is also hard (at least for me; I can't think of a way of doing it) to show that these are the only operations with that property.
    Still, you can get ##3R_1+2R_2## as a combination of elementary operations.
     
  7. Oct 1, 2014 #6

    Erland

    User Avatar
    Science Advisor


    No, the operations

    1. R1 <-- R1 + R2
    2. R2 <-- R2 + R1
    (I assume that this was what you meant)
    cannot be performed simultaneously.

    This is easily seen for the matrix

    [tex]\begin{bmatrix}1 & 1 \\ -1 & -1 \end{bmatrix}[/tex]
    which is then transformed into the zero matrix

    [tex]\begin{bmatrix}0 & 0 \\ 0& 0 \end{bmatrix}[/tex]
    and these two matrices are clearly not row equivalent: the corresponding homogeneous systems have different solution sets, since the latter is satisfied by every pair (x, y) while the former is not.

    So, row operations must in principle be performed sequentially. But in many cases, the operations are independent of the operations immediately before, and then they can be performed simultaneously, for example:

    1. R2 <-- R2 + R1
    2. R3 <-- R3 + R1

    Since operation 1 does not change R1, operations 1 and 2 can be performed simultaneously, unlike in the previous case.
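    The distinction can be seen by simulating both interpretations on the 2x2 example above. A sketch in Python ("simultaneous" here means both updates read the original rows):

    ```python
    # Erland's example matrix
    A = [[1, 1], [-1, -1]]

    # Simultaneous: R1 <- R1 + R2 and R2 <- R2 + R1, both reading ORIGINAL rows
    simultaneous = [[a + b for a, b in zip(A[0], A[1])],
                    [b + a for a, b in zip(A[0], A[1])]]
    assert simultaneous == [[0, 0], [0, 0]]  # zero matrix: not row equivalent to A

    # Sequential: the second operation sees the already-updated R1
    B = [row[:] for row in A]
    B[0] = [a + b for a, b in zip(B[0], B[1])]  # R1 <- R1 + R2, giving [0, 0]
    B[1] = [b + a for a, b in zip(B[0], B[1])]  # R2 <- R2 + R1, giving [-1, -1]
    assert B == [[0, 0], [-1, -1]]              # still row equivalent to A
    ```

    The simultaneous version wipes out the second row's information, while the sequential version keeps a nonzero row spanning the same row space as A.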
     