
Linear Algebra: Augmented matrix echelon form y-space?

  1. Feb 5, 2014 #1
    I'm doing my homework but I'm lost on one thing. Let's say that we have a system of equations like so:

    2x1+3x2=y1
    4x1+x2=y2

    Instead of setting it equal to constants, our teacher sets it equal to variables. He says that to be able to compute this, the augmented matrix should look like:

    2 3|1 0
    4 1|0 1

    Then of course we find the echelon form. Does this have a name, either for the fact that he sets the system equal to variables or for the way the augmented matrix is set up? I'm trying to learn more about it, but my professor almost never follows the regular curriculum. Thank you in advance.
     
  3. Feb 5, 2014 #2
    Gaussian elimination? Or row reduction?
     
  4. Feb 5, 2014 #3

    Ray Vickson

    Science Advisor
    Homework Helper

    Basically, he is doing a form of LU-decomposition. If you start with ##B = [A|I]## (##A## = your original matrix and ##I## = the identity matrix), then after some row-reduction steps you end up with
    ##B_{new} = [U|L]##. Here, ##U## is the usual row-reduced matrix, and ##L## is what happens to the matrix ##I## that you used to augment ##A##. The matrix ##L## will be lower-triangular (unless you performed row interchanges); it is a fact that ##L \cdot A = U##; in other words, the matrix ##L## encapsulates the row operations you used to get from ##A## to ##U##. Multiplying by ##L## on the left produces exactly the same result as row reduction.

    Why might this be useful? Well, suppose that for some reason you needed to solve several (separate) equations of the form ##Ax = b_1, \: A x = b_2, \: \ldots ##, all having the same left-hand-side but different right-hand-side vectors ##b##. You can use the matrix ##L## to conclude that the equations reduce to the simple, triangular forms ##Ux = Lb_1, \: Ux = L b_2, \ldots ##, each of which is easily solvable by successive evaluation and back-substitution. (It would, of course, be even simpler if you had the inverse ##C = A^{-1}##, but we often try to avoid computing the inverse for reasons of efficiency and numerical stability, etc. The upper-triangular form is almost as fast to deal with.)
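    To make the ##[A|I] \to [U|L]## idea concrete, here is a minimal NumPy sketch (not from the original thread; the choice of NumPy, the variable names, and the sample right-hand sides are my own), using the matrix from the question:

    import numpy as np

    # The matrix from the question: A = [[2, 3], [4, 1]]
    A = np.array([[2.0, 3.0],
                  [4.0, 1.0]])
    B = np.hstack([A, np.eye(2)])      # augmented matrix [A | I]

    # One elimination step: R2 <- R2 - 2*R1 clears the entry below the pivot.
    B[1] -= 2.0 * B[0]

    U = B[:, :2]   # upper-triangular: what A became
    L = B[:, 2:]   # lower-triangular: what I became; it records the row operations

    print(np.allclose(L @ A, U))       # True: L*A = U

    # Reuse L to solve A x = b for several right-hand sides b with the same A:
    for b in (np.array([5.0, 6.0]), np.array([1.0, -2.0])):
        x = np.linalg.solve(U, L @ b)  # the triangular system U x = L b
        print(np.allclose(A @ x, b))   # True

    Here np.linalg.solve is used only for brevity; a dedicated triangular solver (e.g. scipy.linalg.solve_triangular) would actually exploit the structure of ##U##.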
     
  5. Feb 6, 2014 #4

    HallsofIvy

    Staff Emeritus
    Science Advisor

    Setting the augmented matrix to
    2 3 | 1 0
    4 1 | 0 1
    and completely row-reducing to get
    1 0 | -1/10 3/10
    0 1 | 2/5 -1/5

    gives the inverse of the original matrix. Multiplying that inverse by the vector (y1, y2) will then give the solution of the original system. If you had only the one problem, with given values for y1 and y2, row-reducing directly with y1, y2 would be simpler. But it often happens in applications that you have an equation like Ax = y with the same "A" but many different "y". In that case it would be simpler to find the inverse of A first, then multiply it by the various y.

    More likely, your teacher is using this as a way to introduce the "inverse" matrix.
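    As a quick check of the full reduction described above (again not from the original thread; my own NumPy sketch using the matrix from the question):

    import numpy as np

    # Fully row-reduce [A | I] to [I | A^{-1}].
    A = np.array([[2.0, 3.0],
                  [4.0, 1.0]])
    B = np.hstack([A, np.eye(2)])

    B[1] -= 2.0 * B[0]   # R2 <- R2 - 2*R1  ->  [0, -5 | -2, 1]
    B[1] /= -5.0         # R2 <- R2 / (-5)  ->  [0,  1 | 2/5, -1/5]
    B[0] -= 3.0 * B[1]   # R1 <- R1 - 3*R2  ->  [2,  0 | -1/5, 3/5]
    B[0] /= 2.0          # R1 <- R1 / 2     ->  [1,  0 | -1/10, 3/10]

    A_inv = B[:, 2:]
    print(A_inv)                                  # [[-0.1  0.3] [ 0.4 -0.2]]
    print(np.allclose(A_inv, np.linalg.inv(A)))   # True

    # Solve A x = y for several y by reusing the same inverse:
    for y in (np.array([1.0, 2.0]), np.array([-3.0, 5.0])):
        print(np.allclose(A @ (A_inv @ y), y))    # True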
     