Linear Algebra: Augmented matrix echelon form y-space?

Matriculator
I'm doing my homework but I'm lost on one thing. Let's say we have a system of equations like so:

2x1 + 3x2 = y1
4x1 + x2 = y2

Instead of setting the right-hand side equal to constants, our teacher sets it equal to variables. He says that to compute this, the augmented matrix should look like:

2 3|1 0
4 1|0 1

Then of course we find the echelon form. Does this have a name, either the fact that he sets the right-hand side to variables or the way the augmented matrix is set up? I'm trying to learn more about it, but my professor almost never follows the regular curriculum. Thank you in advance.
 
Gaussian elimination? Or row reduction?
 
Matriculator said:
Instead of setting it to a constant our teacher sets it to a variable... Does this have a name?

Basically, he is doing a form of LU-decomposition. If you start with ##B = [A|I]## (##A## = your original matrix and ##I##= unit matrix), then after some row-reduction steps you end up with
##B_{new} = [U|L]##. Here, ##U## is the usual row-reduced matrix, and ##L## is what happens to the matrix ##I## that you used to augment ##A##. The matrix ##L## will be lower-triangular (unless you performed row-interchanges); it is a fact that ##L \cdot A = U##; in other words, the matrix ##L## encapsulates the row-operations you used to get from ##A## to ##U##. (Strictly speaking, this ##L## is the inverse of the lower-triangular factor in the usual ##A = LU## factorization, but it plays the same bookkeeping role.) Multiplying by ##L## on the left produces exactly the same results as row-reduction.

Why might this be useful? Well, suppose that for some reason you needed to solve several (separate) equations of the form ##Ax = b_1, \: A x = b_2, \: \ldots ##, all having the same left-hand-side but different right-hand-side vectors ##b##. You can use the matrix ##L## to conclude that the equations reduce to the simple, triangular forms ##Ux = Lb_1, \: Ux = L b_2, \ldots ##, each of which is easily solvable by successive evaluation and back-substitution. (It would, of course, be even simpler if you had the inverse ##C = A^{-1}##, but we often try to avoid computing the inverse for reasons of efficiency and numerical stability, etc. The upper-triangular form is almost as fast to deal with.)
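To make this concrete, here is a small NumPy sketch (mine, not from the thread) using the matrix from the original post: row-reduce ##[A|I]## to ##[U|L]##, check that ##L \cdot A = U##, and then reuse ##L## to solve two systems with the same left-hand side.

```python
# Sketch: row-reduce [A | I] to [U | L], verify L @ A == U, then solve
# A x = b for two right-hand sides using only U and L.
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 1.0]])
B = np.hstack([A, np.eye(2)])      # the augmented matrix [A | I]

# One elimination step: subtract (4/2) * row 0 from row 1
B[1] -= (B[1, 0] / B[0, 0]) * B[0]

U = B[:, :2]                       # upper-triangular part: [[2, 3], [0, -5]]
L = B[:, 2:]                       # what happened to I:    [[1, 0], [-2, 1]]
assert np.allclose(L @ A, U)       # L encapsulates the row operations

# Same left-hand side, different right-hand sides: U x = L b, then back-substitute
for b in (np.array([5.0, 6.0]), np.array([1.0, 0.0])):
    c = L @ b
    x2 = c[1] / U[1, 1]
    x1 = (c[0] - U[0, 1] * x2) / U[0, 0]
    assert np.allclose(A @ np.array([x1, x2]), b)
```

The point of the loop is that the elimination work on ##A## is done once; each new right-hand side only costs a matrix-vector product and a short back-substitution.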
 
Setting the augmented matrix to
2 3 | 1 0
4 1 | 0 1
and completely row-reducing to get
1 0 | -1/10 3/10
0 1 | 2/5 -1/5

gives the inverse of the original matrix. Multiplying that inverse by the vector (y1, y2) will then give the solution to the original system. If you had only the one problem, with given values for y1 and y2, row reducing directly with y1, y2 would be simpler. But it often happens in applications that you have an equation like Ax = y with the same "A" but many different "y". In that case it is simpler to find the inverse of A first, then multiply it by the various y.
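A quick NumPy check of the numbers above (my sketch, not from the thread): the right half after full row reduction really is the inverse, and one multiplication solves the system for any given y.

```python
# Verify the computed inverse of A = [[2, 3], [4, 1]] and use it to solve A x = y
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 1.0]])
A_inv = np.array([[-0.1,  0.3],       # -1/10  3/10  (right half after row reduction)
                  [ 0.4, -0.2]])      #  2/5  -1/5

assert np.allclose(A @ A_inv, np.eye(2))   # it really is the inverse

# With the inverse in hand, each new right-hand side costs one multiplication
y = np.array([5.0, 6.0])               # an example right-hand side
x = A_inv @ y
assert np.allclose(A @ x, y)
```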

More likely, your teacher is using this as a way to introduce the "inverse" matrix.
 