Can elementary matrix operations change the solutions to a system of equations?

In summary, a matrix is a compact way of representing a system of equations. Elementary row operations, such as replacing a row with the sum of itself and a constant multiple of another row, do not change the solutions of the system: any solution of the original equations still satisfies the new equation, and the operation can be reversed. This is the same as performing the corresponding operations on the equations themselves.
  • #1
opus
To my understanding, a matrix is just a way of representing a system of equations in an organized format.
So for example, if we have some system of equations, we can get them into standard form, and translate them into what's known as an augmented matrix. This is similar to using synthetic division for dividing polynomials.

Now one rule for elementary matrix operations is that:
Some row, ##j##, can be replaced with the sum of itself and a constant multiple of row ##i##, denoted as ##(cR_i+R_j)##.

Now my question is, why doesn't this change the solutions to the system of equations?

Take for example the matrix in my attached picture. We are multiplying ##R_1## by -5, and then adding that result to ##R_2##. Rows 1 and 3 remain the same as when we started, but Row 2 has changed into something that is not simply a multiple of the original Row 2. I understand that this process is to get the matrix into row echelon form so that we can perform Gaussian elimination, but I don't understand why our solutions haven't changed now that Row 2 is different.
 

Attachments

  • Screen Shot 2018-07-24 at 4.15.16 PM.png (the matrix referred to above)
  • #2
opus said:
To my understanding, a matrix is just a way of representing a system of equations in an organized format.
Yes, in that it can be seen as such; no, in that it is not the only way to view what's going on.
So for example, if we have some system of equations, we can get them into standard form, and translate them into what's known as an augmented matrix. This is similar to using synthetic division for dividing polynomials.

Now one rule for elementary matrix operations is that:
Some row, ##j##, can be replaced with the sum of itself and a constant multiple of row ##i##, denoted as ##(cR_i+R_j)##.

Now my question is, why doesn't this change the solutions to the system of equations?
If the ##i##-th row is the equation ##f_i(x_1,\ldots , x_n)=b_i## and likewise for ##f_j##, can you show that a certain set of numbers ##a_k## for the ##x_k## which satisfies ##f_i(a_k)=b_i## and ##f_j(a_k)=b_j## also satisfies ##(\alpha f_i + \beta f_j)(a_k) = \alpha f_i(a_k) + \beta f_j(a_k)= \alpha b_i + \beta b_j\,?##
 
  • #3
fresh_42 said:
If the ##i##-th row is the equation ##f_i(x_1,\ldots , x_n)=b_i## and likewise for ##f_j##, can you show that a certain set of numbers ##a_k## for the ##x_k## which satisfies ##f_i(a_k)=b_i## and ##f_j(a_k)=b_j##

This would just be the intersection of ##f_i## and ##f_j##. So ##a_k## could be said to be something like the point ##(x,y,z)##.

fresh_42 said:
also satisfies ##(\alpha f_i + \beta f_j)(a_k) = \alpha f_i(a_k) + \beta f_j(a_k) = \alpha b_i + \beta b_j\,?##

A little lost on this part.
It looks like you're adding the solutions of each equation together?
 
  • #4
opus said:
Now my question is, why doesn't this change the solutions to the system of equations?
Because you're adding equal quantities to both sides of the equation you modify.

Here's a simple example, using a system of equations:
##2x + y = 5##
##x + y = 3##

If I replace the first equation by itself plus -2 times the second equation, I get a new system:
##0x - y = -1##
##x + y = 3##

Note that writing the zero coefficient in the first equation isn't necessary; it's there only to keep the variables aligned.

From the first equation, it is easy to see that y = 1. By back-substituting this value into the second equation, it's also easy to see that x = 2, so the solution to the system is the point (2, 1). Notice that this pair of numbers satisfies the first system and the altered second system.

As an alternative, I could replace the second equation by itself plus the first equation, obtaining the system
##0x - y = -1##
##x + 0y = 2##
So again, the solution is the point (2, 1).

Working with matrices, you're doing the same sorts of operations, but don't have to keep track of the variables. That's really the only difference.

Keep in mind there are three row operations that result in an equivalent system (i.e., one with the same solution(s)); a numerical check of the example above follows the list.
  1. Swapping two rows: ##R_i## <--> ##R_j##
  2. Replacing a row by a nonzero multiple of itself: ##R_i## <-- ##kR_i##
  3. Replacing a row by itself plus a nonzero multiple of another row: ##R_i## <-- ##R_i + k R_j##
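
(As a quick numerical check, here is a minimal sketch using numpy; the two systems and the solution (2, 1) are the ones worked out in this post.)

Code:
import numpy as np

# Augmented matrix for the original system:
#   2x + y = 5
#    x + y = 3
A = np.array([[2.0, 1.0, 5.0],
              [1.0, 1.0, 3.0]])

# Row operation: R1 <- R1 + (-2)*R2, giving 0x - y = -1
B = A.copy()
B[0] = B[0] - 2 * B[1]

# Solve both systems (coefficient part | right-hand side)
print(np.linalg.solve(A[:, :2], A[:, 2]))  # [2. 1.]
print(np.linalg.solve(B[:, :2], B[:, 2]))  # [2. 1.]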
 
  • #5
Let's make an example. Say we have the equations ##x+y=1## and ##x-y=2\,.## These two have the solution ##x=\frac{3}{2}## and ##y=-\frac{1}{2}\,.## Now if we form, e.g., ##4\cdot (i) + 3\cdot (ii)##, we get ##7x+y=10\,.## You have asked why this still carries the same solution, but
$$
4\cdot \left( \frac{3}{2} - \frac{1}{2} \right) + 3\cdot \left( \frac{3}{2} + \frac{1}{2} \right) = 4\cdot 1 + 3 \cdot 2 = 10
$$
is still correct. So if we replace the variables by the values they will finally have, nothing changes: we only add and multiply true equations, so there is no way to create something false. Another question is whether we still have the same information, and this is where @symbolipoint 's "good equations" come in. E.g., we can replace the first equation by our new one, ##4(i)+3(ii)##, but we must not simultaneously replace the second equation by, say, ##-4(i)-3(ii)##. In that case we would have created a "bad equation", because we would have substituted both original equations by essentially the same new one. That loses information and is not allowed.
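
To spell out the general pattern behind this example: if ##a## is a common solution, then

$$
f_i(a) = b_i \;\text{ and }\; f_j(a) = b_j \;\Longrightarrow\; (\alpha f_i + \beta f_j)(a) = \alpha f_i(a) + \beta f_j(a) = \alpha b_i + \beta b_j\,,
$$

and as long as ##\alpha \neq 0## the operation is reversible, since ##f_i = \frac{1}{\alpha}\big((\alpha f_i + \beta f_j) - \beta f_j\big)\,.## Reversibility is exactly the "no information lost" condition described above.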
 
  • #6
Let's try the (somewhat abbreviated) text version:
  1. 'equation' means that the expression at the left of the '=' sign is the same (has the same value) as the expression at the right of the '='
  2. an operation, if performed on both sides of the equation, does not invalidate the equation

Therefore, multiplying the first equation in your example by -5 does not change its validity.
It also follows that adding something to both sides of the second equation does not alter its validity either.
The multiplier for the first equation is astutely chosen so that the result, upon addition to the second equation, cancels a coefficient to zero.

Cheers,
Tom
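
(In symbols, and assuming the leading entries in the attachment are ##1## in row 1 and ##5## in row 2, which is consistent with post #7: adding ##-5\cdot R_1## to ##R_2## combines the left-hand sides as

$$
-5\,(x + a_{12}\,y + a_{13}\,z) + (5x + a_{22}\,y + a_{23}\,z) = 0\cdot x + (a_{22} - 5a_{12})\,y + (a_{23} - 5a_{13})\,z\,,
$$

and the right-hand sides as ##-5\,b_1 + b_2##, so the ##x## coefficient cancels to zero.)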
 
  • #7
Mark44 said:
Because you're adding equal quantities to both sides of the equation you modify.

So let me ask this. Is the reason that we multiply one row by a constant, and add that to the row we want to change, that the numbers in the matrix represent coefficients of variables? So say I wanted to change ##a_{21}## from 5 to 1. I cannot simply subtract 4 from every entry in the row, because those numbers are coefficients of variables. It would be like saying ##5x-4=1x##, which is not true.
To change ##a_{21}## from 5 to 1, I would need to multiply another row by a factor such that, when the result is added to row 2, the 5 becomes a 1.
 
  • #8
So let me just isolate one equation of the matrix that we'll say is of row 2.
This row consists of numbers, and these numbers are the coefficients of the variables in the corresponding equation of the system.
In a normal equation, I can add or multiply whatever I want, as long as it's done on both sides.

In the case of the matrix, it is the same except for one aspect. Every number on the LHS of the matrix is multiplied by some variable. Since this is the case, I can't just add some number to it, even if I do it on both sides of the equation. To add to the LHS, what really needs to be done is to find a multiple of another equation in the matrix and add that to the row we want to change. In doing this, we are adding a multiple of ##x## to ##x##, a multiple of ##y## to ##y##, and a multiple of ##z## to ##z##.
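
(Written out for the three-variable case, adding ##c## times row ##i## to row ##j## does exactly this, term by term:

$$
c\,(a_{i1}x + a_{i2}y + a_{i3}z) + (a_{j1}x + a_{j2}y + a_{j3}z) = (ca_{i1} + a_{j1})x + (ca_{i2} + a_{j2})y + (ca_{i3} + a_{j3})z\,,
$$

with the right-hand sides combining as ##c\,b_i + b_j\,.##)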
 
  • #9
Yes to both of your above posts.
 
  • #10
Ok then! Thanks!
 
  • #11
If you are given the equation ##3x = 7##, why do the solutions not change if you multiply through by ##\frac{1}{3}##? The exact same reason answers your matrix question: performing a matrix row operation is the same as multiplying by an invertible matrix, so it does not change the solutions.
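
Here is a minimal numpy sketch of this point; the ##3\times 3## system below is illustrative (it is not the one from the attachment). The row operation ##R_2 \leftarrow R_2 - 5R_1## is left-multiplication by an invertible elementary matrix ##E##, so ##EAx = Eb## has exactly the same solutions as ##Ax = b##.

Code:
import numpy as np

# Elementary matrix for the row operation R2 <- R2 - 5*R1.
# Its inverse simply adds 5*R1 back, so E is invertible.
E = np.array([[ 1.0, 0.0, 0.0],
              [-5.0, 1.0, 0.0],
              [ 0.0, 0.0, 1.0]])

# An illustrative (hypothetical) system A x = b
A = np.array([[1.0, 2.0, 1.0],
              [5.0, 1.0, 3.0],
              [2.0, 0.0, 1.0]])
b = np.array([4.0, 6.0, 3.0])

x1 = np.linalg.solve(A, b)          # solve the original system
x2 = np.linalg.solve(E @ A, E @ b)  # solve the row-reduced system
print(np.allclose(x1, x2))          # True: same solution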
 

What are elementary matrix operations?

Elementary matrix operations, more precisely elementary row operations, are basic operations performed on the rows of a matrix: swapping two rows, multiplying a row by a nonzero scalar, and adding a constant multiple of one row to another row.

Why are elementary matrix operations important?

Elementary matrix operations are important because they let us transform a matrix into a simpler, equivalent one without changing the solution set of the underlying system, which is essential in solving linear equations and in fields such as physics, engineering, and computer science.

How do I perform elementary matrix operations?

Each operation follows a simple rule. To swap two rows, exchange them in place. To scale a row, multiply every entry of that row by the same nonzero scalar. To add a multiple of row ##i## to row ##j##, multiply each entry of row ##i## by the constant and add it to the corresponding entry of row ##j##. In an augmented matrix, the same operation is applied to the entries of the right-hand-side column.

What is the purpose of elementary row operations?

The purpose of elementary row operations is to simplify a matrix and make it easier to solve. These operations include swapping two rows, multiplying a row by a nonzero scalar, and adding a multiple of one row to another row. By performing these operations, we can transform a matrix into an equivalent one that is easier to work with.
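
As an illustration (the augmented matrix here is hypothetical), the three operations in numpy, taking a small system all the way to reduced form:

Code:
import numpy as np

# Hypothetical augmented matrix [A | b] for the system
#   0x - 2y = -4
#    x + 3y =  5
M = np.array([[0.0, -2.0, -4.0],
              [1.0,  3.0,  5.0]])

M[[0, 1]] = M[[1, 0]]   # 1. swap rows: puts a nonzero pivot on top
M[1] = -0.5 * M[1]      # 2. scale a row by a nonzero scalar: [0, 1, 2]
M[0] = M[0] - 3 * M[1]  # 3. add a multiple of one row to another: [1, 0, -1]

print(M)  # reduced form, reading off x = -1, y = 2

Each step is reversible, so the solution (x, y) = (-1, 2) of the original system is unchanged throughout.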

How are elementary matrix operations used in real-life applications?

Elementary matrix operations are used in various real-life applications, such as image processing, data compression, and economic modeling. They are also essential in solving systems of linear equations, which are used to model and analyze real-world situations in fields such as economics, physics, and engineering.
