Why does Gaussian elimination work?

If you do the same thing to both sides of an equation, you don't change the solution values, so the same operations can be applied to the rows of a matrix. The operations are chosen so that the resulting equations are easy to solve; the only remaining question is how to carry them out most efficiently and accurately.
  • #1
Mathematicsss
Post moved by moderator from Homework section
Hello, I was curious as to why Gaussian elimination works. I know that if we have two (or more) systems of two (or more) linear equations, we can write them in terms of a matrix. However, what does it mean when I get the identity on the left hand side and the solution on the right?
 
  • #2
Mathematicsss said:
Hello, I was curious as to why Gaussian elimination works. I know that if we have two (or more) systems of two (or more) linear equations, we can write them in terms of a matrix. However, what does it mean when I get the identity on the left hand side and the solution on the right?
How would you solve a system like
$$
2x+3y = 4 \\
x-3y = 0
$$
or
$$
-x+ 4y = -3 \\
y= 2
$$
Gauß elimination is basically the same process carried out in general terms, so it applies to all possible systems of linear equations.
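As a quick check, the first sample system can be solved by hand exactly the way elimination would do it: adding the two equations eliminates ##y##. A small Python sketch with exact rational arithmetic:

```python
from fractions import Fraction as F

# First sample system: 2x + 3y = 4 and x - 3y = 0.
# Adding the two equations eliminates y: 3x = 4.
x = F(4, 3)
y = x / 3          # from the second equation, x - 3y = 0
assert 2*x + 3*y == 4 and x - 3*y == 0
print(x, y)        # 4/3 4/9
```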
 
  • #3
Mathematicsss said:
Hello, I was curious as to why Gaussian elimination works. I know that if we have two (or more) systems of two (or more) linear equations, we can write them in terms of a matrix. However, what does it mean when I get the identity on the left hand side and the solution on the right?
You don't have "two (or more) systems" of linear equations -- you have a system of one or more linear equations, which you can write in a shorthand form as an augmented matrix.

The operations you do on the matrix are the same that you can do with the system of equations: exchange two equations, add a multiple of one equation to another equation, replace one equation by a nonzero multiple of itself. In each of these operations, there is no change in the solution.

Suppose you have completely reduced a matrix equation so that the augmented matrix looks like this:
##\begin{bmatrix} 1 & 0 & 0 & | & 3 \\ 0 & 1 & 0 & | & 1 \\ 0 & 0 & 1 & | & -1\end{bmatrix}##
The corresponding system of equations would be
1x + 0y + 0z = 3
0x + 1y + 0z = 1
0x + 0y + 1z = -1

Or more simply,
x = 3
y = 1
z = -1
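The full reduction to the form ##[\,I \mid \text{solution}\,]## can be sketched in code using exactly the three row operations listed above. This is a minimal illustration, not a production routine: the function name is made up, and the sample system was chosen so that its solution matches the one above (it assumes an invertible coefficient matrix).

```python
def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to [I | x] using the three
    row operations: swap rows, scale a row, add a multiple of a row.
    Assumes A is square and invertible."""
    n = len(aug)
    for col in range(n):
        # Swap in the row with the largest pivot (partial pivoting).
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        # Add multiples of the pivot row to zero out the rest of the column.
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [v - factor * pv for v, pv in zip(aug[r], aug[col])]
    return [row[-1] for row in aug]

# A made-up 3x3 system chosen so the solution is x=3, y=1, z=-1:
#   x +  y + z = 3,   2x - y + z = 4,   x + 2y - z = 6
aug = [[1.0, 1.0, 1.0, 3.0],
       [2.0, -1.0, 1.0, 4.0],
       [1.0, 2.0, -1.0, 6.0]]
print([round(v, 9) for v in gauss_jordan(aug)])  # [3.0, 1.0, -1.0]
```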
 
  • #4
Mathematicsss said:
Hello, I was curious as to why Gaussian elimination works. I know that if we have two (or more) systems of two (or more) linear equations, we can write them in terms of a matrix. However, what does it mean when I get the identity on the left hand side and the solution on the right?

Gaussian elimination was around for hundreds of years before the invention of matrices, so matrices are not needed at all (but they make the work cleaner and easier). The method just solves the equations successively, each time eliminating another variable.

For example, if we have the system
$$\begin{array}{crcc}
(1):& x + 2y - 3z &=& 10\\
(2):& 2x - y + 4z &=& 20 \\
(3):& 2x - z &=& 8
\end{array}
$$
We can use eq (1) to express ##x## in terms of ##y## and ##z##:
$$(1a): \; x = 10 - 2y + 3z. $$
Substitute (1a) into (2) and (3), to get
$$\begin{array}{crrcc}
(2a):& 20-5y+10z = 20 & \Rightarrow& 5y - 10 z = 0 \\
(3a): & 20-4y+5z = 8 & \Rightarrow& 4y - 5z = 12
\end{array}
$$
So, we have a new system of 2 equations in the two unknowns ##y,z##. We have "eliminated" ##x## and made the problem smaller and easier to deal with. We can continue, for example by using eq. (2a) to express ##y## in terms of ##z##. That will leave us with a single equation (3b) that contains ##z## alone.

The reason it "works" is that each step involves a valid re-writing of the equation or equations, using only standard algebraic rules, and never, ever, performing an illegal step such as dividing by 0. For example, we obtained eq. (2a) by re-writing (2) as ##2(10 - 2y + 3z) -y + 4z = 20## and simplifying.
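The substitution steps above can be traced directly in code, a sketch using exact rational arithmetic (the variable names just follow the post's labels):

```python
from fractions import Fraction as F

# Follow the elimination steps for the system
#   (1) x + 2y - 3z = 10,  (2) 2x - y + 4z = 20,  (3) 2x - z = 8.
# (1a): x = 10 - 2y + 3z  gives  (2a): 5y - 10z = 0 and (3a): 4y - 5z = 12.
# (2a) gives y = 2z; substituting into (3a): 8z - 5z = 12, so 3z = 12.
z = F(12, 3)            # z = 4
y = 2 * z               # y = 8, from (2a)
x = 10 - 2*y + 3*z      # back-substitute into (1a): x = 6
# Verify against all three original equations.
assert (x + 2*y - 3*z, 2*x - y + 4*z, 2*x - z) == (10, 20, 8)
print(x, y, z)  # 6 8 4
```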
 
  • #5
To rephrase: the three operations used in Gaussian elimination are precisely those that preserve the solution set. It is a nice exercise to show that they do (EDIT: it is trivial for all but one of them, namely adding a multiple of one row to another).
 
  • #6
You start with a set of equations, with formulas on the left side and constants on the right, and you are looking for variable values that make each equality hold. As long as you do the same thing to both sides, the equations remain valid for those variable values. On one side you perform the operations on the formulas, and on the other side you perform the same operations on the constants -- always the same operation, just in a different form. The solution values never change, because the two sides always remain equal when those values are plugged in. Finally, you end up with a set of simple equations like x1=c1; x2=c2; x3=c3; ... These give the solution values of the original equations and of every step in between.
 

1. Why is Gaussian elimination considered an efficient method for solving linear equations?

Gaussian elimination is considered efficient because it solves a system of n equations in roughly n³ arithmetic operations (about 2n³/3 multiplications and additions), eliminating variables one at a time and shrinking the problem until a solution is found. The matrix formulation keeps the bookkeeping simple, which makes the method practical for large systems of equations.
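To see why the operation count matters, here is a back-of-the-envelope comparison (the counts are rough standard estimates, not exact figures): elimination grows like n³, while solving via Cramer's rule with cofactor-expansion determinants grows like a factorial.

```python
import math

# Rough cost comparison: ~2n^3/3 multiply-adds for forward elimination
# versus roughly (n+1)! operations for cofactor-expansion Cramer's rule.
for n in (5, 10, 20):
    gauss = 2 * n**3 // 3
    cramer = math.factorial(n + 1)
    print(n, gauss, cramer)
```

Already at n = 20 the factorial cost is astronomically larger, which is why determinant-based solving is never used for systems of any size.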

2. How does Gaussian elimination work?

Gaussian elimination works by using a series of row operations to transform a system of linear equations into an equivalent system in row echelon form. These operations include adding a multiple of one row to another, swapping rows, and multiplying a row by a constant. Once the matrix is in row echelon form, it can be easily solved using back substitution.
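That two-phase process, forward elimination to echelon form followed by back substitution, can be sketched as follows (the function name is illustrative, and the code assumes a square, nonsingular system):

```python
def solve_by_elimination(A, b):
    """Forward-eliminate [A | b] to row echelon form, then back-substitute.
    Assumes A is square and nonsingular (a nonzero pivot always exists)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    # Forward elimination: zero out the entries below each pivot.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]        # row swap
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    # Back substitution: solve from the last row upward.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# The 2x2 system quoted earlier in the thread: 2x + 3y = 4, x - 3y = 0.
print(solve_by_elimination([[2.0, 3.0], [1.0, -3.0]], [4.0, 0.0]))
```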

3. Why is it called "Gaussian" elimination?

The method is named after the mathematician Carl Friedrich Gauss, who used it systematically in the early 19th century. The method itself is much older, however: a version of it appears in the ancient Chinese text Nine Chapters on the Mathematical Art, and it was known in Europe well before Gauss. It has since been refined and extended by other mathematicians.

4. Can Gaussian elimination fail to find a solution?

Gaussian elimination cannot fail as a procedure, but it can reveal that no unique solution exists. If the system is inconsistent, elimination produces a row that reads 0 = c for some nonzero constant c, meaning no values of the variables satisfy all of the equations. If the system has infinitely many solutions, elimination leaves one or more free variables, and any choice of values for them yields a solution.

5. How is Gaussian elimination related to other methods for solving linear equations?

Gaussian elimination solves the same problems as Cramer's rule and the method of substitution, but far more efficiently: it systematizes substitution, and it avoids the expensive determinant computations of Cramer's rule. Unlike Cramer's rule, it also handles systems whose coefficient matrix is not square or not invertible. In addition, Gaussian elimination is the key step in more advanced techniques for solving large systems, such as LU decomposition and, for symmetric positive-definite matrices, Cholesky decomposition.
