Solving Simultaneous Equations Using Matrices

AI Thread Summary
The discussion focuses on solving simultaneous equations using matrices, specifically through Gaussian elimination with augmented matrices. The original poster seeks clarification on the process of adjusting elements in the matrix, particularly whether the first element should equal zero or one. Participants clarify that the goal is to achieve a form where the leading coefficients are one, not zero, and emphasize the importance of row operations such as multiplication, switching, and addition. They also differentiate Gaussian elimination from Gauss-Jordan elimination, which further reduces the matrix to the identity form. The conversation highlights the need for a clear understanding of these methods to effectively solve systems of equations.
theJorge551
I have to teach myself pre-calculus and basic calculus over the summer, and while covering matrices, the chapter on solving simultaneous systems of equations using matrices puts forth several methods, one of which is Gaussian elimination with augmented matrices. I understand that the first element of the newly augmented matrix now has to equal zero, but the formula for adjusting every other element in the first row wasn't clearly defined in my book; they show the result without going through how to evaluate the other elements. Is it basically a matter of forming a combination like "Row 1 minus 3 × (Row 2)" to somehow make the first element equal zero, or is there an ironclad method for each row reduction?
 
Do you mean the first element of the matrix has to equal one?

In any event, you can read more about Gaussian elimination here:

http://en.wikipedia.org/wiki/Gaussian_elimination

Basically, though, you can take a row in an augmented matrix and do a few different things to it.

1) Multiply it by a constant.
2) Switch it with another row.
3) Add another row to it.

Gaussian elimination consists of performing a sequence of these "row operations" until your matrix is reduced to "echelon form," where all nonzero rows are above all zero rows, and the first nonzero number in each row is (a) 1, and (b) in a column farther to the right than that of the row above it.
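As a minimal sketch of the three row operations in action, here is a reduction of a 2 × 3 augmented matrix to echelon form. The function name, the helper structure, and the example system are illustrative choices, not anything from the thread, and the sketch assumes the system has a unique solution (nonzero pivots after any needed swap):

```python
def to_echelon(m):
    """Reduce a 2x3 augmented matrix [[a, b, c], [d, e, f]] to echelon form.

    Assumes the underlying 2x2 system has a unique solution, so a nonzero
    pivot is available in each column.
    """
    # Operation 2: swap rows so the first pivot is nonzero.
    if m[0][0] == 0:
        m[0], m[1] = m[1], m[0]
    # Operation 1: scale row 1 so its leading entry is 1.
    pivot = m[0][0]
    m[0] = [x / pivot for x in m[0]]
    # Operations 1 and 3 combined: subtract a multiple of row 1 from row 2,
    # which zeroes the entry below the pivot.
    factor = m[1][0]
    m[1] = [x - factor * y for x, y in zip(m[1], m[0])]
    # Scale row 2 so its leading entry is 1.
    if m[1][1] != 0:
        pivot2 = m[1][1]
        m[1] = [x / pivot2 for x in m[1]]
    return m

# x + 2y = 5, 3x + 4y = 11
print(to_echelon([[1.0, 2.0, 5.0], [3.0, 4.0, 11.0]]))
# echelon form [[1, 2, 5], [0, 1, 2]] gives y = 2, then x = 5 - 2*2 = 1
```

From echelon form you read off the last variable directly and back-substitute upward, which is the "solve from the bottom up" step mentioned later in the thread.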
 
Well, with the method the book puts forth, the first row (of a 2 × 3 augmented matrix) basically reads {a b : c}, in which a is the first x coefficient, b is the first y coefficient, and c is the independent constant. It then says that a must equal zero, in order to leave a newly transformed b and transformed c, so that one can find the value of y on its own. From there, one can find the value of x using the other equation. I've been doing some practice problems, and it seems that one need simply add another row to it (sort of like what the book describes) after multiplying one of the rows by a constant. Thanks
 
Are you sure that is what your book says? I have never seen it done that way. You want to reduce something like "ax + by = c, dx + ey = f" to "x = p, y = q" or, in terms of matrices,
$$\begin{bmatrix}a & b & c \\ d & e & f\end{bmatrix}$$
to
$$\begin{bmatrix}1 & 0 & p \\ 0 & 1 & q\end{bmatrix}$$

That is, you want the first number in the first column (more generally the numbers on the main diagonal) to be one, not zero.
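The reduction described in this post can be sketched step by step for the 2 × 3 case. This is an illustrative sketch (the function name is made up, and it assumes a ≠ 0 and a unique solution, so no row swaps are needed), spelling out each row operation that carries [[a, b, c], [d, e, f]] to [[1, 0, p], [0, 1, q]]:

```python
def gauss_jordan_2x3(m):
    """Reduce [[a, b, c], [d, e, f]] to [[1, 0, p], [0, 1, q]].

    Assumes a != 0 and that the system has a unique solution, so every
    pivot encountered below is nonzero.
    """
    a, b, c = m[0]
    d, e, f = m[1]
    # R1 <- R1 / a : make the first diagonal entry 1.
    b, c = b / a, c / a
    # R2 <- R2 - d*R1 : clear the entry below the first pivot.
    e, f = e - d * b, f - d * c
    # R2 <- R2 / e : make the second diagonal entry 1.
    f = f / e
    # R1 <- R1 - b*R2 : clear the entry above the second pivot.
    c = c - b * f
    return [[1.0, 0.0, c], [0.0, 1.0, f]]

# 2x + y = 5, x - y = 1
print(gauss_jordan_2x3([[2.0, 1.0, 5.0], [1.0, -1.0, 1.0]]))
# -> [[1, 0, 2], [0, 1, 1]], i.e. x = 2, y = 1
```

The last step (clearing entries *above* the pivots as well as below) is what distinguishes this full reduction from plain Gaussian elimination.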
 
I am quite positive that it's what my book says (I apologize; I'm hopeless at using LaTeX to make matrices and don't quite have a grasp of it yet). Basically, here's the process the book describes. It involves no use of the identity matrix in any way, as far as I can see.

For the equations ax + by = c and px + qy = d:

a b : c
p q : d

You create some row operation; for example, if a = 3 and p = 3, the operation would be Row 1 - Row 2. Then the new augmented matrix is

0 b-q : c-d
p q : d

And the new equation to determine the value of y is (b-q)y = (c-d), and solving for y is simple at that point. To determine the value of x, the book then goes on to demonstrate that one need only plug in the value for y and solve for x in the equation px + qy = d.

I'm aware that there is a method for using the identity matrix, but my book isolates that completely from the method for Gaussian elimination.
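The book's approach described above can be sketched directly: pick a multiplier k so that R1 - k·R2 zeroes the first entry, read y off the reduced row, then back-substitute into the second equation. The function name and example values are illustrative, and the sketch assumes p ≠ 0 and a unique solution (so the denominators below are nonzero):

```python
def solve_by_elimination(a, b, c, p, q, d):
    """Solve ax + by = c, px + qy = d by eliminating x from row 1.

    Assumes p != 0 and a unique solution, so no denominator is zero.
    """
    k = a / p                      # R1 <- R1 - k*R2 zeroes the first entry
    y = (c - k * d) / (b - k * q)  # reduced row: (b - k*q) y = (c - k*d)
    x = (d - q * y) / p            # back-substitute into px + qy = d
    return x, y

# 3x + 2y = 7, 3x + 5y = 13: here a = p = 3, so k = 1 and R1 - R2 suffices
print(solve_by_elimination(3, 2, 7, 3, 5, 13))
# -> (1.0, 2.0)
```

When a = p, as in the book's example, k = 1 and the operation is the plain subtraction Row 1 - Row 2; in general one row must first be scaled, which is exactly the "multiply a row by a constant" operation.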
 
That's a bit unusual. The way I've usually seen it, you use row operations to transform the coefficient matrix into an upper triangular matrix, or row-echelon form, and then solve from the bottom up.

Gauss-Jordan elimination goes a bit further and transforms the coefficient matrix into the identity matrix.
 