# What's the use of interchanging two rows when solving a matrix?

1. Feb 4, 2010

### Juwane

Interchanging two rows just means exchanging the places of two equations in a system. What's the use of this when no operation has been performed on them and only their places have changed? Is it done only with matrices, just to make the row operations easier?

2. Feb 5, 2010

It depends on what you mean by "solving" a matrix. For example, if you're calculating the determinant, then it's useful to know how elementary row/column operations affect the value of the determinant, since after applying them the determinant may be easier to evaluate.

3. Feb 5, 2010

### Juwane

By solving a matrix, I mean applying elementary row operations to a matrix in order to solve a system of linear equations, as in the Gaussian elimination method.

I understand that multiplying an equation by a non-zero constant and then adding it to or subtracting it from another equation helps us eliminate the unknowns, but how does interchanging the places of two equations help with this?

4. Feb 5, 2010

### elibj123

It depends on the matrix you are trying to bring to canonical form. If it's easier for you to normalize the third row before the first row, then you'd like to bring the third row up, which is exactly what a row interchange allows you to do.

For example, take a pretty trivial matrix:

[0 1]
[1 0]

You could start adding and subtracting rows from each other, but wouldn't it be easier to just switch them?

(Of course this is an idiotic example, but I hope you get it.)
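In code, a row interchange is nothing more than exchanging two references; for this matrix, one swap brings it straight to the identity with no arithmetic at all. A minimal Python sketch:

```python
# The matrix from the example above: a single row swap reduces it
# directly to the identity, with no row arithmetic needed.
matrix = [[0, 1],
          [1, 0]]

# A row interchange is just exchanging the two row references.
matrix[0], matrix[1] = matrix[1], matrix[0]

print(matrix)  # [[1, 0], [0, 1]]
```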

5. Feb 5, 2010

### Juwane

Yes, this is what I'm asking. If we have a system of linear equations like:

$$2x + 3y = 8$$
$$4x - 7y = 3$$

--then if we write the above as:

$$4x - 7y = 3$$
$$2x + 3y = 8$$

How will this help us in anything? I mean, the whole business of interchanging rows is useful in matrices only, and not when we don't have the equations in matrix form, right?

6. Feb 8, 2010

### Dosmascerveza

Imagine you have a 3x3 matrix:

[-3  2  4]
[ 0  0 -3]
[ 0 -3  0]

OK, find the determinant...

...

If it took you more than 10 seconds to find the determinant, then you probably don't know linear algebra...

My solution: swap R2 <-> R3 and notice that the matrix is now upper triangular, so its determinant is the product of the diagonal entries: (-3)(-3)(-3) = -27. But remember that you swapped two rows, so the sign of the determinant must change. It follows that det A = 27. Expansion by cofactors would have taken a while. Remembering a few simple theorems about the effects of row operations on determinants saves time.
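The same computation, sketched in Python: the swap, the triangular-product rule, and the sign flip.

```python
A = [[-3,  2,  4],
     [ 0,  0, -3],
     [ 0, -3,  0]]

# Swap R2 <-> R3: the matrix becomes upper triangular.
A[1], A[2] = A[2], A[1]

# The determinant of a triangular matrix is the product of its diagonal.
det_swapped = A[0][0] * A[1][1] * A[2][2]   # (-3)(-3)(-3) = -27

# One row interchange flips the sign of the determinant, so undo it.
det_A = -det_swapped
print(det_A)  # 27
```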

Last edited: Feb 8, 2010
7. Feb 9, 2010

### Rasalhague

Griffel has an interesting example in Linear Algebra and its Applications (1989), Vol. 1, § 4.1, p. 119.

$$0.01x + 10y = 10$$
$$0.1x - 0.1y = 0$$

The exact answer is x = y = 0.999000999... But working correct to 3 significant figures, and rounding 100.1 to 100 in the first step:

$$\begin{bmatrix} 0.01 & 10 & 10\\ 0.1 & -0.1 & 0 \end{bmatrix} \rightarrow\begin{bmatrix} 0.01 & 10 & 10\\ 0 & -100 & -100 \end{bmatrix} \rightarrow\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 1 \end{bmatrix}$$

x = 0
y = 1

An error of 100%! But if we swap rows to bring the largest element in the relevant column into the pivot position, avoiding the problem of a small pivot, and again round to 3 significant figures wherever more appear:

$$\begin{bmatrix}0.1 & -0.1 & 0\\ 0.01 & 10 & 10\end{bmatrix} \rightarrow\begin{bmatrix}1 & -1 & 0\\ 0 & 999 & 1000\end{bmatrix} \rightarrow\begin{bmatrix}1 & 0 & 1\\ 0 & 1 & 1\end{bmatrix}$$

x = y = 1

He calls this method partial pivoting. He says there's also a more accurate but more complicated technique called complete pivoting, in which columns are exchanged as well. But partial pivoting, he says, is "nearly always used in practice." And that's all I know about that...
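Griffel's experiment is easy to reproduce in code. A quick sketch (not from the book; it rounds every intermediate result to 3 significant figures, so its arithmetic route differs slightly from the matrices shown above, but the effect is the same): with the small pivot we get x = 0, and after the row swap we get x = y = 1, close to the exact 0.999...

```python
from math import floor, log10

def sig3(x):
    """Round x to 3 significant figures, simulating limited-precision arithmetic."""
    if x == 0:
        return 0.0
    return round(x, 2 - floor(log10(abs(x))))

def solve_2x2(aug):
    """Gaussian elimination on an augmented 2x2 system [[a, b, e], [c, d, f]],
    rounding every intermediate result to 3 significant figures."""
    (a, b, e), (c, d, f) = aug
    m = sig3(c / a)                       # multiplier for eliminating row 2
    d2 = sig3(d - sig3(m * b))            # row 2 after elimination
    f2 = sig3(f - sig3(m * e))
    y = sig3(f2 / d2)                     # back-substitution
    x = sig3(sig3(e - sig3(b * y)) / a)
    return x, y

system = [[0.01, 10, 10], [0.1, -0.1, 0]]

print(solve_2x2(system))        # small pivot:  (0.0, 1.0) -- x is 100% off
print(solve_2x2(system[::-1]))  # rows swapped: (1.0, 1.0)
```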