What's the use of interchanging two rows when solving a matrix?

  • Context: Undergrad
  • Thread starter: Juwane
  • Tags: Matrix
SUMMARY

Interchanging two rows in a matrix is a crucial operation in solving systems of linear equations, particularly in methods like Gaussian elimination. This operation simplifies calculations, especially when determining the determinant of a matrix, as it can lead to a triangular form that makes the determinant easier to compute. The discussion highlights the importance of row interchanges in achieving numerical stability through techniques such as partial pivoting, which helps avoid errors associated with small pivot elements. The reference to Griffel's "Linear Algebra and its Applications" underscores the theoretical foundation of these operations.

PREREQUISITES
  • Elementary row operations in linear algebra
  • Understanding of Gaussian elimination
  • Concept of determinants in matrices
  • Partial pivoting technique for numerical stability
NEXT STEPS
  • Study the Gaussian elimination method in detail
  • Learn about determinants and their properties in linear algebra
  • Research partial pivoting and its implementation in numerical methods
  • Explore complete pivoting and its advantages over partial pivoting
USEFUL FOR

Students and professionals in mathematics, particularly those focusing on linear algebra, numerical analysis, and anyone involved in solving systems of linear equations efficiently.

Juwane
Interchanging two rows would just mean to exchange places of two equations in a system. What's the use of this when no operation has been done on them, only their places have been changed? Is it done only in matrices, just so as to make the operations on the rows easier?
 
It depends what you mean by "solving" a matrix. For example, if you're calculating the determinant, then it's useful to know how elementary row/column operations affect the value of the determinant, since after applying them, the determinant may be easier to compute.
 
radou said:
It depends what you mean by "solving" a matrix. For example, if you're calculating the determinant, then it's useful to know how elementary row/column operations affect the value of the determinant, since after applying them, the determinant may be easier to compute.

By solving a matrix, I mean applying elementary row operations to a matrix to solve a system of linear equations, such as in the Gaussian elimination method.

I understand that multiplying an equation by a non-zero constant and then adding it to or subtracting it from another equation helps us eliminate the unknowns, but how does interchanging the places of two equations help us with this?
 
It depends on the matrix you are trying to bring to canonical form. If it's easier for you to normalize the third row before the first row, then you would like to bring the third row up, which is exactly what a row interchange allows you to do.

E.g., a pretty trivial matrix:

[0 1]
[1 0]

You can start adding and subtracting rows from each other,
but wouldn't it be easier to just switch them?

(Of course this is an idiotic example, but I hope you get it.)
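The point of that example can be sketched in a few lines of Python (the right-hand side here is hypothetical, added only so there is something to solve):

```python
# The 2x2 system above, with a made-up right-hand side: y = 5, x = 7.
A = [[0, 1],
     [1, 0]]
b = [5, 7]

# The pivot A[0][0] is zero, so elimination cannot even start; swap the rows first.
if A[0][0] == 0:
    A[0], A[1] = A[1], A[0]
    b[0], b[1] = b[1], b[0]

# After the swap the system is already diagonal: read the solution straight off.
x = b[0] / A[0][0]
y = b[1] / A[1][1]
print(x, y)  # 7.0 5.0
```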
 
Yes, this is what I'm asking. If we have a system of linear equations like:

2x + 3y = 8
4x - 7y = 3

--then if we write the above as:

4x - 7y = 3
2x + 3y = 8

How will this help us in anything? I mean, the whole business of interchanging rows is useful only in matrix form, not when the equations are written out like this, right?
 
Imagine you have a 3x3 matrix

-3 2 4
0 0 -3
0 -3 0

Ok find the determinant...

...If you took more than 10 seconds to find the determinant then you probably don't know linear algebra...

My solution: R2 <-> R3 makes the matrix upper triangular, so its determinant is the product of the diagonal entries, (-3)(-3)(-3) = -27. And remember that swapping two rows changes the sign of the determinant, so det A is 27. Expansion by cofactors would have taken a while. Remembering some simple theorems about the effects of row operations on determinants saves time.
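As a quick sanity check, the reasoning above can be verified in plain Python (`det3` is a helper written here for the illustration, not a library function):

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

A = [[-3,  2,  4],
     [ 0,  0, -3],
     [ 0, -3,  0]]

# Swapping rows 2 and 3 makes A upper triangular, so the determinant of the
# swapped matrix is just the product of its diagonal entries.
B = [A[0], A[2], A[1]]
diag_product = B[0][0] * B[1][1] * B[2][2]   # (-3)(-3)(-3) = -27

# One row swap flips the sign, so det A = -(-27) = 27.
print(det3(A), -diag_product)  # 27 27
```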
 
Griffel has an interesting example in Linear Algebra and its Applications (1989), Vol. 1, § 4.1, p. 119.

0.01x + 10y = 10
0.1x -0.1y = 0

The exact answer is x = y = 0.999000999... But working to 3 significant figures, and rounding 100.1 to 100 in the first step:

\begin{bmatrix} 0.01 & 10 & 10\\ 0.1 & -0.1 & 0 \end{bmatrix} \rightarrow\begin{bmatrix} 0.01 & 10 & 10\\ 0 & -100 & -100 \end{bmatrix} \rightarrow\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 1 \end{bmatrix}

x = 0
y = 1

An error of 100%! But if we swap rows to bring the largest element in the relevant column into pivot position, to avoid the problem of having a small pivot, and again round to 3 significant figures where more appear:

\begin{bmatrix}0.1 & -0.1 & 0\\ 0.01 & 10 & 10\end{bmatrix} \rightarrow\begin{bmatrix}1 & -1 & 0\\ 0 & 999 & 1000\end{bmatrix} \rightarrow\begin{bmatrix}1 & 0 & 1\\ 0 & 1 & 1\end{bmatrix}

x = y = 1

This method he calls partial pivoting. He says there's also a more accurate but more complicated technique called complete pivoting, where columns are exchanged as well. But he says partial pivoting is "nearly always used in practice." And that's all I know about that...
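Griffel's example can be reproduced with a short sketch that simulates 3-significant-figure arithmetic (the `round_sig` and `solve_2x2` helpers are hypothetical names written for this illustration, not from any library):

```python
import math

def round_sig(x, sig=3):
    """Round x to `sig` significant figures, simulating limited-precision arithmetic."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

def solve_2x2(aug, pivot=False, sig=3):
    """Gaussian elimination on a 2x3 augmented matrix [A|b], rounding every
    intermediate result to `sig` significant figures."""
    r1, r2 = aug[0][:], aug[1][:]
    if pivot and abs(r2[0]) > abs(r1[0]):
        r1, r2 = r2, r1                    # partial pivoting: largest pivot on top
    m = round_sig(r2[0] / r1[0], sig)      # elimination multiplier
    r2 = [round_sig(r2[i] - m * r1[i], sig) for i in range(3)]
    y = round_sig(r2[2] / r2[1], sig)      # back-substitution
    x = round_sig((r1[2] - r1[1] * y) / r1[0], sig)
    return x, y

aug = [[0.01, 10, 10],    # 0.01x + 10y = 10
       [0.1, -0.1, 0]]    # 0.1x - 0.1y = 0

print(solve_2x2(aug, pivot=False))  # (0.0, 1.0) -- 100% error in x
print(solve_2x2(aug, pivot=True))   # (1.0, 1.0) -- close to the true 0.999...
```

Without the row swap the small pivot 0.01 produces a multiplier of 10, and the rounding error it amplifies wipes out x entirely; with the swap both unknowns come out correct to the working precision.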
 
