Linear least-squares method and row multiplication of a matrix

SUMMARY

The discussion centers on the linear least-squares method applied to an overdetermined system of equations written as Ax = b. When a row of the matrix A and the corresponding entry of the vector b are multiplied by a constant k, the resulting system Bx = d yields a different least-squares solution from the original system. This discrepancy arises because the least-squares method then weighs the modified equation more heavily, which changes the optimal solution. In contrast, consistent systems yield the same solution regardless of such row operations, since all equations can be satisfied exactly.

PREREQUISITES
  • Understanding of linear algebra concepts, specifically overdetermined systems.
  • Familiarity with the least-squares method for solving linear equations.
  • Knowledge of matrix operations, including row multiplication.
  • Basic proficiency in mathematical notation and vector representation.
NEXT STEPS
  • Study the derivation and applications of the least-squares method in various contexts.
  • Explore the implications of row operations on matrix equations in linear algebra.
  • Learn about consistent versus inconsistent systems and their solutions.
  • Investigate numerical methods for solving overdetermined systems, such as QR decomposition.
USEFUL FOR

Mathematicians, data scientists, and engineers involved in optimization problems, particularly those working with linear regression and numerical analysis.

Mesud1:
Suppose that I have an overdetermined equation system in matrix form:

Ax = b

where x and b are column vectors, A has the same number of rows as b, and x has fewer rows than both (A has more rows than columns).

The least-squares method could be used here to obtain the best possible approximate solution. Let's call this solution "c".
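(For reference, when A has full column rank this solution is given by the normal equations; this is the standard least-squares result, not something stated explicitly in the thread.)

```latex
% Normal equations for the least-squares solution c of Ax = b,
% assuming A has full column rank so that A^T A is invertible
A^{\mathsf{T}} A \, c = A^{\mathsf{T}} b
\qquad\Longrightarrow\qquad
c = \left(A^{\mathsf{T}} A\right)^{-1} A^{\mathsf{T}} b
```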

Now, suppose I multiply some row of the equation system by a constant k. Let's say this row is the second row. In that case, I must multiply the 2nd row of A by k, as well as the 2nd entry of b. This yields a new equation system, let's write it as:

Bx = d

If I use the method of least squares on the second system, I get a new solution that is different from c. Why is the solution different? Since I performed an elementary row operation on the first system to obtain the second system, shouldn't the two systems be equivalent, and therefore have the same least-squares solution?

When I did the same thing with a consistent system, I got the same solution for both systems.
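For concreteness, here is a minimal NumPy sketch of the situation described above; the 3x2 matrix A, the right-hand side b, and k = 10 are made-up illustrative values, not taken from the thread.

```python
# Minimal numerical sketch: scaling one row changes the least-squares solution
# of an inconsistent system, but not of a consistent one.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])               # inconsistent: no exact solution

c, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution of Ax = b

k = 10.0
B, d = A.copy(), b.copy()
B[1, :] *= k                                # scale the 2nd row of A ...
d[1] *= k                                   # ... and the 2nd entry of b
c_scaled, *_ = np.linalg.lstsq(B, d, rcond=None)

print(c)                    # approx. [0.667, 0.5]
print(c_scaled)             # different: the scaled equation dominates the fit

# With a consistent system, scaling a row does not change the solution:
b_exact = A @ np.array([1.0, 2.0])          # right-hand side chosen so Ax = b is exactly solvable
B2, d2 = A.copy(), b_exact.copy()
B2[1, :] *= k
d2[1] *= k
x1, *_ = np.linalg.lstsq(A, b_exact, rcond=None)
x2, *_ = np.linalg.lstsq(B2, d2, rcond=None)
print(np.allclose(x1, x2))                  # True
```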
 
The least-squares method minimizes the sum of the squared deviations of the left-hand side from the right-hand side. If you multiply one equation by k, that equation gets more weight in the sum and the optimal solution will be different. This doesn't happen if all the equations can be satisfied exactly.
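One way to see the extra weight explicitly (a sketch using the same made-up A, b, and k as in the snippet above): scaling row 2 of the system by k is equivalent to weighted least squares with weight k² on that equation, i.e. solving the weighted normal equations AᵀWAx = AᵀWb with W = diag(1, k², 1).

```python
# Sketch: scaling row 2 by k gives the same solution as weighted least squares
# with weight k**2 on that equation (same made-up A, b, k as above).
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])
k = 10.0

B, d = A.copy(), b.copy()
B[1, :] *= k
d[1] *= k

# Least-squares solution of the scaled system Bx = d
x_scaled, *_ = np.linalg.lstsq(B, d, rcond=None)

# Weighted normal equations: (A^T W A) x = A^T W b, with W = diag(1, k^2, 1)
W = np.diag([1.0, k**2, 1.0])
x_weighted = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

print(np.allclose(x_scaled, x_weighted))    # True
```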
 
DrDu said:
The least-squares method minimizes the sum of the squared deviations of the left-hand side from the right-hand side. If you multiply one equation by k, that equation gets more weight in the sum and the optimal solution will be different. This doesn't happen if all the equations can be satisfied exactly.

Makes perfect sense, thank you.
 
