Linear least-squares method and row multiplication of a matrix

AI Thread Summary
In an overdetermined system Ax = b, the least-squares method provides the best approximate solution, call it "c." If a row of the system is scaled by a constant k, both the corresponding row of A and the corresponding entry of b change, giving a new system Bx = d. Because least squares minimizes the sum of squared residuals, the scaled equation carries more weight in that sum, so the new system generally has a different least-squares solution than c. Only for a consistent system, where every equation can be satisfied exactly, do the two solutions coincide.
Mesud1
Suppose I have an overdetermined system of equations in matrix form:

Ax = b

Where x and b are column vectors, A has the same number of rows as b, and x has fewer rows than either.

The least-squares method can be used here to obtain the best possible approximate solution. Let's call this solution "c".

Now, suppose I multiply some row of the system by a constant k. Let's say this row is the second row. In that case, I must multiply the 2nd row of A by k, as well as the 2nd entry of b. This yields a new system, which we can write as:

Bx = d

If I use the method of least squares on the second system, I get a new solution that is different from c. Why is the solution different? Since I performed an elementary row operation on the first system to obtain the second system, shouldn't the two systems be equivalent, and therefore have the same least-squares solution?

When I did the same thing with a consistent system, I got the same solution for both systems.
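To make the setup concrete (a minimal formalization; the diagonal matrix ##D## below is introduced here for illustration and does not appear in the posts): the least-squares solution ##c## of the first system is characterized by the normal equations
$$A^T A\, c = A^T b.$$
Multiplying the 2nd row by ##k## amounts to left-multiplying the system by ##D = \mathrm{diag}(1, k, 1, \dots, 1)##, so ##B = DA## and ##d = Db##, and the normal equations of the new system read
$$B^T B\, x = B^T d \;\Longleftrightarrow\; A^T D^2 A\, x = A^T D^2 b,$$
a weighted least-squares problem. Its solution generally differs from ##c##, though the two coincide whenever an exact solution exists, since that solution satisfies both systems row by row.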
 
The least-squares method minimizes the sum of the squared deviations of the left-hand side from the right-hand side. If you multiply one equation by k, that equation gets more weight in the sum, and the optimal solution will in general be different. This doesn't happen if all equations can be satisfied exactly.
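A quick numerical check of this explanation (a minimal sketch using NumPy; the matrices below are made up for illustration and are not from the thread):

```python
import numpy as np

# Overdetermined, inconsistent system: 3 equations, 2 unknowns.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])  # no x satisfies all three equations exactly

# Least-squares solution c of Ax = b.
c, *_ = np.linalg.lstsq(A, b, rcond=None)

# Multiply the 2nd row of A and the 2nd entry of b by k.
k = 10.0
B, d = A.copy(), b.copy()
B[1] *= k
d[1] *= k
c_scaled, *_ = np.linalg.lstsq(B, d, rcond=None)

print(c)         # approx [0.333, 0.333]
print(c_scaled)  # approx [0.005, 0.990] -- the scaled equation dominates

# With a consistent system, scaling a row does not change the solution:
b_cons = A @ np.array([2.0, -1.0])  # consistent by construction
c1, *_ = np.linalg.lstsq(A, b_cons, rcond=None)
B2, d2 = A.copy(), b_cons.copy()
B2[1] *= k
d2[1] *= k
c2, *_ = np.linalg.lstsq(B2, d2, rcond=None)
print(np.allclose(c1, c2))  # True
```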
 
DrDu said:
The least-squares method minimizes the sum of the squared deviations of the left-hand side from the right-hand side. If you multiply one equation by k, that equation gets more weight in the sum, and the optimal solution will in general be different. This doesn't happen if all equations can be satisfied exactly.

Makes perfect sense, thank you.
 