Best fit curve using Q-R Factorization?


Homework Help Overview

The discussion revolves around using Q-R Factorization to determine a best fit curve, specifically in the context of minimizing the residuals in a least squares problem. The original poster expresses confusion regarding the connection between Q-R Factorization and best-fit lines while working on related homework.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning

Approaches and Questions Raised

  • Participants discuss the formulation of the problem, including minimizing ||r||^2 = ||b - Ax||^2, and the implications of using Q-R Factorization. There are attempts to clarify the relationship between the components of the factorization and the best-fit solution. Questions arise about how to solve the resulting equations and about the role of orthogonal matrices.

Discussion Status

The discussion is ongoing, with participants providing insights into the mathematical framework and correcting previous statements. Some guidance has been offered regarding the necessary steps to solve for the best fit parameters, though confusion remains about specific implementation details and the handling of matrices in coding contexts.

Contextual Notes

There are indications of potential challenges related to coding and matrix operations, particularly in the context of using non-square matrices and ensuring proper dimensions for multiplication. The original poster is working under a deadline, which may influence the urgency of the discussion.

Jamin2112

Homework Statement



screen-capture-2-15.png


Homework Equations



screen-capture-4-8.png


The Attempt at a Solution



So ... It's part (a) that is confusing me. I already factored the matrix into Q and R. But what does the Q-R Factorization have to do with best-fit lines? (To be fair, I'm working on homework that's due next Friday, so this might be something we go over later in the week.)
 


You will want to find an x that minimizes [tex]||r||^2 = ||b-Ax||^2[/tex] where A is an mxn matrix and m>n. But A=QR, so r=b-QRx, which is equivalent to Q^T r = Q^T b - Rx. Observe then that:
[tex]Q^T r = \left[ {\begin{array}{c} (Q^Tb)_n - R_n x \\ (Q^T b)_{m-n} \end{array} } \right] = \left[ {\begin{array}{c} u \\ v \end{array} } \right][/tex]
where
[tex]R=\left[ {\begin{array}{c} R_n \\ 0 \end{array} } \right][/tex]
i.e. R is padded with zeros so that the dimensions match for multiplication.
Also observe that
||r||^2 = r^T r = r^T QQ^T r = u^T u + v^T v. Since v is independent of x, ||r||^2 is minimal when u = 0, i.e. when
[tex]R_n x = (Q^T b)_n[/tex].
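The derivation above can be sketched in NumPy. This is my own illustration, not code from the thread, and the data here is made up; it shows that solving the triangular system R_n x = (Q^T b)_n reproduces the least-squares solution:

```python
import numpy as np

# Hypothetical overdetermined system: m=5 equations, n=2 unknowns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.0, 2.7, 5.8, 6.6, 9.1])

# Full QR: Q is m-by-m orthogonal, R is m-by-n with zeros below row n.
Q, R = np.linalg.qr(A, mode="complete")
n = A.shape[1]

R_n = R[:n, :]           # the n-by-n upper-triangular block R_n
u_rhs = (Q.T @ b)[:n]    # (Q^T b)_n

# Minimizing ||b - Ax||^2 reduces to the square triangular system
# R_n x = (Q^T b)_n.
x = np.linalg.solve(R_n, u_rhs)

# Direct least-squares solution, for comparison.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The v block of Q^T b never enters the solve; its norm is exactly the residual ||r|| at the minimum, which is why the method also gives the fit error for free.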
 
Last edited:


Wingeer said:
You will want to find an x that minimizes [tex]||r||^2 = ||b-Ax||^2[/tex] where A is an mxn matrix and m>n. But A=QR, so r=b-QRx, which is equivalent to Q^T r = Q^T b - Rx. Observe then that:
[tex]Q^T r = \left[ {\begin{array}{c} (Q^Tb)_n - R_n x \\ (Q^T b)_{m-n} \end{array} } \right] = \left[ {\begin{array}{c} u \\ v \end{array} } \right][/tex]
where
[tex]R=\left[ {\begin{array}{c} R_n \\ 0 \end{array} } \right][/tex]
i.e. R is padded with zeros so that the dimensions match for multiplication.
Also observe that
||r||^2 = r^T r = r^T QQ^T r = u^T u + v^T v. Since v is independent of x, ||r||^2 is minimal when u = 0, i.e. when
[tex]R_n x = Q^T b_n[/tex].

Still a little confused here. I'm minimizing ||b - QRx||, of course. I know that an orthonormal matrix's inverse is equal to its transpose, and I know that the inverse of an upper triangular matrix is easy to calculate ... how does this all come together?
 


It all comes together in solving the equation [tex]R_n x = (Q^T b)_n[/tex] for x, as I derived. Note the mistake in the last equation of my previous post, which I have corrected in this post. The solution to that system will be the best fit: in your case x is (alpha, beta, gamma), R_n is the upper-triangular block of R without the zero rows, and b is the vector of y values.
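The step described here can be sketched concretely. The sample data below is invented (the actual values are in the homework screenshot, not reproduced in the thread), and the fitted model y ≈ alpha + beta·t + gamma·t² is an assumption based on the three unknowns mentioned:

```python
import numpy as np

# Invented sample points standing in for the homework data.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 7.2, 13.1, 20.8, 31.0])

# Design matrix for y ~ alpha + beta*t + gamma*t^2: one column per unknown.
A = np.column_stack([np.ones_like(t), t, t**2])

# Full QR factorization, then solve R_n x = (Q^T b)_n.
Q, R = np.linalg.qr(A, mode="complete")
n = A.shape[1]
x = np.linalg.solve(R[:n, :], (Q.T @ y)[:n])

alpha, beta, gamma = x
```

Each column of A corresponds to one basis function of the fit, so a different model (say, adding a cubic term) only changes how A is built, not the QR machinery.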
 


Hear me out, brah.

So this one website gave a simple explanation:

screen-capture-4-9.png





Simple enough. I decided to use it on the following problem.


screen-capture-2-17.png





My code looked like the following.

screen-capture-3-22.png






And of course it isn't working because I can't use the backslash command with a non-square matrix.
 


 
Last edited by a moderator:


Hi Jamin2112!

Nice pics! :smile:

The method to use QR decomposition to find a least squares fit is described here:
http://en.wikipedia.org/wiki/Numeri...east_squares#Orthogonal_decomposition_methods

Note that this article is supposedly about linear least squares, but this method applies in your case as well.

[edit] Oh, I see this is exactly what Wingeer described. :blushing:
I presume your problem is to get it to work in matlab? [/edit]

[edit2] Please google "back substitution in matlab" or something like that. [/edit2]
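What that search turns up is the standard triangular solve. A minimal sketch (mine, not from the thread; written in NumPy rather than MATLAB for illustration) of back substitution on the square block R_n:

```python
import numpy as np

def back_substitution(R, c):
    """Solve R x = c, where R is square and upper triangular."""
    n = len(c)
    x = np.zeros(n)
    # Work from the last row up: row i involves only x[i], ..., x[n-1],
    # and the later entries are already known.
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x

# Small triangular system as a check.
R = np.array([[2.0, 1.0],
              [0.0, 3.0]])
c = np.array([5.0, 6.0])
x = back_substitution(R, c)
```

Since R_n from the QR factorization is exactly this shape, no general-purpose (or non-square) solve is needed at this step.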
 
Last edited:


Your code is good. The only thing you have to do is pad the matrices with zeros so that the dimensions match for multiplication.
 
