Best fit curve using Q-R Factorization?

Summary
The discussion centers on using Q-R factorization to derive a best-fit curve, particularly in the context of minimizing the residuals in a least squares problem. Participants clarify that the goal is to solve for x in the equation R_n x = Q^T b_n, where R_n is the upper triangular part of the R matrix. There is confusion regarding the implementation in MATLAB, specifically about the limitations of using backslash commands with non-square matrices. Suggestions include ensuring matrices are padded with zeros for proper multiplication and referencing external resources for clarification on QR decomposition methods. The conversation emphasizes the connection between Q-R factorization and least squares fitting.
Jamin2112

Homework Statement



[Attached image: screen-capture-2-15.png]


Homework Equations



[Attached image: screen-capture-4-8.png]


The Attempt at a Solution



So ... it's part (a) that is confusing me. I already factored it into Q and R. But what does the Q-R factorization have to do with best-fit lines? (To be fair, I'm working on homework that's due next Friday, so this might be something we go over later in the week.)
 


You will want to find an x that minimizes ||r||^2, where r = b - Ax, A is an m x n matrix, and m > n. But A = QR, so r = b - QRx, which is equivalent to Q^T r = Q^T b - Rx. Observe then that:
Q^T r = \begin{bmatrix} (Q^T b)_n - R_n x \\ (Q^T b)_{m-n} \end{bmatrix} = \begin{bmatrix} u \\ v \end{bmatrix}
where
R = \begin{bmatrix} R_n \\ 0 \end{bmatrix}
i.e. R is padded with zeros so that it will multiply.
Also observe that
||r||^2 = r^T r = r^T Q Q^T r = u^T u + v^T v. But since v is independent of x, we find the minimal value of ||r||^2 when u = 0, i.e. when
R_n x = (Q^T b)_n.
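This derivation can be checked numerically. Here is a minimal sketch in NumPy (standing in for MATLAB); the matrix A and vector b are made-up illustrative data, not the data from the attached problem:

```python
import numpy as np

# Overdetermined system: m = 5 samples, n = 2 unknowns (made-up data)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([0.9, 3.1, 4.8, 7.2, 9.1])

# Full QR: Q is m x m orthogonal, R is m x n with zero rows below R_n
Q, R = np.linalg.qr(A, mode="complete")

n = A.shape[1]
Rn = R[:n, :]           # R_n: the square upper-triangular block
rhs = (Q.T @ b)[:n]     # (Q^T b)_n

# Solve R_n x = (Q^T b)_n (a triangular solve / back substitution)
x = np.linalg.solve(Rn, rhs)

# Agrees with the built-in least-squares solver
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```

The last assertion confirms that solving the small triangular system gives the same answer as minimizing ||b - Ax||^2 directly.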
 
Last edited:


Wingeer said:
You will want to find an x that minimizes ||r||^2, where r = b - Ax, A is an m x n matrix, and m > n. But A = QR, so r = b - QRx, which is equivalent to Q^T r = Q^T b - Rx. Observe then that:
Q^T r = \begin{bmatrix} Q^T b_n - R_n x \\ Q^T b_{m-n} \end{bmatrix} = \begin{bmatrix} u \\ v \end{bmatrix}
where
R = \begin{bmatrix} R_n \\ 0 \end{bmatrix}
i.e. R is padded with zeros so that it will multiply.
Also observe that
||r||^2 = r^T r = r^T Q Q^T r = u^T u + v^T v. But since v is independent of x, we find the minimal value of ||r||^2 when u = 0, i.e. when
R_n x = Q^T b_n.

Still a little confused here. I'm minimizing ||b - QRx||, of course. I know that an orthogonal matrix's inverse is equal to its transpose, and I know that the inverse of an upper triangular matrix is easy to calculate ... how does this all come together?
 


It all comes together in solving the equation R_n x = (Q^T b)_n for x, as I derived. Note the mistake in the last equation of my previous post, which I have corrected here. The solution to that system will be the best fit; in your case x is (alpha, beta, gamma), R_n is the square upper block of R (the part without the zero rows), Q transposed is Q transposed, and b is the vector of y-values.
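Since the actual data is in an attachment, here is a hypothetical three-parameter fit carried through the same steps in NumPy; the quadratic model y ≈ alpha + beta*t + gamma*t^2 and the (t, y) values are assumptions, purely for illustration:

```python
import numpy as np

# Hypothetical data; the real (t, y) values are in the attached problem
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 7.2, 13.1, 20.8, 31.2])

# Design matrix A for y ~ alpha + beta*t + gamma*t^2
A = np.column_stack([np.ones_like(t), t, t**2])

# Economy-size QR gives the square R_n and the first n columns of Q directly
Q, R = np.linalg.qr(A)            # Q: m x n, R: n x n upper triangular

# Solve R_n x = Q^T y for x = (alpha, beta, gamma)
x = np.linalg.solve(R, Q.T @ y)
alpha, beta, gamma = x
```

The same three numbers come out of any least-squares solver; the QR route just makes the final solve a cheap triangular one.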
 


Hear me out, brah.

So this one website gave a simple explanation:

[Attached image: screen-capture-4-9.png]





Simple enough. I decided to use it on the following problem.


[Attached image: screen-capture-2-17.png]





My code looked like the following.

[Attached image: screen-capture-3-22.png]






And of course it isn't working because I can't use the backslash command with a non-square matrix.
 




Hi Jamin2112!

Nice pics! :smile:

The method to use QR decomposition to find a least squares fit is described here:
http://en.wikipedia.org/wiki/Numeri...east_squares#Orthogonal_decomposition_methods

Note that this article is supposedly about linear least squares, but this method applies in your case as well.

[edit] Oh, I see this is exactly what Wingeer described. :blushing:
I presume your problem is to get it to work in matlab? [/edit]

[edit2] Please google "back substitution in matlab" or something like that. [/edit2]
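For reference, back substitution itself is only a few lines. A sketch in NumPy (MATLAB's triangular solve does the same thing under the hood); the matrix and right-hand side here are made up:

```python
import numpy as np

def back_substitute(R, c):
    """Solve R x = c for an upper-triangular, nonsingular R."""
    n = len(c)
    x = np.zeros(n)
    # Work upward from the last row: each step uses the x's already found
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]
    return x

# Example upper-triangular system (made-up numbers)
R = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0,  2.0],
              [0.0, 0.0,  4.0]])
c = np.array([1.0, 8.0, 8.0])
x = back_substitute(R, c)
assert np.allclose(R @ x, c)
```

This is exactly the solve you need for R_n x = (Q^T b)_n once the QR factorization is in hand.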
 
Last edited:


Your code is good. The only thing you have to do is pad the matrices with zeros so that they can multiply.
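The padding is what the full (complete) QR gives you automatically; the economy-size QR (qr(A,0) in MATLAB) skips it and returns the square R_n directly. A NumPy comparison of the two, on a random made-up matrix:

```python
import numpy as np

A = np.random.default_rng(0).normal(size=(6, 3))
m, n = A.shape

# Full QR: R is m x n, i.e. R_n padded with m - n zero rows
Qf, Rf = np.linalg.qr(A, mode="complete")
assert Rf.shape == (m, n)
assert np.allclose(Rf[n:, :], 0)        # the zero padding

# Economy QR: only the square R_n and the first n columns of Q
Qr, Rr = np.linalg.qr(A)
assert Rr.shape == (n, n)

# Both reconstruct A, and the top block of the full R matches R_n
assert np.allclose(Qf @ Rf, A)
assert np.allclose(Qr @ Rr, A)
assert np.allclose(np.abs(Rf[:n, :]), np.abs(Rr))
```

Either form works; with the economy size there is nothing to pad, and the triangular solve uses R_n as-is.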
 
