Forcing a Least Squares Polynomial through a Fixed Point
by KayWarner
Tags: fit, polynomial, regression

#1
Aug 22, 2011, 04:23 AM

P: 3

Hi guys,
Thanks for taking the time to read the post. I have a question related to curve fitting and polynomials that I was hoping someone might be able to help me with. I have a set of x and y data points on a graph, and I have calculated the 4th-order least squares polynomial for those points, which gives me an equation for that fit. I would now like to go one step further and calculate what the least squares polynomial will be if I force the fit line through the last data point. Does anyone have a reference or any advice on how to do this? Many thanks for your help in advance, Kay



#2
Aug 22, 2011, 08:54 PM

Mentor
P: 11,984

Welcome to Physics Forums.
I would shift the data so that the last data point is at the origin. Then fit a polynomial with the constant term set to 0, which forces the fit to pass through that point at the origin. Finally, do a reverse shift on that polynomial to match it up with the original data. Hope that helps.
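A minimal numpy sketch of this shift-fit-unshift idea (the function name and the choice of `np.linalg.lstsq` are my own; the thread does not prescribe a library):

```python
import numpy as np

def fit_through_last_point(x, y, degree=4):
    """Shift the data so the fixed (last) point sits at the origin,
    fit a polynomial with no constant term, then shift back."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x0, y0 = x[-1], y[-1]
    xs, ys = x - x0, y - y0  # the shift

    # Design matrix with columns x'^degree, ..., x'^1 (no constant column).
    A = np.vstack([xs**k for k in range(degree, 0, -1)]).T
    coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)

    def model(t):
        # Reverse shift: y = P(x - x0) + y0, where P has zero constant term.
        return np.polyval(np.append(coeffs, 0.0),
                          np.asarray(t, dtype=float) - x0) + y0

    return coeffs, model
```

Because the constant term of the shifted fit is exactly zero, `model(x[-1])` returns `y[-1]` up to floating-point round-off.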



#3
Aug 23, 2011, 08:05 AM

P: 3

Thank you so much Redbelly, that is an outstanding answer! I think I may have been trying to overcomplicate the solution to this.
So as I understand it: (1) calculate the original polynomial, which will yield an equation of the form ax^4 + bx^3 + cx^2 + dx + e; (2) reverse the data so that the last data point is now first; (3) set (in my case) e = 0; (4) recalculate the polynomial using the first data point as (0,0). This should give me a new polynomial fit that passes right through the origin? The only bit I am unclear about is the reverse shift: what do you mean by a reverse shift? Many thanks for your help, Kay



#4
Aug 23, 2011, 08:24 AM

P: 744

Hello!
If the fixed point is (a, b), with a and b given constants, simply write the polynomial as y(x) = b + (x - a)*P(x), where P(x) is a polynomial whose coefficients can be computed by least squares fitting. Alternatively, you may define Y(x) = (y - b)/(x - a), so that each value of Y can first be computed from each given pair (x, y); then the coefficients of the polynomial P can be computed by least squares fitting. (But this is not convenient if (x - a) = 0, or too close to 0, for a particular point.) The two methods will lead to slightly different polynomials b + (x - a)*P(x), because the least squares criteria are not the same. But, generally, the difference in fit will be quite small.
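A sketch of the first variant, writing y(x) = b + (x - a)*P(x) and fitting P's coefficients by linear least squares (the function name and sample set-up are mine, not from the post):

```python
import numpy as np

def fit_factored(x, y, a, b, degree=4):
    """Fit y(x) = b + (x - a) * P(x), with P of degree (degree - 1),
    by linear least squares on P's coefficients. The fit passes
    through (a, b) exactly by construction."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    d = x - a
    # Columns: (x-a)*x^(degree-1), ..., (x-a)*x, (x-a).
    A = np.vstack([d * x**k for k in range(degree - 1, -1, -1)]).T
    p, *_ = np.linalg.lstsq(A, y - b, rcond=None)

    def model(t):
        t = np.asarray(t, dtype=float)
        return b + (t - a) * np.polyval(p, t)

    return p, model
```

At t = a the factor (t - a) vanishes, so the model returns b exactly regardless of the fitted coefficients, which is precisely the point of this parameterization.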



#5
Aug 23, 2011, 03:31 PM

HW Helper
P: 930

In general you are talking about least squares with equality constraints, which can be solved using Lagrange Multipliers. But, with only one constraint, there is an easier approach. Let n denote the last point (i.e. the one you want to fit exactly). Then you have:
y_{n} = ax_{n}^{4} + bx_{n}^{3} + cx_{n}^{2} + dx_{n} + e, or e = y_{n} - ax_{n}^{4} - bx_{n}^{3} - cx_{n}^{2} - dx_{n}. If you substitute this expression for e into your original equation, you end up with a function of only 4 parameters (a, b, c, d) which can be used for your least squares fit.
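Substituting the constraint gives y_i - y_n = a(x_i^4 - x_n^4) + b(x_i^3 - x_n^3) + c(x_i^2 - x_n^2) + d(x_i - x_n), an unconstrained least squares problem in (a, b, c, d). A possible numpy implementation (names are mine):

```python
import numpy as np

def fit_eliminate_constant(x, y, degree=4):
    """Constrained fit through the last point by eliminating e:
    regress (y_i - y_n) on (x_i^k - x_n^k) for k = degree..1,
    then recover e from the constraint."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xn, yn = x[-1], y[-1]
    A = np.vstack([x**k - xn**k for k in range(degree, 0, -1)]).T
    abcd, *_ = np.linalg.lstsq(A, y - yn, rcond=None)
    # e = y_n - a*x_n^4 - b*x_n^3 - c*x_n^2 - d*x_n
    e = yn - sum(c * xn**k for c, k in zip(abcd, range(degree, 0, -1)))
    return np.append(abcd, e)  # highest power first, np.polyval order
```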



#6
Aug 23, 2011, 03:37 PM

Emeritus
Sci Advisor
PF Gold
P: 16,101

If you're lazy and close is good enough, you could just put a near-infinite weight on the error at the point you want your curve to pass through.
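This heavy-weight trick maps directly onto the `w` argument of `np.polyfit`; a sketch (the weight value 1e8 is an arbitrary illustrative choice, and the curve passes near, not exactly through, the point):

```python
import numpy as np

def fit_with_heavy_weight(x, y, degree=4, w=1e8):
    """Approximate the constraint by giving the last point a huge
    weight in np.polyfit's weighted least squares."""
    weights = np.ones(len(x))
    weights[-1] = w  # dominates the residual at the fixed point
    return np.polyfit(x, y, degree, w=weights)
```

Note that very large weights worsen the conditioning of the normal equations, so this is a quick hack rather than an exact constraint.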




#7
Aug 26, 2011, 03:58 PM

P: 3

Thank you so much for the help guys, it's really helped me out loads!
It's a huge tribute to the forum that I could get such accurate and comprehensive guidance. Many thanks to all!!



#8
Aug 27, 2011, 10:08 AM

Mentor
P: 11,984

Glad you got other good suggestions.
1. You have some (x, y) data values: (x1, y1), (x2, y2), ..., (xN, yN).
2. Make a new set of data, let's call these (x', y'), by subtracting (xN, yN) from each (x, y) value. (This is the shift.) For example, (x1 - xN, y1 - yN) = (x1', y1').
3. Fit a polynomial, without the constant term, to the new data (x', y'). Since there is no constant term, the fit will contain the point (xN', yN') = (0, 0), as required. E.g., for a cubic you would have y' = ax'^3 + bx'^2 + cx'.
4. We want to get an equation for x and y from the equation we have for x' and y'. This is the reverse shift. Using the substitutions x' = x - xN and y' = y - yN, we get y - yN = a(x - xN)^3 + b(x - xN)^2 + c(x - xN), or y = a(x - xN)^3 + b(x - xN)^2 + c(x - xN) + yN. You could either expand out the (x - xN)^n terms, or leave them in that form.
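A short numeric walk-through of these four steps for the cubic case (the sample data here is made up purely for illustration):

```python
import numpy as np

# Step 1: some (x, y) data values (hypothetical sample data).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
y = np.array([1.0, 1.3, 2.1, 2.8, 4.2, 5.9])
xN, yN = x[-1], y[-1]

# Step 2: the shift, moving the last point to the origin.
xs, ys = x - xN, y - yN

# Step 3: fit y' = a x'^3 + b x'^2 + c x' (no constant term).
A = np.vstack([xs**3, xs**2, xs]).T
a, b, c = np.linalg.lstsq(A, ys, rcond=None)[0]

# Step 4: reverse shift, back to original coordinates.
def fit(t):
    return a*(t - xN)**3 + b*(t - xN)**2 + c*(t - xN) + yN
```

At t = xN every shifted term vanishes, so `fit(xN)` equals `yN` exactly, confirming that the curve passes through the last data point.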



#9
Aug 27, 2011, 10:12 AM

P: 3,015

If your polynomial is of degree N, having one fixed point imposes a definite relation among the N + 1 coefficients, so they are no longer independent. You may want to use the Lagrange multiplier method when differentiating the sum of the squared errors to find its minimum (the least squares method).
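One way to sketch the Lagrange multiplier route numerically: minimize ||Ac - y||^2 subject to v·c = y_n, where v is the row of powers of x_n, by solving the resulting KKT system (2AᵀA c + λv = 2Aᵀy, v·c = y_n) directly. The function name and set-up are mine:

```python
import numpy as np

def fit_lagrange(x, y, degree=4):
    """Least squares fit constrained through the last point via a
    single Lagrange multiplier: solve the (degree+2)-dim KKT system."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.vander(x, degree + 1)          # columns x^degree, ..., x^0
    v = np.vander(x[-1:], degree + 1)[0]  # constraint row at x_n
    n = degree + 1
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2.0 * A.T @ A  # stationarity block
    K[:n, n] = v               # multiplier column
    K[n, :n] = v               # constraint row
    rhs = np.concatenate([2.0 * A.T @ y, [y[-1]]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # polynomial coefficients, highest power first
```

With only one constraint this gives the same answer as the substitution trick in post #5, so the extra machinery mainly pays off when there are several constraints.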


