Approximation theory
by a.mlw.walker
Tags: golden section, levenberg marquardt, parameter estimation

#19
Jul20-11, 04:18 PM

P: 153

Yeah, I got better too, but mine curves the other way (attached). That is using D H's idea that c0 is not a parameter in itself.
The website where you said you found the whole document says that T0 is the time of the initial revolution, on the page called "manual", near the bottom. What have you plotted on the y axis? I plotted the cumulative time.



#20
Jul20-11, 04:35 PM

HW Helper
P: 930





#21
Jul20-11, 04:48 PM

Mentor
P: 14,432





#22
Jul20-11, 05:19 PM

HW Helper
P: 930

What still bothers me is the expression T_{0} = t_{1} - t_{0}, because it seems to me t_{0} has to equal zero (based on equation #35). 



#23
Jul20-11, 05:48 PM

Mentor
P: 14,432

a.mlw.walker:
Unless you object, I'd like to move this back to Mathematics, where you originally asked about this topic. You didn't get any bites the first time around because you put the question in General Math and gave the thread a bad title ("please can people advise me on this!"). Threads in General Math whose titles are of the form "please help me", written entirely in lower case and ending with "!", are typically from students asking us to do their homework for them while they go toss down a beer. So, other than perhaps the mentor responsible for that section, nobody even looked at your post. This time around you got bites, despite having put the thread in the wrong place (this is not a MechE question), because you gave the thread a good title and kept the original post short and to the point. 



#24
Jul20-11, 05:55 PM

Mentor
P: 14,432

Now back to the question at hand:
I have noticed on occasion that a nonlinear multivariate fit doesn't seem to fit as well as I'd like and appears to leave some signal in the residuals. Polishing that initial fit with a second fit on a restricted set of variables oftentimes does the trick (e.g., as was done in this paper). That polishing is admittedly a bit ad hoc. If that adhocery doesn't work, I either resort to something even more ad hoc or I back up and try again with a drastically different technique. 
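For what it's worth, that two-stage "fit everything, then polish a restricted subset" workflow can be sketched in a few lines of SciPy. The exponential model, parameter values, and noise level below are made up purely for illustration; they are not the paper's equations.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical decaying-exponential model, just to illustrate the workflow.
def model(t, a, b, c):
    return c * np.exp(-a * t) + b

t = np.linspace(0.0, 5.0, 50)
y = model(t, 0.8, 0.3, 2.0) + 0.01 * rng.standard_normal(t.size)

# Stage 1: fit all three parameters at once.
res1 = least_squares(lambda p: model(t, *p) - y, x0=[1.0, 0.0, 1.0])
a1, b1, c1 = res1.x

# Stage 2 ("polishing"): freeze the nonlinear parameter a and refit only
# the remaining parameters, starting from the stage-1 values. Here stage 1
# already converges, so the polish mostly confirms the fit; it helps most
# when the first fit stalls short of the optimum.
res2 = least_squares(lambda p: model(t, a1, *p) - y, x0=[b1, c1])
b2, c2 = res2.x

sse1 = np.sum(res1.fun**2)   # sum of squared residuals, stage 1
sse2 = np.sum(res2.fun**2)   # stage 2 can only hold or lower this
```

Since stage 2 starts at the stage-1 solution and only accepts downhill steps, the polished residual can never be worse than the initial one.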



#25
Jul21-11, 04:38 AM

P: 153

Cool, we are getting to an agreement. Hotvette got the two and three parameter fits to agree. Hotvette, on the website where you found the actual document, there is a link to a page called "manual". Search for the line "t0: time of initial revolution" in the chapter called "3. Set Ball". That is why I think T0 is just the time of the initial revolution.

D H, if you would like to move it, do so by all means. Apart from it being specific to parameter estimation, I think the computing side of this problem could allow it to be considered engineering; however, I am not bothered. I don't even know if you have already moved it.

Your point on second fits: how do you know when a fit is good enough to not need a second fit? He does one, yes, but the curve looks so good; is there some way to determine whether a second fit is necessary?

Once we do/don't do a second fit, the next problem is equation #42. Can we talk about what he does here? He uses a and b and some approximations for theta_f to optimize for the three parameters. However, he says he can solve for the one nonlinear parameter first, and then for the linear parameters linearly. I have read about this, but I'm not sure how the method changes. What is the method of trisection here? Google can't find much on it, but I suspect it's similar to the golden section search, in that it finds a minimum: as you vary the nonlinear parameter over 0 to 2pi, there will be a minimum error to find. Out of interest, if you have linear and nonlinear parameters, can you solve for all of them nonlinearly and still get the correct linear parameters, or does it have to be done the way he mentions? That's only a side note, just wondering. Then the linear parameters can be found by any old means in the linear equation #43. 
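If trisection is, as guessed above, a sibling of golden-section search, then it is presumably ternary search: bracket the minimum of a unimodal function, but split the interval at the two third-points instead of at the golden ratio. A minimal sketch of that guess (the quadratic test function is a toy, not anything from the paper):

```python
# A guess at what "trisection" means: ternary search. Like golden-section
# search it assumes f is unimodal on [lo, hi], but it discards one third
# of the interval per step instead of ~38%.
def trisection_min(f, lo, hi, tol=1e-8):
    """Minimize a unimodal f on [lo, hi] by repeated trisection."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2   # the minimum lies in [lo, m2]
        else:
            lo = m1   # the minimum lies in [m1, hi]
    return 0.5 * (lo + hi)

import math

# Toy unimodal objective standing in for the error as a function of the
# nonlinear parameter on [0, 2*pi].
phi_min = trisection_min(lambda p: (p - 2.5)**2, 0.0, 2.0 * math.pi)
```

Golden section is slightly more efficient because it reuses one interior evaluation per step, but both shrink the bracket geometrically.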



#26
Jul21-11, 06:04 AM

P: 153

Guys, I have tried to use the golden section search for equation #41. It runs successfully but doesn't seem to produce a 'better' fit. What do you reckon? I used beta as ab^2 like he said, and c is a constant from the solution of the first part (attached).
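For checking against, here is a reference implementation of golden-section search, run on a toy one-parameter quadratic standing in for S(a); it is not the paper's equation #41.

```python
import math

# Golden-section search, assuming f is unimodal on [lo, hi]. Each step
# reuses one interior evaluation, so only one new f call is needed.
def golden_section_min(f, lo, hi, tol=1e-8):
    invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~= 0.618
    x1 = hi - invphi * (hi - lo)
    x2 = lo + invphi * (hi - lo)
    f1, f2 = f(x1), f(x2)
    while hi - lo > tol:
        if f1 < f2:
            # Minimum lies in [lo, x2]: shift the bracket left.
            hi, x2, f2 = x2, x1, f1
            x1 = hi - invphi * (hi - lo)
            f1 = f(x1)
        else:
            # Minimum lies in [x1, hi]: shift the bracket right.
            lo, x1, f1 = x1, x2, f2
            x2 = lo + invphi * (hi - lo)
            f2 = f(x2)
    return 0.5 * (lo + hi)

# Toy stand-in for S(a), with its minimum at a = 0.37.
a_min = golden_section_min(lambda a: (a - 0.37)**2 + 1.0, 0.0, 1.0)
```

If a correct golden-section search on equation #41 still worsens the equation #35 fit, the problem is the objective being searched, not the search itself.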




#27
Jul26-11, 04:48 AM

P: 153

Hey guys, have you gone on holiday?




#28
Jul26-11, 06:03 PM

HW Helper
P: 930

1. Step 1: equation #40 is intended to obtain initial estimates of all three parameters (not two) based on equation #35. Subsequent discussion refers to further refinement steps and has nothing to do with equations #35 and #40.
2. Step 2: parameter refinement for a & b using equation #41 (and the definition for c_{0}) isn't intended to get a better fit for equation #35. It is meant to get better estimates of a & b for use in step 3.
3. Step 3: using the refined estimates for a & b from step 2, obtain estimates for the remaining parameters using equation #43.

I believe the intention is to discard the previous equations at each new step. Re why the author chose the particular optimization method for each step, it is difficult to say. Perhaps the author tried several methods each time and found one that seemed to work better in each case.

I seem to recall you asking a question about a curve fitting situation where some parameters are linear and others are nonlinear. As far as I know, even if only 1 parameter is nonlinear they all need to be treated as nonlinear. Even if a least squares problem is linear in all unknowns, it can still be approached as a nonlinear problem (and solved in a single iteration).

Hope this helps. I really don't think there is anything more of use I can add. 
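The last point, that a problem linear in all unknowns can be treated as a nonlinear one and solved in a single iteration, is easy to verify: one Gauss-Newton step from any starting point lands exactly on the linear least-squares solution, because the Jacobian is constant. A small NumPy sketch on a toy straight-line model (not the paper's equations):

```python
import numpy as np

# For a model linear in its parameters, y ~= A @ p, one Gauss-Newton step
# from any start reaches the least-squares solution exactly.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([np.ones_like(t), t])        # model: y = p0 + p1*t
y = A @ np.array([2.0, -3.0]) + 0.01 * rng.standard_normal(t.size)

p = np.array([10.0, 10.0])                       # deliberately bad start
r = A @ p - y                                    # residual vector at p
step, *_ = np.linalg.lstsq(A, -r, rcond=None)    # Gauss-Newton step
p = p + step                                     # one iteration

p_direct, *_ = np.linalg.lstsq(A, y, rcond=None) # direct linear solve
```

Because the Jacobian of a linear model never changes, the Gauss-Newton normal equations are the linear least-squares normal equations, so `p` and `p_direct` agree to machine precision.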



#29
Jul27-11, 05:05 AM

P: 153

Great, thanks Hotvette. However, after equation #42, before #43, the author writes that using the method of trisection the value of phi can be found, and then the equation becomes a linear equation (#43). How would you find the value of phi without also finding the other parameters? Or is this what you are saying: solve it completely nonlinearly and then, using the value of phi, improve the other approximations linearly?
Did you see my graph above? Trying to improve the parameter a gave a worse fit. 



#30
Jul27-11, 02:56 PM

HW Helper
P: 930

Beyond that it's a bit fuzzy. Equation #41 looks like a single parameter problem (i.e. S(a) = xxx) to refine the value of 'a', thus the use of golden section or trisection. Beta is fixed using the values of a & b that were determined from equation #40. What isn't clear is whether c_{0} is considered fixed using the same values of a & b that were used for Beta or whether 'a' is still considered a variable. Once you solve equation #41, forget about #40 and #35, they no longer apply (that was my point in the previous post). What's the ultimate goal of this? Predict where the ball will land? 



#31
Jul27-11, 03:31 PM

P: 153

I suppose that's the ultimate goal. I am applying for a job in finance and have been advised that I need a very good grasp of estimation theory. After hunting the internet for more complex examples, I came across this years ago, so I thought I would try and solve them. My background is mechanical engineering, but usually when I need to fit a curve it's to a polynomial, not to an equation like in this paper.

I have read a little more and emailed the author, and have found out that eqn #41 is used to quantify the goodness of the fit, i.e. if a changes much then the fit is not very good. D H mentioned that this technique is an 'ad hoc' technique. As the original fit from equation #40 is better, I will use other methods to describe the goodness of the fit. Just looking at equation #40, the minimization objective usually is the sum of the differences between real data and theoretical data (squared); however, in equation #40 I can't tell which part is the real-data part. Can you see what I mean?

EDIT: Oh right, eqn #41 is not the sum of squares, it is the modulus of the sum. Why has the author written this differently? 



#32
Jul27-11, 06:04 PM

Mentor
P: 14,432

That said, it does appear after multiple readings of the text around equation #40 that equation #40 is a fit for three parameters. The factor c_{0} found with this fit is apparently tossed out.

This is perhaps a bit disparaging, but it appears that the author knows a limited number of optimization techniques. To overcome the limitations of those techniques he used a lot of ad hoc, ummm, stuff. There's no mention in the paper of the ruggedness of the optimization landscape or of any correlations between his chosen tuning parameters. I suspect there's a lot of nastiness going on, such as correlated coefficients and a rugged landscape with a long curving valley. Perhaps another technique would fare better. Simulated annealing or a biologically motivated technique such as ant search might well attack all of the parameters at once.

One more point: the author obviously had a lot more than five data points (noisy data points at that) at hand. Noisy data does tend to make for a rugged landscape. 
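Simulated annealing in this spirit is available off the shelf: SciPy's `dual_annealing` (a generalized simulated annealing) will usually dig itself out of the local traps of a rugged landscape while attacking all parameters at once. A sketch on Himmelblau's function, a standard multi-minimum test surface, purely illustrative and unrelated to the paper's objective:

```python
from scipy.optimize import dual_annealing

# Himmelblau's function has four separate minima, all with value 0; a
# plain local method lands in whichever basin the start point selects,
# while an annealing-style search hops between basins.
def himmelblau(p):
    x, y = p
    return (x**2 + y - 11.0)**2 + (x + y**2 - 7.0)**2

result = dual_annealing(himmelblau,
                        bounds=[(-6.0, 6.0), (-6.0, 6.0)],
                        seed=3)
```

The trade-off is cost: global stochastic searches spend many more function evaluations than a local fit, which matters once the model is expensive.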



#33
Jul28-11, 01:41 PM

P: 153

Hi, can I just ask what you think about the setup of equation #42? Usually with least squares methods you take the actual data minus the computed answer, square it, sum it, and find the minimum. What part of equation #42 is the actual-data part (apart from theta_f)? I am a little confused as to how to set up the MATLAB for this.

At the moment I am taking:

f = sum(abs(c1.*exp(-2*a*theta_f) + nu*((1 + 0.5*(4*a^2+1))*cos(theta_f+phi) - 2*a*sin(theta_f+phi)) + b^2 - wf^2))

That is the sum of the modulus of everything in the equation. I think this is what I am trying to minimize? I took your advice and decided I would try and solve for all the parameters in #42 in one go. I am again using the golden section search. However, one question: my wf^2 comes out the same as my 'nu', that is, if the upper and lower boundaries start the same. I just wondered how sensitive this method is to the start values, because changing the start values has a significant effect on the values it calculates. 



#34
Jul28-11, 09:10 PM

HW Helper
P: 930

I don't know where equation #42 came from but it isn't a least squares problem. It's just some function to be minimized with respect to several variables, meaning the partial derivatives of the function with respect to each variable need to be zero.
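That stationarity condition (every partial derivative zero at the minimum) is easy to check numerically with central differences. A toy quadratic, purely illustrative:

```python
# At an unconstrained minimum, the partial derivative with respect to
# each variable vanishes. Check it with central finite differences.
def f(x, y):
    return (x - 1.0)**2 + 3.0 * (y + 2.0)**2 + 5.0

x_star, y_star = 1.0, -2.0   # known minimizer of this toy function
h = 1e-6
df_dx = (f(x_star + h, y_star) - f(x_star - h, y_star)) / (2.0 * h)
df_dy = (f(x_star, y_star + h) - f(x_star, y_star - h)) / (2.0 * h)
```

The same check is a useful diagnostic after any optimizer finishes: if a finite-difference gradient at the reported solution is far from zero, the solver stopped short.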




#35
Jul29-11, 03:55 AM

P: 153

OK, so for this one, what method would you suggest? I know he does a trisection search (I have never used trisection; D H said it wasn't good, so I attempted using a golden section search).

However, I think I am running into what he is talking about. He said he solves for phi between 0 and 2pi using trisection, then solved equation #43 using other methods (I think by hand?). My values for nu and omega^2 always come out the same using this method, so I think this is why he did it his way. However, if I want to get all parameters at once, what method would you recommend? Can I use the downhill simplex for this, and just set it up as the modulus and the sum rather than squaring it? D H mentioned an ant search, which I have been reading about; however, I don't think I understand what the algorithm is doing. A nice robust method like that did sound appealing, though.

The data he uses for equation #42 is in the attached graph. It's pretty dirty... x axis is T0 (initial time), y axis is theta_f (fall angle). 

