Can the Least Squares Method be expressed as a convolution?

Summary:
The discussion explores the possibility of expressing the Least Squares Method (LSM) as a convolution by transitioning from a summation to an integral form. The author simplifies the function by ignoring certain parameters and reformulating it, ultimately isolating a constant term that does not depend on the variable of interest. They identify a convolution term in the equation but struggle to determine the existence of a kernel function. The author expresses confusion regarding their initial logic and seeks assistance in resolving their dilemma about the kernel's existence. The discussion highlights the complexities involved in connecting LSM with convolution concepts.
Daniel Petka
Homework Statement
Consider estimating the position of a laser line by fitting with the Least Squares Method (LSM). Prove (or disprove) that this can be considered as a convolution with some function, with the center found by looking for the maximum (the zero crossing of the derivative). What is the smoothing function?

The Least Squares Method (LSM) is defined as:
$$\sum_i[S(x_i)-F(x_i;a,b,\dots)]^2=\min,$$
where the fitting function is:
$$F(x;y_0,A,x_c,w)=y_0+A\cdot g(x-x_c,w)$$

The fit program will adjust all parameters, but we are interested only in ##x_c##.

Hint: change the sums to integrals in the LSM description!
Relevant Equations
fitting function: ##F(x;y_0,A,x_c,w)=y_0+A\cdot g(x-x_c,w)##
convolution: ##f(x)=\int S(x-y)K(y)dy##
Least Squares Method: ##\sum_i[S(x_i)-F(x_i;a,b,\dots)]^2=\min##

I started by converting the LSM from sum to integral form:
$$f(x_c) = \sum_i[S(x_i)-F(x_i;a,b,\dots)]^2 \quad\to\quad f(x_c) = \int \big(S(x) - F(x-x_c)\big)^2\, dx$$
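
Here I am implicitly assuming uniformly spaced samples ##x_i## with spacing ##\Delta x##, so the sum is (up to the constant factor ##\Delta x##, which doesn't move the minimum) a Riemann sum:
$$\Delta x\sum_i\big[S(x_i)-F(x_i-x_c)\big]^2 \;\to\; \int\big[S(x)-F(x-x_c)\big]^2\,dx \quad (\Delta x \to 0)$$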

Since we are not interested in the other parameters (like the offset), I assumed that they are fitted correctly and ignored them (i.e. I took ##y_0 = 0## and ##A = 1##), turning ##F(x-x_c)## directly into ##g(x-x_c)##.

Then I expanded the square as follows:
$$\int \big[S(x)^2 - 2S(x)\,g(x-x_c) + g(x-x_c)^2\big]\, dx$$

And used the linearity of the integral to isolate the part of the equation that doesn't depend on ##x_c##:
$$ f(x_c) = \int S(x)^2\, dx + \int \big[-2S(x)\,g(x-x_c) + g(x-x_c)^2\big]\, dx$$
Hence we have a constant ##q = \int S(x)^2\, dx## that doesn't depend on ##x_c##:

$$ f(x_c) = q + \int \big[-2S(x)\,g(x-x_c) + g(x-x_c)^2\big]\, dx$$
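
For reference (a standard identity, assuming the integral runs over all of ##\mathbb{R}##), the cross term can be written in the convolution notation from the relevant equations by substituting ##y = x_c - x##:
$$\int S(x)\,g(x-x_c)\,dx = \int S(x_c-y)\,g(-y)\,dy,$$
i.e. a convolution of ##S## with the flipped function ##K(y)=g(-y)##.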

The middle term is a cross-correlation of the two functions, which by the identity above can be written as a convolution with the flipped function. My idea was to disprove that a kernel exists, because there is a term that doesn't depend on ##x_c##, but this logic doesn't make any sense after thinking about it. I am completely stuck at this point, since I can neither prove nor disprove that the kernel function exists. A numerical sketch of what I'm asking is below. Any help would be highly appreciated!
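
To make the question concrete, here is a quick numerical sketch (the Gaussian line shape, the synthetic signal, and all names are my own assumptions, not part of the problem statement) that scans the trial center ##x_c## and compares the LSM objective with the cross-correlation term:

```python
# Hedged numerical sketch: does the minimum of the least-squares
# objective f(x_c) coincide with the maximum of the cross-correlation
# term? Signal, kernel, and parameters below are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-10.0, 10.0, 2001)   # sample grid
dx = x[1] - x[0]
w = 1.0                              # assumed line width
x_true = 1.3                         # assumed true line center

def g(u, w):
    """Assumed unit-amplitude Gaussian line shape g(u, w)."""
    return np.exp(-0.5 * (u / w) ** 2)

# Synthetic measured signal: shifted line plus a little noise
S = g(x - x_true, w) + 0.02 * rng.standard_normal(x.size)

centers = np.linspace(-5.0, 5.0, 801)   # trial centers x_c
f = np.empty_like(centers)              # LSM objective f(x_c)
cc = np.empty_like(centers)             # cross-correlation term

for i, xc in enumerate(centers):
    gs = g(x - xc, w)
    f[i] = np.sum((S - gs) ** 2) * dx   # integral form of the LSM sum
    cc[i] = np.sum(S * gs) * dx         # the "middle term" above

print(f"argmin of f(x_c):            {centers[np.argmin(f)]:.3f}")
print(f"argmax of cross-correlation: {centers[np.argmax(cc)]:.3f}")
print(f"true center:                 {x_true:.3f}")
```

If the two extrema coincide, that would support the convolution picture; if they don't, that would support my doubt about the kernel's existence.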
 