I don't know if the method has a name; it is just one of the standard types of problem examined in an optimization course, for example. I constructed a fake example with points X1 = [0.5, 1.2, 3.1, 3.8, 4.5] for the first list and X2 = [6, 7.2, 8.1, 8.9, 9.3] for the second list. So, we need 4.5 <= r <= 6 in the previous notation. If you know the value of r, you can set dS/da = 0, etc., and solve the resulting linear system. If r is also unknown, you also need to use the condition dS/dr = 0. This gives a slightly nonlinear system to solve, which might be nasty in some cases.
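The fixed-r case can be sketched in a few lines. Since the original post never writes S out, the model form below (a continuous piecewise-linear fit with a slope change c at the breakpoint r) and the y-values are my assumptions, chosen only to illustrate the fixed-r linear least-squares solve:

```python
import numpy as np

# Assumed model (not specified in the original post):
#   f(x) = a + b*x + c*max(x - r, 0)
# i.e. a continuous piecewise-linear fit whose slope changes at x = r.
# The x-values combine the two lists from the post; the y-values are made up.
x = np.array([0.5, 1.2, 3.1, 3.8, 4.5, 6.0, 7.2, 8.1, 8.9, 9.3])
y = np.array([1.0, 1.5, 2.8, 3.2, 3.9, 5.5, 7.0, 8.2, 9.1, 9.6])

def fit_fixed_r(r):
    """For a known r, dS/da = dS/db = dS/dc = 0 is a linear system,
    solved here as an ordinary linear least-squares problem."""
    A = np.column_stack([np.ones_like(x), x, np.maximum(x - r, 0.0)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    S = np.sum((A @ coef - y) ** 2)  # the sum of squared residuals
    return coef, S

(a, b, c), S = fit_fixed_r(5.0)
```

With r fixed, the problem is purely quadratic in (a, b, c), so the normal equations have a unique solution whenever the columns of A are independent.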
Possibly the value of r obtained in this way will violate the required constraint, in which case the optimal value of r will lie at one of the two endpoints (either r = 4.5 or r = 6 in my case), so this would just need the solution of two fixed-r problems.

However, if you use an optimization package, all that is unnecessary: just ask it to minimize S(a,b,c) [or S(a,b,c,r)] directly. For example, in EXCEL you put a, b, etc., in some cells and the final formula for S in some target cell, then ask Solver to minimize the target cell by varying the "variable cell" entries. If you have constraints such as 4.5 <= r <= 6, you just add them as r >= 4.5 and r <= 6 separately. (Solver works most efficiently if constraints are written with all variables on the left and only constants on the right.) For highly nonlinear problems it is advisable to help Solver by giving a reasonable starting point for at least some of the variables. For example, you could supply a starting value of r, such as r = 5, and let Solver correct that value. (For the case where r is not variable, you just have a purely quadratic unconstrained optimization, which Solver handles without any trouble.)
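The "just minimize S directly" route works the same way outside Excel. Here is a sketch using scipy.optimize.minimize in place of Solver, with the bound 4.5 <= r <= 6 and the starting value r = 5 mentioned above; as before, the model form and y-values are my own assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Same assumed model and made-up data as before (the post does not define S).
x = np.array([0.5, 1.2, 3.1, 3.8, 4.5, 6.0, 7.2, 8.1, 8.9, 9.3])
y = np.array([1.0, 1.5, 2.8, 3.2, 3.9, 5.5, 7.0, 8.2, 9.1, 9.6])

def S(p):
    """Sum of squared residuals S(a, b, c, r) for the assumed model."""
    a, b, c, r = p
    f = a + b * x + c * np.maximum(x - r, 0.0)
    return np.sum((f - y) ** 2)

# Reasonable starting point (r = 5, as suggested); only r is bounded.
x0 = [0.0, 1.0, 0.0, 5.0]
res = minimize(S, x0, bounds=[(None, None)] * 3 + [(4.5, 6.0)])
a, b, c, r = res.x
```

Like Solver, this treats all four unknowns as variable cells and handles the bound constraint on r itself, so no endpoint case analysis is needed.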
RGV