Least squares fitting by a constant


Discussion Overview

The discussion revolves around the concept of least squares fitting, specifically focusing on fitting a dataset to a constant value rather than a linear model. Participants explore the mathematical foundations and methods involved in this fitting process, including optimization techniques and polynomial approaches.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant asks how to fit a data set of the form {(t_i, y(t_i))} to a constant value, contrasting it with fitting to a linear model.
  • Another participant suggests minimizing the distances between the constant value and the data points, indicating that the objective is to minimize the sum of squared differences.
  • A participant expresses confusion regarding the necessity of matrices in this context and questions the reasoning behind squaring the differences.
  • Further clarification is provided that the minimization leads to a quadratic function of the constant, and differentiation is required to find the minimum value.
  • Another perspective is introduced, stating that for polynomial fits, including fitting to a constant, matrices can be avoided by using orthogonal polynomials and a specific recursive algorithm, which is linked for further reference.

Areas of Agreement / Disagreement

Participants exhibit varying levels of understanding and approaches to the problem, with some agreeing on the mathematical principles involved while others express confusion about the methods and assumptions. No consensus is reached regarding the necessity of matrices or the best approach to fitting a constant.

Contextual Notes

Some participants mention limitations in their mathematical background, which may affect their understanding of the optimization techniques discussed. The discussion also highlights the potential for different methods to achieve the same fitting goal, indicating a variety of approaches exist.

chuy52506
Say we have a data set {(t_i, y(t_i))}, where i = 1, 2, 3, ..., m.
I know how to fit these to a line of the form at + b, but how do I fit them to a constant?
 
chuy52506 said:
Say we have a data set {(t_i, y(t_i))}, where i = 1, 2, 3, ..., m.
I know how to fit these to a line of the form at + b, but how do I fit them to a constant?

Hey chuy52506 and welcome to the forums.

Think about the fact that you are minimizing the distances between the line y = c and the data points; that is, you want the sum of (y(t_i) - c)^2 over all t_i in the data set to be a minimum.

How much math have you taken? Have you taken any classes on optimization or linear algebra?
 
I have only taken an introductory course in linear algebra and no optimization. I'm sorry, I am confused: so there is no need to use matrices? And why would (y(t_i) - c) be squared?
 
chuy52506 said:
I have only taken an introductory course in linear algebra and no optimization. I'm sorry, I am confused: so there is no need to use matrices? And why would (y(t_i) - c) be squared?

You have to solve the following problem:

Minimize \sum_i (y(t_i) - c)^2

You can expand this out in terms of c, and you will get a quadratic function f(c) = ac^2 + bc + d. Differentiating and setting the derivative to zero, the minimum is given by solving 2ac + b = 0.
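To make that expansion concrete, here is a minimal Python sketch (illustrative, not code from the thread): writing out f(c) and solving f'(c) = 0 shows the minimizing constant is simply the mean of the y values.

```python
def fit_constant(y):
    """Return the c minimizing sum((y_i - c)**2).

    Expanding gives f(c) = m*c**2 - 2*(sum y_i)*c + sum(y_i**2),
    so f'(c) = 2*m*c - 2*(sum y_i) = 0, i.e. c = mean(y).
    """
    return sum(y) / len(y)

def sse(y, c0):
    """Sum of squared errors for a candidate constant c0."""
    return sum((yi - c0) ** 2 for yi in y)

y = [2.0, 4.0, 9.0]
c = fit_constant(y)   # 5.0, the arithmetic mean
```

Nudging c in either direction away from the mean can only increase the sum of squares, confirming it is the minimizer.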
 
chuy52506 said:
There is no need to use matrices?
For a polynomial fit, including y = c, the matrices can be eliminated by using a polynomial that is the sum of polynomials of increasing order that are orthogonal over the given data points. Here is a link to a description of the algorithm, which includes a C code example at the end:

http://rcgldr.net/misc/opls.rtf

The algorithm uses a recursive definition for the set of polynomials; combining this recursive definition with the fact that the generated polynomials are orthogonal, it is able to eliminate the need for matrices, allowing the coefficients to be determined via finite summation series. The algorithm generates 3 sets of constants for the orthogonal polynomials, but the code example explains how to generate standard coefficients for a single polynomial, which is what you'd really want.

Note that this algorithm assumes you enter a set of data points {x, y} or a weighted set {w, x, y}. For an unweighted set of data points, just use w = 1. For y = c, just use incrementing numbers for x values, with the y values representing the actual values to be fitted via least squares (in case you want to see if there is a slope using y = bx + c).
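As a rough illustration of the matrix-free idea (a Python sketch of a Forsythe-style three-term recurrence over the data points, not a transcription of the linked C code), every quantity below is a finite sum over the data, and no linear system is solved:

```python
def orthopoly_fit(x, y, degree, w=None):
    """Least-squares polynomial fit using polynomials orthogonal
    over the data points; returns the fitted values at the x's.

    Recurrence sketch: p_{k+1}(x) = (x - alpha)*p_k(x) - beta*p_{k-1}(x),
    with alpha and beta chosen so the p_k stay orthogonal on the data.
    """
    n = len(x)
    if w is None:
        w = [1.0] * n              # unweighted: w = 1, as the post notes
    p_prev = [0.0] * n             # p_{-1} = 0
    p_curr = [1.0] * n             # p_0 = 1
    s_prev = None                  # <p_{k-1}, p_{k-1}> from previous step
    fitted = [0.0] * n
    for k in range(degree + 1):
        s = sum(wi * pi * pi for wi, pi in zip(w, p_curr))
        c = sum(wi * yi * pi for wi, yi, pi in zip(w, y, p_curr)) / s
        fitted = [f + c * pi for f, pi in zip(fitted, p_curr)]
        if k == degree:
            break
        alpha = sum(wi * xi * pi * pi
                    for wi, xi, pi in zip(w, x, p_curr)) / s
        beta = 0.0 if s_prev is None else s / s_prev
        p_next = [(xi - alpha) * pc - beta * pp
                  for xi, pc, pp in zip(x, p_curr, p_prev)]
        p_prev, p_curr, s_prev = p_curr, p_next, s
    return fitted

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]           # exactly y = 2x + 1
flat = orthopoly_fit(x, y, 0)      # degree 0: every value is mean(y) = 4
line = orthopoly_fit(x, y, 1)      # degree 1: reproduces the data exactly
```

With degree 0 this reduces to the constant fit discussed above (the mean); with degree 1 on exactly linear data it recovers the line, matching the "see if there is a slope" use case mentioned in the post.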
 
