Least squares fitting by a constant

In summary, to fit a dataset {y(t sub i), t sub i} to a constant, you minimize the sum of squared distances between y = c and the data points, i.e. solve: Minimize \sum (y(n) - c)^2. This can be done without matrices by expanding the sum into a quadratic function of c and differentiating it to find the minimum. Another approach uses a recursive algorithm to generate orthogonal polynomials, which also eliminates the need for matrices. That algorithm accepts a set of data points {x, y} or a weighted set {w, x, y}.
  • #1
chuy52506
say we have a data set {y(t sub i), t sub i} where i = 1, 2, 3, ..., m.
I know how to fit these to a line of the form ax + b, but how about fitting to a constant??
 
  • #2
chuy52506 said:
say we have a data set {y(t sub i), t sub i} where i = 1, 2, 3, ..., m.
I know how to fit these to a line of the form ax + b, but how about fitting to a constant??

Hey chuy52506 and welcome to the forums.

Think about the fact that you are minimizing the distances between y = c and the data points: you want the sum of (y(a) - c)^2 over all a belonging to the dataset to be a minimum.

How much math have you taken? Have you taken any classes on optimization or linear algebra?
 
  • #3
I have only taken an introductory course in linear algebra and no optimization... I'm sorry, I am confused. So there is no need to use matrices? And why would (y(a) - c) be squared?
 
  • #4
chuy52506 said:
I have only taken an introductory course in linear algebra and no optimization... I'm sorry, I am confused. So there is no need to use matrices? And why would (y(a) - c) be squared?

You have to solve the following equation:

Minimize [itex]\sum (y(n) - c)^2[/itex]

You can expand this out in terms of c, which gives a quadratic function f(c) = ac^2 + bc + d. Differentiating and setting the derivative to zero, the minimum is found by solving 2ac + b = 0.
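As a quick illustration (not from the thread itself): expanding \sum (y(n) - c)^2 gives m c^2 - 2(\sum y) c + \sum y^2, and solving 2ac + b = 0 recovers the mean of the y values. A minimal sketch:

```python
# Expanding sum((y_n - c)^2) = m*c^2 - 2*sum(y)*c + sum(y^2),
# a quadratic in c. Setting the derivative 2*m*c - 2*sum(y) = 0
# gives c = sum(y)/m, i.e. the mean of the data.
y = [2.0, 3.0, 5.0, 10.0]
m = len(y)

a = m                    # coefficient of c^2
b = -2.0 * sum(y)        # coefficient of c
c_best = -b / (2.0 * a)  # solve 2ac + b = 0

assert abs(c_best - sum(y) / m) < 1e-12  # the minimizer is the mean
```

So the least-squares constant fit is simply the arithmetic mean of the y values, which is why no matrix machinery is needed.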
 
  • #5
chuy52506 said:
There is no need to use matrices?
For a polynomial fit, including y = c, the matrices can be eliminated by using a polynomial that is the sum of orthogonal (with respect to the given data points) polynomials of increasing order. Here is a link to a description of the algorithm, which includes a C code example at the end.

http://rcgldr.net/misc/opls.rtf

The algorithm uses a recursive definition for the set of polynomials. Based on this recursive definition, combined with the fact that the generated polynomials are orthogonal, it is able to eliminate the need for matrices, allowing coefficients to be determined via finite summation series. The algorithm generates three sets of constants for the orthogonal polynomials, but the code example explains how to generate standard coefficients for a single polynomial, which is what you'd really want.

Note that this algorithm assumes you enter a set of data points {x, y} or a weighted set {w, x, y}. For an unweighted set of data points, just use w = 1. For y = c, just use incrementing numbers for the x values, with the y values being the actual values to be fitted via least squares (that way you can also check whether there is a slope, using y = bx + c).
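The approach above can be sketched as follows. This is an illustrative reimplementation of the general idea (orthogonal polynomials generated by a three-term recurrence, coefficients obtained by plain summations), not the author's linked code; variable names and structure are my own.

```python
# Matrix-free polynomial least squares using orthogonal polynomials
# built by a three-term recurrence. Weighted points {w, x, y};
# use w = 1 everywhere for an unweighted fit.

def orthopoly_fit(x, y, w, degree):
    """Return the fitted values at the data points for the given degree."""
    m = len(x)
    p_prev = [0.0] * m          # p_{k-1} evaluated at each x_i
    p_curr = [1.0] * m          # p_k, starting from p_0 = 1
    fit = [0.0] * m
    norm_prev = 1.0             # ||p_{k-1}||^2 (unused when k = 0)
    for k in range(degree + 1):
        norm = sum(w[i] * p_curr[i] ** 2 for i in range(m))
        # Coefficient of p_k: a finite summation, no matrix solve.
        c = sum(w[i] * y[i] * p_curr[i] for i in range(m)) / norm
        for i in range(m):
            fit[i] += c * p_curr[i]
        # Recurrence: p_{k+1}(x) = (x - a) * p_k(x) - b * p_{k-1}(x)
        a = sum(w[i] * x[i] * p_curr[i] ** 2 for i in range(m)) / norm
        b = norm / norm_prev if k > 0 else 0.0
        p_prev, p_curr = p_curr, [(x[i] - a) * p_curr[i] - b * p_prev[i]
                                  for i in range(m)]
        norm_prev = norm
    return fit

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]       # exactly y = 2x + 1
ws = [1.0] * 4

const_fit = orthopoly_fit(xs, ys, ws, 0)  # degree 0: every value is the mean
line_fit = orthopoly_fit(xs, ys, ws, 1)   # degree 1: recovers the line exactly
```

With degree 0 this reduces to the weighted mean sum(w*y)/sum(w), which matches the answer from the quadratic-expansion approach in post #4.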
 

What is least squares fitting by a constant?

Least squares fitting by a constant is a statistical method for finding the single constant value that best represents a set of data points. It involves finding the constant that minimizes the sum of the squared differences between the data points and that constant.

How is the constant value determined in least squares fitting?

The constant value in least squares fitting is determined by calculating the mean of the dependent variable (y) and using that as the constant value. This ensures that the line or curve passes through the center of the data points.
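A brute-force numerical check of this claim (an illustrative sketch, not part of the original thread): the sum of squared errors at the mean is no larger than at nearby candidate values.

```python
# Verify that the mean minimizes the sum of squared errors (SSE).
y = [1.0, 4.0, 6.0, 9.0]
mean = sum(y) / len(y)

def sse(c):
    """Sum of squared differences between the data and the constant c."""
    return sum((v - c) ** 2 for v in y)

# The SSE at the mean should not exceed the SSE at any nearby candidate.
assert all(sse(mean) <= sse(mean + d) for d in (-1.0, -0.1, 0.1, 1.0))
```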

What is the purpose of least squares fitting by a constant?

The purpose of least squares fitting by a constant is to find the best fitting line or curve for a set of data points. This can help to identify patterns, trends, and relationships in the data, and make predictions or estimations based on the fitted line or curve.

How is least squares fitting by a constant different from other regression methods?

Least squares fitting by a constant uses only a single constant value to fit the data, while other regression methods may use multiple variables or parameters. It is the simplest special case of linear regression; other methods may be better suited for data with trends or non-linear relationships.

What are the limitations of least squares fitting by a constant?

One limitation of least squares fitting by a constant is that it assumes a linear relationship between the variables, which may not always be the case. Additionally, it may not be the most accurate method for fitting data with outliers or extreme values. Finally, it does not account for other factors or variables that may influence the relationship between the variables being studied.
