- #1

chuy52506

I know how to fit a dataset {y(t_i), t_i} to a line of the form y = ax + b by least squares, but how do I fit it to a constant?


In summary, to fit a dataset {y(t_i), t_i} to a constant, you minimize the sum of squared distances between y = c and the data points: minimize [itex]\sum (y(n) - c)^2[/itex]. This can be done without matrices by expanding the sum into a quadratic function of c and differentiating to find the minimum. Another approach uses a recursive algorithm to generate orthogonal polynomials, which also eliminates the need for matrices; that algorithm takes a set of data points {x, y} or a weighted set {w, x, y}.


- #2

chiro

Science Advisor


chuy52506 said:

I know how to fit these into a line of the form ax+b, but how about fitting into a constant??

Hey chuy52506 and welcome to the forums.

Think about the fact that you are minimizing the distances between y = c and the data points: you want the sum of (y(a) - c)^2, taken over all points a in the dataset, to be a minimum.

How much math have you taken? Have you taken any classes on optimization or linear algebra?

- #3

chuy52506


- #4

chiro

Science Advisor


You have to solve the following equation:

Minimize [itex]\sum (y(n) - c)^2[/itex]

You can expand this out in terms of c and you will get a quadratic function f(c) = ac^2 + bc + d, where a = N (the number of data points) and b = -2∑y(n). Differentiating and solving 2ac + b = 0 gives the minimum at c = ∑y(n)/N, i.e. the mean of the data.

- #5

rcgldr

Homework Helper


chuy52506 said: There is no need to use matrices?

For a polynomial fit, including y = c, the matrices can be eliminated by using a polynomial that is the sum of orthogonal (for the given data points) polynomials of increasing order. Here is a link to a description of the algorithm, which includes a C code example at the end:

http://rcgldr.net/misc/opls.rtf

The algorithm uses a recursive definition for the set of polynomials; combining that recursion with the fact that the generated polynomials are orthogonal eliminates the need for matrices, allowing the coefficients to be determined via finite summation series. The algorithm generates three sets of constants for the orthogonal polynomials, but the code example also explains how to generate standard coefficients for a single polynomial, which is what you would really want.

Note that this algorithm assumes you enter a set of data points {x, y} or a weighted set {w, x, y}. For an unweighted set of data points, just use w = 1. For y = c, just use incrementing numbers for the x values, with the y values being the actual values to be fitted via least squares (in case you also want to check for a slope using y = bx + c).


Least squares fitting by a constant is a statistical method for finding the single value that best fits a set of data points, in the sense of minimizing the sum of the squared differences between the data points and that value.

The constant is the mean of the dependent variable (y): the mean is exactly the value that minimizes the sum of squared residuals, so the fitted horizontal line passes through the center of the data points.

The purpose of fitting a constant is to obtain a baseline model for the data. It can help identify whether the data has any structure beyond a fixed level, and it serves as a reference against which richer models (such as a fitted line) can be compared when making predictions or estimations.

Least squares fitting by a constant uses only a single parameter, while other regression methods fit multiple parameters, for example a slope and an intercept, or the coefficients of a higher-order polynomial.

One limitation of least squares fitting by a constant is that it assumes the dependent variable does not vary with the independent variable, which may not be the case. Like other least-squares methods, it can also be strongly influenced by outliers or extreme values, since squaring the residuals weights them heavily. Finally, it does not account for other factors or variables that may influence the quantity being studied.
