Optimizing Regression Degree with Weighted Cost Function

  • Thread starter: andrewcheong
  • Tags: Regression, Smooth
andrewcheong
Hello, all. I know what I want, but I just don't know what it's called.

This has to do with regression (polynomial fits). Given a set of N (x, y) points, we can compute a regression of degree K. For example, we could have a hundred (x, y) points and compute a linear regression (degree 1). Of course, there would be residual error, because the line of best fit won't go through every point perfectly. We could also compute a quadratic (degree 2) or higher-degree regression. Raising the degree should reduce the residual error, or at least be no worse than the lower-degree fits.
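
For concreteness, here's a minimal sketch of what I mean, in Python/NumPy with made-up data (the numbers are just an illustration):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Made-up data: a noisy quadratic trend.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 100)
y = 0.5 * x**2 - 3.0 * x + rng.normal(scale=2.0, size=x.size)

# Fit polynomials of increasing degree; the best-fit sum of squared
# residuals can only shrink (or stay flat) as the degree grows.
for degree in (1, 2, 3, 5):
    p = Polynomial.fit(x, y, deg=degree)
    sse = np.sum((y - p(x)) ** 2)
    print(f"degree {degree}: SSE = {sse:.2f}")
```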

Now, what I want is a regression that determines the "best degree". I mean, if I have N points, I can always get a perfect fit by computing a regression of degree N-1. For example, if I only have 2 points, a degree-1 regression (linear) can fit both points perfectly. If I only have 3 points, a degree-2 regression (quadratic) can fit all three points perfectly, etc. So if I have 100 points, one might say that a degree-99 regression is the "best degree". However, I regard higher degrees as a cost.
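
A quick demonstration of the perfect-fit case, again with made-up numbers: with N = 4 points, a degree-3 polynomial interpolates them exactly.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical example: 4 arbitrary points are interpolated exactly
# by a degree-3 polynomial (degree = N - 1).
x = np.array([0.0, 1.0, 2.0, 4.0])
y = np.array([1.0, -2.0, 0.5, 3.0])

p = Polynomial.fit(x, y, deg=len(x) - 1)
print(p(x))  # reproduces y up to floating-point error
```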

I want a method of determining a regression that balances low residual error against low degree. I imagine there must be some sort of "cost" parameter that I have to set, because the computer alone cannot say what the "right" balance between residual error and degree is.

Can anyone point me to the name of such a technique? Perhaps the most commonly used form of it?

I want to apply this to stock market prices. As human beings, we can look at a plot of stock prices and mentally "fit" a smooth curve across the points that makes sense. But how does a computer do this? We can't just tell it to do a perfect fit, because then it will interpolate every point exactly (e.g. with a degree-(N-1) polynomial or cubic B-splines).

Thanks in advance!
 
You can introduce a weight function: ##W=c_1\cdot \deg p + c_2 \cdot \mu## and minimize it. Of course you will first have to choose an appropriate measure ##\mu## to scale your error margins.
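
For instance, here is a sketch of that idea in Python (my own illustration, not a standard routine): take ##\mu## to be the RMS residual of the fit, choose the weights ##c_1## and ##c_2## by hand, and scan over candidate degrees, keeping the one with the smallest ##W##. The function name, the RMS choice for ##\mu##, and the weight values are all assumptions you would tune for your data.

```python
import numpy as np
from numpy.polynomial import Polynomial

def best_degree(x, y, max_degree, c1=1.0, c2=1.0):
    """Return (W, degree, fit) minimizing W = c1 * degree + c2 * mu,
    where mu is the RMS residual (one possible error measure)."""
    best = None
    for degree in range(1, max_degree + 1):
        p = Polynomial.fit(x, y, deg=degree)
        mu = np.sqrt(np.mean((y - p(x)) ** 2))
        w = c1 * degree + c2 * mu
        if best is None or w < best[0]:
            best = (w, degree, p)
    return best

# Example on made-up noisy quadratic data: with these weights, each
# extra degree must buy at least c1/c2 units of RMS-error reduction
# to be worth keeping.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 100)
y = 0.5 * x**2 - 3.0 * x + rng.normal(scale=2.0, size=x.size)
w, degree, p = best_degree(x, y, max_degree=10)
print(f"chosen degree: {degree} (W = {w:.2f})")
```

The ratio ##c_1/c_2## is exactly the "cost" knob you mentioned: it sets how much error reduction an additional degree has to deliver before it is accepted.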
 