Modified least squares fitting?

nmf77
Hi, I wonder if someone could give me some guidance on this problem please. I'm not a mathematician and I'm not even sure what the title of this problem should be - curve fitting, regression, function minimization?

It started with something fairly simple: least squares fitting with 3 variables and 6 or more equations. I know how to cast that problem in matrix form and get the answer, but now I need to do something slightly more difficult. If I call the variables X, Y and Z, it turns out that X is much more important than the other two, so I want a solution where the variance in X is smaller than the other variances. How do I do that?

In my head I'm picturing the problem geometrically. In the standard least squares approach I see the cost/loss function as spherical, i.e. the same in all directions. What I need is a cost function that is more like an ellipse, but I don't know how to achieve that. I suppose the least squares algorithm doesn't really implement a cost function as such - effectively there is a cost function, but it's an innate feature of the algorithm and so can't be modified. Perhaps I need a whole new approach? I hope that makes some sense to someone!
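Roughly in symbols (the residuals ##r_i## and weights ##w_i## below are just my own notation), I think the difference I'm picturing is between minimising the usual
$$\sum_i r_i^2$$
whose level surfaces are spheres, and minimising something like
$$\sum_i w_i\, r_i^2$$
with unequal weights ##w_i > 0##, whose level surfaces are ellipsoids - but I don't know if that's the right way to set it up.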

Any advice appreciated, many thanks
Nick
 
How are you implementing your least squares algorithm? Did you write it from scratch, or are you using prewritten code in MATLAB or Excel or something?

Typically you just scale up the error in the variable that you care about - so if your prediction f(·) has errors of x, y and z in the X, Y and Z variables, you count those as Nx, y and z for some big number N, and then run the rest of the algorithm using those scaled errors.
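For example, something like this in MATLAB (just a toy with made-up numbers - rX, rY, rZ and N are placeholders, not anything from your actual problem):

rX = 0.2; rY = 1.5; rZ = 0.9;           % hypothetical errors of the fit in X, Y and Z
N  = 100;                                % big scale factor for the variable we care about (X)

cost_plain  = rX^2 + rY^2 + rZ^2;        % ordinary sum of squared errors
cost_scaled = (N*rX)^2 + rY^2 + rZ^2;    % X error counted N times as heavily

Minimising cost_scaled instead of cost_plain pushes the fit to shrink the X error first.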
 
Hi, thanks for the reply. I'm using a MATLAB script. I wrote the script myself, but it's a straight lift from textbook least squares in matrix form.

Basically it's this:

The observations are in vector b, A is the 'design matrix' and x is the vector of fitting variables.

So b = Ax + v (v contains the errors/residuals), and x is computed as

x = inv(A'*A) * A' * b    (the prime ' denotes transpose)

I think what you're suggesting is that I manipulate the design matrix with some weighting factors - is that correct?
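In other words, something like this? (Just my guess at what you mean - the numbers and the weight vector w below are made up.)

% Toy example with made-up numbers: 6 observations, 3 unknowns (X, Y, Z)
A = rand(6,3);            % design matrix
b = rand(6,1);            % observations

% ordinary least squares (what the script does now)
x_ols = (A'*A) \ (A'*b);  % same answer as inv(A'*A)*A'*b, but better numerically

% weighted least squares: diagonal W with bigger weights on the equations
% that matter most (these particular weights are just placeholders)
w = [10; 10; 1; 1; 1; 1];
W = diag(w);
x_wls = (A'*W*A) \ (A'*W*b);

When W is the identity this reduces to the formula above, so I'm hoping the weights are the only change needed.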
 