Finding the Largest (or Smallest) Value of a Function Given Constant Symmetric Errors

erobz
Just wondering if there is a mathematical way to find which sign (##\pm##) to take on a symmetric measured error in a function ##f## of some variables. As an example, let's say we find formulaically that ##f = k x##, with ##k>0##; we measure ##x## and append some symmetric error ##\pm \epsilon_x##. So we say:

$$ (f + \epsilon_f) - f = k( x + (\pm \epsilon_x) ) - kx $$

$$ \implies \epsilon_f = k (\pm \epsilon_x) $$

So by inspection, if we want to increase ##f##, we don't want its change to be negative, thus we select ##+\epsilon_x##. And vice versa if we wish to find the smallest ##f##.
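For concreteness (numbers made up for illustration): with ##k = 2##, ##x = 3##, and ##\epsilon_x = 0.1##, choosing ##+\epsilon_x## gives the upper value ##f = k(x + \epsilon_x) = 6.2##, and choosing ##-\epsilon_x## gives the lower value ##5.8##.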

Now, let's increase the complexity of ##f## with more measured variables that have symmetric error, for example:

$$f = \frac{kx}{y+1}$$

$$ \implies \epsilon_f = \frac{k(\pm \epsilon_x) ( y+1)-kx (\pm \epsilon_y) }{(y+1)^2+(y+1)(\pm \epsilon_y)}$$

I can still reason this one out: if we want ##f## to take its largest value, we make the numerator as large as possible and the denominator as small as possible:

$$ \implies \epsilon_f = \frac{k(+ \epsilon_x) ( y+1)-kx (- \epsilon_y) }{(y+1)^2+(y+1)(- \epsilon_y)}$$

What do you do if the function is complex enough that it's not at all clear which combination of signs will produce the upper/lower bound for the function?

Checking every sign combination by brute force will work, but it feels like this should be an optimization problem of some kind.
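For concreteness, here is a minimal brute-force sketch of the corner check, using the example above (the numeric values for ##k##, ##x##, ##y##, and the errors are made up for illustration):

```python
import itertools

# Brute-force sketch (illustrative values, not real measurements):
# evaluate f at every corner of the error box and take the extremes.
k = 2.0

def f(x, y):
    return k * x / (y + 1)

x, eps_x = 3.0, 0.1   # measured value and its symmetric error
y, eps_y = 1.0, 0.05

corners = [f(x + sx * eps_x, y + sy * eps_y)
           for sx, sy in itertools.product((-1, +1), repeat=2)]

print("lower bound:", min(corners))
print("upper bound:", max(corners))
```

With ##n## measured variables this checks all ##2^n## sign combinations, which is exactly the exponential cost I'd like to avoid.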
 
pasmith
There isn't much guesswork. To first order we have $$\epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y.$$ To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
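A minimal numerical sketch of this first-order sign rule (the function, the values, and the use of central differences in place of analytic partials are all illustrative):

```python
# First-order sign rule, sketched with central finite differences
# (illustrative function and values, not from an actual measurement).
k = 2.0

def f(x, y):
    return k * x / (y + 1)

x, eps_x = 3.0, 0.1
y, eps_y = 1.0, 0.05
h = 1e-6  # step for the numerical partial derivatives

df_dx = (f(x + h, y) - f(x - h, y)) / (2 * h)
df_dy = (f(x, y + h) - f(x, y - h)) / (2 * h)

# Give each error the sign of the matching partial derivative to
# push f up; flip both signs for the lower bound.
sx = 1.0 if df_dx >= 0 else -1.0
sy = 1.0 if df_dy >= 0 else -1.0

print("upper bound:", f(x + sx * eps_x, y + sy * eps_y))
print("lower bound:", f(x - sx * eps_x, y - sy * eps_y))
```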
 
pasmith said:
There isn't much guesswork. To first order we have $$\epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y.$$ To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
That's probably more efficient than checking each combination. :smile:

So you just look at the first-order change, and that will always correctly decide how to arrange the signs to get a minimum or maximum? So for a minimum we choose the sign opposite to that of each partial derivative.

Then you just plug the accordingly signed errors into the actual function and we are good to go?

You say "to first order"; are there caveats where higher-order derivatives will bungle this up?
 
Another thing: what if the function has a peak near the measured variable? Imagine our measurement is on one side of a peak. We evaluate this expression, and it tells us to select the positive error. However, when I put in the finite error, I could end up lower than I would have otherwise if the step crosses the peak.

Does evaluating the derivative ## \left. \frac{\partial f }{\partial x } \right|_{x+\epsilon_x} ## cover the possible sign change?
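A small sketch of the worry (the function and numbers are made up): densely sampling the whole interval catches an interior peak that the endpoint rule misses:

```python
import numpy as np

# Peak-case sketch (illustrative function with a maximum at x = 0).
# The measurement sits left of the peak, so the first-order rule says
# "take +eps", but the true maximum over [x - eps, x + eps] is at the
# peak itself, not at the endpoint.
f = lambda x: np.exp(-x**2)

x, eps = -0.3, 0.5
grid = np.linspace(x - eps, x + eps, 1001)  # dense sample of the interval

print("endpoint value f(x + eps):", f(x + eps))        # ~0.9608
print("true upper bound on interval:", f(grid).max())  # 1.0, at the peak
```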
 