Finding the Largest (or Smallest) Value of a Function, Given Some Constant Symmetric Errors

AI Thread Summary
The discussion centers on determining the appropriate sign for symmetric errors in a function to find its maximum or minimum values. It begins with a simple linear function and explores how to apply this concept to more complex functions, emphasizing the importance of using first-order derivatives to guide the choice of error signs. The participants agree that selecting the sign of the error based on the sign of the partial derivatives is a more efficient method than brute force checking of all combinations. However, concerns are raised about potential pitfalls with higher-order derivatives and the behavior of the function near peaks, which could lead to misleading results. Ultimately, evaluating the derivative at the perturbed point is suggested as a way to account for possible sign changes in the function's behavior.
erobz
Gold Member
Just wondering if there is a mathematical way to find which sign (##\pm##) to take on a symmetric measured error in a function ##f## of some variables. As an example, let's say we find that ##f = k x## with ##k>0##; we measure ##x## and append some symmetric error ##\pm \epsilon_x##. So we say:

$$ (f + \epsilon_f) - f = k( x + (\pm \epsilon_x) ) - kx $$

$$ \implies \epsilon_f = k (\pm \epsilon_x) $$

So by inspection, if we want to increase ##f##, we don't want its change to be negative, thus we select ##+\epsilon_x##; and vice versa if we wish to find the smallest ##f##.

Now, let's increase the complexity of ##f## with more measured variables that have symmetric errors, for example:

$$f = \frac{kx}{y+1}$$

$$ \implies \epsilon_f = \frac{k(\pm \epsilon_x) ( y+1)-kx (\pm \epsilon_y) }{(y+1)^2+(y+1)(\pm \epsilon_y)}$$

I can still reason out this one: if we want ##f## to be its largest value, we make the numerator largest and the denominator smallest:

$$ \implies \epsilon_f = \frac{k(+ \epsilon_x) ( y+1)-kx (- \epsilon_y) }{(y+1)^2+(y+1)(- \epsilon_y)}$$

What do you do if the function is complex enough that it's not at all clear which sign combination will produce the upper/lower bound for the function?

Checking every sign combination by brute force will work, but it feels like this should be an optimization problem of some kind.
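For what it's worth, the brute-force check is easy to script. Here's a minimal sketch in Python for the ##f = kx/(y+1)## example, with made-up values ##k=2##, ##x=1##, ##y=3##, ##\epsilon_x = \epsilon_y = 0.1## (none of these come from a real measurement):

[CODE=python]
from itertools import product

def f(x, y, k=2.0):
    return k * x / (y + 1)

x, y = 1.0, 3.0            # measured values (illustrative)
eps_x, eps_y = 0.1, 0.1    # symmetric errors (illustrative)

# Evaluate f at every sign combination of the errors (2^n corners).
candidates = {
    (sx, sy): f(x + sx * eps_x, y + sy * eps_y)
    for sx, sy in product((-1, +1), repeat=2)
}
best = max(candidates, key=candidates.get)    # signs giving the largest f
worst = min(candidates, key=candidates.get)   # signs giving the smallest f
print(best, candidates[best])     # (1, -1): +eps_x, -eps_y, as reasoned above
print(worst, candidates[worst])   # (-1, 1): -eps_x, +eps_y
[/CODE]

The cost doubles with every extra variable, which is why it feels like there should be something smarter.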
 
There isn't much guesswork. To first order we have $$\epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y.$$ To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
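A sketch of this rule in Python, estimating the partials by central differences at the measured point (the function name `sign_rule_bound` and the step size ##h## are my own illustrative choices, not anything standard):

[CODE=python]
def sign_rule_bound(f, point, eps, h=1e-6, maximize=True):
    """Pick each error's sign from the sign of the matching partial
    derivative (the first-order rule), then evaluate f at that corner."""
    perturbed = list(point)
    for i, (xi, ei) in enumerate(zip(point, eps)):
        step = [0.0] * len(point)
        step[i] = h
        # Central-difference estimate of the partial df/dx_i at the point.
        dfdxi = (f(*[a + s for a, s in zip(point, step)])
                 - f(*[a - s for a, s in zip(point, step)])) / (2 * h)
        sign = 1 if (dfdxi >= 0) == maximize else -1
        perturbed[i] = xi + sign * ei
    return f(*perturbed)

# With the earlier example f = 2x/(y+1), x = 1, y = 3, errors of 0.1:
g = lambda x, y: 2 * x / (y + 1)
print(sign_rule_bound(g, (1.0, 3.0), (0.1, 0.1), maximize=True))   # ~0.5641
print(sign_rule_bound(g, (1.0, 3.0), (0.1, 0.1), maximize=False))  # ~0.4390
[/CODE]

The `maximize=False` branch is the opposite-sign choice for a minimum discussed below.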
 
pasmith said:
There isn't much guesswork. To first order we have $$\epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y.$$ To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
That's probably more efficient than checking each combination. :smile:

So you just look at the first-order change, and that will always correctly decide how to arrange the signs to get a minimum or maximum? So for a minimum we choose the sign opposite to that of each partial derivative.

Then you just plug the accordingly signed errors into the actual function and we are good to go?

You say "to first order"; are there caveats where the higher-order derivatives will bungle this up?
 
Another thing: what if the function has a peak near the measured variable? Imagine our measurement sits on one side of a peak; we evaluate this expression and it tells us to select the positive error. However, when I put in the finite error, I could end up lower than I would have otherwise if the step crosses the peak.

Does evaluating the derivative ## \left. \frac{\partial f }{\partial x } \right|_{x+\epsilon_x} ## cover the possible sign change?
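A toy illustration of both the pitfall and that check, with a made-up single-peak function ##f(x) = -(x-1)^2## and a measurement at ##x = 0.95## (purely hypothetical numbers):

[CODE=python]
# Hypothetical single-peak function; chosen only to show the crossing issue.
f  = lambda x: -(x - 1.0) ** 2
df = lambda x: -2.0 * (x - 1.0)   # exact derivative of this toy f

x, eps = 0.95, 0.1                # measurement sits just left of the peak

# The first-order rule at x says "go right" (df(x) = 0.1 > 0):
endpoint = f(x + eps)             # f(1.05) = -0.0025

# Re-evaluating the derivative at the perturbed point flags the crossing:
if df(x) * df(x + eps) < 0:
    # Sign change => a stationary point lies inside [x, x + eps], so the
    # true maximum over the interval is at the peak, not at an endpoint.
    print("peak crossed: interior max", f(1.0), "beats endpoint", endpoint)
[/CODE]

So comparing the derivative's sign at the measured and perturbed points does detect the crossing, but once it fires, the bound has to come from the interior stationary point rather than from either signed endpoint.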
 