Finding the Largest (or smallest) Value of a Function - given some constant symmetric errors

  • Context: Undergrad 
  • Thread starter: erobz
SUMMARY

This discussion focuses on optimizing the selection of signs for symmetric errors in mathematical functions to find their maximum or minimum values. The example provided illustrates the function ##f = kx## with symmetric error ##\epsilon_x##, demonstrating that to maximize ##f##, one should choose the positive error when the derivative ##\partial f/\partial x## is positive, and the negative error when it is negative. The conversation further explores more complicated functions, such as ##f = kx/(y+1)##, and emphasizes the use of first-order derivatives to efficiently determine the optimal sign combinations for the various measured variables. The participants conclude that evaluating the first-order change in the function provides a reliable method for maximizing or minimizing it.

PREREQUISITES
  • Understanding of first-order derivatives in calculus
  • Familiarity with symmetric error analysis
  • Knowledge of optimization techniques in mathematical functions
  • Basic proficiency in mathematical notation and expressions
NEXT STEPS
  • Study advanced optimization techniques in multivariable calculus
  • Learn about Taylor series expansions and their applications in error analysis
  • Explore numerical methods for evaluating functions with complex error structures
  • Investigate the implications of higher-order derivatives on function behavior
USEFUL FOR

Mathematicians, engineers, data analysts, and anyone involved in error analysis and optimization of mathematical functions will benefit from this discussion.

erobz
Just wondering if there is a mathematical way to find which sign (##\pm##) to take on symmetric measured error in a function ##f## of some variables. An example: let's say formulaically we find that ##f = k x## with ##k>0##; we measure ##x## and append some symmetric error ##\pm \epsilon_x##. So we say:

$$ (f + \epsilon_f) - f = k( x + (\pm \epsilon_x) ) - kx $$

$$ \implies ( \epsilon_f ) = k (\pm \epsilon_x) $$

So by inspection, if we want to increase ##f## we don't want its change to be negative, thus we select ##+\epsilon_x##, and vice versa if we wish to find the smallest ##f##.

Now, let's increase the complexity of ##f## with more measured variables that have symmetric error, for example:

$$f = \frac{kx}{y+1}$$

so that ##f + \epsilon_f = \dfrac{k(x \pm \epsilon_x)}{(y+1) \pm \epsilon_y}##, and subtracting ##f## gives

$$ \implies \epsilon_f = \frac{k(\pm \epsilon_x) ( y+1)-kx (\pm \epsilon_y) }{(y+1)^2+(y+1)(\pm \epsilon_y)}$$

Now I can still reason out this one: if we want ##f## at its largest value, we make the numerator largest and the denominator smallest:

$$ \implies \epsilon_f = \frac{k(+ \epsilon_x) ( y+1)-kx (- \epsilon_y) }{(y+1)^2+(y+1)(- \epsilon_y)}$$
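
For what it's worth, the signed-error expression above can be sanity-checked numerically. Here is a minimal sketch (not from the thread; the values of ##k##, ##x##, ##y## and the errors are hypothetical) comparing the exact formula against a direct evaluation of ##f(x+\epsilon_x,\, y-\epsilon_y) - f(x,y)##:

```python
# Minimal check (hypothetical numbers) that the exact expression for eps_f
# matches a direct evaluation of f(x + eps_x, y - eps_y) - f(x, y)
# for f = k*x/(y + 1), with the signs chosen above (+eps_x, -eps_y).

k, x, y = 2.0, 3.0, 1.5          # hypothetical measured values
eps_x, eps_y = 0.1, 0.05         # hypothetical symmetric errors

def f(x, y):
    return k * x / (y + 1.0)

# Exact expression derived above, with sx = +1, sy = -1
sx, sy = +1.0, -1.0
eps_f_formula = (k * sx * eps_x * (y + 1) - k * x * sy * eps_y) / \
                ((y + 1) ** 2 + (y + 1) * sy * eps_y)

# Direct evaluation of the change in f
eps_f_direct = f(x + sx * eps_x, y + sy * eps_y) - f(x, y)

print(eps_f_formula, eps_f_direct)   # the two numbers should agree
```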

What do you do if the function is complicated enough that it's not at all clear which combination will produce the upper/lower bound for the function?

Checking every sign combination by brute force will work, but it feels like this should be an optimization problem of some kind.
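
For reference, this is roughly what the brute-force check looks like; a short sketch (the function, measured values, and errors are hypothetical) that enumerates every ##\pm## combination and keeps the extremes:

```python
from itertools import product

# Brute-force sketch: try every +/- combination of the symmetric errors
# and keep the combinations giving the largest and smallest f.

def f(x, y, k=2.0):
    return k * x / (y + 1.0)

x0, y0 = 3.0, 1.5                # measured values (hypothetical)
errs = {"x": 0.1, "y": 0.05}     # symmetric errors (hypothetical)

candidates = [
    (f(x0 + sx * errs["x"], y0 + sy * errs["y"]), (sx, sy))
    for sx, sy in product((+1, -1), repeat=2)
]
print("max f, signs:", max(candidates))
print("min f, signs:", min(candidates))
```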
 
pasmith
There isn't much guesswork. To first order we have
$$ \epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y $$
To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
 
pasmith said:
There isn't much guesswork. To first order we have
$$ \epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y $$
To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
That's probably more efficient than checking each combination. :smile:

So you just look at the first-order change, and that will always correctly decide how to arrange the signs to get a minimum or maximum. So for a minimum we choose the sign opposite to that of each partial derivative.

Then you just plug the accordingly signed errors into the actual function and we're good to go?

You say "to first order"; are there caveats where the higher-order derivatives will bungle this up?
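
To make the recipe from the reply above concrete, here is a minimal sketch of it (the function, measured values, and errors are hypothetical, and the partials are estimated by central differences rather than computed analytically): estimate each ##\partial f/\partial x_i##, take ##+\epsilon_i## where the partial is positive and ##-\epsilon_i## where it is negative for the maximum (flip the signs for the minimum), then plug the signed errors back into the actual function.

```python
# Sketch of the first-order sign rule: pick the error sign matching the
# sign of each partial derivative to maximize f (opposite to minimize),
# then evaluate the actual function at the signed-error point.

def f(v):
    x, y = v
    return 2.0 * x / (y + 1.0)

v0  = [3.0, 1.5]           # measured values (hypothetical)
eps = [0.1, 0.05]          # symmetric errors (hypothetical)

def partial(f, v, i, h=1e-6):
    """Central-difference estimate of df/dv_i at v."""
    vp, vm = list(v), list(v)
    vp[i] += h
    vm[i] -= h
    return (f(vp) - f(vm)) / (2 * h)

# +eps_i when df/dv_i > 0 (to maximize), -eps_i when df/dv_i < 0
signs = [1.0 if partial(f, v0, i) > 0 else -1.0 for i in range(len(v0))]

v_max = [v0[i] + signs[i] * eps[i] for i in range(len(v0))]
v_min = [v0[i] - signs[i] * eps[i] for i in range(len(v0))]
print("upper bound:", f(v_max), "lower bound:", f(v_min))
```

For smooth functions and small errors this picks the same sign corner as the brute-force check, since to first order the change in ##f## is linear in the errors.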
 
Another thing: what if the function has a peak near the measured variable? Imagine our measurement is on one side of a peak; we evaluate this expression and it tells us to select the positive error. However, when I put in the finite error, I could end up lower than I would have otherwise if the step crosses the peak.

Does evaluating the derivative ## \left. \frac{\partial f }{\partial x } \right|_{x+\epsilon_x} ## cover the possible sign change?
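
On the peak question, here is a small illustration with a hypothetical function that has a maximum at ##x=1##: the measurement sits just left of the peak, the first-order rule says take ##+\epsilon_x## (since ##\partial f/\partial x > 0## there), but the finite step crosses the peak, so ##f(x+\epsilon_x)## comes out below ##f(x)## and the true upper bound over ##[x-\epsilon_x,\, x+\epsilon_x]## is the value at the peak itself:

```python
# Hypothetical illustration of the caveat above: f has a peak at x = 1,
# we measure just left of it, and the finite error carries us past it.
# The first-order rule says "go +eps" because df/dx > 0 at the measurement,
# but f(x0 + eps) lands below f(x0), and the true maximum over the
# interval [x0 - eps, x0 + eps] sits at the interior peak.

def f(x):
    return 1.0 - (x - 1.0) ** 2   # peak at x = 1

x0, eps = 0.95, 0.2               # measured just left of the peak

print("f(x0)       =", f(x0))
print("f(x0 + eps) =", f(x0 + eps))   # what the first-order rule picks
print("f(x0 - eps) =", f(x0 - eps))
print("f at peak   =", f(1.0))        # the actual maximum on the interval
```

In that case the extreme of ##f## over the error interval sits at the interior critical point rather than at either endpoint.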
 
