Finding the Largest (or Smallest) Value of a Function Given Constant Symmetric Errors

  • Context: Undergrad 
  • Thread starter: erobz

Discussion Overview

The discussion revolves around the mathematical approach to determining the appropriate sign for symmetric measured errors in a function, particularly in the context of maximizing or minimizing the function's value. Participants explore the implications of these errors in simple and complex functions, considering both theoretical and practical aspects.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant proposes a method for selecting the sign of symmetric errors based on the desire to maximize or minimize the function value, using specific examples of functions.
  • Another participant suggests that to first order, the change in the function due to errors can be expressed using partial derivatives, indicating that the sign of the error should align with the sign of the corresponding partial derivative to maximize the function.
  • A later reply questions the sufficiency of first-order approximations, asking whether higher-order derivatives could affect the outcome.
  • Another participant raises a concern about functions with peaks, suggesting that the choice of error sign could lead to misleading results if the measurement is on one side of a peak, questioning whether evaluating the derivative at the perturbed point accounts for potential sign changes.

Areas of Agreement / Disagreement

Participants express differing views on the effectiveness of first-order approximations and the implications of higher-order derivatives. There is also a discussion about the potential pitfalls of selecting error signs in the context of functions with peaks, indicating that no consensus has been reached on these points.

Contextual Notes

Limitations include the dependence on the assumption that first-order derivatives adequately capture the behavior of the function, as well as the potential for complex interactions in functions with multiple variables and non-linearities.

erobz
Just wondering if there is a mathematical way to find which sign (##\pm##) to take on a symmetric measured error in a function ##f## of some variables. As an example, let's say we find formulaically that ##f = k x## with ##k>0##; we measure ##x## and append some symmetric error ##\pm \epsilon_x##. So we say:

$$ (f + \epsilon_f) - f = k( x + (\pm \epsilon_x) ) - kx $$

$$ \implies \epsilon_f = k (\pm \epsilon_x) $$

So by inspection, if we want to increase ##f## we don't want its change to be negative, thus we select ##+\epsilon_x##; and vice versa if we wish to find the smallest ##f##.

Now, let's increase the complexity of ##f## with more measured variables that have symmetric error, for example:

$$f = \frac{kx}{y+1}$$

$$ \implies \epsilon_f = \frac{k(\pm \epsilon_x) ( y+1)-kx (\pm \epsilon_y) }{(y+1)^2+(y+1)(\pm \epsilon_y)}$$

I can still reason out this one: if we want ##f## to be its largest value, we make the numerator largest and the denominator smallest:

$$ \implies \epsilon_f = \frac{k(+ \epsilon_x) ( y+1)-kx (- \epsilon_y) }{(y+1)^2+(y+1)(- \epsilon_y)}$$

What do you do if the function is complex enough that it's not at all clear which combination of signs will produce the upper/lower bound for the function?

Checking every sign combination by brute force would work, but it feels like an optimization problem of some kind.
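
For concreteness, a minimal brute-force sketch over all ##2^n## sign combinations for the example above (the numbers for ##k##, ##x##, ##y##, and the errors are made up):

```python
# Brute force: evaluate f at every corner (x +/- eps_x, y +/- eps_y)
# and keep the extremes. All numbers are hypothetical.
from itertools import product

def f(x, y, k=2.0):
    return k * x / (y + 1)

x, y = 3.0, 1.0           # hypothetical measured values
eps_x, eps_y = 0.1, 0.05  # hypothetical symmetric errors

corners = {(sx, sy): f(x + sx * eps_x, y + sy * eps_y)
           for sx, sy in product((+1, -1), repeat=2)}
best = max(corners, key=corners.get)
print(best, corners[best])   # (1, -1) 3.179..., i.e. +eps_x, -eps_y
```

The corner count grows as ##2^n## in the number of measured variables, which is why it feels like there should be something smarter.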
 
pasmith
There isn't much guesswork. To first order we have
$$\epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y$$
To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
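
For the two-variable example above, a minimal sketch of this rule, assuming SymPy is available for the partial derivatives (the numbers are made up):

```python
# First-order sign rule: the sign of each partial derivative at the
# measured point picks the sign of the matching error; flip every sign
# to get the minimum instead. Values are hypothetical.
import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)
f = k * x / (y + 1)

vals = {x: 3.0, y: 1.0, k: 2.0}   # hypothetical measured values
eps = {x: 0.1, y: 0.05}           # hypothetical symmetric errors

signs = {v: sp.sign(sp.diff(f, v).subs(vals)) for v in (x, y)}
f_max = f.subs({v: vals[v] + signs[v] * eps[v] for v in (x, y)}).subs(k, vals[k])

print(signs)          # {x: 1, y: -1}, i.e. +eps_x, -eps_y
print(float(f_max))   # ~3.1795, matching the hand-picked signs above
```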
 
pasmith said:
There isn't much guesswork. To first order we have
$$\epsilon_f = \frac{\partial f}{\partial x}\epsilon_x + \frac{\partial f}{\partial y}\epsilon_y$$
To maximize this, we want ##\epsilon_x## to be positive if ##\frac{\partial f}{\partial x}## is positive and negative if ##\frac{\partial f}{\partial x}## is negative, etc.
That's probably more efficient than checking each combination. :smile:

So you just look at the first-order change, and that will always correctly decide how to arrange the signs to get a minimum or maximum. For a minimum, we choose the sign opposite to that of the corresponding partial derivative.

Then you just plug the accordingly signed errors into the actual function and we are good to go?

You say "to first order"; are there caveats where the higher-order derivatives will bungle this up?
 
Another thing: what if the function has a peak near the measured variable? Imagine our measurement is on one side of a peak; we evaluate this expression and it tells us to select the positive error. However, when I input the finite error, I could end up lower than I would have otherwise, if the step crosses the peak.

Does evaluating the derivative ## \left. \frac{\partial f }{\partial x } \right|_{x+\epsilon_x} ## cover the possible sign change?
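
Here's a made-up one-variable illustration of what I mean: an asymmetric peak at ##x = 1##, gentle on the left and steep on the right.

```python
# Hypothetical asymmetric peak at x = 1; everything here is invented
# purely to illustrate the concern about crossing a peak.
import math

def f(x):
    if x <= 1.0:
        return math.exp(-(x - 1.0)**2)        # gentle left flank
    return math.exp(-10.0 * (x - 1.0)**2)     # steep right flank

x0, eps = 0.9, 0.5   # hypothetical measurement just left of the peak

slope = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6  # central difference for f'(x0)
print(f"f'(x0) ~ {slope:.3f} > 0, so the rule says take +eps")
print(f"f(x0 + eps) = {f(x0 + eps):.3f}")  # ~0.202: the step crosses the peak
print(f"f(x0 - eps) = {f(x0 - eps):.3f}")  # ~0.698: actually the larger endpoint
print(f"f(1.0)      = {f(1.0):.3f}")       # 1.000: the interior peak beats both
```

So in this made-up case the first-order rule picks the wrong endpoint, and the true maximum over the interval sits at the interior peak, which no choice of signs reaches.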
 