
Absolute extrema function of 2 variables

  1. Oct 22, 2015 #1
    1. The problem statement, all variables and given/known data
    I need to find the absolute extrema of the function in the specified region

    f(x, y) = x^2 + xy on the region R = {(x,y): |x| <= 2, |y| <= 1}


    3. The attempt at a solution
    The first partial derivatives are
    fx(x,y) = 2x + y and fy(x,y) = x
    They are both 0 only when x and y are both 0, so (0,0) is a critical point and f(0,0) = 0. Now we look at the boundaries:
    f(2,1) = 4 +2 = 6
    f(2,-1) = 4-2 = 2
    f(-2,1) = 4-2 = 2
    f(-2,-1) = 4+2 = 6

    So the absolute min is at (0,0) (this would normally be a saddle point, but in the specified region it is a minimum) and the absolute maxes are at (2,1) and (-2,-1).
    Am I correct about this?
     
  3. Oct 22, 2015 #2
    No, you're not checking the boundary properly. On each edge you get an ordinary one-variable function to minimize/maximize. Your max value happens to be correct, but not your min value.
     
  4. Oct 22, 2015 #3

    Ray Vickson

    Science Advisor
    Homework Helper

    No: (0,0) is still a saddle point (because (0,0) is in the interior of the constraint set, not on the boundary). You can check this for yourself: go away from the point (0,0) along the lines y = kx for various constants k, and see what happens on the different lines using different values of k.
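
    For instance, along the line ##y = kx## you get ##f(x, kx) = (1+k)x^2##, which is positive near the origin when ##k > -1## and negative when ##k < -1##; since both signs occur arbitrarily close to ##(0,0)## while ##f(0,0) = 0##, the origin is a saddle point.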

    Anyway, stationarity is nearly irrelevant in constrained problems. If you have a constraint of the form ##a \leq x \leq b## you need ##f'(x^*) = 0## for an interior optimum ##a < x^* < b##, but in a max problem you need ##f'(x^*) \leq 0## at ##x^* = a## (left end) or ##f'(x^*) \geq 0## at ##x^* = b## (right end). For a min problem, just reverse the direction of the inequalities. These also apply component-by-component in multivariate cases, using partial derivatives instead of ##f'##.

    In this case there is no local max or min at any interior point, so any local max or min will occur on the boundary. You can check each of the boundaries separately, because on each boundary you are left with a single-variable problem. For example, when ##x = -2## you are left with the univariate problem ##\max/ \min f(-2,y), \: -1 \leq y \leq 1##, etc.
     
  5. Oct 23, 2015 #4
    Oops. OK, so we have

    y = -1, |x| <= 2
    f(x, -1) = x^2 - x
    f'(x, -1) = 2x - 1 = 0
    2x = 1
    x = 1/2
    Doing the same with y = 1 we get x = -1/2,
    so the mins are at (1/2, -1) and (-1/2, 1).

    But now I am confused, because when x = -2, |y| <= 1:
    f(-2, y) = 4 - 2y
    f' = -2 != 0
    So there is a problem there, because the derivative is never 0. How would I find the max this way?
     
  6. Oct 23, 2015 #5

    ehild

    Homework Helper
    Gold Member

    The derivative needs to be zero at a local extremum in the interior of the interval, but not at an extremum that falls on the boundary. Evaluate the function at the boundaries.
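
    For example, on the edge ##x = -2## the function ##f(-2,y) = 4 - 2y## is strictly decreasing in ##y##, so on ##-1 \le y \le 1## its largest value is at the endpoint ##y = -1## (where it equals 6) and its smallest is at ##y = 1## (where it equals 2); no zero of the derivative is involved.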
     
  7. Oct 23, 2015 #6

    Ray Vickson

    Science Advisor
    Homework Helper

    Did you not read my post #3, where that issue was discussed in detail? Anyway, why bother with derivatives on such a simple problem? Solving it by inspection is much faster.
     
    Last edited: Oct 23, 2015
  8. Oct 24, 2015 #7
    I read it; I just don't think I fully understand. So basically there is no critical point inside the interval for the function 4 - 2y, so I just have to plug in the endpoints?

    so the max is z = 6 and the min is z = -1/4
     
  9. Oct 24, 2015 #8
    This is mostly correct, but a local extreme point isn't necessarily the largest or smallest value; you always have to compare with the boundary. Note that while the extrema happen to lie on the boundary in this problem, other problems may indeed have an absolute min/max inside the domain. For a problem of this type you have to check both the interior and all of the boundaries. You do the same in any dimension: say you have a cube, you first check inside the cube, then each face, then each edge, and then each corner.

    Compare it with a one-dimensional problem:
    ##f(x) = x^3## on ##-10 \le x \le 10##.
    You have a stationary point at ##0## that you can find by differentiating, but you don't know yet whether it is an absolute maximum or minimum (or neither). We have ##f(0) = 0##. Next check the two boundaries: in this case ##f(-10) = -1000## and ##f(10) = 1000##, so those are the absolute min and max. You always need to check both the stationary points and every endpoint (edge, face, etc.).
     
  10. Oct 24, 2015 #9
    I think I am starting to get it. I check for critical points by finding the first partial derivatives; this gives possible mins or maxes in the interior of the region. Then I plug in the constraints to get one-variable functions for the boundaries of this region (I am picturing these as sort of 2D slices through the 3D graph). Now I find the derivatives of these one-variable functions and set them to 0 to find critical points inside each interval. I also have to plug the endpoints of the constraints into the one-variable functions to get the corners of each boundary piece. So then I take the points from setting the first partials of the two-variable function to 0, the points from setting the derivatives of the one-variable boundary functions to 0, and the corner points from the constraints, plug all of these into the original function, and whichever gives the highest value is the max and whichever gives the lowest is the min?
     
  11. Oct 24, 2015 #10
    Exactly! In the end you compare the value of f at all of those candidate points to see which gives the max and which gives the min.
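
    If you ever want to double-check the bookkeeping, here is a rough sketch of that whole candidate-collection procedure in Python with sympy (purely illustrative; the by-hand work above is all you actually need):

    import sympy as sp

    # f(x, y) = x^2 + x*y on the rectangle |x| <= 2, |y| <= 1
    x, y = sp.symbols('x y', real=True)
    f = x**2 + x*y

    candidates = []

    # 1) Interior critical points: solve f_x = f_y = 0.
    for sol in sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True):
        if abs(sol[x]) <= 2 and abs(sol[y]) <= 1:
            candidates.append((sol[x], sol[y]))

    # 2) Edges: fix one variable at its bound; a one-variable problem remains.
    for yv in (-1, 1):                                      # edges y = -1 and y = +1
        for xv in sp.solve(sp.diff(f.subs(y, yv), x), x):
            if abs(xv) <= 2:
                candidates.append((xv, yv))
    for xv in (-2, 2):                                      # edges x = -2 and x = +2
        for yv in sp.solve(sp.diff(f.subs(x, xv), y), y):   # empty: derivative is +-2, never 0
            if abs(yv) <= 1:
                candidates.append((xv, yv))

    # 3) Corners of the rectangle.
    candidates += [(xv, yv) for xv in (-2, 2) for yv in (-1, 1)]

    # 4) Compare f at every candidate point.
    values = {pt: f.subs({x: pt[0], y: pt[1]}) for pt in candidates}
    print(max(values.items(), key=lambda item: item[1]))    # value 6, at (2, 1) or (-2, -1)
    print(min(values.items(), key=lambda item: item[1]))    # value -1/4, at (1/2, -1) or (-1/2, 1)

    It finds exactly the candidates already listed in this thread: (0,0), (1/2,-1), (-1/2,1), and the four corners.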
     
  12. Oct 24, 2015 #11

    Ray Vickson

    Science Advisor
    Homework Helper

    Whether or not this is true depends on the nature of the function and the feasible region. For example, if you are minimizing a convex function over a convex region, any local minimum is automatically a global minimum, so finding one, single local min is enough. However, if you were maximizing a convex function (instead of minimizing) then, in the worst case you might need to check millions of local maxima in order to find the single best one. That's why, even today, industrial-strength non-convex global optimization problems are so difficult.

    In the current problem, the function is neither convex nor concave, so in this particular case you do need to check various boundary solutions and pick the numerically best ones.
     
  13. Oct 24, 2015 #12

    Ray Vickson

    Science Advisor
    Homework Helper

    There is not a whole lot to understand---just draw some pictures to grasp what is happening. For example, if you want to maximize a function f(x) on an interval a ≤ x ≤ b, can the left end (x=a) be a local max if the derivative is positive there? Just make a sketch to see why not. That's all there is to it.
     