Optimizing Functions: Strategies and Considerations

In summary, these two questions are difficult for me. It's hard to determine whether a local max or min is also a global max or min, and it's also hard to determine whether a stationary point is a saddle point or an extremum.
  • #1
tcuay
I have attempted these two questions for hours but still have no outcome :confused:
For the first question:
part a) I take ∇f(x) and set it to zero, then find (x1, x2) = (0, -1) or ((-2a-1)/3, (-2a-4)/3);
but for part b), after applying the second-order necessary condition, I have no idea how to continue;
part c) I know that if I can prove the function is convex or concave, I can conclude that the local minimum/maximum is a global minimum/maximum; but after computing the second derivatives (the Hessian), I don't know how to proceed.

For Question 2,
part a) I got 5 candidate points: (x1, x2) = (0.5, 0), (1, -1), (1, 1), (-1, -sqrt(3)), (-1, sqrt(3));
but in part b) I can only prove that (0.5, 0) is a local maximizer, while the other 4 points fail the second-order conditions (the determinant of their Hessian matrix is less than zero);
part c) same, no idea how to do it;
part d) I find that the steepest-descent direction vectors are both [0, 0]'; does that mean f(1, -1) is a local minimum?
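The workflow described above (set ∇f = 0, then test the Hessian at each stationary point) can be sketched with SymPy. The objective below is a made-up stand-in, since the actual functions are in the attached images:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# Hypothetical two-variable objective (NOT the posted problem):
f = x1**2 + x1*x2 + x2**2 - 3*x2

# Step 1: stationary points, where grad f = 0
grad = [sp.diff(f, v) for v in (x1, x2)]
points = sp.solve(grad, [x1, x2], dict=True)
print(points)   # [{x1: -1, x2: 2}]

# Step 2: second-order test via the Hessian at each stationary point
H = sp.hessian(f, (x1, x2))
for p in points:
    print(H.subs(p).is_positive_definite)   # True -> local minimum
```

For this toy function the Hessian is constant and positive definite everywhere, so f is convex and the single stationary point is also the global minimizer; that is exactly the kind of convexity argument part c) asks for.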

Thanks for help:frown:
 

Attachments

  • optimization.png (17.8 KB)
  • Q1.png (41 KB)
  • Q2.png (47 KB)
  • #2
Nobody can help? :cry::cry:
 
  • #3
tcuay said:
Nobody can help? :cry::cry:

Looking for stationary points and examining Hessians only works if the function has a (local) max or min at the point of interest. You can only apply these methods to global optima if you know that the functions have global optima, so first you need to examine that issue: does the function have a finite global max? Global min?
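Ray's warning can be seen in one variable with a made-up example (not one of the posted problems): f(x) = x³ − 3x passes the stationary-point and second-derivative tests at two points, yet has neither a global max nor a global min:

```python
def f(x):
    return x**3 - 3*x

def fprime(x):
    return 3*x**2 - 3   # stationary points solve 3x^2 - 3 = 0, i.e. x = -1, 1

def fsecond(x):
    return 6*x

# Second-derivative test at the stationary points:
print(fsecond(-1))  # -6 < 0: local max, f(-1) = 2
print(fsecond(1))   #  6 > 0: local min, f(1) = -2

# Yet f is unbounded in both directions, so neither extremum is global:
print(f(100))   # 999700, far above the "local max" value 2
print(f(-100))  # -999700, far below the "local min" value -2
```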
 
  • #4
Ray Vickson said:
Looking for stationary points and examining Hessians only works if the function has a (local) max or min at the point of interest. You can only apply these methods to global optima if you know that the functions have global optima, so first you need to examine that issue: does the function have a finite global max? Global min?
First of all, thanks very much for your reply.
Both questions ask us to find the global optima, which I took to imply that finite global optima exist; but what confuses me is how to determine whether those local optima are also global optima. Even though I know the methods for finding local optima (e.g. the first-order necessary condition and the second-order necessary/sufficient conditions), I still cannot solve these two problems.
This is very difficult for me. Could you spend some more time helping me solve them? :frown:
 
  • #5
tcuay said:
First of all, thanks very much for your reply.
Both questions ask us to find the global optima, which I took to imply that finite global optima exist; but what confuses me is how to determine whether those local optima are also global optima. Even though I know the methods for finding local optima (e.g. the first-order necessary condition and the second-order necessary/sufficient conditions), I still cannot solve these two problems.
This is very difficult for me. Could you spend some more time helping me solve them? :frown:

No, I cannot do any more, because that would involve my solving the problem for you. I will just say that your reasoning for the existence of global max or min is faulty; just because somebody asks you to find it does not mean it exists--it may, or may not!
 
  • #6
tcuay said:
Both questions ask us to find the global optima, which I took to imply that finite global optima exist;
As Ray says, don't deduce that.
They are both smooth functions, defined everywhere. If a global extremum is not also a local extremum, where is it?
 
  • #7
Okay, you are right. Then I also want to ask: for Q2 part d), if I find that the steepest-descent direction vector at a point x* is zero, can I say that the point is a local minimizer? Or can the direction vector also be zero when the point is a saddle point or a maximum point?
 
  • #8
tcuay said:
Okay, you are right.
Umm.. about what exactly?
When I wrote
If a global extremum is not also a local extremum, where is it?
there is an answer to that question.
Can the direction vector also be zero when the point is a saddle point or a maximum point?
Yes, it will be. The gradient is zero if and only if the tangent plane is horizontal. To know whether the point is a min, a max, a saddle, or even a horizontal valley or ridge, you need to look at higher derivatives.
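The second-derivative test described above can be sketched numerically with the Hessian's eigenvalues; the matrices below are illustrative, not taken from the posted problems:

```python
import numpy as np

def classify(hessian):
    """Classify a stationary point from the eigenvalues of its Hessian."""
    eig = np.linalg.eigvalsh(hessian)   # symmetric matrix -> real eigenvalues
    if np.all(eig > 0):
        return "local min"
    if np.all(eig < 0):
        return "local max"
    if np.any(eig > 0) and np.any(eig < 0):
        return "saddle"
    # A zero eigenvalue: the test is inconclusive; look at higher derivatives.
    return "inconclusive"

# Illustrative Hessians (assumed examples, not from Q1/Q2):
print(classify(np.array([[2.0, 0.0], [0.0, 3.0]])))    # local min
print(classify(np.array([[2.0, 0.0], [0.0, -3.0]])))   # saddle
print(classify(np.array([[0.0, 0.0], [0.0, 1.0]])))    # inconclusive
```

Note that the gradient (and hence the steepest-descent direction) is zero at all of these stationary points; it cannot distinguish the cases on its own.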
 
  • #9
haruspex said:
Umm.. about what exactly?
When I wrote

there is an answer to that question.

Yes, it will be. The gradient is zero if and only if the tangent plane is horizontal. To know whether the point is a min, a max, a saddle, or even a horizontal valley or ridge, you need to look at higher derivatives.
Okay, thanks very much.
All the parts are solved now; thanks for the reminder.
 

1. What is unconstrained optimization?

Unconstrained optimization is a mathematical process used to find the maximum or minimum value of a function without any constraints or limitations on the variables. This means that the variables can take on any value in the function's domain.
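As a minimal sketch, an unconstrained minimization with `scipy.optimize.minimize` on an assumed toy objective (the function and starting point are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective f(x, y) = (x - 1)^2 + (y + 2)^2; x and y are unconstrained.
def f(v):
    x, y = v
    return (x - 1)**2 + (y + 2)**2

result = minimize(f, x0=np.array([0.0, 0.0]))
print(result.x)   # converges to the minimizer, approximately (1, -2)
```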

2. How is unconstrained optimization different from constrained optimization?

Constrained optimization involves finding the maximum or minimum value of a function while considering constraints or limitations on the variables. This means that the variables must satisfy certain conditions or limitations in order to achieve the optimal solution.

3. What is the objective of unconstrained optimization?

The objective of unconstrained optimization is to find the optimal value of a function, which could be the maximum or minimum value, without any constraints on the variables. This allows for a more flexible and generalized approach to solving mathematical problems.

4. What are some real-world applications of unconstrained optimization?

Unconstrained optimization has a wide range of applications in various fields such as engineering, economics, finance, and data science. Some examples include finding the most profitable investment portfolio, optimizing the design of a car engine, and minimizing the cost of production for a manufacturing company.

5. What are the common methods used for unconstrained optimization?

There are several methods for unconstrained optimization, but some of the most commonly used ones include gradient descent, Newton's method, and the Nelder-Mead method. These methods use different approaches to iteratively improve the solution until the optimal value is found.
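Of the methods listed, gradient descent is the simplest to sketch: step repeatedly in the direction of the negative gradient. The quadratic objective and step size below are assumed for illustration:

```python
import numpy as np

# Toy objective f(x, y) = x^2 + 10*y^2, minimized at (0, 0).
def grad(v):
    return np.array([2 * v[0], 20 * v[1]])

v = np.array([5.0, 2.0])   # assumed starting point
step = 0.05                # fixed step size (real implementations use line search)
for _ in range(500):
    v = v - step * grad(v)
print(v)   # approaches the minimizer (0, 0)
```

Newton's method replaces the fixed step with a Hessian-based step, trading cheaper iterations for faster convergence near the optimum; Nelder-Mead avoids derivatives entirely.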
