How to solve the following optimization problem?

  • MHB
  • Thread starter user_01
  • Start date
  • Tags
    Optimization
In summary, the rate expression of the model is $\max_{x,y} \ ax + by^3$, where $x$ and $y$ are the controlling parameters and the remaining quantities are positive constants. It is a convex problem within the limits of the variables $x$ and $y$, since it satisfies the conditions for optimization of a convex function over a convex set. The inequality constraints $0\leq x\leq 1$ and $0\leq y\leq 1$ can be translated into the constraint function $\mathbf g(x,y) = (-x,-y,x-1,y-1)$ with $g_i(x,y) \leq 0$.
  • #1
user_01
The following is the mathematical expression for my model's rate expression. The variables $x,y$ are the controlling parameters, while the rest are positive constants.

$$\max_{x,y} \ ax + by^3 \quad \text{s.t.} \quad 0\leq x \leq 1,\ 0\leq y\leq 1$$

Can I mathematically say that it is a convex problem within the limits of variables $x,y$?

The graph for the equation strictly follows the definition of convexity.
I have learned to solve such problems with the KKT method, but I cannot see how to handle the inequality constraints $0 \leq x \leq 1$ and $0 \leq y \leq 1$.
 
  • #2
user_01 said:
The following is the mathematical expression for my model's rate expression. The variables $x,y$ are the controlling parameters, while the rest are positive constants.
$$\max_{x,y} \ ax + by^3 \quad \text{s.t.} \quad 0\leq x \leq 1,\ 0\leq y\leq 1$$
Can I mathematically say that it is a convex problem within the limits of variables $x,y$?
The graph for the equation strictly follows the definition of convexity.

Wiki explains that a convex problem is the optimization of a convex function over a convex set.
Both conditions are satisfied here: $ax + by^3$ is convex on the feasible region (since $b > 0$ and $y \geq 0$), and the box $[0,1]^2$ is a convex set, so it is indeed a convex problem.

I have learned to solve such problems with the KKT method, but I cannot see how to handle the inequality constraints $0 \leq x \leq 1$ and $0 \leq y \leq 1$.
Those inequality constraints translate into the constraint function $\mathbf g(x,y) = (-x,-y,x-1,y-1)$ with $g_i(x,y) \le 0$.

Btw, we can already see by inspection that the optimum must be at $(x,y)=(1,1)$ with value $a+b$, since the objective is increasing in both $x$ and $y$ on the feasible box.
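As a numerical sanity check, the box-constrained problem can be handed to an off-the-shelf solver. The sketch below uses SciPy's `minimize` with bounds; the constants $a=2$, $b=1$ are hypothetical placeholders, since the actual values are model-specific. SciPy minimizes, so we negate the objective to maximize.

```python
# Sanity-check the boxed maximization numerically, assuming
# hypothetical example constants a = 2, b = 1.
from scipy.optimize import minimize

a, b = 2.0, 1.0  # placeholder positive constants

# SciPy minimizes, so negate the objective to maximize a*x + b*y**3.
res = minimize(lambda v: -(a * v[0] + b * v[1] ** 3),
               x0=[0.5, 0.5],
               bounds=[(0, 1), (0, 1)])  # the box constraints 0 <= x, y <= 1

print(res.x)     # optimum location, expected near (1, 1)
print(-res.fun)  # optimum value, expected a + b
```

With these placeholder constants the solver lands on $(1,1)$ with value $a+b=3$, matching the inspection argument above.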
 
  • #3
The derivative of $ax+ by^3$ with respect to $x$ is the constant $a$. If $a$ is not 0, that derivative is never 0, so any max or min must be on the boundary. That boundary consists of 4 pieces:

1) $x = 0$, $0\le y\le 1$. The function reduces to $by^3$ on $0\le y\le 1$, which obviously has its minimum value, 0, at $y = 0$ and its maximum value, $b$, at $y = 1$.
2) $y = 0$, $0\le x\le 1$. The function reduces to $ax$ on $0\le x\le 1$, which obviously has its minimum value, 0, at $x = 0$ and its maximum value, $a$, at $x = 1$.
3) $x = 1$, $0\le y\le 1$. The function reduces to $a + by^3$ on $0\le y\le 1$, which obviously has its minimum, $a$, at $y = 0$ and its maximum, $a + b$, at $y = 1$.
4) $y = 1$, $0\le x\le 1$. The function reduces to $ax + b$ on $0\le x\le 1$, which obviously has its minimum, $b$, at $x = 0$ and its maximum, $a + b$, at $x = 1$.

Overall, then, the maximum is $a + b$, achieved at $(1, 1)$, and the minimum value is 0, achieved at $(0, 0)$.
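The four-piece boundary analysis above can be confirmed with a brute-force grid search over the box. The constants $a=2$, $b=1$ are again hypothetical placeholders.

```python
# Brute-force check of the boundary analysis on a fine grid over
# [0,1] x [0,1], using hypothetical constants a = 2, b = 1.
import itertools

a, b = 2.0, 1.0
f = lambda x, y: a * x + b * y ** 3

pts = [i / 100 for i in range(101)]  # 101 grid points per axis
values = {(x, y): f(x, y) for x, y in itertools.product(pts, pts)}

x_max, y_max = max(values, key=values.get)
x_min, y_min = min(values, key=values.get)
print((x_max, y_max), values[(x_max, y_max)])  # (1.0, 1.0) and a + b
print((x_min, y_min), values[(x_min, y_min)])  # (0.0, 0.0) and 0.0
```

The grid maximum sits at $(1,1)$ and the minimum at $(0,0)$, as the case analysis predicts.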
 

FAQ: How to solve following optimization problem?

What is an optimization problem?

An optimization problem is a type of mathematical problem that involves finding the best solution from a set of possible solutions. The goal is to minimize or maximize a certain objective function while satisfying a set of constraints.

What is the difference between linear and nonlinear optimization problems?

Linear optimization problems involve linear objective functions and constraints, meaning that the variables are raised to the first power. Nonlinear optimization problems, on the other hand, involve nonlinear objective functions and constraints, meaning that the variables may be raised to higher powers or involve trigonometric functions.

What are the common techniques used to solve optimization problems?

Some common techniques used to solve optimization problems include gradient descent, linear programming, quadratic programming, and genetic algorithms. These techniques use mathematical algorithms to search for the optimal solution to the problem.
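For a box-constrained problem like the one in this thread, gradient ascent with a projection step onto the box is one of the simplest such techniques. The sketch below is a minimal illustration, again with hypothetical constants $a=2$, $b=1$; the step size and iteration count are arbitrary choices, not tuned values.

```python
# Minimal projected-gradient-ascent sketch for maximizing a*x + b*y**3
# over the box [0,1] x [0,1], with hypothetical constants a = 2, b = 1.
a, b = 2.0, 1.0

x, y = 0.5, 0.5  # start in the interior of the feasible box
lr = 0.1         # step size (arbitrary choice)
for _ in range(200):
    gx, gy = a, 3 * b * y ** 2           # gradient of a*x + b*y**3
    x = min(max(x + lr * gx, 0.0), 1.0)  # ascend, then project onto [0, 1]
    y = min(max(y + lr * gy, 0.0), 1.0)

print(x, y, a * x + b * y ** 3)  # converges to the corner (1, 1)
```

Because the gradient has positive components throughout the box, each iterate moves toward the corner $(1,1)$, where the projection pins it.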

How do you determine the optimal solution to an optimization problem?

The optimal solution to an optimization problem is determined by finding the values of the decision variables that minimize or maximize the objective function while satisfying all constraints. This can be done through various techniques such as analytical methods or numerical methods.

Can optimization problems be applied to real-world situations?

Yes, optimization problems have a wide range of applications in various fields such as engineering, economics, finance, and operations research. They can be used to optimize processes, resources, and decisions in order to achieve the best possible outcome.
