How do we use dynamic programming?


Discussion Overview

The discussion revolves around the application of dynamic programming and other optimization methods to solve a specific non-linear programming problem. Participants explore various approaches, including Lagrange multipliers, while considering the constraints and characteristics of the problem.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • Some participants inquire about the use of dynamic programming for solving non-linear programming problems, specifically a given maximization problem with constraints.
  • Others suggest alternative non-linear methods such as graphing, the GRG non-linear method (used by Excel's Solver), Lagrange multipliers, and Powell's conjugate direction method.
  • Participants discuss the application of Lagrange multipliers, outlining different cases for where the maximum might occur based on the constraints.
  • There is a focus on the formulation of the Lagrange function, $\Lambda$, and the conditions under which it is derived, including the necessity of checking multiple cases to find the maximum.
  • Questions arise regarding the interpretation of derivatives and the role of the Lagrange multiplier as a dummy variable in the optimization process.
  • Participants engage in solving a system of equations derived from the Lagrange multipliers method, discussing the elimination of the multiplier and the resulting values for the variables.

Areas of Agreement / Disagreement

Participants express differing views on the applicability of dynamic programming and the methods for solving the problem. While some agree on the use of Lagrange multipliers, the discussion remains unresolved regarding the best approach and the implications of the various cases considered.

Contextual Notes

Participants highlight the need to check multiple cases when applying Lagrange multipliers, indicating that the solution may depend on the specific characteristics of the problem and the constraints involved.

evinda
Hello! (Wave)

Could you explain to me how we can use dynamic programming to solve a non-linear programming problem?

What do we do, for example, if we are given the following problem?

$$\max (y_1^3-11 y_1^2+40 y_1+y_2^3-8y_2^2+21 y_2) \\ y_1+y_2 \leq 5.4 \\ y_1, y_2 \geq 0$$
 
If this is not possible, how else could we solve the problem using a method that is related to optimization?
 
evinda said:
Hello! (Wave)

Could you explain to me how we can use dynamic programming to solve a non-linear programming problem?

What do we do, for example, if we are given the following problem?

$$\max (y_1^3-11 y_1^2+40 y_1+y_2^3-8y_2^2+21 y_2) \\ y_1+y_2 \leq 5.4 \\ y_1, y_2 \geq 0$$

evinda said:
If this is not possible, how else could we solve the problem using a method that is related to optimization?

Hey evinda! (Smile)

We can't do it with linear programming (the Simplex method), since it's not linear.
But there are many non-linear methods like:
- Graphing it, which is doable since there are only 2 dimensions,
- GRG non-linear (generalized reduced gradient), which is what Excel's Solver uses
- Lagrange multipliers
- Powell's conjugate direction method
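
For a quick numerical sanity check, here is a minimal sketch that hands the problem to a general-purpose solver (assuming Python with scipy is available; this is just an illustration, not one of the methods above):

```python
from scipy.optimize import minimize

# Objective, negated because scipy minimizes by default
def neg_f(y):
    y1, y2 = y
    return -(y1**3 - 11*y1**2 + 40*y1 + y2**3 - 8*y2**2 + 21*y2)

# y1 + y2 <= 5.4, written as 5.4 - y1 - y2 >= 0 (scipy's "ineq" convention)
constraints = [{"type": "ineq", "fun": lambda y: 5.4 - y[0] - y[1]}]
bounds = [(0, None), (0, None)]  # y1 >= 0, y2 >= 0

result = minimize(neg_f, [1.0, 1.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, -result.fun)  # candidate maximizer and objective value
```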
 
I like Serena said:
Hey evinda! (Smile)

We can't do it with linear programming (the Simplex method), since it's not linear.
But there are many non-linear methods like:
- GRG non-linear (generalized reduced gradient), which is what Excel's Solver uses
- Lagrange multipliers
- Powell's conjugate direction method

We have done Lagrange multipliers in class... (Yes)
But how can we apply them in this case? (Thinking)
 
evinda said:
We have done Lagrange multipliers in class... (Yes)
But how can we apply them in this case? (Thinking)

There are 4 cases:
  1. The objective function takes its maximum strictly inside the feasible region.
  2. The maximum is where $y_1+y_2=5.4$.
  3. The maximum is where $y_1=0$.
  4. The maximum is where $y_2=0$.

If we assume that $y_1+y_2\le 5.4$ is a binding constraint (case 2), then we have:
$$\Lambda =(y_1^3-11 y_1^2+40 y_1+y_2^3-8y_2^2+21 y_2) + \lambda(y_1+y_2 - 5.4)$$
Then set each of the partial derivatives to zero and solve... (Thinking)
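
Here is a minimal sympy sketch of that setup (assuming sympy is available; just an illustration of the bookkeeping):

```python
import sympy as sp

y1, y2, lam = sp.symbols("y1 y2 lambda", real=True)

# Case 2 Lagrangian: objective plus lambda times the binding constraint
f = y1**3 - 11*y1**2 + 40*y1 + y2**3 - 8*y2**2 + 21*y2
Lagr = f + lam*(y1 + y2 - sp.Rational(27, 5))  # 27/5 = 5.4

# These partial derivatives must all vanish at a stationary point
print(sp.diff(Lagr, y1))   # = 3*y1**2 - 22*y1 + 40 + lambda
print(sp.diff(Lagr, y2))   # = 3*y2**2 - 16*y2 + 21 + lambda
print(sp.diff(Lagr, lam))  # = y1 + y2 - 27/5
```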
 
I like Serena said:
There are 4 cases:
  1. The objective function takes its maximum strictly inside the feasible region.
  2. The maximum is where $y_1+y_2=5.4$.
  3. The maximum is where $y_1=0$.
  4. The maximum is where $y_2=0$.

So do we have to check all of the cases to solve the problem? (Thinking)

I like Serena said:
If we assume that $y_1+y_2\le 5.4$ is a binding constraint (case 2), then we have:
$$\Lambda =(y_1^3-11 y_1^2+40 y_1+y_2^3-8y_2^2+21 y_2) + \lambda(y_1+y_2 - 5.4)$$
Then set each of the partial derivatives to zero and solve... (Thinking)

So now we check Case 1? Could you explain to me why we define $\Lambda$ like that?

You mean that we solve $\frac{\partial{\Lambda}}{\partial{y_1}}(y)=0$ and $\frac{\partial{\Lambda}}{\partial{y_2}}(y)=0$ ? (Thinking)
 
evinda said:
So do we have to check all of the cases to solve the problem? (Thinking)

From a graph, we can immediately see which case applies.
Otherwise, yes, we would have to check all cases.
evinda said:
So now we check Case 1? Could you explain to me why we define $\Lambda$ like that?

You mean that we solve $\frac{\partial{\Lambda}}{\partial{y_1}}(y)=0$ and $\frac{\partial{\Lambda}}{\partial{y_2}}(y)=0$ ? (Thinking)

My example was to check case 2.

That's just how I've learned Lagrange multipliers. How would you do it? (Wondering)

For case 1, that's indeed what we need to solve (where $\Lambda$ is without the $\lambda$ part).
For case 2 we need additionally that $\frac{\partial{\Lambda}}{\partial{\lambda}}(y_1, y_2; \lambda)=0$. (Nerd)
 
I like Serena said:
From a graph, we can immediately see which case applies.
Otherwise, yes, we would have to check all cases.

How can we make the graph? For $y_1=0$, for example, the function becomes $y_2^3-8y_2^2+21 y_2$... (Sweating)

I like Serena said:
For case 1, that's indeed what we need to solve (where $\Lambda$ is without the $\lambda$ part).
For case 2 we need additionally that $\frac{\partial{\Lambda}}{\partial{\lambda}}(y_1, y_2; \lambda)=0$. (Nerd)

What does $\frac{\partial{\Lambda}}{\partial{\lambda}}(y_1, y_2; \lambda)$ represent? Do we consider $\lambda$ to be a constant? (Thinking)
 
evinda said:
How can we make the graph? For $y_1=0$, for example, the function becomes $y_2^3-8y_2^2+21 y_2$... (Sweating)

Take a look at this contour plot on Wolfram.
Which clue can we get from it? (Wondering)
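
If Wolfram is not handy, we can make the same plot ourselves with matplotlib (assuming it is installed), with the constraint boundary $y_1+y_2=5.4$ drawn on top:

```python
import numpy as np
import matplotlib.pyplot as plt

# Grid over the box [0, 5.4] x [0, 5.4] containing the feasible region
g1, g2 = np.meshgrid(np.linspace(0, 5.4, 200), np.linspace(0, 5.4, 200))
f = g1**3 - 11*g1**2 + 40*g1 + g2**3 - 8*g2**2 + 21*g2

plt.contourf(g1, g2, f, levels=30)
plt.colorbar(label="objective")
plt.plot([0, 5.4], [5.4, 0], "r--", label="$y_1+y_2=5.4$")  # constraint boundary
plt.xlabel("$y_1$")
plt.ylabel("$y_2$")
plt.legend()
plt.show()
```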

evinda said:
What does $\frac{\partial{\Lambda}}{\partial{\lambda}}(y_1, y_2; \lambda)$ represent? Do we consider $\lambda$ to be a constant? (Thinking)

No, $\lambda$ is a dummy variable.
If we have a problem of the form $\max f(\mathbf x)$ where $g(\mathbf x)=0$ and $h(\mathbf x)=0$, we can apply the Lagrange multipliers method.
That means that we first define:
$$\Lambda(\mathbf x; \lambda, \mu) = f(\mathbf x) + \lambda g(\mathbf x) + \mu h(\mathbf x)$$
and then solve:
$$\frac{\partial \Lambda}{\partial x_1} = 0 \\
\vdots \\
\frac{\partial \Lambda}{\partial x_n} = 0 \\
\frac{\partial \Lambda}{\partial \lambda} = 0 \\
\frac{\partial \Lambda}{\partial \mu} = 0
$$
 
  • #10
I like Serena said:
No, $\lambda$ is a dummy variable.
If we have a problem of the form $\max f(\mathbf x)$ where $g(\mathbf x)=0$ and $h(\mathbf x)=0$, we can apply the Lagrange multipliers method.
That means that we first define:
$$\Lambda(\mathbf x; \lambda, \mu) = f(\mathbf x) + \lambda g(\mathbf x) + \mu h(\mathbf x)$$
and then solve:
$$\frac{\partial \Lambda}{\partial x_1} = 0 \\
\vdots \\
\frac{\partial \Lambda}{\partial x_n} = 0 \\
\frac{\partial \Lambda}{\partial \lambda} = 0 \\
\frac{\partial \Lambda}{\partial \mu} = 0
$$

In this case do we pick $h(\mathbf x) \equiv 0$?

We have $\frac{\partial{\Lambda}}{\partial{y_1}}=3y_1^2-22y_1+40+ \lambda \\ \frac{\partial{\Lambda}}{\partial{y_2}}=3y_2^2-16y_2+21+ \lambda $

Right?
 
  • #11
evinda said:
In this case do we pick $h(\mathbf x) \equiv 0$?

We could, yes; or we just leave the whole $h(\mathbf x)$ and $\mu$ part out. (Nerd)

evinda said:
We have $\frac{\partial{\Lambda}}{\partial{y_1}}=3y_1^2-22y_1+40+ \lambda \\ \frac{\partial{\Lambda}}{\partial{y_2}}=3y_2^2-16y_2+21+ \lambda $

Right?
And additionally $\frac{\partial \Lambda}{\partial \lambda} = y_1+y_2-5.4$.
Hey! That is simply the constraint! (Happy)
 
  • #12
I like Serena said:
And additionally $\frac{\partial \Lambda}{\partial \lambda} = y_1+y_2-5.4$.
Hey! That is simply the constraint! (Happy)

Oh yes... And what do we get from that? (Thinking)
 
  • #13
evinda said:
Oh yes... And what do we get from that? (Thinking)

That we need to solve the system:
\begin{cases}3y_1^2-22y_1+40+\lambda &= 0 \\
3y_2^2-16y_2+21+\lambda &= 0\\
y_1+y_2-5.4 &= 0
\end{cases}
That is, 3 equations with 3 unknowns.
Btw, we're not really interested in $\lambda$ itself, so it makes sense to eliminate $\lambda$ first. (Mmm)
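
To double-check the algebra, a minimal sympy sketch that solves this system exactly (assuming sympy is available):

```python
import sympy as sp

y1, y2, lam = sp.symbols("y1 y2 lambda", real=True)

# Stationarity conditions of the case-2 Lagrangian
eqs = [3*y1**2 - 22*y1 + 40 + lam,
       3*y2**2 - 16*y2 + 21 + lam,
       y1 + y2 - sp.Rational(27, 5)]  # 27/5 = 5.4

print(sp.solve(eqs, [y1, y2, lam], dict=True))
# [{y1: 16/5, y2: 11/5, lambda: -8/25}], i.e. y1 = 3.2, y2 = 2.2
```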
 
  • #14
I like Serena said:
That we need to solve the system:
\begin{cases}3y_1^2-22y_1+40+\lambda &= 0 \\
3y_2^2-16y_2+21+\lambda &= 0\\
y_1+y_2-5.4 &= 0
\end{cases}
That is, 3 equations with 3 unknowns.
Btw, we're not really interested in $\lambda$ itself, so it makes sense to eliminate $\lambda$ first. (Mmm)

If we subtract the first two equations, $\lambda$ drops out: $3y_1^2-22y_1+40 = 3y_2^2-16y_2+21$. Substituting $y_2=5.4-y_1$ cancels the quadratic terms and leaves $-5.6y_1=-17.92$. Then we get $y_1=3.2$ and $y_2=2.2$. Or am I wrong? (Thinking)
 
  • #15
evinda said:
Then we get $y_1=3.2$ and $y_2=2.2$. Or am I wrong? (Thinking)

That is correct. (Nod)
 
  • #16
I like Serena said:
That is correct. (Nod)

Then the function is equal to $66.256$, right?

So now do we check the other cases?
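
For reference, a one-line check of that value (Python):

```python
# Evaluate the objective at the case-2 stationary point (3.2, 2.2)
f = lambda y1, y2: y1**3 - 11*y1**2 + 40*y1 + y2**3 - 8*y2**2 + 21*y2
print(f(3.2, 2.2))  # 66.256 (up to floating-point rounding)
```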
 
