# Lagrange multiplier no solution or incorrect formulation

In summary: this is a classic operations research problem, the continuous knapsack problem. It has an operationally simple greedy solution; the plain Lagrange multiplier approach fails because of the inequality constraints ##0 \leq \alpha_i \leq 1##.
unplebeian
1. The problem statement
I'm stuck on this problem, which does not yield a solution; I feel as if I'm not formulating it correctly. It is described below, and I've also written out the equations, as they're easier to read (see attachment).
This is something I was doing with batteries, and I thought of using Lagrange multipliers. I have N energy cells of some sort (fuel cells, batteries, etc.) with different capacities. The basic problem is how to control a multitude of energy sources to meet a power demand. For example, cells 1, 2, and 3 can each provide 50 Watts, but their costs are $1/Watt, $3/Watt, and $1.5/Watt, and a total power demand of 120 Watts is needed, which can fluctuate over time. Assume that I can control each cell using an internal switch of some sort, via a control variable alpha, where 0 <= alpha <= 1 (unitless), which scales the output power of the cell.

In the end, I need to find the total cost and the values of alpha_i.

Generalizing:
The i-th energy cell is capable of providing a total power of Pi.

Power_demand= alpha1*P1 + alpha2*P2 + ...+ alphaN*PN
where 0 <= alpha_i <= 1.0 (unitless)

The cost needs to be minimized where:
Total_cost= C1*alpha1*P1 + C2*alpha2*P2 + ... + CN*alphaN*PN
where Ci= Dollars/ Watt

## Homework Equations

Minimize the cost function 'Total_cost' subject to the power-demand constraint.

Define Lagrangian as
L= (C1*alpha1*P1 + C2*alpha2*P2 + ... + CN*alphaN*PN) + LAMBDA*(alpha1*P1 + alpha2*P2 + ...+ alphaN*PN - Power_demand)

## The Attempt at a Solution

partial_L/partial_alpha_i = Ci*Pi + LAMBDA*Pi
partial_L/partial_LAMBDA = alpha1*P1 + alpha2*P2 + ...+ alphaN*PN - Power_demand
Set first derivatives to zero:
Pi= 0 or LAMBDA= -Ci
and alpha1*P1 + alpha2*P2 + ...+ alphaN*PN - Power_demand = 0

This does not yield any meaningful result as power can't be 0. Can anyone help with this? Am I not formulating the problem correctly?

The Lagrange multiplier method fails because you have inequality constraints of the form ## 0 \leq \alpha_i \leq 1 ##. These necessitate the use of the so-called Karush-Kuhn-Tucker (KKT) conditions, rather than simple Lagrange conditions. Google 'Kuhn-Tucker conditions' for more details, or consult a textbook on optimization, or a textbook on Operations Research.
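For reference, here is a sketch of what the KKT conditions look like for this problem, introducing multipliers ##\mu_i, \nu_i \geq 0## for the bound constraints ##\alpha_i \geq 0## and ##\alpha_i \leq 1##:
$$C_i P_i + \lambda P_i - \mu_i + \nu_i = 0, \qquad \mu_i \alpha_i = 0, \qquad \nu_i (1 - \alpha_i) = 0, \qquad \mu_i, \nu_i \geq 0.$$
If ##0 < \alpha_i < 1## then ##\mu_i = \nu_i = 0## and ##C_i = -\lambda##; cells with ##C_i < -\lambda## run at full power (##\alpha_i = 1##), and cells with ##C_i > -\lambda## are switched off. This threshold structure is exactly what the greedy solution exploits.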

In the meantime, it might be useful for you to know that your problem is a classic and well-studied operations research problem called the continuous knapsack problem (or fractional knapsack problem). It has an operationally simple solution; see, e.g.,
http://courses.cs.washington.edu/courses/cse421/04su/slides/fracknap.pdf or
http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Greedy/knapscakFrac.htm

Note that these references deal with the problem of maximizing ##V = \sum \alpha_i c_i p_i##, while you want to minimize it. However, you can re-write your problem by setting ##\alpha_i = 1 - \beta_i## for all ##i##. Note that ##0 \leq \beta_i \leq 1## for all ##i##, so the ##\beta_i## form is a standard maximizing knapsack problem.
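Concretely, writing ##D## for the power demand, the substitution gives
$$\sum_i C_i P_i \alpha_i = \sum_i C_i P_i - \sum_i C_i P_i \beta_i, \qquad \sum_i P_i \beta_i = \sum_i P_i - D,$$
so, since ##\sum_i C_i P_i## is a constant, minimizing the cost over the ##\alpha_i## is equivalent to maximizing ##\sum_i C_i P_i \beta_i## subject to the knapsack-style capacity ##\sum_i P_i - D##.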

The references cited above do not really address correctness proofs. One thing you can do is verify that their so-called "greedy solutions" actually satisfy the KKT conditions. Have fun.

unplebeian
Thank you! That did solve my problem and the solution I got is now very simple. I can't believe I didn't see that before.

Following the greedy approach, I choose the cell with the minimum cost first and maximize its contribution, then check whether the power demand is met. If not, I select the cheapest of the remaining cells and adjust its alpha, again checking whether the demand is met, and so on.

Right: that works because here the knapsack "ratio" has the form ##\frac{c p}{p} = c##, so ordering by unit cost alone does the trick.
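As a sanity check, here is a minimal Python sketch of that greedy dispatch, applied to the numbers from the original post (three 50 W cells at $1, $3, and $1.5 per watt, 120 W demand). The function name `dispatch` is just for illustration:

```python
def dispatch(costs, capacities, demand):
    """Return (total_cost, alphas) for the minimum-cost dispatch."""
    alphas = [0.0] * len(costs)
    remaining = demand
    # Visit cells from cheapest to most expensive $/Watt.
    for i in sorted(range(len(costs)), key=lambda i: costs[i]):
        if remaining <= 0:
            break
        take = min(capacities[i], remaining)   # power drawn from cell i
        alphas[i] = take / capacities[i]
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("demand exceeds total capacity")
    total = sum(c * a * p for c, a, p in zip(costs, alphas, capacities))
    return total, alphas

cost, alphas = dispatch([1.0, 3.0, 1.5], [50.0, 50.0, 50.0], 120.0)
```

It takes all of the $1 and $1.5 cells (50 W each) and draws the remaining 20 W from the $3 cell, i.e. ##\alpha = (1, 0.4, 1)##, for a total cost of $185.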

I looked up the KKT conditions and they're a little beyond me at the moment. I was hoping to use Lagrange multipliers to get a nice graphical solution with critical points. Perhaps if the power or cost function depended on internal variables, I could differentiate with respect to those (e.g. ##P = I^2 R##) and minimize over the current. That would require some modeling of the plant.

Thanks for your help, Ray Vickson! Extremely useful and helpful.

Fortunately, this is a so-called Linear Programming problem, so it can be solved using tools at the high-school level (although this is usually not seen in high school). Here goes.

Write the problem as
$$\min Z = \sum_{j=1}^n c_j x_j\\ \text{subject to}\\ \sum_{j=1}^n w_j x_j = W\\ 0 \leq x_j \leq 1 \;, j=1, \ldots,n$$

We can assume all ##w_j > 0##, because: (i) if ##w_s = 0## the variable ##x_s## does not affect the constraint, so may be set to whatever value (0 or 1) makes its contribution to ##Z## the smallest, and (ii) if ##w_s < 0## we can write ##w_s x_s = w_s + w_s (x_s - 1) = w_s + (-w_s)(1-x_s)##. Writing ##w'_s = - w_s > 0## and ##x'_s = 1-x_s \in [0,1]## we would have a term of the form ##w'_s x'_s## in the constraint, with ##w'_s > 0##, and, of course, an "adjusted" value of ##W##.

Use the constraint to solve for one variable in terms of the others; say we solve for ##x_r##. (For now, just go along with it; you will soon see how this gives us what we want. And, for the moment, ##r## is just some number from 1 to n.) So, we have
$$x_r = \frac{W}{w_r} - \sum_{j \neq r} \frac{w_j}{w_r} x_j$$
Substitute this into the equation for ##Z##:
$$Z = c_r \frac{W}{w_r} + \sum_{j \neq r} \left( c_j - c_r \frac{w_j}{w_r} \right) x_j \equiv (c_r W/w_r) + \sum_{j \neq r} u_j x_j.$$
To minimize ##Z## we would like to set ##x_j = 1## if ##u_j < 0## and ##x_j = 0## if ##u_j > 0##. In other words, we want
$$(c_j/w_j) - (c_r/w_r) < 0 \; \longrightarrow x_j = 1\\ (c_j/w_j) - (c_r/w_r) > 0 \; \longrightarrow x_j = 0$$
In other words, if we order the ratios ##c_j/w_j## in ascending order, we want ##x_j = 1## on some initial segment ##j = 1, \ldots, r-1## and ##x_j = 0## for ##j = r+1, \ldots, n##. So, to find ##r## we just set ##x_j = 1## in ascending ratio order until we go too far; that is, until setting ##x_j = 1## also for ##j = r## would give ##\sum_{j=1}^r w_j > W## (while ##\sum_{j=1}^{r-1} w_j \leq W##). Then we give ##x_r## the necessary fractional value and are done: ##x_j = 0## for all ##j > r##.

This is optimal, because we will have made all ##x_j = 1## when ##u_j < 0## and all ##x_j = 0## when ##u_j > 0##; basically, by re-writing the system properly, we can see the solution "by inspection".
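The fill-in-ascending-ratio-order procedure above can be sketched in a few lines of Python (a minimal sketch assuming all ##w_j > 0##, per the reduction; the name `min_knapsack` is illustrative):

```python
def min_knapsack(c, w, W):
    """Minimize sum(c[j]*x[j]) subject to sum(w[j]*x[j]) == W, 0 <= x[j] <= 1."""
    x = [0.0] * len(c)
    remaining = W
    # Visit items in ascending order of the ratio c_j / w_j.
    for j in sorted(range(len(c)), key=lambda j: c[j] / w[j]):
        if remaining <= 0:
            break
        if w[j] <= remaining:        # before the split item: x_j = 1
            x[j] = 1.0
            remaining -= w[j]
        else:                        # the split item r gets a fractional value
            x[j] = remaining / w[j]
            remaining = 0.0
    if remaining > 1e-9:
        raise ValueError("infeasible: total capacity is less than W")
    return x
```

For the battery example, one sets ##x_j = \alpha_j##, ##w_j = P_j##, and ##c_j = C_j P_j##, so the ratio ##c_j/w_j## reduces to the unit cost ##C_j##.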


## 1. What is a Lagrange multiplier?

A Lagrange multiplier is a mathematical tool used to solve optimization problems involving constraints. It is a scalar value that multiplies the constraint equation so the constraint can be incorporated into the objective function.
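For example, to minimize ##f(x, y) = x^2 + y^2## subject to ##x + y = 1##, one forms the Lagrangian
$$L(x, y, \lambda) = x^2 + y^2 + \lambda (x + y - 1),$$
and setting all three partial derivatives to zero gives ##x = y = 1/2## with ##\lambda = -1##.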

## 2. Can a Lagrange multiplier have no solution?

Yes, there are cases where the Lagrange conditions have no solution. This can happen when the constraints are incompatible with each other (so there is no feasible point), or when the problem includes inequality constraints that the plain Lagrange method cannot handle. In such cases, the stationarity equations simply cannot be satisfied.

## 3. How do you know if a Lagrange multiplier formulation is incorrect?

An incorrect Lagrange multiplier formulation often shows up as inconsistent first-order conditions: the gradients of the objective and the constraints cannot be balanced for any value of the multiplier. If the stationarity equations have no solution, the formulation should be revisited; in particular, inequality constraints require the KKT conditions rather than plain Lagrange multipliers.

## 4. What happens if the Lagrange multiplier is set to zero?

If the Lagrange multiplier equals zero at the optimum, the constraint has no influence there (it is inactive). Arbitrarily forcing the multiplier to zero, however, amounts to ignoring the constraint and can produce a point that violates it. The multiplier's value should come out of the optimality conditions, not be set by hand.

## 5. Are there any alternative methods to Lagrange multipliers for solving constrained optimization problems?

Yes, there are other methods such as KKT conditions, penalty methods, and barrier methods that can also be used to solve constrained optimization problems. These methods have their own advantages and disadvantages, and the choice of method depends on the specific problem at hand.