# Lagrange multiplier no solution or incorrect formulation

1. Jan 12, 2015

### unplebeian

1. The problem statement
I'm stuck with this problem which does not yield a solution. I feel as if I'm not formulating it correctly. Here it is described below. I've also written down the equations as they're easier to be read (attachment)
This is something that I was doing with batteries and thought of using Lagrange multipliers. I have N energy cell or some sort (fuel cell, battery etc.) with different capacities. The basic problem is how to control a multitude of energy sources to meet the power demand. e.g. cell 1, 2, and 3 can each provide 50 Watts each but the cost of each cell is $1/Watt,$3/Watt, and \$1.5/Watt and a total power demand of 120 Watts is needs which can fluctuate over time. Assume that I can control the cells using an internal switch of some sort and I use a control variable alpha where 0 <= alpha<= 1 (unitless) which can control the output power of each cell.

At the end I need to find the total cost and the values of alpha_i.

Generalizing:
The i-th energy cell is capable of providing a total power of Pi.

Power_demand= alpha1*P1 + alpha2*P2 + ...+ alphaN*PN
where 0 <= alpha_i <= 1.0 (unitless)

The cost needs to be minimized where:
Total_cost= C1*alpha1*P1 + C2*alpha2*P2 + ... + CN*alphaN*PN
where Ci is in Dollars/Watt

2. Relevant equations

Minimize the cost function 'Total_cost' subject to the power-demand constraint

Define the Lagrangian as
L= (C1*alpha1*P1 + C2*alpha2*P2 + ... + CN*alphaN*PN) + LAMBDA*(alpha1*P1 + alpha2*P2 + ...+ alphaN*PN - Power_demand)

3. The attempt at a solution

partial_L/partial_alpha_i = Ci*Pi + LAMBDA*Pi
partial_L/partial_LAMBDA = alpha1*P1 + alpha2*P2 + ...+ alphaN*PN - Power_demand
Set the first derivatives to zero:
Pi= 0 or LAMBDA= -Ci
and alpha1*P1 + alpha2*P2 + ...+ alphaN*PN - Power_demand = 0

This does not yield any meaningful result, as the powers can't be 0. Can anyone help with this? Am I not formulating the problem correctly?

#### Attached Files:

• ###### 20150111_220327.jpg
2. Jan 12, 2015

### Ray Vickson

The Lagrange multiplier method fails because you have inequality constraints of the form $0 \leq \alpha_i \leq 1$. These necessitate the use of the so-called Karush-Kuhn-Tucker (KKT) conditions, rather than simple Lagrange conditions. Google 'Kuhn-Tucker conditions' for more details, or consult a textbook on optimization, or a textbook on Operations Research.

In the meantime, it might be useful for you to know that your problem is a classic and well-studied operations research problem called the continuous knapsack problem (or fractional knapsack problem). It has an operationally simple solution; see, e.g.,
http://courses.cs.washington.edu/courses/cse421/04su/slides/fracknap.pdf or
http://www.personal.kent.edu/~rmuhamma/Algorithms/MyAlgorithms/Greedy/knapscakFrac.htm

Note that these papers deal with the problem of maximizing $V = \sum \alpha_i c_i p_i$, while you want to minimize it. However, you can re-write your problem by setting $\alpha_i = 1 - \beta_i$ for all $i$. Note that $0 \leq \beta_i \leq 1$ for all $i$, so the $\beta_i$ form is a standard maximizing knapsack problem.
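Spelling the substitution out for the problem above: with $\alpha_i = 1 - \beta_i$, the objective becomes
$$\sum_i C_i P_i \alpha_i = \sum_i C_i P_i - \sum_i C_i P_i \beta_i,$$
so minimizing over the $\alpha_i$ is the same as maximizing $\sum_i C_i P_i \beta_i$, while the demand constraint $\sum_i P_i \alpha_i = \text{Power\_demand}$ becomes $\sum_i P_i \beta_i = \sum_i P_i - \text{Power\_demand}$.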

The papers cited above do not deal very carefully with actual correctness proofs. One thing you can do is verify that their so-called "greedy solutions" actually satisfy the KKT conditions. Have fun.
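To make that check concrete, here is a sketch of the KKT conditions for the power problem (the multiplier names $\lambda, \mu_i, \nu_i$ are introduced here for illustration; write $D$ for Power_demand). With Lagrangian
$$\mathcal{L} = \sum_i C_i P_i \alpha_i + \lambda\Big(\sum_i P_i \alpha_i - D\Big) - \sum_i \mu_i \alpha_i + \sum_i \nu_i (\alpha_i - 1),$$
stationarity gives $C_i P_i + \lambda P_i - \mu_i + \nu_i = 0$, together with $\mu_i, \nu_i \geq 0$, $\mu_i \alpha_i = 0$, and $\nu_i(\alpha_i - 1) = 0$. Checking the greedy solution then amounts to setting $\lambda = -C_r$ (the per-watt cost of the fractional cell) and verifying $\mu_i = P_i(C_i - C_r) \geq 0$ for cells with $\alpha_i = 0$ and $\nu_i = P_i(C_r - C_i) \geq 0$ for cells with $\alpha_i = 1$, which holds because the cells are used in order of increasing cost.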

Last edited: Jan 12, 2015
3. Jan 13, 2015

### unplebeian

Thank you! That did solve my problem and the solution I got is now very simple. I can't believe I didn't see that before.

Similar to the greedy algorithm, I choose the cell with the minimum cost first, maximize its contribution, and check whether the power demand is met. If not, I select the cell with the lowest cost out of the remaining cells and adjust its alpha until the demand is met, and so on...
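The steps above can be sketched in a few lines of Python; a minimal example (the helper name `allocate` is mine), using the cell data from the first post: three 50 W cells at \$1, \$3, and \$1.5 per Watt, with a 120 W demand.

```python
def allocate(cells, demand):
    """Greedy fractional allocation: draw from the cheapest cost-per-watt
    cell first, saturating each cell before moving on to the next one.

    cells: list of (capacity_watts, cost_per_watt) pairs.
    Returns (total_cost, alphas) with each alpha in [0, 1].
    """
    order = sorted(range(len(cells)), key=lambda i: cells[i][1])
    alphas = [0.0] * len(cells)
    total_cost, remaining = 0.0, demand
    for i in order:
        capacity, cost = cells[i]
        take = min(capacity, remaining)   # power actually drawn from cell i
        alphas[i] = take / capacity
        total_cost += cost * take
        remaining -= take
        if remaining <= 0:
            break
    return total_cost, alphas

# Example from the first post: 120 W from three 50 W cells.
cost, alphas = allocate([(50, 1.0), (50, 3.0), (50, 1.5)], 120)
# cost = 185.0, alphas = [1.0, 0.4, 1.0]
```

Note that only the last cell touched ends up with a fractional alpha; every cheaper cell runs at full output and every more expensive one stays off.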

4. Jan 14, 2015

### Ray Vickson

Right: that works because the "ratio" has the form $\frac{c p}{p} = c$, so just ordering by cost does the trick.

5. Jan 14, 2015

### unplebeian

I looked up the KKT conditions and they're a little beyond me at the moment. I was hoping to use Lagrange multipliers to get a nice graphical solution with critical points. Perhaps if the power or cost function were expressed in terms of other variables, I could partially differentiate with respect to those (like P = I^2*R) and minimize the current instead. That would require some modeling of the plant.

Thanks for your help, Ray Vickson! Extremely useful and helpful.

6. Jan 14, 2015

### Ray Vickson

Fortunately, this is a so-called Linear Programming problem, so it can be solved using tools at the high-school level (although this is usually not seen in high school). Here goes.

Write the problem as
$$\min Z = \sum_{j=1}^n c_j x_j\\ \text{subject to}\\ \sum_{j=1}^n w_j x_j = W\\ 0 \leq x_j \leq 1 \;, j=1, \ldots,n$$

We can assume all $w_j > 0$, because: (i) if $w_s = 0$ the variable $x_s$ does not affect the constraint, so may be set to whatever value (0 or 1) makes its contribution to $Z$ the smallest, and (ii) if $w_s < 0$ we can write $w_s x_s = w_s + w_s (x_s - 1) = w_s + (-w_s)(1-x_s)$. Writing $w'_s = - w_s > 0$ and $x'_s = 1-x_s \in [0,1]$ we would have a term of the form $w'_s x'_s$ in the constraint, with $w'_s > 0$, and, of course, an "adjusted" value of $W$.

Use the constraint to solve for one variable in terms of the others; say we solve for $x_r$. (For now, just go along with it; you will soon see how this gives us what we want. And, for the moment, $r$ is just some number from 1 to n.) So, we have
$$x_r = \frac{W}{w_r} - \sum_{j \neq r} \frac{w_j}{w_r} x_j$$
Substitute this into the equation for $Z$:
$$Z = c_r \frac{W}{w_r} + \sum_{j \neq r} \left( c_j - c_r \frac{w_j}{w_r} \right) x_j \equiv (c_r W/w_r) + \sum_{j \neq r} u_j x_j.$$
To minimize $Z$ we would like to set $x_j = 1$ if $u_j < 0$ and $x_j = 0$ if $u_j > 0$. In other words, we want
$$(c_j/w_j) - (c_r/w_r) < 0 \; \longrightarrow x_j = 1\\ (c_j/w_j) - (c_r/w_r) > 0 \; \longrightarrow x_j = 0$$
In other words, if we order the ratios $c_j/w_j$ in ascending order, we want to have $x_j = 1$ for some initial segment $j = 1,\ldots, r-1$ and $x_j = 0$ for $j = r+1, \ldots, n$. So, to find $r$ we just set $x_j = 1$ in ascending order until we go too far; that is, until setting $x_j = 1$ also for $j = r$ would give $\sum_{j=1}^r w_j x_j > W$ (while $\sum_{j=1}^{r-1} w_j x_j < W$). Then we just give $x_r$ the necessary fractional value and are done: $x_j = 0$ for all $j > r$.

This is optimal, because we will have made all $x_j = 1$ when $u_j < 0$ and all $x_j = 0$ when $u_j > 0$; basically, by re-writing the system properly, we can see the solution "by inspection".
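The "by inspection" procedure above can be sketched directly in code (the function name `lp_knapsack_min` is mine; it assumes all $w_j > 0$ and $0 \leq W \leq \sum_j w_j$, per the reductions discussed above):

```python
def lp_knapsack_min(c, w, W):
    """Minimize sum(c[j]*x[j]) subject to sum(w[j]*x[j]) == W, 0 <= x[j] <= 1.

    Sets x[j] = 1 in ascending order of the ratio c[j]/w[j] until the next
    whole variable would overshoot W, then gives that pivot variable x[r]
    the fractional value that meets the constraint exactly.
    """
    x = [0.0] * len(c)
    filled = 0.0
    for j in sorted(range(len(c)), key=lambda j: c[j] / w[j]):
        if filled + w[j] <= W:
            x[j] = 1.0                    # u_j < 0: take the whole variable
            filled += w[j]
        else:
            x[j] = (W - filled) / w[j]    # pivot x_r gets the leftover fraction
            break
    return x

# The power problem maps onto this with x_j = alpha_j, c_j = C_j*P_j, w_j = P_j,
# so the ratio c_j/w_j reduces to the per-watt cost C_j.
x = lp_knapsack_min([50.0, 150.0, 75.0], [50.0, 50.0, 50.0], 120.0)
# x = [1.0, 0.4, 1.0]
```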

Last edited: Jan 14, 2015