Optimization: How to find the dual problem?

In summary, the Lagrangian for this problem is$$L(\vec{x}, \vec{\lambda}, \nu) = \sum_{j=1}^{n} x_j \log(x_j) + \vec{\lambda}^{T} \left( A\vec{x} - \vec{b} \right) + \nu\left(\sum_{j=1}^{n} x_j - 1\right)$$where ##A## is a matrix, ##\vec{x}## is a vector of length ##n##, ##\vec{\lambda}## is the vector of multipliers for the inequality constraints ##A\vec{x} \leq \vec{b}##, and ##\nu## is the scalar multiplier for the equality constraint ##\sum_j x_j = 1##. The objective has a positive semidefinite Hessian on the positive orthant and the constraints are affine, so the problem is convex, and the dual problem obtained from this Lagrangian is convex as well.
  • #1
Master1022
Homework Statement
Given the primal problem below: (i) is it convex? (ii) find the dual problem.
Relevant Equations
Lagrangian, Hessian
Hi,

I am working on the following optimization problem and am confused about how to apply the Lagrangian in this scenario:

Question:
Let us look at the following problem
[tex] \min_{x \in \mathbb{R}_{+} ^{n}} \sum_{j=1}^{m} x_j \log(x_j) [/tex]
[tex] \text{subject to } A \vec{x} \leq b \;\rightarrow\; A\vec{x} - b \leq \vec{0} [/tex]
[tex] \sum_{j=1}^{n} x_j = 1 [/tex]

(i) Is this problem convex?
(ii) Find the dual problem

Attempt:
(i) For convexity, I used the Hessian argument (although I don't know if I applied it correctly):
[tex] \frac{\partial ^2 (\text{obj})}{\partial x_k ^2} = \frac{1}{x_k} > 0 \quad \forall x_k > 0 [/tex]
[tex] \frac{\partial ^2 (\text{obj})}{\partial x_k \partial x_j} = 0 \quad (k \neq j) [/tex]
Thus the second-order convexity condition is satisfied on the domain, and since the constraints are affine, the problem is convex.
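To make the Hessian argument explicit (this only spells out the computation above): because the objective is a sum of single-variable terms, its Hessian is diagonal,
[tex] \nabla^2 f(\vec{x}) = \text{diag}\left( \frac{\partial^2 f}{\partial x_1^2}, \ldots, \frac{\partial^2 f}{\partial x_n^2} \right), [/tex]
whose diagonal entries are either ##1/x_k > 0## or ##0##, so the Hessian is positive semidefinite for ##\vec{x} > \vec{0}## and the objective is convex on the open positive orthant; the feasible set, being an intersection of affine constraints with the non-negative orthant, is convex as well.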

(ii) This is where I get confused as I am not sure how to deal with the equality constraint. We can define the Lagrangian to be:
[tex] L(\vec{x}, \vec{\lambda}, \vec{\nu}) = \sum_{j=1}^{m} x_j \log(x_j) + \vec{\lambda}^{T} \left( A\vec{x} - b \right) + \text{something with the equality constraint} [/tex]

The first thing that comes to mind is a term like [itex] -1 + \sum_{j=1}^{n} \nu_j x_j [/itex]. However, I am not sure this handles the constant ##1## from the constraint correctly. Other options I have thought about include:
(a) Introducing extra slack variables
(b) Re-formatting the equation

But I am not sure how to make any progress.
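As a side note, while sorting out the algebra it can help to solve the primal numerically and look at the multipliers a solver reports for each constraint. The sketch below uses made-up data and the cvxpy package; the sizes, the random ##A## and ##b##, and the solver choice are all assumptions for illustration, not part of the problem statement.

[code=python]
import numpy as np
import cvxpy as cp

# Hypothetical data (not from the problem): a 3 x 4 matrix A and a vector b
# chosen so that the feasible set {A x <= b, sum(x) = 1, x >= 0} is non-empty.
rng = np.random.default_rng(0)
n, p = 4, 3
A = rng.standard_normal((p, n))
b = A @ (np.ones(n) / n) + 0.1      # makes the uniform vector strictly feasible

x = cp.Variable(n, nonneg=True)
objective = cp.Minimize(cp.sum(-cp.entr(x)))   # sum_j x_j log x_j, since entr(x) = -x log x
ineq = A @ x <= b                              # inequality constraint, multiplier vector lambda
eq = cp.sum(x) == 1                            # equality constraint, scalar multiplier nu
prob = cp.Problem(objective, [ineq, eq])
prob.solve()

print("x* =", x.value)
print("lambda* =", ineq.dual_value)   # multipliers of A x <= b
print("nu* =", eq.dual_value)         # multiplier of sum(x) = 1 (sign convention is solver-dependent)
[/code]

The reported ##\lambda## is componentwise non-negative (it multiplies the inequality ##A\vec x \leq b##), while ##\nu## is a free scalar, which mirrors how the two constraints should enter the Lagrangian.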
 
  • #2
Hi,
(i) I agree
(ii) I don't even understand the inequality constraint. Is ##\vec A## an unknown vector of length ##n## and ##b## an unknown number ##\in \mathbb{R}^n## ?
And is there any limit on ##m## ? Or is ##m=n## ?

You make it look like ##A \vec{x} - b =0## is the equality constraint. Isn't it the inequality constraint ?

And isn't ##\displaystyle \sum_{j=1}^{n} x_j - 1 = 0 ## the equality constraint ?

Anyway,
 
  • #3
Thanks for the reply!
BvU said:
Hi,
(i) I agree
Great

BvU said:
(ii) I don't even understand the inequality constraint. Is ##\vec A## an unknown vector of length ##n## and ##b## an unknown number ##\in \mathbb{R}^n## ?
##A## is a matrix. ## \vec{x} ## is a vector of dimension ##n##. ##\vec{b}## is a vector whose dimension we are not told.

BvU said:
And is there any limit on ##m## ? Or is ##m=n## ?
Apologies, that was a typo and the sum on ##x_j \log(x_j)## should be from ##j = 1## to ##m##

BvU said:
You make it look like ##A \vec{x} - b =0## is the equality constraint. Isn't it the inequality constraint ?
Once again, a typo on my part. Yes, it is the inequality constraint.

BvU said:
And isn't ##\displaystyle \sum_{j=1}^{n} x_j - 1 = 0 ## the equality constraint ?
Yes

I hope that makes more sense now.
 
  • #4
Master1022 said:
##A## is a matrix. ## \vec{x} ## is a vector of dimension ##n##. ##\vec{b}## is a vector whose dimension we are not told.
So how are we to interpret ##{\sf A}\vec x -\vec b \le 0 ## ? As some number of inequalities ##({\sf A}\vec x)_k - b_k \le 0 ##, one for each row ##k## of ##\sf A## ?

Re limit on ##m## :
Apologies, that was a typo and the sum on ##x_j \log(x_j)## should be from ##j = 1## to ##m##
That is no typo, then: in post #1 you already had ##\displaystyle \min_{x \in \mathbb{R}_{+} ^{n}} \sum_{j=1}^{m} x_j\,\log x_j\ ## . But I suppose we can take ##1\le m\le n##

So the Lagrange conditions with only the equality constraint would be something like $$
1+\log x_j - \lambda _j = 0, \qquad j = 1\ldots m,$$ and ##\lambda_k =0## for ##k=m+1\;... \; n##, plus the constraint ##\displaystyle\sum_{j = 1}^n x_j = 1## ?

##\ ##
 
  • #5
Thanks for taking the time to respond.

BvU said:
So how are we to interpret ##{\sf A}\vec x -\vec b \le 0 ## ? As some number of inequalities ##({\sf A}\vec x)_k - b_k \le 0 ##, one for each row ##k## of ##\sf A## ?
Yes, I suppose so. Why does the number of inequalities matter for setting up the dual problem, when we know ##A\vec{x} - b## is a vector and we just take an inner product between it and a multiplier vector?

BvU said:
Re limit on ##m## :
That is no typo, then: in post #1 you already had ##\displaystyle \min_{x \in \mathbb{R}_{+} ^{n}} \sum_{j=1}^{m} x_j\,\log x_j\ ## . But I suppose we can take ##1\le m\le n##
Apologies, somehow I mistyped it again (even after double-checking the problem!): it should be ##n##, not ##m##, in the upper limit.

BvU said:
So the Lagrange conditions with only the equality constraint would be something like $$
1+\log x_j - \lambda _j = 0, \qquad j = 1\ldots m,$$ and ##\lambda_k =0## for ##k=m+1\;... \; n##, plus the constraint ##\displaystyle\sum_{j = 1}^n x_j = 1## ?

##\ ##
Sorry, could you explain a bit further how you got to this for the equality constraint?

Many thanks for taking the time to help me.
 
  • #6
Master1022 said:
how you got to this for the equality constraint?
I have learned (a long long time ago :frown:) that an extremum of ##f## under the condition ##g=0## can be found by requiring$$\nabla f -\lambda \nabla g = 0\ .$$
Here we have (after some back and forth :wink:) ##\ f = \displaystyle \sum_{j=1}^{n} x_j\,\log x_j\ ## and ##\ g= \displaystyle\left [\sum_{j = 1}^n x_j \right ]\ - 1\ ## , so ##\ \ {\partial f\over \partial x_j} = 1+ \log x_j \ ## and ##\ \ {\partial g\over \partial x_j} =1 ##

And that means the Lagrange equations are $$1+\log x_j - \lambda = 0 \qquad j = 1\ldots n\tag 1$$ plus the constraint ##\displaystyle\sum_{j = 1}^n x_j = 1## .

If I try this out for ##n=3## I get a very reasonable result :biggrin:
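To spell out that check: from ##(1)##, ##1 + \log x_j = \lambda## gives the same value ##x_j = e^{\lambda - 1}## for every ##j##, and the constraint ##\sum_{j=1}^n x_j = 1## then forces
$$x_j = \frac{1}{n}, \qquad \lambda = 1 - \log n,$$
so for ##n = 3## each ##x_j = \tfrac13##: the minimizer of ##\sum_j x_j \log x_j## over the probability simplex is the uniform distribution, which is indeed a very reasonable result.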

When I try to go back to the complete problem: in the standard notation we have now found ##\nu## (the role played by the ##\lambda## above; there is only one component because there is only one equality constraint).

So it's possible to write down the Lagrangian, more or less as you posted in #1, but now with the 'something with the equality constraint' properly filled in, and from it the dual (Lagrangian) function; a sketch of how that works out is below.
That may well be all that is required for this exercise: you don't have to solve the thing :cool:

##\ ##
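Following that recipe (a sketch under the interpretation settled above: ##A## has ##n## columns, ##\vec{b}## has one entry per row of ##A##, ##\vec{\lambda} \succeq 0## multiplies ##A\vec{x} - \vec{b}##, and a single scalar ##\nu## multiplies ##\sum_j x_j - 1##), the Lagrangian is
$$L(\vec{x}, \vec{\lambda}, \nu) = \sum_{j=1}^{n} x_j \log x_j + \vec{\lambda}^{T}\left(A\vec{x} - \vec{b}\right) + \nu\left(\sum_{j=1}^{n} x_j - 1\right).$$
Setting ##\partial L / \partial x_j = 1 + \log x_j + (A^{T}\vec{\lambda})_j + \nu = 0## gives ##x_j = \exp\!\left(-1 - \nu - (A^{T}\vec{\lambda})_j\right)##, and substituting back yields the dual function
$$g(\vec{\lambda}, \nu) = -\sum_{j=1}^{n} \exp\!\left(-1 - \nu - (A^{T}\vec{\lambda})_j\right) - \vec{b}^{T}\vec{\lambda} - \nu,$$
so the dual problem is to maximize ##g(\vec{\lambda}, \nu)## over ##\vec{\lambda} \succeq 0## and ##\nu \in \mathbb{R}##, again a convex problem.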
 
  • #7
Thanks @BvU for taking the time to write this out! I am going to take some time to thoroughly read and (try to) understand everything in the post before getting back to you with any follow-up questions.
 

1. What is optimization and why is it important?

Optimization is the process of finding the best solution to a problem, given a set of constraints. It is important because it allows us to make efficient decisions and improve systems, leading to cost savings, increased productivity, and better performance.

2. What is the dual problem in optimization?

The dual problem in optimization is a mathematical formulation that is derived from the primal (original) problem. It provides an alternative perspective and can help in solving the primal problem more efficiently.

3. How do you find the dual problem?

To find the dual problem, first write the primal problem in standard form and build the Lagrangian by adding a multiplier term for each constraint. Then minimize the Lagrangian over the primal variables (for a differentiable problem, by setting its gradient with respect to those variables to zero) to obtain the dual function. The dual problem is to maximize the dual function over the multipliers, with the multipliers of the inequality constraints required to be non-negative.
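As a minimal illustration of this recipe (a generic linear program, not the specific problem in the thread above): for ##\min_x \, c^T x## subject to ##Ax \le b##, the Lagrangian is ##L(x, \lambda) = c^T x + \lambda^T (Ax - b)##. Minimizing over ##x## gives ##g(\lambda) = -b^T \lambda## when ##A^T \lambda + c = 0## and ##-\infty## otherwise, so the dual problem is
$$\max_{\lambda}\ -b^T \lambda \quad \text{subject to } A^T \lambda + c = 0,\ \lambda \geq 0.$$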

4. What is the relationship between the primal and dual problems?

The primal and dual problems are closely related, and solving one can provide information about the other. The optimal value of the dual problem is never larger than that of the primal (weak duality), and the duality gap (the difference between the two optimal values) is zero when strong duality holds, so that the two optimal values coincide.
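In symbols, weak duality says that for every dual-feasible pair ##(\lambda, \nu)## with ##\lambda \geq 0## and every primal-feasible ##x##,
$$g(\lambda, \nu) \;\leq\; f(x), \qquad \text{hence} \qquad d^\star \leq p^\star,$$
where ##p^\star## and ##d^\star## denote the primal and dual optimal values; strong duality is the statement ##d^\star = p^\star##.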

5. When is it useful to find the dual problem?

Finding the dual problem is useful when the primal problem is difficult to solve directly or when the dual problem provides additional insight. It can also help when the primal problem has a large number of constraints, since the corresponding dual variables are only required to be non-negative, which can make the dual easier to handle.
