Optimization: How to find the dual problem?

SUMMARY

The forum discussion centers on the optimization problem of minimizing ##\sum_{j=1}^{n} x_j \log(x_j)## over ##\vec x \in \mathbb{R}_{+}^{n}##, subject to the constraints ##A\vec x \leq \vec b## and ##\sum_{j=1}^{n} x_j = 1##. Participants confirm that the problem is convex via a Hessian argument, establishing that the second-order condition is satisfied. The main confusion concerns the formulation of the dual problem, particularly the treatment of the equality and inequality constraints; clarifications are provided on the roles of the matrix ##A## and the vector ##\vec b##.

PREREQUISITES
  • Understanding of convex optimization principles
  • Familiarity with Lagrangian duality
  • Knowledge of Hessian matrices and their role in convexity
  • Basic linear algebra concepts, including matrix and vector operations
NEXT STEPS
  • Study the formulation of Lagrangian dual problems in convex optimization
  • Learn about the implications of convexity in optimization problems
  • Explore the role of slack variables in inequality constraints
  • Review examples of dual problems in optimization to solidify understanding
USEFUL FOR

Students and professionals in mathematics, operations research, and optimization, particularly those focusing on convex optimization and Lagrangian methods.

Master1022
Homework Statement
Given the primal problem below: (i) is it convex? (ii) find the dual problem.
Relevant Equations
Lagrangian, Hessian
Hi,

I am working on the following optimization problem, and am confused about how to apply the Lagrangian in the following scenario:

Question:
Let us look at the following problem:
$$\min_{x \in \mathbb{R}_{+}^{n}} \sum_{j=1}^{m} x_j \log(x_j)$$
$$\text{subject to } A\vec{x} \leq \vec{b}, \text{ i.e. } A\vec{x} - \vec{b} \leq \vec{0},$$
$$\sum_{j=1}^{n} x_j = 1$$

(i) Is this problem convex?
(ii) Find the dual problem

Attempt:
(i) For the convexity, I argued that it is convex and used the Hessian argument to show it (although I don't know if I applied it correctly):
$$\frac{\partial^2 (\text{obj})}{\partial x_k^2} = \frac{1}{x_k} > 0 \quad \forall\, x_k > 0,$$
$$\frac{\partial^2 (\text{obj})}{\partial x_k \partial x_j} = 0 \quad (j \neq k).$$
Thus the second-order convexity condition is satisfied and the problem is convex.
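Written out as a matrix (and with the upper limit read as ##n##, as corrected later in the thread), what I believe this gives is the diagonal Hessian
$$\nabla^2 f(\vec x) = \operatorname{diag}\left(\frac{1}{x_1}, \ldots, \frac{1}{x_n}\right) \succ 0 \quad \text{for } \vec x > \vec 0,$$
which would make the objective strictly convex on the open positive orthant; since both constraints are affine, the whole problem would then be a convex program.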

(ii) This is where I get confused, as I am not sure how to deal with the equality constraint. We can define the Lagrangian to be:
$$L(\vec{x}, \vec{\lambda}, \vec{\nu}) = \sum_{j=1}^{m} x_j \log(x_j) + \vec{\lambda}^{T} \left( A\vec{x} - \vec{b} \right) + \text{something with the equality constraint}$$

The first thing that comes to mind is ##-1 + \sum_{j=1}^{n} \nu_j x_j = 0##. However, this ignores the fact that the ##1## is also present in the expression. Other options I have thought about include:
(a) Introducing extra slack variables
(b) Re-formatting the equation

But I am not sure how to make any progress.
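(Edit: as worked out in the replies below, the missing piece turns out to be a single scalar multiplier ##\nu## on the equality constraint, i.e. a term ##\nu\left(\sum_{j=1}^{n} x_j - 1\right)##, rather than a vector ##\vec\nu## paired with the individual ##x_j##.)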
 
Hi,
(i) I agree
(ii) I don't even understand the inequality constraint. Is ##\vec A## an unknown vector of length ##n## and ##b## an unknown number ##\in \mathbb{R}^n## ?
And is there any limit on ##m## ? Or is ##m=n## ?

You make it look like ##A \vec{x} - b =0## is the equality constraint. Isn't it the inequality constraint?

And isn't ##\displaystyle \sum_{j=1}^{n} x_j - 1 = 0 ## the equality constraint ?

Anyway,
 
Thanks for the reply!
BvU said:
Hi,
(i) I agree
Great

BvU said:
(ii) I don't even understand the inequality constraint. Is ##\vec A## an unknown vector of length ##n## and ##b## an unknown number ##\in \mathbb{R}^n## ?
##A## is a matrix. ## \vec{x} ## is a vector of dimension ##n##. ##\vec{b}## is a vector whose dimension we are not told.

BvU said:
And is there any limit on ##m## ? Or is ##m=n## ?
Apologies, that was a typo and the sum on ##x_j \log(x_j)## should be from ##j = 1## to ##m##

BvU said:
You make it look like ##A \vec{x} - b =0## is the equality constraint. Isn't it the inequality constraint?
Once again, another typo on my part. Yes, it is the inequality constraint

BvU said:
And isn't ##\displaystyle \sum_{j=1}^{n} x_j - 1 = 0 ## the equality constraint ?
Yes

I hope that makes more sense now.
 
Master1022 said:
##A## is a matrix. ## \vec{x} ## is a vector of dimension ##n##. ##\vec{b}## is a vector whose dimension we are not told.
So how are we to interpret ##{\sf A}\vec x -\vec b \le 0##? As an unknown number of inequalities ##({\sf A}\vec x)_k - b_k \le 0## for ##0\le k\le n##?

Re limit on ##m## :
Apologies, that was a typo and the sum on ##x_j \log(x_j)## should be from ##j = 1## to ##m##
No typo then, apparently: in post #1 you had ##\displaystyle \min_{x \in \mathbb{R}_{+} ^{n}} \sum_{j=1}^{m} x_j\,\log x_j\ ##. But I suppose we can take ##1\le m\le n##.

So the Lagrange problem with only the equality constraint would be something like $$1+\log x_j - \lambda_j = 0, \qquad j = 1 \ldots m,$$ and ##\lambda_k =0## for ##k=m+1\ldots n##, plus the constraint ##\displaystyle\sum_{j = 1}^n x_j = 1## ?

 
Thanks for taking the time to respond.

BvU said:
So how are we to interpret ##{\sf A}\vec x -\vec b \le 0##? As an unknown number of inequalities ##({\sf A}\vec x)_k - b_k \le 0## for ##0\le k\le n##?
Yes, I suppose so. But why does the number of equations matter for setting up the dual problem, when we know ##{\sf A}\vec x - \vec b## is a vector and we are just taking an inner product between it and some multiplier vector ##\nu##?

BvU said:
Re limit on ##m## :
No typo then, apparently: in post #1 you had ##\displaystyle \min_{x \in \mathbb{R}_{+} ^{n}} \sum_{j=1}^{m} x_j\,\log x_j\ ##. But I suppose we can take ##1\le m\le n##.
Apologies, somehow I mistyped it again (even after double-checking the problem!): it should be ##n##, not ##m##, in the upper limit.

BvU said:
So the Lagrange problem with only the equality constraint would be something like $$1+\log x_j - \lambda_j = 0, \qquad j = 1 \ldots m,$$ and ##\lambda_k =0## for ##k=m+1\ldots n##, plus the constraint ##\displaystyle\sum_{j = 1}^n x_j = 1## ?

Sorry, could you perhaps explain a bit further how you got to this for the equality constraint?

Many thanks for taking the time to help me.
 
Master1022 said:
how you got to this for the equality constraint?
I have learned (a long long time ago :frown:) that an extremum of ##f## under the condition ##g=0## can be found by requiring $$\nabla f - \lambda \nabla g = 0.$$
Here we have (after some back and forth :wink:) ##\ f = \displaystyle \sum_{j=1}^{n} x_j\,\log x_j\ ## and ##\ g= \displaystyle\left [\sum_{j = 1}^n x_j \right ] - 1\ ##, so ##\ {\partial f\over \partial x_j} = 1+ \log x_j\ ## and ##\ {\partial g\over \partial x_j} = 1##.

And that means the Lagrange equations are $$1+\log x_j - \lambda = 0 \qquad j = 1\ldots n\tag 1$$ plus the constraint ##\displaystyle\sum_{j = 1}^n x_j = 1##.

If I try this out for ##n=3## I get a very reasonable result :biggrin:
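(Spelled out: equation (1) forces every component to take the same value,
$$1 + \log x_j = \lambda \;\Rightarrow\; x_j = e^{\lambda - 1} \quad \text{for all } j,$$
and the constraint ##\sum_{j=1}^n x_j = 1## then gives ##x_j = 1/n##, i.e. ##1/3## each for ##n=3##: the uniform, maximum-entropy distribution.)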

When I try to revert to the complete problem: in the notation here we have found ##\nu## (the role played by the ##\lambda## above; there is only one component because there is only one equality constraint).

So it's possible to write down the Lagrangian, more or less as you posted in #1, but now with the 'something with the equality constraint' properly filled in. And the dual Lagrangian function.
Which may well be all that is required for this exercise: you don't have to solve the thing :cool:
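Spelled out, here is a sketch of how that can go (using the usual convention of ##\vec\lambda \ge \vec 0## on the inequality rows and a single scalar ##\nu## on the equality; your course may normalize differently):
$$L(\vec x, \vec\lambda, \nu) = \sum_{j=1}^{n} x_j \log x_j + \vec\lambda^{T}(A\vec x - \vec b) + \nu\left(\sum_{j=1}^{n} x_j - 1\right).$$
Setting ##\partial L / \partial x_j = 0## gives ##1 + \log x_j + (A^T \vec\lambda)_j + \nu = 0##, i.e. ##x_j = e^{-1 - \nu - (A^T \vec\lambda)_j}##, and substituting back into ##L## yields the dual function
$$g(\vec\lambda, \nu) = -\sum_{j=1}^{n} e^{-1 - \nu - (A^T \vec\lambda)_j} - \vec b^{T}\vec\lambda - \nu,$$
so the dual problem is ##\max_{\vec\lambda \ge \vec 0,\ \nu}\ g(\vec\lambda, \nu)##.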

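And if you do want to solve the thing numerically as a sanity check, here is a minimal sketch using cvxpy (the data ##A## and ##\vec b## below are made up purely for illustration; cvxpy's ##\mathrm{entr}(z) = -z\log z##, so the objective is the negated entropy sum, and the multipliers can be read off the solved constraints):

[code]
import numpy as np
import cvxpy as cp

n = 3
A = np.array([[1.0, 2.0, 3.0]])  # hypothetical constraint data, for illustration only
b = np.array([2.0])

x = cp.Variable(n)
# sum_j x_j log(x_j) == -sum_j entr(x_j), since entr(z) = -z log z
objective = cp.Minimize(-cp.sum(cp.entr(x)))
constraints = [A @ x <= b, cp.sum(x) == 1]  # the domain of entr() enforces x >= 0 implicitly
prob = cp.Problem(objective, constraints)
prob.solve()  # needs an exponential-cone solver, e.g. the bundled SCS

print("x* =", x.value)                         # primal solution
print("lambda* =", constraints[0].dual_value)  # multiplier for A x <= b
print("nu* =", constraints[1].dual_value)      # multiplier for sum(x) = 1
[/code]

Comparing these dual values against ##g(\vec\lambda, \nu)## above gives a quick consistency check.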
 
Thanks @BvU for taking the time to write this out! I am going to take some time to thoroughly read and (try to) understand everything in the post before getting back to you with any follow-up questions.
 
