How can I find the optimal vector x for a constrained optimization problem?


Discussion Overview

The discussion revolves around finding the optimal vector x for a constrained optimization problem, specifically minimizing a quadratic function subject to linear equality constraints. The focus is on approaches to handle cases where the constraint matrix may have a rank less than the number of constraints, leading to redundancy in the constraints.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant describes the problem of minimizing the function f(x) = ∑x_i² subject to the constraint Ax = b, noting that the trivial solution x = 0 exists but the rank of A complicates the use of Lagrange multipliers.
  • Another participant suggests that if the rank of A is less than m, one should identify and use a smaller set of independent constraints to apply Lagrange multipliers, stating that any independent set will yield the same solution for x.
  • A different participant acknowledges the presence of redundant constraints and expresses interest in exploring whether the suggested method of reducing constraints is the only approach available before proceeding with implementation.
  • Another proposed method involves explicitly eliminating variables by rewriting constraint equations and iteratively reducing the system until only independent constraints remain, suggesting this may be simpler depending on the problem's structure.
  • A further idea is to set up the Lagrange multiplier system with all constraints, including redundant ones, and solve it using singular value decomposition (SVD), which is noted to work for under-determined least-squares problems.
  • One participant indicates they will investigate the literature on SVD to assess its applicability to the problem at hand.

Areas of Agreement / Disagreement

Participants express differing views on the best approach to handle redundancy in constraints, with multiple methods proposed but no consensus on a single optimal solution. The discussion remains unresolved regarding the most effective strategy.

Contextual Notes

Participants highlight the complexity introduced by redundant constraints and the rank of matrix A, indicating that the problem's structure may influence the choice of method. There are unresolved considerations regarding the implications of these factors on the optimization process.

dilbert2011
Hi all,
I am working on a project and stuck at the following problem.

Find vector [tex]x_{n\times 1}[/tex] which minimizes the function
Code:
[tex]f(x) = \sum_{i=1}^{n}x_{i}^{2}[/tex]

subject to the linear equality constraint
Code:
[tex][A]_{m\times n} x_{n \times 1}=b_{m\times 1}[/tex] with [tex]m\leq n[/tex]

The function f(x) trivially has a minimum value of 0 at x = 0. The equality constraint is an under-determined system with multiple solutions, and one of those solutions is certain to minimize f(x), so I am relatively certain that a solution to this problem exists. The additional issue is that the rank of matrix A could be less than m, so the standard Lagrange multiplier approach does not work in this case.

Any suggestions on how to approach this problem ?
 
If the rank of A is less than m, then your m constraint equations are not all independent. If you convert A to a smaller set of independent constraints, you can then use Lagrange multipliers.

The "best" way to do that would probably be to figure out why the redundant constraints have got into your math model, and remove them from it. Alternatively, you can find the rank of A and a set of independent basis vectors (i.e. independent constraints) numerically.

Note, the set of independent constraints is not unique, but it doesn't matter which independent set you find. Any set will give you the same solution for x.
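This first suggestion (numerically pick out an independent subset of constraints, then apply Lagrange multipliers) can be sketched in a few lines of Python with numpy. The greedy rank test and the function names here are my own, and this is only a sketch under those assumptions, not a robust implementation:

```python
import numpy as np

def independent_rows(A, tol=1e-10):
    """Greedily pick a maximal set of linearly independent rows of A."""
    rows, rank = [], 0
    for i in range(A.shape[0]):
        if np.linalg.matrix_rank(A[rows + [i], :], tol=tol) > rank:
            rows.append(i)
            rank += 1
    return rows

def min_norm_lagrange(A, b):
    """Minimize sum(x**2) subject to Ax = b after dropping redundant rows."""
    idx = independent_rows(A)
    Ar, br = A[idx], b[idx]
    n, r = A.shape[1], len(idx)
    # Stationarity of sum(x**2) + lam.T @ (Ar @ x - br) gives the
    # KKT system  [2I  Ar.T; Ar  0] [x; lam] = [0; br].
    K = np.block([[2 * np.eye(n), Ar.T],
                  [Ar, np.zeros((r, r))]])
    rhs = np.concatenate([np.zeros(n), br])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]
```

With the redundant rows dropped, the reduced KKT matrix is nonsingular, so an ordinary linear solve suffices.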
 
AlephZero,

Unfortunately, there is no escaping redundant constraints in the problem I am looking at. The approach you suggest is something I have thought of already and is certainly doable. Indeed, if the equality constraints had full row rank, the problem would have been much easier to deal with.

But I would still like to know if this is the *only* way to do this before I go ahead and code this thing up.
 
Another way is to explicitly eliminate variables from the problem. Find the largest coefficient [itex]|a_{jk}|[/itex] in [itex]A[/itex], and rewrite the j'th constraint equation in the form

[itex]x_k[/itex]= a linear equation in the other variables.

Eliminate [itex]x_k[/itex] from the other constraint equations and from the quadratic form you are minimizing.

Repeat till the remaining constraints have all their coefficients zero (to within rounding error) - i.e. they were redundant.

Mathematically, this is very much the same as the previous method, but it may be simpler to implement depending on the size of the problem relative to the number of constraints, and/or the sparseness of the matrix [itex]A[/itex].

A third idea which I haven't investigated, but might work: just set up the Lagrange multiplier equation system for all the constraints including the redundant ones, then solve it by doing a singular value decomposition (SVD) and ignore the singular values that are zero. That certainly works for solving unconstrained least-squares problems which are under-determined because data points are not sufficient to fix the values for all the variables in the model. See any good numerical methods text under "SVD" or "least squares" for more details.
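A sketch of that third idea, assuming numpy (whose `pinv` is computed via the SVD, discarding singular values below a cutoff): build the Lagrange multiplier system with all m constraints, redundant ones included, and apply the pseudoinverse to the resulting singular system. The function name is mine:

```python
import numpy as np

def min_norm_svd(A, b):
    """Minimize sum(x**2) subject to Ax = b, keeping all constraints
    (even redundant ones) and solving the singular KKT system by SVD."""
    m, n = A.shape
    K = np.block([[2 * np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    # pinv uses the SVD and ignores (numerically) zero singular values,
    # which is exactly the proposed treatment of the redundant rows.
    sol = np.linalg.pinv(K) @ rhs
    return sol[:n]
```

The redundancy only makes the multipliers non-unique; the x part of every solution of this consistent KKT system is the same, so the pseudoinverse recovers it. For this particular objective the minimum-norm solution can also be obtained directly as `np.linalg.pinv(A) @ b`, which the sketch above should reproduce.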
 
Let me read up the literature on SVD and see if it can work.
 
