Minimizing infinity norm squared

In summary, the problem at hand is to find the maximizer of an expression combining a scalar product with a squared infinity norm, subject to nonnegativity and an infinity-norm bound on the decision variables. Several approaches are discussed, including reformulations with auxiliary variables, a quadratic programming package, and relaxing the coupling constraints with Lagrange multipliers. However, because the problem arises as a sub-problem inside a distributed algorithm, a centralized optimizer is not available and alternative methods may be needed.
  • #1
farooq117
I have to minimize an expression of the following type:

min <a,x>-L||x-u||_inf^2
s.t.: ||x||_inf <= R,

where a is a vector of coefficients, x is the vector of decision variables, <.,.> denotes the scalar product, R and L are scalars, u is some constant (known) vector, and 'inf' denotes the infinity norm.

I know how to deal with the situation where the expression is of the type

||x-u||_inf.

But, I do not know how to approach the expression when the infinity norm is squared. I hope someone out there can suggest some ideas, or a book that I can consult, etc.
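
For context, one standard way to handle the unsquared case (presumably the approach meant above) is the epigraph trick: minimize t subject to -t <= x_i - u_i <= t for all i, which is a linear program. A minimal sketch in Python, assuming SciPy is available and using invented data:

import numpy as np
from scipy.optimize import linprog

# Invented data: minimize ||x - u||_inf subject to ||x||_inf <= R.
m = 4
u = np.array([0.5, -1.0, 2.0, 0.3])
R = 1.5

# Decision vector is [x_1, ..., x_m, t]; the objective is just t.
c = np.concatenate([np.zeros(m), [1.0]])

# Epigraph constraints:  x_i - t <= u_i  and  -x_i - t <= -u_i.
A_ub = np.block([
    [ np.eye(m), -np.ones((m, 1))],
    [-np.eye(m), -np.ones((m, 1))],
])
b_ub = np.concatenate([u, -u])

# ||x||_inf <= R becomes box bounds on x; t only needs to be nonnegative.
bounds = [(-R, R)] * m + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal ||x - u||_inf:", res.fun)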
 
  • #2
A hint that might be useful: if ||x||_inf <= R, then ||x||_inf^2 <= R^2. You should then be able to proceed just as in your other case. If not, you need to show us more ...
 
  • #3
well.. the actual problem is to find the argument which maximizes the following expression:

<a,x> - (L/2)||x-u||_inf^2,
s.t.: x >= 0,
||x||_inf <= R,

where a and u are known vectors, x is the vector of decision variables, and R and L are known scalars. Furthermore, the vector u satisfies the following:

u >= 0, ||u||_inf <= R.

I know of a simple trick for dealing with the minimization of an expression of the form

||Ax-b||_inf.

But I cannot figure out how to approach the situation where the infinity norm expression is squared and where the scalar product adds a linear term as well.
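
As a reference point for the reformulations discussed below, the problem as stated here is a concave maximization over a convex set, so it can be solved directly by a disciplined convex programming tool. A rough sketch using CVXPY (assumed available, and used only as a sanity check, since a centralized solver turns out not to be an option later in the thread), on invented data:

import numpy as np
import cvxpy as cp

# Invented data, reused by the later sketches in this thread.
m = 4
a = np.array([1.0, -0.5, 2.0, 0.7])
u = np.array([0.2, 0.9, 0.1, 0.4])
L, R = 2.0, 1.0

x = cp.Variable(m, nonneg=True)        # x >= 0
z = cp.Variable(nonneg=True)           # equals ||x - u||_inf at the optimum

objective = cp.Maximize(a @ x - (L / 2) * cp.square(z))
constraints = [cp.norm(x - u, "inf") <= z,
               cp.norm(x, "inf") <= R]

prob = cp.Problem(objective, constraints)
prob.solve()
print("reference maximum:", prob.value)
print("maximizer:", x.value)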
 
  • #4
i have an approach in mind.. i thought i'd get your opinion on it..

the problem is to find the maximum of the following:
<a,x> - (L/2)||x-u||_inf^2 s.t.: x >= 0, ||x||_inf <= R.

This is the same as finding the minimum of
(L/2)||x-u||_inf^2 - <a,x> s.t.: x >= 0, ||x||_inf <= R.

I use the following trick:

min t
s.t.: (L/2)||x-u||_inf^2 - <a,x> <= t,
x >= 0, ||x||_inf <= R.

And write this as follows:

min t
s.t. (L/2)(xi-ui)^2 - <a,x> <= t, for all i = 1,...,m
x >= 0, ||x||_inf <= R.

This gives me a linear cost with quadratic and linear constraints. And the constraints are coupled. Do you think that
a) the above is correct; and
b) if it's possible to have a formulation where the constraints are linear or uncoupled?
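
This formulation can be checked numerically with a general nonlinear solver; a sketch using SciPy's SLSQP (assumed available), reusing the invented data from the earlier sketch:

import numpy as np
from scipy.optimize import minimize

# Same invented data as the earlier sketch.
m = 4
a = np.array([1.0, -0.5, 2.0, 0.7])
u = np.array([0.2, 0.9, 0.1, 0.4])
L, R = 2.0, 1.0

# Decision vector v = [x_1, ..., x_m, t]; minimize t.
def objective(v):
    return v[m]

# One inequality per coordinate: t - (L/2)*(x_i - u_i)^2 + <a, x> >= 0.
constraints = [
    {"type": "ineq",
     "fun": lambda v, i=i: v[m] - 0.5 * L * (v[i] - u[i]) ** 2 + a @ v[:m]}
    for i in range(m)
]

bounds = [(0.0, R)] * m + [(None, None)]       # 0 <= x_i <= R, t free
v0 = np.concatenate([np.full(m, 0.5 * R), [10.0]])

res = minimize(objective, v0, method="SLSQP", bounds=bounds,
               constraints=constraints)
print("minimum of (L/2)||x-u||_inf^2 - <a,x>:", res.fun)
print("so the maximum of the original objective is about:", -res.fun)

Up to solver tolerance, the value should agree (with the sign flipped) with the reference maximum from the earlier sketch.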
 
  • #5
there is another approach.. this gives a quadratic cost with coupled linear constraints.. i'd be very grateful if you could check the working..

the problem is to find the maximum of
<a,x> - (L/2)||x-u||_inf^2 s.t.: x >= 0, ||x||_inf <= R.

this is the same as finding the minimum of
(L/2)||x-u||_inf^2 - <a,x> s.t.: x >= 0, ||x||_inf <= R.

I express this as follows:

min t0^2 + t1 + t2 + ... + tm
s.t.: (L/2)||x-u||_inf^2 <= t0^2,
-ai*xi <= ti, i = 1,...,m,
x >= 0, ||x||_inf <= R.

I express the first constraint as follows:
(L/2)(xi - ui)^2 <= t0^2, i = 1,...,m.

I express these constraints (with t0 >= 0, which the linear constraints below force anyway) as follows:
ui - sqrt(2/L)*t0 <= xi <= ui + sqrt(2/L)*t0, i = 1,...,m.

The optimization problem is given by:
min t0^2 + t1 + ... + tm
s.t.: -xi - sqrt(2/L)*t0 <= - ui, i = 1,...,m
xi - sqrt(2/L)*t0 <= ui, i = 1,...,m
-ai*xi -ti <= 0, i = 1,...,m
0 <= xi <= R, i = 1,...,m

i don't think i can uncouple the constraints, though..
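
This working can also be checked numerically; a sketch of the formulation above in CVXPY (assumed available), reusing the invented data from the earlier sketches:

import numpy as np
import cvxpy as cp

# Same invented data as the earlier sketches.
m = 4
a = np.array([1.0, -0.5, 2.0, 0.7])
u = np.array([0.2, 0.9, 0.1, 0.4])
L, R = 2.0, 1.0

x = cp.Variable(m)
t0 = cp.Variable()
t = cp.Variable(m)

s = np.sqrt(2.0 / L)
constraints = [-x - s * t0 <= -u,              # u_i - sqrt(2/L)*t0 <= x_i
               x - s * t0 <= u,                # x_i <= u_i + sqrt(2/L)*t0
               -cp.multiply(a, x) - t <= 0,    # -a_i*x_i <= t_i
               x >= 0, x <= R]                 # 0 <= x_i <= R

prob = cp.Problem(cp.Minimize(cp.square(t0) + cp.sum(t)), constraints)
prob.solve()
print("optimum of the reformulation:", prob.value)
print("(should be the negative of the reference maximum, up to solver tolerance)")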
 
  • #6
Hey, sorry for not answering earlier ... I live in a different timezone :)

farooq117 said:
i have an approach in mind.. i thought i'd get your opinion on it..

min t
s.t. (L/2)(xi-ui)^2 - <a,x> <= t, for all i = 1,...,m
x >= 0, ||x||_inf <= R.

Of course, in the second line you can write
s.t. 0 <= x_i <= R for all i

as you did in your other example. The calculation you did seems fine to me, but I don't know how to uncouple your constraints either ...
 
  • #7
thanks..
 
  • #8
farooq117 said:
there is another approach.. this gives a quadratic cost with coupled linear constraints.. i'd be very grateful if you could check the working..
[...]
i don't think i can uncouple the constraints, though..

You want to minimize M*||x-u||^2 - <a,x>, s.t. ||x|| <= R. Here, M = L/2 and || . || = sup-norm. This can be re-written as

min M*z^2 - <a,x>
s.t. z >= x(i)-u(i), z >= u(i)-x(i) for all i,
-R <= x(i) <= R for all i.

This has just a single quadratic term in the objective, but is otherwise linear. Somehow, one feels there must be a better way to solve it than using a standard quadratic programming package, although that would certainly work in this case, because we have a convex quadratic programming problem.

This only works because we are *minimizing* ||x-u||^2; if, instead, you wanted to maximize the objective M*||x-u||^2 - <a,x> you would have a problem with a mixed continuous-combinatorial aspect to it, and formulation/solution would be difficult.

RGV
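
A sketch of this reformulation as a standard convex QP, again in CVXPY (assumed available) with the same invented data, keeping the x >= 0 constraint from the original problem (with it, the box -R <= x(i) <= R above becomes 0 <= x(i) <= R):

import numpy as np
import cvxpy as cp

# Same invented data as the earlier sketches; M = L/2 as above.
m = 4
a = np.array([1.0, -0.5, 2.0, 0.7])
u = np.array([0.2, 0.9, 0.1, 0.4])
L, R = 2.0, 1.0
M = L / 2

x = cp.Variable(m)
z = cp.Variable()                      # plays the role of ||x - u||_inf

objective = cp.Minimize(M * cp.square(z) - a @ x)
constraints = [x - u <= z,             # z >= x_i - u_i for all i
               u - x <= z,             # z >= u_i - x_i for all i
               x >= 0, x <= R]         # box constraints

prob = cp.Problem(objective, constraints)
prob.solve()
print("QP optimum:", prob.value, "(negate to recover the original maximum)")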
 
  • #9
RGV said:
Somehow, one feels there must be a better way to solve it than using a standard quadratic programming package, although that would certainly work in this case, because we have a convex quadratic programming problem.

in fact, in my case, a quadratic programming package isn't really an option.. the problem that i described is one sub-problem in one iteration of a gradient-based algorithm for distributed optimization.. if there is a centralized optimizer, then perhaps it can use such a quadratic programming package.. but a centralized optimizer isn't allowed..

perhaps, relaxing the coupling constraints using Lagrange multipliers could be an alternative to a quadratic programming package.. but i doubt the efficacy of such an approach.. it may still require global coordination.. moreover, it may be too slow for what is essentially one sub-problem in a larger algorithm requiring many iterations..
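
One solver-free possibility (not something proposed in the thread, just a sketch): the objective <a,x> - (L/2)||x-u||_inf^2 is concave and the feasible set 0 <= x_i <= R is a box, so a projected supergradient ascent with diminishing step sizes should approach the optimum, and the projection is a componentwise clip. Whether it is fast enough inside an outer iterative algorithm is exactly the concern raised above. The function name and step-size rule below are made up for illustration:

import numpy as np

def solve_subproblem(a, u, L, R, iters=2000):
    """Projected supergradient ascent (a rough sketch, steps not tuned) for
       max <a, x> - (L/2)*||x - u||_inf^2   s.t.  0 <= x_i <= R."""
    x = np.clip(u, 0.0, R)                        # feasible starting point
    best_x, best_val = x.copy(), -np.inf
    for k in range(1, iters + 1):
        d = x - u
        j = np.argmax(np.abs(d))                  # coordinate achieving the sup-norm
        g = a.copy()                              # a supergradient of the objective
        g[j] -= L * d[j]                          # derivative of (L/2)*(x_j - u_j)^2
        x = np.clip(x + g / (L * np.sqrt(k)), 0.0, R)   # diminishing step + projection
        val = a @ x - 0.5 * L * np.max(np.abs(x - u)) ** 2
        if val > best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val

# Invented data matching the earlier sketches.
a = np.array([1.0, -0.5, 2.0, 0.7])
u = np.array([0.2, 0.9, 0.1, 0.4])
x_hat, val = solve_subproblem(a, u, L=2.0, R=1.0)
print("approximate maximum:", val, "at x =", x_hat)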
 

FAQ: Minimizing infinity norm squared

1. What is the infinity norm squared?

The infinity norm, also known as the maximum norm, measures the size of a vector (or function) as the largest absolute value of its components (or of the function's values over its domain). The infinity norm squared is simply the square of that quantity.
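
A concrete example (NumPy used purely for illustration):

import numpy as np

v = np.array([3.0, -7.0, 2.5])
inf_norm = np.linalg.norm(v, np.inf)    # max(|3.0|, |-7.0|, |2.5|) = 7.0
print(inf_norm, inf_norm ** 2)          # 7.0 49.0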

2. Why is minimizing the infinity norm squared important?

Minimizing the infinity norm squared penalizes the worst-case (maximum) error or deviation in a system rather than an average, which is useful when the largest error is what determines accuracy and reliability.

3. How is the infinity norm squared different from other norms?

The infinity norm squared is different from other norms, such as the Euclidean norm or the Manhattan norm, because it depends only on the largest absolute component instead of a sum over all components. This makes it a more suitable measure for systems where the worst-case error is what matters.

4. What are some applications of minimizing the infinity norm squared?

Minimizing the infinity norm squared has various applications in fields such as control systems, signal processing, and optimization. It is commonly used to design controllers, filters, and other systems that require high accuracy and stability.

5. How can one minimize the infinity norm squared?

There are various methods for minimizing the infinity norm squared, including subgradient or projected-gradient methods and epigraph reformulations that turn the problem into a linear or quadratic program, as in the thread above. The approach used depends on the specific system and its requirements.
