Can this optimization problem be solved?

In summary, the thread discusses an optimization problem involving the minimization of a cost function over a binary matrix. The cost of connecting one element to another depends on how many elements are already connected to the target element and on whether a specific element is connected to it. The thread also touches on brute force and simulated annealing as potential solution methods. The problem arises from resource allocation in a distributed control system, and the goal is to find the global optimum. The original poster is considering implementing simulated annealing and has questions about how to handle the constraints on the binary matrix.
  • #1
smehdi
Hello, I am working on an optimization problem, but I am not sure whether it can be formulated and solved with conventional solvers.

Assume the minimization problem for a set of elements ##\mathcal{N} =\{ 1,\dots, h, \dots, i,\dots, N \}##
$$
\mathrm{minimize}\quad C = \sum_{i=1}^{N} \sum_{j=1}^{N} m_{ij}\, c_{ij}
$$ in which ##m_{ij} \in \mathbf{M}##.
##\mathbf{M}## is a binary ##N\times N## matrix such that ##m_{ij}=1## means that element ##i## is connected to element ##j##, and its cost ##c_{ij}## is defined as $$ c_{ij} = \frac{a_{ij}}{1+\sum_{n=1}^{i} m_{nj}} - \sum_{h=1}^{i} \frac{ m_{hj}\, a_{hj}}{\left(1+\sum_{n=1}^{h} m_{nj}\right) \left(2+\sum_{n=1}^{h} m_{nj}\right) }$$ in which ##a_{ij}## and ##a_{hj}## are given.
In a simpler case, we can consider
$$ c_{ij} = \frac{a_{ij}}{1+\sum_{n=1}^{i} m_{nj}} - \frac{ m_{hj}\, a_{hj}}{\left(1+\sum_{n=1}^{h} m_{nj}\right) \left(2+\sum_{n=1}^{h} m_{nj}\right) }$$ for a given ##h##.
In other words, the cost for element ##i## of connecting to element ##j## depends on how many elements are connected to ##j## and on whether a particular element ##h## is connected to ##j##.
Every element must be connected to at least one other element, and ##m_{ii} = 0,\ \forall i \in \mathcal{N}##.
In fact, such a matrix has to be found $$
\mathbf{M} =
\begin{bmatrix}
m_{11} & \dots & m_{1j}& \dots& m_{1N} \\
\vdots & \dots & \vdots & \dots & \vdots \\
m_{i1} & \dots & m_{ij} & \dots& m_{iN} \\
\vdots & \dots & \vdots & \dots & \vdots \\
m_{N1}& \dots & m_{Nj}& \dots& m_{NN} \\
\end{bmatrix}.$$ I am looking for the globally optimal result; the choice of solver is not important.
Actually, the main problem is that the denominator of the second term of ##c_{ij}## is nonlinear in the decision variables ##m_{nj}##, and I am wondering whether it can be converted to a linear form or solved by a solver as it is.
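For concreteness, here is a minimal sketch of how the objective can be evaluated for a candidate ##\mathbf{M}## (Python with NumPy; the helper name `total_cost` and the shift from 1-based to 0-based indices are my own):

```python
import numpy as np

def total_cost(M, A):
    """Evaluate C = sum_{i,j} m_ij * c_ij for a binary matrix M and given
    coefficients A, following the 1-based formulas above shifted to 0-based."""
    N = M.shape[0]
    S = np.cumsum(M, axis=0)          # S[h, j] = sum_{n <= h} m_nj
    C = 0.0
    for i in range(N):
        for j in range(N):
            if M[i, j] == 0:
                continue              # only connected pairs contribute
            c = A[i, j] / (1 + S[i, j])
            for h in range(i + 1):    # h = 1, ..., i in the 1-based notation
                c -= M[h, j] * A[h, j] / ((1 + S[h, j]) * (2 + S[h, j]))
            C += c
    return C
```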
Thanks.
 
  • #2
If ##N## is not too big, you can find a brute-force numerical solution.
The code is simple; no "conventional solver" is needed.

Or you can compute the ##c_{ij}## and then feed them to any solver.
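A minimal sketch of that brute-force enumeration (Python; it reuses the `total_cost` helper sketched in post #1 and assumes the one-connection-per-row constraint that comes up later in the thread):

```python
import itertools
import numpy as np

def brute_force(A):
    """Try every M with exactly one 1 per row and m_ii = 0, keep the cheapest.
    There are (N-1)^N candidates, so this is only viable for small N."""
    N = A.shape[0]
    best_C, best_M = np.inf, None
    for cols in itertools.product(range(N), repeat=N):
        if any(j == i for i, j in enumerate(cols)):
            continue                              # would violate m_ii = 0
        M = np.zeros((N, N), dtype=int)
        M[np.arange(N), cols] = 1                 # one connection per row
        C = total_cost(M, A)
        if C < best_C:
            best_C, best_M = C, M
    return best_C, best_M
```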
 
  • #3
smehdi said:
in which ##a_{ij}## and ##a_{hj}## are given.

Are they non-negative?
 
  • #4
This problem screams "simulated annealing" in my opinion. For large ##\mathbf{M}##'s, where you can't brute-force every single combination (the computational time for evaluating ##2^{N^2}## combinations blows up fast), it will give you results that are really good (close to the global minimum), but may not be THE optimal solution (unless you run it long enough to evaluate all combinations).

It is not very difficult to implement either. If you are going for this alternative, I will be happy to answer more of your questions.

Other than that, if you manage to get a linear problem, you could use the branch-and-bound method with LP relaxations. I don't know if you could do something similar for a nonlinear problem, but in any case I think simulated annealing is the better (and easier) option.

Where did you come across this model?
 
  • #5
Irene Kaminkowa said:
If ##N## is not too big, you can find a brute-force numerical solution.
The code is simple; no "conventional solver" is needed.

Or you can compute the ##c_{ij}## and then feed them to any solver.

Thanks, ##N## is around 20.
A "brute force" approach should work, but I am looking for a more efficient way to solve the problem.
Actually, I am trying to find a way to formulate it as a standard optimization problem.
 
  • #6
Stephen Tashi said:
Are they non-negative?

All the ##a_{ij},\ i,j \in \mathcal{N}##, are non-negative.
 
  • #7
maka89 said:
This problem screams "simulated annealing" in my opinion. For large ##\mathbf{M}##'s, where you can't brute-force every single combination (the computational time for evaluating ##2^{N^2}## combinations blows up fast), it will give you results that are really good (close to the global minimum), but may not be THE optimal solution (unless you run it long enough to evaluate all combinations).

It is not very difficult to implement either. If you are going for this alternative, I will be happy to answer more of your questions.

Other than that, if you manage to get a linear problem, you could use the branch-and-bound method with LP relaxations. I don't know if you could do something similar for a nonlinear problem, but in any case I think simulated annealing is the better (and easier) option.

Where did you come across this model?
Thanks. "simulated annealing" is a good candidate to use. I will work a bit more to see if I can find an easier way, preferably in an standard optimization form.

The problem comes from distributed control for resource allocation in a multi-agent system. When an agent uses a resource, the efficiency of the network depends on whether the other agents use the same resource or not. For instance, when two agents can be served by one resource, the cost assigned to each agent is less than that of assigning one resource per agent, since the denominator of the cost depends on the number of agents using the resource. In this case, assigning a lower cost to the agents saves resources for the whole system, i.e., one resource is required for the whole system instead of two. The cost function that I am using is a very specific form of the Shapley value.
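To illustrate the sharing effect with the `total_cost` sketch from post #1 (a toy example of my own, with ##N=3## and all ##a_{ij}=1##):

```python
import numpy as np

A = np.ones((3, 3))
M_share = np.array([[0, 0, 1],
                    [0, 0, 1],
                    [0, 1, 0]])   # elements 1 and 2 both connect to element 3
M_split = np.array([[0, 0, 1],
                    [1, 0, 0],
                    [0, 1, 0]])   # every element connects to a different one
print(total_cost(M_share, A))     # 0.75 with this toy data
print(total_cost(M_split, A))     # 1.00 -> sharing is cheaper
```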

What I am trying to find is the global optimum for the system, via a centralized approach. It will be used as a benchmark for the distributed algorithm, to see how well the distributed approach performs compared to the global optimum.
 
  • #8
maka89 said:
This problem screams "simulated annealing" in my opinion. For large ##\mathbf{M}##'s, where you can't brute-force every single combination (the computational time for evaluating ##2^{N^2}## combinations blows up fast), it will give you results that are really good (close to the global minimum), but may not be THE optimal solution (unless you run it long enough to evaluate all combinations).
It is not very difficult to implement either. If you are going for this alternative, I will be happy to answer more of your questions.

Well, I am going for simulated annealing, but I have some questions regarding the implementation of this algorithm.
I have a couple of constraints on ##\mathbf{M}##, like ##\sum_{j \in \mathcal{N}} m_{ij}=1##, and some more.
I think I first have to find a feasible ##\mathbf{M}## based on my constraints and then give it to the SA algorithm. In this case, how ##\mathbf{M}## is generated will also be very important. In fact, ##\mathbf{M}## must be generated in a way that, while respecting the constraints, covers the whole feasible region, in order to make sure that we do not miss the global optimum. Moreover, ##\mathbf{M}## must be completely random, but at the same time an ##\mathbf{M}## generated and rejected at any step by the SA algorithm must not be generated again. I think this would be a bit complicated, consume a huge amount of memory, and make the run time very high. Right?
Another issue is that the MATLAB optimization toolbox has the SA algorithm implemented. Based on your experience, is it better to use the MATLAB toolbox, or should I write the code myself?
 
  • #9
I wouldn't bother with making sure that an ##\mathbf{M}## is not generated twice.

I haven't tried the MATLAB version; it's probably good. It does not take many lines of code to implement the algorithm, but maybe it takes some time to understand it.

I did not know that you had constraints. It basically just means that you have to be a bit clever about how you define your states and/or the part of the algorithm that chooses a neighbour state.

If I understood your constraint right, each row has exactly one nonzero entry? With the constraint you gave, one way of choosing states and neighbours is: let ##j(i)## be the column index of the nonzero entry in row ##i##. You can then choose a neighbour state by 1. picking a random row, and 2. moving the nonzero entry in that row to the left or the right in the matrix with even probability (unless you are looking at the first or last column, in which case it can only go one way).
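A minimal sketch of that scheme (Python; `total_cost` is the helper sketched in post #1, and the cooling schedule, the random start state, and the rule for hopping over the forbidden diagonal ##m_{ii}=0## are my own choices):

```python
import numpy as np

def sa_optimize(A, n_iter=100_000, T0=1.0, cooling=0.9995, seed=0):
    """Simulated annealing over states j(i) = column of the single 1 in row i,
    using the left/right neighbour move described above."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    # Random feasible start: one connection per row, never on the diagonal.
    state = np.array([rng.choice([k for k in range(N) if k != i]) for i in range(N)])

    def to_matrix(s):
        M = np.zeros((N, N), dtype=int)
        M[np.arange(N), s] = 1
        return M

    cost = total_cost(to_matrix(state), A)
    best_state, best_cost = state.copy(), cost
    T = T0
    for _ in range(n_iter):
        i = rng.integers(N)
        step = rng.choice([-1, 1])        # move the 1 left or right
        j = state[i] + step
        if j == i:
            j += step                     # hop over the forbidden diagonal
        if not 0 <= j < N:
            continue                      # move falls off the matrix; skip
        cand = state.copy()
        cand[i] = j
        c = total_cost(to_matrix(cand), A)
        # Metropolis rule: always accept improvements, sometimes accept worse.
        if c < cost or rng.random() < np.exp(-(c - cost) / T):
            state, cost = cand, c
            if cost < best_cost:
                best_state, best_cost = state.copy(), cost
        T *= cooling
    return best_cost, to_matrix(best_state)
```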
 

1. Can all optimization problems be solved?

No, not all optimization problems can be solved. Some problems may be too complex or have too many variables for a solution to be found. In addition, some problems may have multiple solutions or no feasible solution at all.

2. How do I know if an optimization problem can be solved?

One way to determine if an optimization problem can be solved is by checking whether it is a convex optimization problem. In a convex optimization problem, every local minimum is a global minimum, and such problems can be solved efficiently using various algorithms. Additionally, the problem may need to be formulated in a specific form, such as a linear program, for a solution to be found.

3. What are some common techniques for solving optimization problems?

Some common techniques for solving optimization problems include gradient descent, the simplex method, branch and bound, and dynamic programming. These techniques apply to different types of optimization problems, such as linear programming, nonlinear programming, and integer programming.
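As a small illustration of the first of these, a gradient-descent sketch for a smooth convex function (Python; the quadratic, step size, and iteration count are arbitrary choices of mine):

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, n_iter=200):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x0 - 3)^2 + (x1 + 1)^2; its gradient is 2(x - [3, -1]).
x_min = gradient_descent(lambda x: 2 * (x - np.array([3.0, -1.0])), x0=[0.0, 0.0])
print(x_min)  # converges to [3, -1], the unique global minimum
```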

4. Can an optimization problem be solved exactly?

In most cases, an optimization problem cannot be solved exactly due to the complexity and number of variables involved. Instead, algorithms aim to find the best possible solution within a reasonable amount of time. However, for some small and simple problems, an exact solution may be achievable.

5. What factors can affect the solvability of an optimization problem?

The solvability of an optimization problem can be affected by various factors such as the complexity of the problem, the number of variables, the form of the problem, and the availability of efficient algorithms. Additionally, real-world constraints and limitations may also impact the solvability of a problem.
