mentil said:
Homework Statement
You want to process x units of a product at y processing plants, minimizing the total time spent processing all the units. Each plant y_i has a queue of q_i units ahead of you and a processing rate of r_i units/minute. If you can process units simultaneously at different plants, what is the minimum amount of time you can spend?

Homework Equations
The Attempt at a Solution
I formulated the problem as follows:
min( max( (x > 0) * (x + q) / m ) )
where x, q, m are vectors of the quantities sent to, the queue lengths at, and the processing rates of each plant,
and with constraints
sum(x)=total units
x_i>=0
The (x > 0) factor is just an elementwise indicator: if we send anything to plant y_i, then we also have to count the time spent waiting behind its queue.
I get what I think is the correct solution by using an evolutionary search algorithm, but I was wondering if anyone has suggestions on how I might restructure this as a linear programming problem, or on which algorithm is the right one to use (otherwise it seems like an NP-hard problem?). Does anyone know of similar types of problems or methods? I hope this is the right area to post this... thanks!
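For reference, here is a minimal sketch of the objective I am evaluating inside the search (NumPy assumed; the function name is mine and the variables just mirror my notation above):

```python
import numpy as np

def total_time(x, q, m):
    """Objective: time until the last unit is done under allocation x.

    x: units sent to each plant, q: queue lengths ahead of us,
    m: processing rates in units/minute.  A plant we actually use
    finishes at time (q_i + x_i) / m_i; we wait for the slowest
    plant among those we used.
    """
    used = x > 0
    if not used.any():
        return np.inf  # no plant used: nothing gets processed
    return np.max((x[used] + q[used]) / m[used])

# Example: 10 units split 6/4 over two plants.
print(total_time(np.array([6.0, 4.0]), np.array([2.0, 0.0]), np.array([2.0, 1.0])))
# plant 1: (6+2)/2 = 4.0, plant 2: (4+0)/1 = 4.0 -> 4.0 minutes
```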
Let me re-phrase your question. Say you have processors i = 1, 2, ..., n. Processor i has a delay d_i (d_i = q_i/r_i, the time it takes to clear its queue); that is, processor i is not available to you until time d_i has elapsed. So, if we have N units to process (my N = your x) and let x_i be the number of units processed on processor i, then the completion time is the maximum of the d_i + x_i/r_i for i = 1, 2, ..., n, but *only counting the i that you actually use*. (Note that d_i + x_i/r_i = (q_i + x_i)/r_i, which matches your objective.) You want to minimize this, subject to x_i >= 0 and sum x_i = N.
This can be written as a standard mixed-integer linear programming problem. The problem I have is that if I show you how to do it I will be doing the problem for you. Let me try to give a hint without giving too much away.
Let T = completion time (T >= 0). How is T related to d_1 + x_1/r_1 if processor 1 is used? How is it related to d_2 + x_2/r_2 if processor 2 is used? And so on. So, let y_i be a binary variable, with y_i = 0 if processor i is not used and y_i = 1 if it is used. How can you model the situation that T >= d_1 + x_1/r_1 if y_1 = 1, with no relevant restriction on T if y_1 = 0? Take a large, positive number M > 0; for example, any M >= max_i(d_i) + N/min_i(r_i) will do. Suppose you write the constraint T >= d_1 + x_1/r_1 - M*(1 - y_1). What do you get when y_1 = 1? What do you get when y_1 = 0? Of course, this is not the whole story: you don't want to have all y_i = 0, and you don't want to have x_i > 0 with y_i = 0. Can you see how to ensure that x_i > 0 only if y_i = 1?
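A minimal sketch of the resulting big-M model, assuming the PuLP library (note that the linking constraint x_i <= N*y_i below is one standard answer to the last question above):

```python
import pulp

def solve_milp(N, d, r, M=None):
    """Big-M MILP sketch: minimize completion time T.

    d[i] = q[i]/r[i] is processor i's delay; r[i] is its rate (units/minute).
    """
    n = len(d)
    if M is None:
        M = max(d) + N / min(r)  # safe big-M, as suggested above
    prob = pulp.LpProblem("min_completion_time", pulp.LpMinimize)
    T = pulp.LpVariable("T", lowBound=0)
    x = [pulp.LpVariable(f"x{i}", lowBound=0) for i in range(n)]
    y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(n)]
    prob += T  # objective
    for i in range(n):
        # binds only when y_i = 1; vacuous when y_i = 0
        prob += T >= d[i] + (1.0 / r[i]) * x[i] - M * (1 - y[i])
        # linking constraint: x_i > 0 forces y_i = 1
        prob += x[i] <= N * y[i]
    prob += pulp.lpSum(x) == N  # all units get processed
    prob += pulp.lpSum(y) >= 1  # use at least one processor
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(T), [pulp.value(v) for v in x]

# Example: 10 units, delays d = [1, 4], rates r = [2, 1].
print(solve_milp(10, [1.0, 4.0], [2.0, 1.0]))
# -> T = 16/3 ~ 5.33 minutes, x ~ [8.67, 1.33]
```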
There is also another, less systematic but more elementary method that I think will work (though I have no fully convincing proof). By re-labeling if necessary, suppose d_1 < d_2 < ... < d_n. Put (d_2 - d_1)*r_1 units on Machine M1 (or, if this is >= N, put all N units on M1). So, either we are done, or else we now have M1 and M2 both available at the same time. If we put x more units together on M1 and M2 and require the same finishing times, we need to put x_1 on M1 and x_2 on M2, with x_1/r_1 = x_2/r_2 and x_1 + x_2 = x. Do that, and increase x until the completion times on M1 and M2 reach d_3 or the job is done. If we reach d_3 we now have M1, M2 and M3 all available at the same time, and we load x more units on those three (keeping all three completion times equal) until we are done or we reach d_4, and so on. Intuitively, this procedure ought to produce the optimal schedule, but at the moment that needs proof.
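A minimal Python sketch of this procedure (the function and variable names are my own): instead of growing x incrementally, it computes the same equal-finish-time T in closed form for each prefix of machines, using the fact that all machines in use finish together, i.e. sum_{i<=k} (T - d_i)*r_i = N.

```python
def waterfill_schedule(N, d, r):
    """Greedy 'equal finishing time' schedule sketched above.

    Machines are taken in order of delay; we keep adding the next
    machine while the common finish time T for the current set would
    run past that machine's availability.  For the k machines in use,
        sum_{i<=k} (T - d_i) * r_i = N
    gives T = (N + sum d_i*r_i) / sum r_i.
    """
    order = sorted(range(len(d)), key=lambda i: d[i])
    total_rate = 0.0
    weighted_delay = 0.0  # sum of d_i * r_i over machines in use
    for k, i in enumerate(order):
        total_rate += r[i]
        weighted_delay += d[i] * r[i]
        T = (N + weighted_delay) / total_rate
        last = (k + 1 == len(order))
        if last or T <= d[order[k + 1]]:
            # machine j in use processes (T - d_j)*r_j units
            x = [0.0] * len(d)
            for j in order[:k + 1]:
                x[j] = (T - d[j]) * r[j]
            return T, x

# Same numbers as the MILP sketch: 10 units, d = [1, 4], r = [2, 1].
print(waterfill_schedule(10, [1.0, 4.0], [2.0, 1.0]))
# -> (5.333..., [8.666..., 1.333...]), matching the MILP answer
```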
RGV