benorin said:
All variables are non-negative, yes. ##p## is in the millions, as are ##a, d##, and ##h##. Therefore it is not likely that ##k < 9,\!250,\!000,## so we must deal with the floor function, I suppose. Any thoughts on this? I suppose I could solve the constraint for a variable, substitute that expression into the function to be maximized, and look for extrema as usual, but do I need to look at all points where the derivative of the floor function is undefined? Seems so: ##\frac{d}{dp}\text{floor}(50 + 0.0001 p )## is undefined whenever
##50 + 0.0001 p\in \mathbb{Z}^{+}\Rightarrow p\in \left\{ 10,\!000 j | j\in\mathbb{Z}^{+}\right\}##
and the constraint solved for ##a## is
##a=\frac{1}{1439}\left( k-427d-925p-\frac{912 h}{5}\right)##
substituting both these into the function we get
##f\left( \frac{1}{1439}\left( k-427d-925p-\frac{912 h}{5}\right) , d, p, h\right) = \left[ \frac{4}{1439}\left( k-427d-9,\!250,\!000\, j-\frac{912 h}{5}\right) +3d+3h+c_1\right] c_2 \cdot \left( 2+ 0.01( 50+j) \right)## for ##j\in\mathbb{Z}^{+}##...
Oops, out of time right now to work on this; I'll be back later. Feel free to chime in, and thanks!
I have found through an example that an optimal solution might not exist; that is, you can sometimes have a "supremum" but not a "maximum".
Since the constant ##c_2## just scales the maximization objective we can omit it. Furthermore, ##c_1## is an additive constant for each evaluation of the floor function, so we can drop it from the optimization, provided that we put it back later. That is, the constant ##c_1## appears as an additive constant ##2.5 c_1## in one subproblem, as ##2.51 c_1## in the neighboring subproblem, and so on. After solving each subproblem we can add back the appropriate constant in order to make a true comparison between them.
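To spell that out: on the interval ##p \in [10\,000\, j,\ 10\,000 (j+1))## the floor term is frozen at ##50 + j##, so (reading the objective from the quoted post) the function to be maximized factors as
$$
f = c_2\left( 2 + 0.01\,\lfloor 50 + 0.0001\, p\rfloor\right)\left( 4a + 3d + 3h + c_1\right)
  = c_2\, m_j \left( Z + c_1\right), \qquad m_j = 2 + 0.01(50+j) = 2.5,\ 2.51,\ 2.52,\ \ldots
$$
where ##Z = 4a + 3d + 3h## is the linear part we maximize in each subproblem; ##c_2## and ##c_1## are exactly the pieces set aside above.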
I took ##k = 416\,655\,000##, which allows five ##p##-intervals ##[0,10\,000), [10\,000, 20\,000), [20\,000, 30\,000), [30\,000, 40\,000)## and ##[40\,000, 50\,000)## (the equality constraint and non-negativity force ##p \leq 416\,655\,000/9259 = 45\,000##). The first two subproblems are
$$
\begin{array}{rcl}
\text{P1:} & \max &Z_1 = 4a + 3d+ 3h\\
&\text{s.t.}& 1439 a +427 d + 9259 p + (912/5) h = 416\,655\,000\\
& & 0 \leq p \leq 10\,000\\
\text{P2:} & \max &Z_2 = 4a + 3d+ 3h\\
&\text{s.t.}& 1439 a +427 d + 9259 p + (912/5) h = 416\,655\,000\\
& & 10\,000 \leq p \leq 20\,000
\end{array}
$$
and the other three are similar.
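If you want to experiment outside Maple, here is a minimal sketch of the same decomposition using SciPy's linprog (my own illustration, not the LPSolve code; the coefficients are copied from the formulation above, and c1 is a placeholder you would have to supply). It is only meant to show the mechanics, not to reproduce the LPSolve output below digit for digit.
```python
# Sketch of the subproblem decomposition (one LP per p-interval), assuming the
# coefficients from the formulation above; c1 is a hypothetical placeholder.
from scipy.optimize import linprog

k = 416_655_000
c1 = 0.0  # placeholder for the additive constant c_1

best = None
for j in range(5):  # p-intervals [10000*j, 10000*(j+1)), relaxed to closed intervals
    # Variables x = (a, d, p, h), all >= 0.
    # linprog minimizes, so we minimize -(4a + 3d + 3h) to maximize Z.
    c = [-4.0, -3.0, 0.0, -3.0]
    A_eq = [[1439.0, 427.0, 9259.0, 912.0 / 5.0]]
    b_eq = [k]
    bounds = [(0, None), (0, None), (10_000 * j, 10_000 * (j + 1)), (0, None)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    if not res.success:
        continue
    Z = -res.fun                   # subproblem value of 4a + 3d + 3h
    m = 2 + 0.01 * (50 + j)        # 2.50, 2.51, ... from the frozen floor term
    value = m * (Z + c1)           # add c_1 back before comparing subproblems
    if best is None or value > best[0]:
        best = (value, j, res.x)

if best is not None:
    print("best interval index:", best[1], "objective (c_2 omitted):", best[0])
```
Keep in mind the half-open-interval caveat discussed below: an LP solution sitting exactly on an interval's right endpoint really belongs to the next subproblem.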
The solutions to P1 and P2 (obtained using Maple's LPSolve package) are:
$$\begin{array}{cl}
\text{P1:} & Z_1 = 900\,806.1154\\
& a = 225\,201.5288 \\
&d=0, \: h=0 \\
&p = 10\, 000\\
\text{P2:} & Z_2 = 643\,432.9395\\
&a = 160\,858.2349\\
& d = 0, \; h = 0 \\
& p = 20\,000
\end{array}
$$
The problem is that the actual ##p##-intervals for the floor function are half-open ##[0,10\,000), [10\,000, 20\,000)## etc., but the linear programming formulations
do not allow strict inequalities. Therefore, in subproblem P1 we use ##p \leq 10\,000## instead of ##p < 10\,000##, etc. The solution of P1 is right at the upper end of the ##p##-interval, where we would then be using the wrong value of the floor function! In other words, if we use a non-strict inequality such as ##p \leq 9\,999.999\,999\,999\,999## instead of the true ##p < 10\,000##, the linear programming solution would be at ##p = 9\,999.999\,999\,999\,999##. When we use ##p = 10\,000## we really ought to be looking at problem P2. However, the solution of P2 has ##p = 20\,000##, nowhere near ##10\,000##, and its maximum objective ##Z_2## is much worse than ##Z_1##.
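Just to make the jump at the breakpoint concrete (assuming the multiplier ##2 + 0.01\,\text{floor}(50 + 0.0001 p)## from the quoted post), here is a throwaway check:
```python
# Evaluate the floor term on either side of the breakpoint p = 10,000.
import math

for p in (9_999.999_999, 10_000.0):
    n = math.floor(50 + p / 10_000)  # same as floor(50 + 0.0001*p); the division keeps the breakpoint exact
    print(f"p = {p}: floor = {n}, multiplier = {2 + 0.01 * n:.2f}")
# prints floor = 50 (multiplier 2.50) just below the breakpoint and floor = 51 (multiplier 2.51) at it
```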
The "true" maximum for P1 is ##2.5 Z_1 + 2.5 c_1,## while the true maximum for P2 is ##2.51 Z_2 + 2.51 c_2##. Thus, the interval ##[0, 10\,000)## is the true optimal ##p##-interval (and 50 is the true best value of the floor function) as long as ##2.5 Z_1 + 2.5 c_1 > 2.51 Z_2 + 2.51 c_1##. For ##Z_1, Z_2## as given in the solutions above, [0,10 000) is the best interval whenever ##c_1 < 63\,699\ 861.## I don't know what value of ##c_1## you intend to use, but as long as it does not exceed about 64 million, P1 is the best subproblem. However, in that case the value of ##2.5 Z1 + 2.5 c_1## is a supremum of the profit (or whatever you are trying to maximize). You can get withing a hair's breadth of that value, but cannot actually reach it exactly. But, as long as you are satisfied with 1000 decimal place accuracy, you might as well use the linear programming solution for P1 (safely using 10 000 in place of 9 999.999999999999999 ...).
The fact that there is no true maximum in this example should not come as a great surprise. The standard theorem, that a continuous function on a compact set attains a maximum, does not apply to this problem: you are attempting to maximize a discontinuous function over a compact set, so all bets are off. Your maximization objective has jump discontinuities, and that can sometimes lead to trouble, as it does in the example above.
Finally: you will note that in the linear programming solutions some of the variables are zero at the optimum. That is characteristic of linear optimization: there are optimal solutions in which the number of nonzero variables is no more than the number of equality constraints. In our case we have five variables ##a, d, h, p, s_1##, where ##s_1## is a slack variable for the upper bound on ##p##; that is, we write the upper-bound constraint on ##p## as the equality ##p + s_1 = 10\,000## with ##s_1 \geq 0.## This means that our formulation actually has two equality constraints, so the solution has two positive variables, ##a > 0## and ##p > 0##, and three zero variables, ##d = h = s_1 = 0.##
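Written out, the equality-form constraint system for P1 is
$$
\begin{array}{rcl}
1439\, a + 427\, d + 9259\, p + (912/5)\, h &=& 416\,655\,000\\
p + s_1 &=& 10\,000
\end{array}
\qquad a, d, h, p, s_1 \geq 0,
$$
two equations in five unknowns, so a basic optimal solution has at most two nonzero variables.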
That type of behavior will apply in any of the subproblems, so if you want all four of ##a,d,h,p## to be positive, you will need to impose additional constraints. Perhaps your formulation is incomplete in some way.