# Optimization/Simplex Method, proof of unique solution

This was on my homework this week. I already turned it in but the professor hasn't posted the solutions yet so I'm genuinely curious what the answer is.

## Homework Statement

The LP:
http://i48.tinypic.com/25svp6u.png

The problem:
http://i50.tinypic.com/nnnm2w.png

## Homework Equations

$$c^T d = z_N^T d_N$$

sorry for the non-use of latex! I tried to use the ## ## and [itex] environments but couldn't get them to work!

## The Attempt at a Solution

Not sure how standard the notation is, since we don't use a textbook, just the professor's notes. Anyway, I'm taking my first analysis course this semester, so I'm getting decent with proofs.

He gives us a big hint in the problem statement: suppose there exists an ##i## in the index set ##N## such that ##z_{Ni} = 0##. Does this mean that we just let all the remaining components of ##z_N## be strictly positive? I assumed we want to show that if ##z_N \ge 0##, we no longer have a unique solution, so we require ##z_N > 0##. Optimality occurs when ##z_N \ge 0##, but I interpreted the problem as wanting us to show that we need a stronger condition for uniqueness.

From this, I said that ##d_N## must necessarily be the zero vector, since we use the convention that

$$d_{Nj} = \begin{cases} 1, & \text{if } j = j_0 \text{ for some } j_0 \text{ with } z_{N j_0} < 0 \\ 0, & \text{otherwise.} \end{cases}$$

Since we clearly have that ##z_N## is non-negative, ##d_N## must be zero. By the relation ##c^T d = z_N^T d_N## (since ##d_N = 0##), this means we can find a feasible direction ##d## such that ##c^T d = 0##, i.e. we can step to ##x' = x + td##. The objective function value at ##x'## is

$$c^T x' = c^T(x + td) = c^T x + t \, c^T d = c^T x + 0,$$

which is a contradiction, since we assumed ##x## is the unique solution but find that ##x'## is also a solution.

My proof-writing is sketchy, but I think I understood the main gist. The problem is that I don't see how the hint that we should assume a single component of ##z_N## is zero helps us. How is this any different from assuming for contradiction that ##z_N## was strictly positive for all ##i##? For strictly positive ##z_N##, we'd still have ##d_N = 0##.

Last edited:

## Answers and Replies

Ray Vickson
Science Advisor
Homework Helper
Dearly Missed

The hint did not say ##z_{Ni} = 0## for a single ##i##; it said ##z_{Ni} = 0## for *some* ##i##, and that could mean one or two or three or ... .
As to LaTeX: if you use "[t e x] your stuff [/t e x]" (but remove the spaces inside [ and ]) you will get a displayed result, like this: $$(c^T)(x' + td) = (c^T)x' + t(c^T)d = (c^T)x' + 0$$ If, instead, you say "[i t e x] your stuff [/i t e x]" (again, no spaces) you will get it in-line, like this: ##(c^T)(x' + td) = (c^T)x' + t(c^T)d = (c^T)x' + 0##

RGV

Last edited:
Yes, but I'm not entirely seeing how that initial supposition will help. Is it okay to suppose there exists an $i_{0} \in N$ such that $z_{Ni_{0}} = 0$ and that for all $i \ne i_{0}$ $z_{Ni} > 0$? Like I said, I thought the point of the proof was to show that $z_{N} \ge 0$ is not a strong enough condition to guarantee a unique solution.

For what it's worth, the hint gives away so much that I understand the direction I should be going in, but not how to make that initial leap.

[[thanks for the tex tip btw. I have no idea why it wasn't working the first time.]]

Ray Vickson
Homework Helper
Dearly Missed

OK. Let me change the notation a bit, to make it easier to type. By re-labelling the variables if necessary, assume that ##x_1, \dots, x_m## are basic and ##x_{m+1}, \dots, x_n## are non-basic in the optimal solution. However, let me re-name the non-basic variables as ##u_1, \dots, u_r##, where ##r = n - m##; thus ##u = x_N##. The final form of the constraint equations is $$x_B = v - T u,$$ where
$$v = B^{-1}b, \text{ and } T = B^{-1}N.$$ We have
$$Z = \sum c_j x_j = V + \sum_j z_j u_j, \\ \text{where } V = c_B v = c_B B^{-1} b, \text{ and } z = c_N - c_B B^{-1}N = c_N - c_B T.$$ (For simplicity I take cB and cN to be row vectors.) In detail, we have:
$$x_1 = v_1 - t_{11} u_1 - \cdots - t_{1 r} u_r\\ \vdots \\ x_m = v_m - t_{m1} u_1 - \cdots - t_{m r} u_r\\ Z = V + z_1 u_1 + \cdots + z_r u_r$$
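The quantities above are easy to check numerically. Here is a minimal numpy sketch on a made-up toy LP (my own example; the thread's actual LP is only in the linked images), taking ##\{x_1, x_2\}## as the basis:

```python
import numpy as np

# Toy LP in standard form (min c x  s.t.  A x = b, x >= 0):
#   min  -x1 - x2
#   s.t.  x1 + x2 + s1      = 4
#         x1           + s2 = 2
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -1.0, 0.0, 0.0])

basic = [0, 1]      # basis {x1, x2}
nonbasic = [2, 3]   # non-basic {s1, s2}; these play the role of u1, ..., ur

B, N = A[:, basic], A[:, nonbasic]
cB, cN = c[basic], c[nonbasic]

v = np.linalg.solve(B, b)   # v = B^{-1} b, values of the basic variables
T = np.linalg.solve(B, N)   # T = B^{-1} N
z = cN - cB @ T             # reduced costs z = c_N - c_B B^{-1} N
V = cB @ v                  # objective value at this basis

print("v =", v)   # [2. 2.]: x1 = x2 = 2, both > 0, so non-degenerate
print("z =", z)   # [1. 0.]: z >= 0, so optimal; note the zero component
print("V =", V)   # -4.0
```

Since ##z \ge 0## the basis is optimal, but the zero reduced cost on ##s_2## is exactly the situation discussed below.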
We are assuming two things: (i) this solution is non-degenerate, so that ##v_i > 0## for ##i = 1, 2, \dots, m##; and (ii) it is optimal.

Therefore, we need all ##z_j \ge 0##, because if some ##z_p < 0## we could increase the variable ##u_p## a bit and make the value of ##Z## smaller (that is, smaller than ##V##). The non-degeneracy assumption is crucial here; without it (at a degenerate solution) we might have an optimal solution but a non-optimal basis.

If some ##z_p = 0##, we can increase ##u_p## a bit (i.e., by a strictly positive amount) with no change in ##Z##, because the variable ##u_p## does not actually enter into the expression for ##Z## in this basis. So, if some component of ##z## is zero, we can find an optimal solution that is different from the one we started with, and that contradicts uniqueness.
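This step can also be sketched numerically on a made-up toy LP (my own example, not the LP from the linked images): when some reduced cost ##z_p## vanishes, pushing ##u_p## up to the ratio-test limit yields a different feasible point with the same objective value.

```python
import numpy as np

# Toy LP:  min -x1 - x2  s.t.  x1 + x2 + s1 = 4,  x1 + s2 = 2,  all >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -1.0, 0.0, 0.0])
basic, nonbasic = [0, 1], [2, 3]

B, N = A[:, basic], A[:, nonbasic]
v = np.linalg.solve(B, b)            # v = B^{-1} b
T = np.linalg.solve(B, N)            # T = B^{-1} N
z = c[nonbasic] - c[basic] @ T       # reduced costs; here z = [1, 0]

p = 1                                # index with z[p] == 0 (the variable s2)
col = T[:, p]
# Ratio test: largest t with x_B = v - t * col still >= 0
t = min(v[i] / col[i] for i in range(len(v)) if col[i] > 0)

x = np.zeros(len(c))
x[nonbasic[p]] = t                   # increase the zero-reduced-cost variable
x[basic] = v - t * col               # keep A x = b satisfied

print("x =", x)        # [0. 4. 0. 2.], a different feasible point
print("Z =", c @ x)    # -4.0, unchanged: the optimum is not unique
```

The new point is feasible and optimal but differs from the original basic solution ##(2, 2, 0, 0)##, which is the contradiction with uniqueness.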

Note: in applied modelling, we often use this type of argument to tell whether or not a solution is unique: if all the "reduced costs" are non-zero, the solution is unique, but if some reduced costs vanish there are alternative optima (at least in the non-degenerate case).

RGV