
Homework Help: Optimization/Simplex Method, proof of unique solution

  1. Sep 21, 2012 #1
    This was on my homework this week. I already turned it in but the professor hasn't posted the solutions yet so I'm genuinely curious what the answer is.

    1. The problem statement, all variables and given/known data
    The LP:

    The problem:

    2. Relevant equations

    c^{T}d = z_{N}^{T}d_{N}

    sorry for the non-use of latex! I tried to use the ## ## and [itex] environments but couldn't get it to work!

    3. The attempt at a solution

    Not sure how standard the notation is since we don't use a textbook, just the professor's notes. Anyway, I'm taking my first analysis course this semester so I'm getting decent with proofs. He gives us a big hint in the problem statement.

    So suppose there exists an i in the index set N such that ##z_{Ni} = 0##. Does this mean that we just let all the remaining components of z_N be strictly positive? I assumed we want to show that if z_N >= 0, we no longer necessarily have a unique solution, so we require z_N > 0. Optimality occurs when z_N >= 0, but I interpreted the problem as wanting us to show that we need a stronger condition for uniqueness.

    From this, I said that d_N must necessarily be the zero vector since we use the convention that:

    d_Nj0 = 1, for a chosen j0 such that z_Nj0 < 0
    d_Nj = 0, for all other j

    Since we clearly have that z_N is non-negative, d_N must be zero. We have the relation

    (c^T)d = (z_N)^T d_N = 0 (since d_N = 0)

    Then this means that we can find a feasible direction d such that (c^T)d = 0, i.e. we can step to x' = x + td. The objective function value at x' is:

    (c^T)x' = (c^T)(x + td) = (c^T)x + t*(c^T)d = (c^T)x + 0

    which is a contradiction: we assumed x is the unique solution, but x' attains the same objective value, so x' is also a solution.

    My proof-writing is sketchy but I think I understood the main gist. The problem is I don't get how the hint that we should assume a single component of z_N is zero helps us. How is this any different from assuming, for contradiction, that z_N is strictly positive for all i? Because for strictly positive z_N, we'd still have d_N = 0.
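    [Editor's note: a quick numerical sketch may help here. The LP below is a made-up toy example, not the one from the problem statement, but it shows exactly the phenomenon in question: the reduced-cost vector z_N is non-negative with one zero component, and that zero component gives a second, distinct optimal solution.]

    ```python
    import numpy as np

    # Toy LP (illustrative, not the course's problem):
    #   minimize  c^T x = -x1 - x2
    #   subject to x1 + x2 + s = 1,   x1, x2, s >= 0   (s is a slack variable)
    # Take the basis {x1}; then x = (1, 0, 0) is optimal with value -1.
    A = np.array([[1.0, 1.0, 1.0]])     # columns: x1, x2, s
    c = np.array([-1.0, -1.0, 0.0])

    basic, nonbasic = [0], [1, 2]       # x1 basic; x2, s nonbasic
    B, N = A[:, basic], A[:, nonbasic]
    cB, cN = c[basic], c[nonbasic]

    # Reduced costs: z_N = c_N - c_B B^{-1} N
    zN = cN - cB @ np.linalg.solve(B, N)
    print(zN)                           # [0. 1.]  -> z_N >= 0, but z_N1 = 0

    # Because the reduced cost of x2 is zero, the distinct point (0, 1, 0)
    # is also feasible and attains the same objective value:
    x  = np.array([1.0, 0.0, 0.0])
    x2 = np.array([0.0, 1.0, 0.0])
    print(c @ x, c @ x2)                # -1.0 -1.0  -> the optimum is not unique
    ```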
    Last edited: Sep 21, 2012
  3. Sep 21, 2012 #2

    Ray Vickson

    Science Advisor
    Homework Helper

    The hint did not say z_Ni = 0 for a single i; it said z_Ni = 0 for some i, and that could mean one, or two, or three, or more.

    As to LaTeX: if you use "[t e x] your stuff [/t e x]" (but remove the spaces inside [ and ]) you will get a displayed result, like this:
    [tex](c^T)x' = (c^T)(x + td) = (c^T)x + t(c^T)d[/tex] If, instead, you say "[i t e x] your stuff [/i t e x]" (again, no spaces) you will get it in-line, like this: [itex](c^T)x' = (c^T)(x + td) = (c^T)x + t(c^T)d[/itex]

    Last edited: Sep 21, 2012
  4. Sep 21, 2012 #3
    Yes, but I'm not entirely seeing how that initial supposition will help. Is it okay to suppose there exists an [itex]i_{0} \in N[/itex] such that [itex]z_{Ni_{0}} = 0[/itex] and that for all [itex]i \ne i_{0}[/itex] [itex]z_{Ni} > 0[/itex]? Like I said, I thought the point of the proof was to show that [itex]z_{N} \ge 0[/itex] is not a strong enough condition to guarantee a unique solution.

    For what it's worth, the hint gives away so much that I understand the direction I should be going, but not how to make that initial leap.

    [[thanks for the tex tip btw. I have no idea why it wasn't working the first time.]]
  5. Sep 21, 2012 #4

    Ray Vickson

    Science Advisor
    Homework Helper

    OK. Let me change the notation a bit, to make it easier to type. By re-labelling the variables if necessary, assume that x1, ..., xm are basic and xm+1, ..., xn are non-basic in the optimal solution. However, let me re-name the non-basic variables as u1, ..., ur, where r = n - m; thus u = x_N. The final form of the constraint equations is [tex] x_B = v - T u,[/tex] where
    [tex] v = B^{-1}b, \text{ and } T = B^{-1}N.[/tex] We have
    [tex] Z = \sum c_j x_j = V + \sum_j z_j u_j, \\
    \text{where } V = c_B v = c_B B^{-1} b, \text{ and } z = c_N - c_B B^{-1}N
    = c_N - c_B T.[/tex] (For simplicity I take cB and cN to be row vectors.) In detail, we have:
    [tex] x_1 = v_1 - t_{11} u_1 - \cdots - t_{1 r} u_r\\
    \vdots \\
    x_m = v_m - t_{m1} u_1 - \cdots - t_{m r} u_r\\
    Z = V + z_1 u_1 + \cdots + z_r u_r[/tex]
    We are assuming two things: (i) this solution is non-degenerate, so that v_i > 0 for i = 1, 2, ..., m; and (ii) it is optimal.

    Therefore, we need all z_j ≥ 0 (because if some z_p < 0 we could increase the variable u_p a bit and make the value of Z smaller, that is, reduce it below V). The non-degeneracy assumption is crucial here; without it (at a degenerate solution) we might have an optimal solution but a non-optimal basis.

    If some z_p = 0, we can increase u_p a bit (i.e., by a strictly positive amount) with no change in Z, because the variable u_p does not actually enter into the expression for Z in this basis. So, if some component of z is zero we can find an optimal solution that is different from the one we started with, and that would contradict uniqueness.
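    [Editor's note: the tableau relations x_B = v - T u and Z = V + z·u above are easy to check numerically. The snippet below uses a toy LP (my own example, not from the thread) and increases the zero-reduced-cost variable u_1 by a strictly positive amount t; Z never changes, so each t gives a different optimal solution.]

    ```python
    import numpy as np

    # Toy LP: minimize -x1 - x2  s.t.  x1 + x2 + s = 1,  all variables >= 0.
    A = np.array([[1.0, 1.0, 1.0]])      # columns: x1 (basic), x2, s
    b = np.array([1.0])
    c = np.array([-1.0, -1.0, 0.0])

    B, N = A[:, [0]], A[:, [1, 2]]
    cB, cN = c[[0]], c[[1, 2]]

    v = np.linalg.solve(B, b)            # v = B^{-1} b
    T = np.linalg.solve(B, N)            # T = B^{-1} N
    V = cB @ v                           # optimal value V = -1
    z = cN - cB @ T                      # reduced costs z = [0, 1]; z_1 = 0

    # Increase the zero-reduced-cost nonbasic variable u1 = x2 by t in (0, 1]:
    for t in (0.25, 0.5, 1.0):
        u = np.array([t, 0.0])
        xB = v - T @ u                   # stays feasible (xB >= 0) for t <= 1
        Z = V + z @ u
        print(t, xB, Z)                  # Z is -1.0 for every t: alternative optima
    ```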

    Note: in applied modelling, we often use this type of argument to tell whether or not a solution is unique: if all the "reduced costs" are non-zero, the solution is unique, but if some reduced costs vanish there are alternative optima (at least in the non-degenerate case).
