Optimization/Simplex Method, proof of unique solution

In summary, the conversation discusses a homework problem about when a linear program has a unique optimal solution. The problem statement hints that one should assume there exists an i in the index set N such that z_Ni = 0, i.e., that some reduced cost vanishes. The discussion works through why that hint matters and concludes that strictly positive reduced costs (together with non-degeneracy) are what guarantee uniqueness; it also briefly covers how to typeset LaTeX on the forum.
  • #1
hkcool
This was on my homework this week. I already turned it in but the professor hasn't posted the solutions yet so I'm genuinely curious what the answer is.

Homework Statement


The LP:
http://i48.tinypic.com/25svp6u.png

The problem:
http://i50.tinypic.com/nnnm2w.png

Homework Equations



c^{T}d = z_{N}^{T}d_{N}

sorry for the non-use of latex! I tried to use the ## ## and [itex] environments but couldn't get it to work!

The Attempt at a Solution



Not sure how standard the notation is since we don't use a textbook, just the professor's notes. Anyway, I'm taking my first analysis course this semester so I'm getting decent with proofs. He gives us a big hint in the problem statement.

So suppose there exists an i in the index set N such that ##z_{Ni} = 0##. Does this mean that we just let all the remaining components of z_N be strictly positive? I assumed we want to show that if we only require z_N >= 0, we no longer necessarily have a unique solution, so uniqueness requires z_N > 0. Optimality occurs when z_N >= 0, but I interpreted the problem as wanting us to show that we need a stronger condition for uniqueness.

From this, I said that d_N must necessarily be the zero vector since we use the convention that:

d_Nj = 1, if j = j0 for some chosen j0 with z_Nj0 < 0
d_Nj = 0, otherwise

Since we clearly have that z_N is non-negative, d_N must be zero. We have the relation

(c^T)d = (z_N)^T*d_N = 0 (since d_N = 0)

Then this means that we can find a feasible direction d such that (c^T)d = 0, i.e., we can step to x' = x + td for some t > 0. The objective function value at x' is:

(c^T)x' = (c^T)(x + td) = (c^T)x + t*(c^T)d = (c^T)x + 0 = (c^T)x

which contradicts the assumption that x is the unique solution, since x' attains the same objective value and is also feasible, hence also optimal.

My proof-writing is sketchy but I think I understood the main gist. The problem is I don't get how the hint that we should assume a single component of z_N is zero helps us. How is this any different from assuming, for contradiction, that z_N is strictly positive for all i? Because for strictly positive z_N, we'd still have d_N = 0.
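For reference, the relation quoted under "Homework Equations" is the standard one for a feasible direction; a sketch, assuming the constraint matrix is partitioned into basic and non-basic columns A = (B | N) and d satisfies Ad = 0:

[tex] A d = 0 \;\Rightarrow\; d_B = -B^{-1} N d_N, \qquad c^T d = c_B^T d_B + c_N^T d_N = \left(c_N - N^T B^{-T} c_B\right)^{T} d_N = z_N^T d_N. [/tex]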
 
  • #2
hkcool said:
sorry for the non-use of latex! I tried to use the ## ## and [itex] environments but couldn't get it to work!

[...]

My proof-writing is sketchy but I think I understood the main gist. The problem is I don't get how the hint that we should assume a single component of z_N is zero helps us. How is this any different from assuming, for contradiction, that z_N is strictly positive for all i? Because for strictly positive z_N, we'd still have d_N = 0.

The hint did not say z_Ni = 0 for a single i; it said z_Ni = 0 for some i, and that could mean one or two or three or ...

As to LaTeX: if you use "[t e x] your stuff [/t e x]" (but remove the spaces inside [ and ]) you will get a displayed result, like this:
[tex](c^T)x' = (c^T)(x + td) = (c^T)x + t(c^T)d = (c^T)x + 0[/tex] If, instead, you say "[i t e x] your stuff [/ i t e x]" (again, no spaces) you will get it in-line, like this: [itex](c^T)x' = (c^T)(x + td) = (c^T)x + t(c^T)d = (c^T)x + 0[/itex]

RGV
 
  • #3
Yes, but I'm not entirely seeing how that initial supposition will help. Is it okay to suppose there exists an [itex]i_{0} \in N[/itex] such that [itex]z_{Ni_{0}} = 0[/itex] and that for all [itex]i \ne i_{0}[/itex] [itex]z_{Ni} > 0[/itex]? Like I said, I thought the point of the proof was to show that [itex]z_{N} \ge 0[/itex] is not a strong enough condition to guarantee a unique solution.

For what it's worth, the hint gives away enough that I understand the direction I should be going in, but not how to make that initial leap.

[[thanks for the tex tip btw. I have no idea why it wasn't working the first time.]]
 
  • #4
hkcool said:
Yes, but I'm not entirely seeing how that initial supposition will help. Is it okay to suppose there exists an [itex]i_{0} \in N[/itex] such that [itex]z_{Ni_{0}} = 0[/itex] and that for all [itex]i \ne i_{0}[/itex], [itex]z_{Ni} > 0[/itex]? Like I said, I thought the point of the proof was to show that [itex]z_{N} \ge 0[/itex] is not a strong enough condition to guarantee a unique solution.

[...]

OK. Let me change the notation a bit, to make it easier to type. By re-labelling the variables if necessary, assume that x1, ..., xm are basic and xm+1, ..., xn are non-basic in the optimal solution. However, let me re-name the non-basic variables as u1, ..., ur, where r = n-m. Thus, u = xN. The final form of the constraint equations is [tex] x_B = v - T u,[/tex] where
[tex] v = B^{-1}b, \text{ and } T = B^{-1}N.[/tex] We have
[tex] Z = \sum c_j x_j = V + \sum_j z_j u_j, \\
\text{where } V = c_B v = c_B B^{-1} b, \text{ and } z = c_N - c_B B^{-1}N
= c_N - c_B T.[/tex] (For simplicity I take cB and cN to be row vectors.) In detail, we have:
[tex] x_1 = v_1 - t_{11} u_1 - \cdots - t_{1 r} u_r\\
\vdots \\
x_m = v_m - t_{m1} u_1 - \cdots - t_{m r} u_r\\
Z = V + z_1 u_1 + \cdots + z_r u_r
[/tex]
We are assuming two things: (i) this solution is non-degenerate, so that vi > 0 for i = 1,2, ...,m; and (ii) it is optimal.

Therefore, we need all zj ≥ 0, because if some zp < 0 we could increase the variable up a bit and make the value of Z smaller (that is, push it below V). The non-degeneracy assumption is crucial here; without it (at a degenerate solution) we might have an optimal solution but a non-optimal basis.

If some zp = 0, we can increase up a bit (i.e., by a strictly positive amount) with no change in Z, because the variable up does not actually enter into the expression for Z in this basis. So, if some component of z is zero we can find an optimal solution that is different from the one we started with, and that would contradict uniqueness.
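Written out with the dictionary above, a zero reduced cost zp = 0 gives a whole segment of optimal solutions:

[tex] u_p = \theta, \quad u_j = 0 \ (j \ne p), \quad x_i = v_i - t_{ip}\,\theta \ (i = 1, \dots, m), \quad Z = V, [/tex]

for any [itex]0 \le \theta \le \min\{ v_i / t_{ip} : t_{ip} > 0 \}[/itex] (or any [itex]\theta \ge 0[/itex] if no [itex]t_{ip}[/itex] is positive). By non-degeneracy that upper bound is strictly positive, so these are genuinely different optimal solutions.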

Note: in applied modelling, we often use this type of argument to tell whether or not a solution is unique: if all the "reduced costs" are non-zero, the solution is unique, but if some reduced costs vanish there are alternative optima (at least in the non-degenerate case).
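To make that check concrete, here is a minimal numerical sketch (the LP data and the choice of basis below are made up purely to illustrate computing z_N = c_N - c_B B^{-1} N, and are not from the original problem):

[code]
import numpy as np

# Hypothetical standard-form LP:  minimize c x  subject to  A x = b,  x >= 0.
# The basis below is assumed (not computed) to be optimal, just to show the algebra.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

basic = [0, 1]        # columns of A forming B
nonbasic = [2, 3]     # columns of A forming N

B = A[:, basic]
N = A[:, nonbasic]

v = np.linalg.solve(B, b)         # v = B^{-1} b   (basic variable values)
T = np.linalg.solve(B, N)         # T = B^{-1} N
z = c[nonbasic] - c[basic] @ T    # reduced costs z = c_N - c_B B^{-1} N

print("x_B =", v)
print("z_N =", z)

# The uniqueness check described above, in the non-degenerate case:
if np.all(v > 1e-9) and np.all(z > 1e-9):
    print("all reduced costs strictly positive -> unique optimum")
elif np.any(np.isclose(z, 0.0)):
    print("some reduced cost is zero -> alternative optima exist")
[/code]

For this made-up data the basic values come out to (3, 1) and the reduced costs to (1.5, 0.5), all strictly positive, so the check reports a unique optimum.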

RGV
 

1. What is the Simplex Method?

The Simplex Method is an algorithm for solving linear programming problems. It finds an optimal solution, i.e., a point of the feasible region at which the objective function attains its maximum or minimum value, by systematically moving from one basic feasible solution (a vertex of the feasible region) to an adjacent one until an optimal vertex is reached.
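As a quick illustration (not from the thread), here is a tiny LP solved with SciPy's linprog; the default solver is a modern implementation rather than the hand tableau, but the optimal vertex it returns is the same kind of object the simplex method finds. The data are made up:

[code]
from scipy.optimize import linprog

# Maximize 3*x1 + 5*x2 subject to
#   x1 <= 4,   2*x2 <= 12,   3*x1 + 2*x2 <= 18,   x1, x2 >= 0.
# linprog minimizes, so the objective is negated.
c = [-3, -5]
A_ub = [[1, 0],
        [0, 2],
        [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # optimal vertex, approximately [2, 6]
print(-res.fun)  # optimal value of the original max problem, 36
[/code]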

2. How does the Simplex Method work?

The Simplex Method starts at a basic feasible solution and then moves along a path of adjacent basic feasible solutions until an optimal one is reached. Each step is a pivot: a pivot element is chosen, and row operations exchange one basic variable for a non-basic one, producing the adjacent solution; a sketch of a single pivot is shown below.
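A single pivot is just Gauss-Jordan elimination on the tableau. A minimal sketch, with a made-up tableau and a helper function named pivot chosen for illustration only:

[code]
import numpy as np

def pivot(tableau, row, col):
    """One Gauss-Jordan pivot: make column `col` a unit column
    with a 1 in `row`, updating every row of the tableau."""
    T = np.asarray(tableau, dtype=float).copy()
    T[row] /= T[row, col]               # scale pivot row so pivot element is 1
    for i in range(T.shape[0]):
        if i != row:
            T[i] -= T[i, col] * T[row]  # eliminate column `col` from row i
    return T

# Tiny illustration: columns are x1, x2, s1, s2, RHS; last row is the objective.
T0 = np.array([[ 2.0,  1.0, 1.0, 0.0, 10.0],
               [ 1.0,  3.0, 0.0, 1.0, 15.0],
               [-4.0, -3.0, 0.0, 0.0,  0.0]])
print(pivot(T0, 0, 0))   # bring x1 into the basis using row 0 as the pivot row
[/code]

In a full implementation the pivot column is chosen from the negative entries of the objective row and the pivot row by the minimum-ratio test; the elimination step itself is exactly the function above.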

3. What is the proof of unique solution in the Simplex Method?

The usual uniqueness argument looks at the reduced costs at an optimal basic feasible solution. If the solution is non-degenerate and every non-basic reduced cost is strictly positive, then moving away from that vertex in any feasible direction strictly worsens the objective, so the optimum is unique. If some reduced cost is zero, one can move along the corresponding edge without changing the objective, and alternative optimal solutions exist; this is exactly the argument discussed in the thread above.
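In the notation used earlier in the thread, a sketch of the standard argument: if [itex]x^*[/itex] is an optimal basic feasible solution with basis B and reduced costs [itex]z_N = c_N - c_B B^{-1} N[/itex] (a row vector), then every feasible x satisfies

[tex] c^T x \;=\; c^T x^* + z_N\, x_N \;\ge\; c^T x^*, [/tex]

with equality only if [itex]z_N x_N = 0[/itex]. If every component of [itex]z_N[/itex] is strictly positive, this forces [itex]x_N = 0[/itex], hence [itex]x_B = B^{-1} b[/itex] and [itex]x = x^*[/itex]; if some component is zero, the objective value can be matched by a different feasible point, so the optimum need not be unique.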

4. What are the limitations of the Simplex Method?

The Simplex Method applies only to linear programming problems with continuous variables. It also requires a starting basic feasible solution (often obtained with a Phase I procedure) and can run into trouble with degeneracy, where a basic variable equals zero and the method may stall at the same vertex or, in rare cases, cycle through the same sequence of bases.

5. How can the Simplex Method be improved?

The Simplex Method can be improved by using more efficient pivot rules and by strategies that avoid or resolve degeneracy (for example, Bland's rule prevents cycling). Other approaches, such as interior point methods, can also solve linear programming problems and are often more efficient on very large instances.
