Bland's rule proof (linear programming)

In summary: in a simplex dictionary, a non-basic variable may be assigned an arbitrary value ##q##, even one that makes the solution infeasible; because all dictionaries in a cycle are algebraically equivalent, the resulting solution satisfies every one of them, and this is the key step in the proof of Bland's anticycling rule.
  • #1
mertcan
<Moderator's note: Continued from a technical forum and thus no template. Re-opening has been approved by moderator.>

Hi, my question concerns the simplex algorithm's anticycling rule known as Bland's rule. While working through the proof at https://www.math.ubc.ca/~anstee/math340/340blandsrule.pdf I got stuck on a crucial step, and my question is actually quite simple: why can the variable ##x_s = q## take any value?
I believe ##x_s## can only be zero, because during degenerate iterations the entering and leaving variables must both be 0...

Also, a similar proof (of the one in my previous link) can be examined at http://profs.sci.univr.it/~rrizzi/classes/LP/homeworks/Bland/Bland.html. There is a sentence saying that "it must admit any solution which is admitted from" the attached dictionary. "In particular, it must be satisfied by the following choice of values": ##x_s = y##.
Why must it admit any solution, and thus why must the choice of values be ##x_s = y##?
Again, I think it can only be 0, not just any choice of value...
 

  • #2
The author is careful to point out that the hypothetical solution with ##x_s = q## is not necessarily feasible, implying that the restriction that ##x_s = 0## does not apply. This also implies that the solution with ##x_s = q## is not necessarily the actual solution at iteration ##i## of the simplex algorithm.
 
  • #3
tnich said:
The author is careful to point out that the hypothetical solution with ##x_s = q## is not necessarily feasible, implying that the restriction that ##x_s = 0## does not apply. This also implies that the solution with ##x_s = q## is not necessarily the actual solution at iteration ##i## of the simplex algorithm.
Forgive me, I do not understand; could you be more explicit?
I think ##x_s = 0## must be the only solution, because in degenerate iterations the entering and leaving variables must be zero to preserve the same optimal value. So if ##x_s## always equals 0, this proof must be completely WRONG, because the proof depends crucially on the fact that ##x_s## can take any value...
 
  • #4
mertcan said:
Forgive me, I do not understand; could you be more explicit?
I think ##x_s = 0## must be the only solution, because in degenerate iterations the entering and leaving variables must be zero to preserve the same optimal value. So if ##x_s## always equals 0, this proof must be completely WRONG, because the proof depends crucially on the fact that ##x_s## can take any value...
To quote the author in the first paragraph of the proof, "We already know in a cycle that each pivot will be a degenerate pivot (you should know why?) and hence, if xj is fickle, xj = 0 in each basic feasible solution during the cycle." Then he deliberately introduces a not-necessarily-feasible solution which will not be produced by the simplex algorithm. That means that ##x_s## in that solution does not have to be zero.
 
  • #5
mertcan said:
<Moderator's note: Continued from a technical forum and thus no template. Re-opening has been approved by moderator.>

Hi, my question concerns the simplex algorithm's anticycling rule known as Bland's rule. While working through the proof at https://www.math.ubc.ca/~anstee/math340/340blandsrule.pdf I got stuck on a crucial step, and my question is actually quite simple: why can the variable ##x_s = q## take any value?
I believe ##x_s## can only be zero, because during degenerate iterations the entering and leaving variables must both be 0...

Also, a similar proof (of the one in my previous link) can be examined at http://profs.sci.univr.it/~rrizzi/classes/LP/homeworks/Bland/Bland.html. There is a sentence saying that "it must admit any solution which is admitted from" the attached dictionary. "In particular, it must be satisfied by the following choice of values": ##x_s = y##.
Why must it admit any solution, and thus why must the choice of values be ##x_s = y##?
Again, I think it can only be 0, not just any choice of value...

I am not sure my answer here will clarify things for you sufficiently for you to then follow the proof to its end, but here goes, anyway.

Remember, a dictionary is a set of equations written so that some variables are functions of the others. The variables on the left of the "=" signs are called basic (or dependent), while those on the right are called non-basic (or independent). You get a basic solution by deliberately setting the non-basic variables to zero, but you are certainly allowed to set non-basic variables to non-zero values in a dictionary. The result will be a solution that is non-basic, and may even be "infeasible" (with some basic variables < 0).

Remember also, that different dictionaries are algebraically equivalent, so if we look at solutions with some non-basic variable ##x_n = q \neq 0## in some dictionary, the variable will be replaced by the value ##q## in all dictionaries. (I mean that if we eliminate the non-basic variable ##x_n## from a dictionary, and just replace it by a number ##q##, that number ##q## will migrate algebraically among all the dictionaries as you do pivoting.)
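To make this concrete, here is a minimal numerical sketch (the two-equation dictionary below is invented for illustration; it is not taken from the linked proof):

```python
# A toy dictionary with basic variables x3, x4 and non-basic x1, x2:
#   x3 = 4 - x1 - x2
#   x4 = 2 - x1
def dictionary(x1, x2):
    """Evaluate the basic variables from the non-basic ones."""
    x3 = 4 - x1 - x2
    x4 = 2 - x1
    return x3, x4

# The basic solution: non-basic variables deliberately set to zero.
print(dictionary(0, 0))  # (4, 2) -- feasible

# Nothing stops us from plugging a non-zero value into a non-basic
# variable.  The equations are still satisfied, but the resulting
# solution is infeasible (some basic variables go negative):
print(dictionary(5, 0))  # (-1, -3)
```

The second call is exactly the kind of not-necessarily-feasible solution the proof considers: it satisfies the dictionary's equations even though no simplex iteration would ever produce it.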
 
  • #6
tnich said:
To quote the author in the first paragraph of the proof, "We already know in a cycle that each pivot will be a degenerate pivot (you should know why?) and hence, if xj is fickle, xj = 0 in each basic feasible solution during the cycle." Then he deliberately introduces a not-necessarily-feasible solution which will not be produced by the simplex algorithm. That means that ##x_s## in that solution does not have to be zero.
According to Bland's rule, the simplex algorithm should not cycle, since FEASIBLE solutions are produced. But why does the author assume there may be an infeasible solution? Or why does the author pretend that ##x_s## may be infeasible in order to make the proof work? What does the assumption that ##x_s## may be infeasible contribute to the proof?

We already know that the simplex algorithm cannot accept an infeasible ##x_s##, so why does the author say ##x_s## may be infeasible even though ##x_s## must be feasible in the simplex method?
An infeasible ##x_s## does not make sense to me, because we can never encounter such a weird situation...
 
  • #7
mertcan said:
According to Bland's rule, the simplex algorithm should not cycle, since FEASIBLE solutions are produced. But why does the author assume there may be an infeasible solution? Or why does the author pretend that ##x_s## may be infeasible in order to make the proof work? What does the assumption that ##x_s## may be infeasible contribute to the proof?

We already know that the simplex algorithm cannot accept an infeasible ##x_s##, so why does the author say ##x_s## may be infeasible even though ##x_s## must be feasible in the simplex method?
The author is showing that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##. He introduced the not-necessarily-feasible solution to do that. Follow his reasoning and you will see that is true.
 
  • #8
mertcan said:
According to Bland's rule, the simplex algorithm should not cycle, since FEASIBLE solutions are produced. But why does the author assume there may be an infeasible solution? Or why does the author pretend that ##x_s## may be infeasible in order to make the proof work? What does the assumption that ##x_s## may be infeasible contribute to the proof?

We already know that the simplex algorithm cannot accept an infeasible ##x_s##, so why does the author say ##x_s## may be infeasible even though ##x_s## must be feasible in the simplex method?
An infeasible ##x_s## does not make sense to me, because we can never encounter such a weird situation...

He does not say that infeasible solutions will ever be found during the sequence of pivots. He says that all the dictionaries are equivalent, so that you can look at how possibly-infeasible solutions are also propagated among the dictionaries. By allowing for non-basic, possibly infeasible solutions to the equations, you can get some extra information that would not be obvious from feasible solutions alone.
 
  • #9
tnich said:
The author is showing that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##. He introduced the not-necessarily-feasible solution to do that. Follow his reasoning and you will see that is true.
Ray Vickson said:
He does not say that infeasible solutions will ever be found during the sequence of pivots. He says that all the dictionaries are equivalent, so that you can look at how possibly-infeasible solutions are also propagated among the dictionaries. By allowing for non-basic, possibly infeasible solutions to the equations, you can get some extra information that would not be obvious from feasible solutions alone.
OK, you say that no matter what ##x_s## (##= q##) is, feasible or infeasible, ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right) q = \sum_{x_k\in B_i} c^*_k b_k = 0##.
Also, @tnich says the author is showing that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##, and that he introduced the not-necessarily-feasible solution to do that. BUT suppose ##q## is feasible and equals 0; then ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right) q = 0##. So if ##q## is feasible (meaning zero), ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right)## does not have to be zero; it could be any value, and we cannot conclude that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##. What do you think about that?

Also, @Ray Vickson, the author says ##q## can be anything INCLUDING zero, but as you can see above we cannot then accurately say there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##.
You are right that by allowing non-basic, possibly infeasible solutions to the equations the author tries to extract extra information and generalize the case, but aren't we free to apply the real case (##q = 0##), in which ##c^*_k a_{ks}## can be anything?
 
  • #10
mertcan said:
OK, you say that no matter what ##x_s## (##= q##) is, feasible or infeasible, ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right) q = \sum_{x_k\in B_i} c^*_k b_k = 0##.
Also, @tnich says the author is showing that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##, and that he introduced the not-necessarily-feasible solution to do that. BUT suppose ##q## is feasible and equals 0; then ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right) q = 0##. So if ##q## is feasible (meaning zero), ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right)## does not have to be zero; it could be any value, and we cannot conclude that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##. What do you think about that?

Also, @Ray Vickson, the author says ##q## can be anything INCLUDING zero, but as you can see above we cannot then accurately say there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##.
You are right that by allowing non-basic, possibly infeasible solutions to the equations the author tries to extract extra information and generalize the case, but aren't we free to apply the real case (##q = 0##), in which ##c^*_k a_{ks}## can be anything?

No: he shows that
$$\left( c_s - c^*_s + \sum_{x_k \in B_i} c^*_k a_{ks} \right) q = \sum_{x_k \in B_i} c^*_k b_k \: \hspace{4em}(1)$$
for ALL ##q##. If the left-hand side of (1) varied with ##q## while the right-hand side did not, we would have a contradiction. Therefore the left-hand side of (1) cannot vary with ##q##, which means the coefficient of ##q## must equal zero. That in turn implies that the right-hand side equals 0 as well.
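Numerically, the "for ALL ##q##" step can be illustrated like this (a toy sketch: `coef` stands in for the bracketed coefficient ##c_s - c^*_s + \sum c^*_k a_{ks}## and `const` for ##\sum c^*_k b_k##; the sample values of ##q## are made up):

```python
def holds_for_all_q(coef, const, samples=(-3.0, 0.0, 7.5)):
    """Check whether coef*q == const at several distinct values of q.
    An identity linear in q can hold for every q only if coef == 0,
    and then const must be 0 as well."""
    return all(abs(coef * q - const) < 1e-12 for q in samples)

print(holds_for_all_q(2.0, 0.0))  # False: fails as soon as q != 0
print(holds_for_all_q(0.0, 1.0))  # False: 0 == 1 never holds
print(holds_for_all_q(0.0, 0.0))  # True: the only possibility
```

Letting ##q## range over more than the single value 0 is precisely what lets the proof pin both sides down to zero.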
 
  • #11
Ray Vickson said:
No: he shows that
$$\left( c_s - c^*_s + \sum_{x_k \in B_i} c^*_k a_{ks} \right) q = \sum_{x_k \in B_i} c^*_k b_k \: \hspace{4em}(1)$$
for ALL ##q##. If the left-hand side of (1) varied with ##q## while the right-hand side did not, we would have a contradiction. Therefore the left-hand side of (1) cannot vary with ##q##, which means the coefficient of ##q## must equal zero. That in turn implies that the right-hand side equals 0 as well.
BUT ##\sum_{x_k \in B_i} c^*_k b_k## has to be zero anyway. For instance, if at the beginning of the degenerate iterations ##x_k## is basic with value 5 (##b_k##), then after some degenerate iterations this value stays the same and ##c^*_k## remains zero. Or, if at the beginning of the degenerate iterations ##x_k## is basic with value 0 (##b_k##), then after some degenerate iterations it may become non-basic, but it always has value zero anyway. What do you say about that situation? Can there be a situation in which ##\sum_{x_k \in B_i} c^*_k b_k## is not zero?
 
  • #12
mertcan said:
BUT ##\sum_{x_k \in B_i} c^*_k b_k## has to be zero anyway. For instance, if at the beginning of the degenerate iterations ##x_k## is basic with value 5 (##b_k##), then after some degenerate iterations this value stays the same and ##c^*_k## remains zero. Or, if at the beginning of the degenerate iterations ##x_k## is basic with value 0 (##b_k##), then after some degenerate iterations it may become non-basic, but it always has value zero anyway. What do you say about that situation? Can there be a situation in which ##\sum_{x_k \in B_i} c^*_k b_k## is not zero?

I don't say anything about such a situation: it does not arise. The quantity exhibited above is zero (provably so by more than one method), so speculating on what we ought to do if, somehow, it could be nonzero seems fruitless to me. It is zero---period.
 
  • #13
Ray Vickson said:
I don't say anything about such a situation: it does not arise. The quantity exhibited above is zero (provably so by more than one method), so speculating on what we ought to do if, somehow, it could be nonzero seems fruitless to me. It is zero---period.
Forgive me, I still have some trouble. Let's say, like the author, that ##x_s = q## can be either feasible or infeasible. BUT HOW does the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## still hold even when ##x_s## (##= q##) is INFEASIBLE? I know it holds for a feasible ##x_s## or ##q## (when they are 0), but HOW do we prove that the equation always holds even in the infeasible case?

For instance, when ##x_s## enters the basis we are sure ##c_s > 0##. In the feasible case ##x_s = q = 0##, so the change in the optimal value is zero, because ##c_s q = 0##. BUT in the INFEASIBLE case, say ##c_s = 10## and ##x_s = q = 5## (suppose ##x_s## takes a non-zero value when it enters the basis); then the optimal value changes by 50, because ##c_s q = 50##. How is that change of 50 reflected in the expression ##\upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##?

Your answer to this question is really crucial for my understanding...
 
  • #14
mertcan said:
Forgive me, I still have some trouble. Let's say, like the author, that ##x_s = q## can be either feasible or infeasible. BUT HOW does the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## still hold even when ##x_s## (##= q##) is INFEASIBLE? I know it holds for a feasible ##x_s## or ##q## (when they are 0), but HOW do we prove that the equation always holds even in the infeasible case?

For instance, when ##x_s## enters the basis we are sure ##c_s > 0##. In the feasible case ##x_s = q = 0##, so the change in the optimal value is zero, because ##c_s q = 0##. BUT in the INFEASIBLE case, say ##c_s = 10## and ##x_s = q = 5## (suppose ##x_s## takes a non-zero value when it enters the basis); then the optimal value changes by 50, because ##c_s q = 50##. How is that change of 50 reflected in the expression ##\upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##?

Your answer to this question is really crucial for my understanding...

It holds because the author just put ##x_s = q## in one dictionary and then changed to another dictionary. All dictionaries are algebraically equivalent, in the sense that if some numbers ##(x_1, x_2, \ldots, x_n)## satisfy the equations in one dictionary they will also satisfy the equations in another dictionary, and if they fail to satisfy some of the equations in one dictionary they will also fail to satisfy some of the equations in another dictionary. The new equation is just obtained by going to a new dictionary but with the same values of all the ##x_j##s.

This is the absolutely crucial point that you keep missing: ALL DICTIONARIES ARE EQUIVALENT!
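A tiny sketch of that equivalence (the two dictionaries below are invented for illustration; D2 is D1 after pivoting ##x_1## in and ##x_3## out): any assignment of values, feasible or not, either satisfies both dictionaries or violates both.

```python
# D1:  x3 = 6 - 2*x1 - x2       (x3 basic)
# D2:  x1 = 3 - x2/2 - x3/2     (x1 basic, after the pivot)
def residual_d1(x1, x2, x3):
    """How far (x1, x2, x3) is from satisfying dictionary D1."""
    return x3 - (6 - 2 * x1 - x2)

def residual_d2(x1, x2, x3):
    """How far (x1, x2, x3) is from satisfying dictionary D2."""
    return x1 - (3 - x2 / 2 - x3 / 2)

# Three arbitrary points: one feasible, one infeasible (negative
# coordinates), and one satisfying neither dictionary.
for point in [(1.0, 1.0, 3.0), (5.0, -2.0, -2.0), (0.0, 0.0, 0.0)]:
    sat1 = abs(residual_d1(*point)) < 1e-12
    sat2 = abs(residual_d2(*point)) < 1e-12
    print(point, sat1 == sat2)  # always True: the dictionaries agree
```

The infeasible point is the interesting one: it satisfies both dictionaries even though the simplex algorithm would never visit it, which is exactly how the proof carries ##x_s = q## from ##D_i## to ##D_f##.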
 
  • #15
mertcan said:
Forgive me, I still have some trouble. Let's say, like the author, that ##x_s = q## can be either feasible or infeasible. BUT HOW does the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## still hold even when ##x_s## (##= q##) is INFEASIBLE? I know it holds for a feasible ##x_s## or ##q## (when they are 0), but HOW do we prove that the equation always holds even in the infeasible case?

For instance, when ##x_s## enters the basis we are sure ##c_s > 0##. In the feasible case ##x_s = q = 0##, so the change in the optimal value is zero, because ##c_s q = 0##. BUT in the INFEASIBLE case, say ##c_s = 10## and ##x_s = q = 5## (suppose ##x_s## takes a non-zero value when it enters the basis); then the optimal value changes by 50, because ##c_s q = 50##. How is that change of 50 reflected in the expression ##\upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##?

Your answer to this question is really crucial for my understanding...
I think you could look at it this way. Given a feasible basic solution at pivot ##i##, the author has perturbed the solution, adding a delta (called ##q##) to ##x_s##. The equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## shows what happens to the ##z## value as a result. He is saying, "What if we moved the solution at pivot ##i## off of the bounding constraint? How would it affect the solution at pivot ##f##?"
 
  • #16
tnich said:
I think you could look at it this way. Given a feasible basic solution at pivot ##i##, the author has perturbed the solution, adding a delta (called ##q##) to ##x_s##. The equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## shows what happens to the ##z## value as a result. He is saying, "What if we moved the solution at pivot ##i## off of the bounding constraint? How would it affect the solution at pivot ##f##?"
By the way, as @Ray Vickson points out, it is the same solution, just expressed in a different way.
 
  • #17
Ray Vickson said:
It holds because the author just put ##x_s = q## in one dictionary and then changed to another dictionary. All dictionaries are algebraically equivalent, in the sense that if some numbers ##(x_1, x_2, \ldots, x_n)## satisfy the equations in one dictionary they will also satisfy the equations in another dictionary, and if they fail to satisfy some of the equations in one dictionary they will also fail to satisfy some of the equations in another dictionary. The new equation is just obtained by going to a new dictionary but with the same values of all the ##x_j##s.

This is the absolutely crucial point that you keep missing: ALL DICTIONARIES ARE EQUIVALENT!
tnich said:
I think you could look at it this way. Given a feasible basic solution at pivot i, the author has perturbed the solution, adding a delta (called ##q##) to ##x_s##. The equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## shows what happens to the z value as a result. He is saying, "What if we moved the solution at pivot ##i## off of the bounding constraint? How would it affect the solution at pivot f?"
tnich said:
By the way, as @Ray Vickson points out, it is the same solution, just expressed in a different way.
OK, I understand that the dictionaries in a cycle have the same set of solutions, so the objective value stays the same; and even if we change the solutions in a feasible or infeasible way, the objective is again the same in all dictionaries in the cycle. @tnich and @Ray Vickson, since you express your nice answers in different ways, let me focus on the perturbation view (as @tnich says, we can think of a perturbed solution).

On the left and right sides of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## the perturbation must be the same. In the normal simplex method the objective changes by the reduced-cost coefficient of the entering variable times the value it takes after entering the basis. We can see that on the left side: the objective changes by ##c_s## times ##x_s## (##= q##) when ##x_s## enters the basis and ##x_t## leaves, assuming ##x_s## takes the arbitrary value ##q##. BUT I do not understand how this perturbation is ensured on the right side of the equation. For instance, on the left side we use only ##c_s## to express the perturbation, so why do we use ##c^*_s## or ##c^*_k## on the right? Is there a formula for this situation (the perturbation of the objective when the solution deviates)?
 
  • #18
mertcan said:
OK, I understand that the dictionaries in a cycle have the same set of solutions, so the objective value stays the same; and even if we change the solutions in a feasible or infeasible way, the objective is again the same in all dictionaries in the cycle. @tnich and @Ray Vickson, since you express your nice answers in different ways, let me focus on the perturbation view (as @tnich says, we can think of a perturbed solution).

On the left and right sides of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## the perturbation must be the same. In the normal simplex method the objective changes by the reduced-cost coefficient of the entering variable times the value it takes after entering the basis. We can see that on the left side: the objective changes by ##c_s## times ##x_s## (##= q##) when ##x_s## enters the basis and ##x_t## leaves, assuming ##x_s## takes the arbitrary value ##q##. BUT I do not understand how this perturbation is ensured on the right side of the equation. For instance, on the left side we use only ##c_s## to express the perturbation, so why do we use ##c^*_s## or ##c^*_k## on the right? Is there a formula for this situation (the perturbation of the objective when the solution deviates)?
The author carefully sets this up by explaining:
##x_s = q##
##x_j = 0## for ##x_j \notin B_i## and ##j \neq s##
##x_k = b_k − a_{ks}q## for ##x_k \in B_i##
##z = v + c_sq##
Given this information, you can determine the value of ##z## in the dictionary ##D_f##. As the author explains, "The equation for ##z## in ##D_f## is obtained from the equation from ##D_i## by adding suitable multiples of equations of ##D_i## to the ##z## row." Since ##x_s## is not in the basis in ##D_i## but it is in ##D_f##, you will have to account for that.
 
  • #19
tnich said:
The author carefully sets this up by explaining:
##x_s = q##
##x_j = 0## for ##x_j \notin B_i## and ##j \neq s##
##x_k = b_k − a_{ks}q## for ##x_k \in B_i##
##z = v + c_sq##
Given this information, you can determine the value of ##z## in the dictionary ##D_f##. As the author explains, "The equation for ##z## in ##D_f## is obtained from the equation from ##D_i## by adding suitable multiples of equations of ##D_i## to the ##z## row." Since ##x_s## is not in the basis in ##D_i## but it is in ##D_f##, you will have to account for that.
First, @tnich, ##x_s## does not have to be in the basis of ##D_f##, because the proof only says ##c^*_s \le 0##, which means ##x_s## can be either non-basic or basic. Also, @tnich and @Ray Vickson, can the RIGHT-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## be derived from the REVISED simplex method? Can the right side be updated for the objective value according to the revised simplex method?

For instance, in the normal simplex method we proceed one variable at a time: in each iteration one new basic variable and one new non-basic variable emerge. BUT in the REVISED simplex method we could create 3, 4, 5... new basic variables at the same time and 3, 4, 5... new non-basic variables at the same time, so the change in the objective value equals the previous reduced-cost coefficients of those variables times their new basic values. Am I RIGHT?
 
  • #20
mertcan said:
First, @tnich, ##x_s## does not have to be in the basis of ##D_f##, because the proof only says ##c^*_s \le 0##, which means ##x_s## can be either non-basic or basic.
I think you are right about that.
 
  • #21
mertcan said:
Can the RIGHT-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## be derived from the REVISED simplex method? Can the right side be updated for the objective value according to the revised simplex method?

For instance, in the normal simplex method we proceed one variable at a time: in each iteration one new basic variable and one new non-basic variable emerge. BUT in the REVISED simplex method we could create 3, 4, 5... new basic variables at the same time and 3, 4, 5... new non-basic variables at the same time, so the change in the objective value equals the previous reduced-cost coefficients of those variables times their new basic values. Am I RIGHT?

I think that is a valid way of looking at it. Whether you do sequential pivots or change to a new set of basis variables all at once, you should get the same result.
 
  • #22
mertcan said:
First, @tnich, ##x_s## does not have to be in the basis of ##D_f##, because the proof only says ##c^*_s \le 0##, which means ##x_s## can be either non-basic or basic. Also, @tnich and @Ray Vickson, can the RIGHT-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## be derived from the REVISED simplex method? Can the right side be updated for the objective value according to the revised simplex method?

For instance, in the normal simplex method we proceed one variable at a time: in each iteration one new basic variable and one new non-basic variable emerge. BUT in the REVISED simplex method we could create 3, 4, 5... new basic variables at the same time and 3, 4, 5... new non-basic variables at the same time, so the change in the objective value equals the previous reduced-cost coefficients of those variables times their new basic values. Am I RIGHT?
@Ray Vickson, what is your opinion on my last question, quoted here? By the way, @tnich agrees with looking at it that way (as in the revised simplex method) to update the objective value with the new set of solutions.
 
  • #23
mertcan said:
@Ray Vickson, what is your opinion on my last question, quoted here? By the way, @tnich agrees with looking at it that way (as in the revised simplex method) to update the objective value with the new set of solutions.

Most implementations of the revised simplex method I have seen still do one-at-a-time changes, rather than your cited "block-pivoting" method. Basically, it is not easy to see whether changing several basic variables simultaneously can maintain feasibility, or whether it will help or hurt the objective. When we change bases one variable at a time, we are sure we maintain feasibility and are sure of eventual improvement (possibly after a number of degenerate pivots), or else of termination at an optimum or revelation of an unbounded "solution".

Nevertheless, block-pivoting strategies and methods have been studied and developed; Google "block pivoting in linear programming" to see some studies.
 
  • #24
Ray Vickson said:
Most implementations of the revised simplex method I have seen still do one-at-a-time changes, rather than your cited "block-pivoting" method. Basically, it is not easy to see whether changing several basic variables simultaneously can maintain feasibility, or whether it will help or hurt the objective. When we change bases one variable at a time, we are sure we maintain feasibility and are sure of eventual improvement (possibly after a number of degenerate pivots), or else of termination at an optimum or revelation of an unbounded "solution".

Nevertheless, block-pivoting strategies and methods have been studied and developed; Google "block pivoting in linear programming" to see some studies.
As you can see, we are sure the author updates the objective value on the right-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## by changing a new set of several basic variables at the same time (he employs the same solution set as in dictionary ##D_i## in dictionary ##D_f##). For the left-hand side he involves only ##c_s## and updates the objective via that coefficient, which corresponds entirely to the normal simplex method (one variable enters and one leaves in each iteration). According to you, we cannot explain the update of the objective value via the REVISED simplex method (you may be right that it also enters or leaves variables one at a time). In your view, what method does the author use to update the objective on the right-hand side of the equation, involving MULTIPLE basis-variable changes at the same time? (Are you saying the author uses block pivoting to update the basic variables in dictionary ##D_f##?)
Also, am I WRONG in thinking that in dictionary ##D_f## the MULTIPLE basis change is created by employing the same set of solutions as in dictionary ##D_i##?

By the way, @tnich and @Ray Vickson, the REVISED simplex method may not be an entirely valid way of looking at the simultaneous update of the solution set in dictionary ##D_f##, because that method also enters or leaves the basis one variable at a time.
 
  • #25
mertcan said:
As you can see, we are sure the author updates the objective value on the right-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## by changing a new set of several basic variables at the same time (he employs the same solution set as in dictionary ##D_i## in dictionary ##D_f##). For the left-hand side he involves only ##c_s## and updates the objective via that coefficient, which corresponds entirely to the normal simplex method (one variable enters and one leaves in each iteration). According to you, we cannot explain the update of the objective value via the REVISED simplex method (you may be right that it also enters or leaves variables one at a time). In your view, what method does the author use to update the objective on the right-hand side of the equation, involving several basis-variable changes at the same time? (Are you saying the author uses block pivoting to update the basic variables in dictionary ##D_f##?)

By the way, @tnich and @Ray Vickson, the REVISED simplex method may not be an entirely valid way of looking at the simultaneous update of the solution set in dictionary ##D_f##, because that method also enters or leaves the basis one variable at a time.
I think you have misunderstood both my comments and those of @Ray Vickson. I suggest that you do a little research to find out what is normally meant by "revised simplex method". The revised simplex method produces the same pivots as the simplex method with the same result at each pivot.
 
  • #26
mertcan said:
As you can see, the author updates the objective value on the right-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## by changing a whole set of basic variables at once (he plugs the same solution set from dictionary ##D_i## into dictionary ##D_f##). On the left-hand side he involves only ##c_s## and updates the objective through that single coefficient, which matches the ordinary simplex method (one variable enters and one leaves per iteration). In your view, can we not explain the objective update via the REVISED simplex method (you may be right that it, too, enters or leaves one variable per iteration)? What method do you think the author uses to update the objective on the right-hand side, where several basic variables change at the same time? Are you saying the author uses block pivoting to update the basic variables in dictionary ##D_f##?

By the way @tnich, @Ray Vickson, I am not fully convinced that the REVISED simplex method is a valid way to view the simultaneous update of the solution set in dictionary ##D_f##, because that method also enters and leaves the basis one variable at a time.

You seem to be suggesting that the author is performing several basis changes at the same time. NOT TRUE: he introduces a single non-basic variable and then looks at some non-basic solutions where that variable takes a range of possible values ##q##, using that to gain some additional information about the structure of the dictionaries.

Aside from that your message is unclear and I cannot figure out what you are trying to say. However, I think it goes beyond what PF allows in the way of a discussion. You had a specific question, and that was answered. Now, it seems, you want to go beyond the confines of that question and get into additional topics. That needs a new thread. Furthermore, as it is probably no longer a "homework" problem, it likely belongs in a different forum.

In any case I am now leaving this discussion, as I have nothing more to say that is pertinent to the original question.
 
  • #27
Ray Vickson said:
You seem to be suggesting that the author is performing several basis changes at the same time. NOT TRUE: he introduces a single non-basic variable and then looks at some non-basic solutions where that variable takes a range of possible values ##q##, using that to gain some additional information about the structure of the dictionaries.

Aside from that your message is unclear and I cannot figure out what you are trying to say. However, I think it goes beyond what PF allows in the way of a discussion. You had a specific question, and that was answered. Now, it seems, you want to go beyond the confines of that question and get into additional topics. That needs a new thread. Furthermore, as it is probably no longer a "homework" problem, it likely belongs in a different forum.

In any case I am now leaving this discussion, as I have nothing more to say that is pertinent to the original question.
tnich said:
I think you have misunderstood both my comments and those of @Ray Vickson. I suggest that you do a little research to find out what is normally meant by "revised simplex method". The revised simplex method produces the same pivots as the simplex method with the same result at each pivot.
Actually, I am not trying to deviate or stray from the main discussion; I am just eager to comprehend the proof. The forum MENTORS (@fresh_42, @bhobba, ...) suggested that I create a thread here about this question. I have only 1 QUESTION LEFT:

On the left-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## I can easily see that the objective changes by ##c_s \cdot q## when ##x_s = q##; I got that. BUT I cannot understand why the objective change on the right-hand side is ##c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##. WHY do we multiply ##c^*_s## by ##x_s## (##= q##), and ##c^*_k## by ##b_k-a_{ks}q##, TO UPDATE THE OBJECTIVE?
Please, @tnich, @Ray Vickson, I am waiting for your help...
 
  • #28
mertcan said:
Actually, I am not trying to deviate or stray from the main discussion; I am just eager to comprehend the proof. The forum MENTORS (@fresh_42, @bhobba, ...) suggested that I create a thread here about this question. I have only 1 QUESTION LEFT:

On the left-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## I can easily see that the objective changes by ##c_s \cdot q## when ##x_s = q##; I got that. BUT I cannot understand why the objective change on the right-hand side is ##c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##. WHY do we multiply ##c^*_s## by ##x_s## (##= q##), and ##c^*_k## by ##b_k-a_{ks}q##, TO UPDATE THE OBJECTIVE?
Please, @tnich, @Ray Vickson, I am waiting for your help...
I suggest that you follow the author's plan: write out a tableau with a degenerate solution, pivot to bring one of the fickle variables ##x_s## into the basis, set ##x_s=q##, and then perform a pivot to bring a new fickle variable into the basis. See what you get. Then compare your result with ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##.
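A concrete way to carry out this suggestion is to hard-code a toy dictionary and its pivoted form, then check that both objective rows return the same value of ##z## for any ##q##, feasible or not, which is exactly the identity ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##. A minimal sketch (the numbers are a hypothetical example invented here, not the one from the linked notes):

```python
from fractions import Fraction as F

# Two equivalent dictionaries for the same toy LP (hypothetical numbers, not the
# example from the linked notes).
#
# D_i:  x3 = 2 -  x1 -  x2          D_f (pivot: x1 enters, x4 leaves):
#       x4 = 3 - 2x1 +  x2            x1 = 3/2 + (1/2)x2 - (1/2)x4
#       z  = 0 + 3x1 + 2x2            x3 = 1/2 - (3/2)x2 + (1/2)x4
#                                     z  = 9/2 + (7/2)x2 - (3/2)x4

def z_Di(x1, x2):
    """Objective row of dictionary D_i."""
    return 3 * x1 + 2 * x2

def z_Df(x2, x4):
    """Objective row of dictionary D_f."""
    return F(9, 2) + F(7, 2) * x2 - F(3, 2) * x4

# Set the non-basic x2 = q in D_f (x4 stays 0); the basic values x1, x3 then
# follow from D_f's rows.  q need not give a feasible point.
for q in [F(0), F(1), F(4), F(-3)]:
    x2, x4 = q, F(0)
    x1 = F(3, 2) + F(1, 2) * x2 - F(1, 2) * x4
    x3 = F(1, 2) - F(3, 2) * x2 + F(1, 2) * x4
    # A dictionary is just a re-arrangement of the same equations, so both
    # objective rows must evaluate to the same z, feasible or not.
    assert z_Di(x1, x2) == z_Df(x2, x4)
print("objective rows agree for every q")
```

The point of the exercise: every dictionary is a re-arrangement of the same system of equations, so every dictionary's objective row must agree on any solution of those equations, even an infeasible one such as ##q=4## above.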
 
  • #29
tnich said:
I suggest that you follow the author's plan: write out a tableau with a degenerate solution, pivot to bring one of the fickle variables ##x_s## into the basis, set ##x_s=q##, and then perform a pivot to bring a new fickle variable into the basis. See what you get. Then compare your result with ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##.
I think you are suggesting that I do simplex iterations using infeasible solutions. If I bring ##x_s## into the basis and set, for instance, ##q=5##, then the other basic variables change to ##b_k-a_{ks}q##; after that, you want me to continue the simplex method with those values? Am I right?
 
  • #30
mertcan said:
I think you are suggesting that I do simplex iterations using infeasible solutions. If I bring ##x_s## into the basis and set, for instance, ##q=5##, then the other basic variables change to ##b_k-a_{ks}q##; after that, you want me to continue the simplex method with those values? Am I right?
I think you should first do the simplex pivots with ##x_s=0##. Then start again with ##x_s=q## to see how things change.
 
  • #31
Can I ask what is your objective here?

Do you want to understand a specific proof of the Bland Rule or are you after understanding the Bland Rule itself?

If the former, be aware that people who write textbooks, articles, papers etc. are human like the rest of us and make mistakes. When I did my degree there were often mistakes in the lectures, problem sets etc. I remember one of the first subjects I did - Calculus and Analysis A - had the wrong answer to a problem. I worked through it time and time again using all sorts of different approaches - I got my answer each time. The lecturer's answer was wrong. At the next tutorial he said - it can't be wrong - I have been using the same tutorials for years and nobody ever brought a mistake to my attention. He went through it with me and, lo and behold, it was wrong. Since then I have had many such experiences - some where I had to find the correct answer myself. In many of my classes we were told that anybody who spots a mistake in the material will instantly pass. I usually found a number. I asked one lecturer why he did it. He laughed and said anyone good enough to spot mistakes will pass anyway - likely get an honour - so it was just a challenge to good students.

You are obviously having trouble with this - but an internet search on 'Bland rule proof' brings back a number of hits. Why not try to understand some of those first, then come back to your proof? It may be that the proof you have is wrong or missing important details. That's what I would do. I often read papers and come across issues I need to nut out - only rarely do I need somebody else's help. I do an internet search, look in my personal library, and even make the rather lengthy trip to my old Alma Mater and use their library, which as an ex-student I am allowed to do. It is in the middle of Brisbane and parking is murder - but if I have to do it, I have to do it.

I don't know if this is post-grad or undergrad. If undergrad, get your lecturer to help. If post-grad, this is all part of post-grad - you are expected to do it yourself. The same applies if you are studying it simply for pleasure/interest - it's part and parcel of this learning thing.

Here we generally prefer to give hints - you learn better that way. As they say - give a man a fish and he eats for one meal; teach him how to fish and he never starves. Same here. Of course, if required, we will explicitly give the answer - if we can - but you have picked an area that not many regulars know much about. But please, do what I suggest and post back with how you went.

Thanks
Bill
 
  • #32
mertcan said:
Actually, I am not trying to deviate or stray from the main discussion; I am just eager to comprehend the proof. The forum MENTORS (@fresh_42, @bhobba, ...) suggested that I create a thread here about this question. I have only 1 QUESTION LEFT:

On the left-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## I can easily see that the objective changes by ##c_s \cdot q## when ##x_s = q##; I got that. BUT I cannot understand why the objective change on the right-hand side is ##c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##. WHY do we multiply ##c^*_s## by ##x_s## (##= q##), and ##c^*_k## by ##b_k-a_{ks}q##, TO UPDATE THE OBJECTIVE?
Please, @tnich, @Ray Vickson, I am waiting for your help...

Okay, I will relent this once and continue to reply.

The presentation of Bland's rule proof you gave in your link is mostly the same as the one on pages 37--38 of the book "Linear Programming", by Vasek Chvatal, but with some aspects missing.

First, the actual Bland rule involves using the smallest subscript rule for both entering and leaving variables, but your attachment talks about entering variables only; however, maybe in the lectures this was discussed more fully.

Next, your attachment has variable ##x_t## leaving in some dictionary D but entering again in some other dictionary D*, and for that to happen we may need more than one "cycle". That is, we might not have ##x_t## leave and then re-enter in the sequence of dictionaries ##D_0, D_1, \ldots, D_k = D_0##. For example, ##x_t## may be non-basic in ##D_0##, then enter at some ##D_p##, then leave again at some later ##D_q## (##q > p##), never to enter again before ##D_k##. The cited document does not account for such a case.

The argument of Chvatal gets around this potential problem by noting that if we always apply the same rules and get cycling, then the extended sequence ##D_0, D_1, \ldots, D_k (= D_0) , D_1, D_2, \ldots## will always be obtained and it will always contain a subsequence in which ##x_t## leaves in one ##D## and enters again in some later ##D*##. Then the arguments given in your cited document go through, and are essentially a copy of the arguments given by Chvatal.

Finally, I will say that if you Google the keywords "Bland's rule" you will see many different proofs of the rule. If you are having trouble following one proof, nothing is stopping you from going on-line to look at other proofs.
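If it helps to see the rule itself written down, here is a minimal sketch of the smallest-subscript selection (my own illustration, not the author's or Chvatal's code; the function names and toy data are invented), using the dictionary conventions from Chvatal: objective row ##z = \upsilon + \sum_j c^*_j x_j## over non-basic ##j##, and rows ##x_i = b_i - \sum_j a_{ij} x_j## for basic ##i##:

```python
from fractions import Fraction as F

# A sketch of Bland's smallest-subscript rule (my own illustration; the function
# names and toy data are invented).  Dictionary convention as in Chvatal:
# objective row z = v + sum_j c[j]*x_j over non-basic j, and for each basic i,
# x_i = b[i] - sum_j a[i][j]*x_j.

def bland_entering(c):
    """Entering variable: the smallest index j with a positive objective coefficient."""
    candidates = [j for j in sorted(c) if c[j] > 0]
    return candidates[0] if candidates else None   # None -> dictionary is optimal

def bland_leaving(b, a, s):
    """Leaving variable: among rows that bound the increase of x_s (a[i][s] > 0),
    take the minimum ratio b[i]/a[i][s]; break ties by the smallest basic index."""
    ratios = [(b[i] / a[i][s], i) for i in sorted(a) if a[i][s] > 0]
    if not ratios:
        return None                                # direction is unbounded
    best = min(r for r, _ in ratios)
    return min(i for r, i in ratios if r == best)

# Degenerate example: rows x3 and x5 tie at ratio 0; Bland breaks the tie by index.
c = {1: F(3), 2: F(-1), 4: F(2)}
b = {3: F(0), 5: F(0), 6: F(4)}
a = {3: {1: F(1)}, 5: {1: F(2)}, 6: {1: F(1)}}
assert bland_entering(c) == 1       # x1 chosen over x4, though both are eligible
assert bland_leaving(b, a, 1) == 3  # tie between x3 and x5 -> smaller index
```

Using exact `Fraction` arithmetic avoids the floating-point noise that can muddy the degenerate (ratio-zero) ties these anti-cycling arguments are all about.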
 
Last edited:
  • #33
Ray Vickson said:
Okay, I will relent this once and continue to reply.

The presentation of Bland's rule proof you gave in your link is mostly the same as the one on pages 37--38 of the book "Linear Programming", by Vasek Chvatal, but with some aspects missing.

First, the actual Bland rule involves using the smallest subscript rule for both entering and leaving variables, but your attachment talks about entering variables only; however, maybe in the lectures this was discussed more fully.

Next, your attachment has variable ##x_t## leaving in some dictionary D but entering again in some other dictionary D*, and for that to happen we may need more than one "cycle". That is, we might not have ##x_t## leave and then re-enter in the sequence of dictionaries ##D_0, D_1, \ldots, D_k = D_0##. For example, ##x_t## may be non-basic in ##D_0##, then enter at some ##D_p##, then leave again at some later ##D_q## (##q > p##), never to enter again before ##D_k##. The cited document does not account for such a case.

The argument of Chvatal gets around this potential problem by noting that if we always apply the same rules and get cycling, then the extended sequence ##D_0, D_1, \ldots, D_k (= D_0) , D_1, D_2, \ldots## will always be obtained and it will always contain a subsequence in which ##x_t## leaves in one ##D## and enters again in some later ##D*##. Then the arguments given in your cited document go through, and are essentially a copy of the arguments given by Chvatal.

Finally, I will say that if you Google the keywords "Bland's rule" you will see many different proofs of the rule. If you are having trouble following one proof, nothing is stopping you from going on-line to look at other proofs.
bhobba said:
Can I ask what is your objective here?

Do you want to understand a specific proof of the Bland Rule or are you after understanding the Bland Rule itself?

If the former, be aware that people who write textbooks, articles, papers etc. are human like the rest of us and make mistakes. When I did my degree there were often mistakes in the lectures, problem sets etc. I remember one of the first subjects I did - Calculus and Analysis A - had the wrong answer to a problem. I worked through it time and time again using all sorts of different approaches - I got my answer each time. The lecturer's answer was wrong. At the next tutorial he said - it can't be wrong - I have been using the same tutorials for years and nobody ever brought a mistake to my attention. He went through it with me and, lo and behold, it was wrong. Since then I have had many such experiences - some where I had to find the correct answer myself. In many of my classes we were told that anybody who spots a mistake in the material will instantly pass. I usually found a number. I asked one lecturer why he did it. He laughed and said anyone good enough to spot mistakes will pass anyway - likely get an honour - so it was just a challenge to good students.

You are obviously having trouble with this - but an internet search on 'Bland rule proof' brings back a number of hits. Why not try to understand some of those first, then come back to your proof? It may be that the proof you have is wrong or missing important details. That's what I would do. I often read papers and come across issues I need to nut out - only rarely do I need somebody else's help. I do an internet search, look in my personal library, and even make the rather lengthy trip to my old Alma Mater and use their library, which as an ex-student I am allowed to do. It is in the middle of Brisbane and parking is murder - but if I have to do it, I have to do it.

I don't know if this is post-grad or undergrad. If undergrad, get your lecturer to help. If post-grad, this is all part of post-grad - you are expected to do it yourself. The same applies if you are studying it simply for pleasure/interest - it's part and parcel of this learning thing.

Here we generally prefer to give hints - you learn better that way. As they say - give a man a fish and he eats for one meal; teach him how to fish and he never starves. Same here. Of course, if required, we will explicitly give the answer - if we can - but you have picked an area that not many regulars know much about. But please, do what I suggest and post back with how you went.

Thanks
Bill
tnich said:
I think you should first do the simplex pivots with ##x_s=0##. Then start again with ##x_s=q## to see how things change.
tnich said:
I suggest that you follow the author's plan: write out a tableau with a degenerate solution, pivot to bring one of the fickle variables ##x_s## into the basis, set ##x_s=q##, and then perform a pivot to bring a new fickle variable into the basis. See what you get. Then compare your result with ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##.

Initially, thanks for your attention and for relenting... I have been suffocating on this problem for 2 months. I strove to dig something valuable out of the internet, but I kept encountering the same proof of Bland's rule (I bet you cannot find a different kind of proof either; ALL THE PROOFS ARE THE SAME, believe me), so I decided to discuss my problem with you; I did not create a thread the moment I had a problem. By the way, I already have the Chvatal book, and I will present the CONTRADICTION in the proof even in Chvatal, so I would like to share 2 different screenshots from it:
In picture 1 he sets the non-basic variable ##x_s## to some number ##t##, and according to the new value of ##x_s## he also updates 2 different dictionaries. I totally agree with what he does in picture 1.
BUT IN PICTURE 2, he again sets the non-basic variable ##x_s## to some number ##y## (it was ##t## in picture 1, nothing different). Everything goes as I expect until he adds the expression in the RED BOX, which is ##\sum_{i\in B} c^*_i\left(b_i-a_{is}y\right)##. Why does he add the RED BOX?

He also states in the GREEN BOX that the equation in the ORANGE BOX, which is ##z = \upsilon+ \sum_{j=1}^{n} c^*_jx_j##, is obtained from algebraic MANIPULATIONS. WHAT is this ALGEBRAIC MANIPULATION?

I think there is an impasse, a mystery...
 

Attachments

  • picture 1.png
  • picture 2.png
  • #34
mertcan said:
Initially, thanks for your attention and for relenting... I have been suffocating on this problem for 2 months. I strove to dig something valuable out of the internet, but I kept encountering the same proof of Bland's rule (I bet you cannot find a different kind of proof either; ALL THE PROOFS ARE THE SAME, believe me), so I decided to discuss my problem with you; I did not create a thread the moment I had a problem. By the way, I already have the Chvatal book, and I will present the CONTRADICTION in the proof even in Chvatal, so I would like to share 2 different screenshots from it:
In picture 1 he sets the non-basic variable ##x_s## to some number ##t##, and according to the new value of ##x_s## he also updates 2 different dictionaries. I totally agree with what he does in picture 1.
BUT IN PICTURE 2, he again sets the non-basic variable ##x_s## to some number ##y## (it was ##t## in picture 1, nothing different). Everything goes as I expect until he adds the expression in the RED BOX, which is ##\sum_{i\in B} c^*_i\left(b_i-a_{is}y\right)##. Why does he add the RED BOX?

He also states in the GREEN BOX that the equation in the ORANGE BOX, which is ##z = \upsilon+ \sum_{j=1}^{n} c^*_jx_j##, is obtained from algebraic MANIPULATIONS. WHAT is this ALGEBRAIC MANIPULATION?

I think there is an impasse, a mystery...
Once again, I suggest that you mechanically work through the pivots and see what you get.
 

1. What is the Bland rule in linear programming?

The Bland rule, also known as the smallest-index (or minimum-index) rule, is an anti-cycling pivoting rule for the simplex method. Among all candidate entering variables (those whose objective-row coefficient allows the objective to improve), it selects the one with the smallest index, and it breaks ties among candidate leaving variables by smallest index as well.

2. Why is the Bland rule important in linear programming?

The Bland rule is important because it guarantees that the simplex method terminates in a finite number of steps. In particular, it prevents cycling, which occurs when the simplex method returns to a basis (dictionary) it has already visited and repeats the same sequence of degenerate pivots forever.

3. How does the Bland rule proof work?

The Bland rule proof is based on the concept of degeneracy in linear programming. It argues by contradiction: assuming that the simplex method cycles while following the Bland rule leads to inconsistent sign conditions on the objective-row coefficients, so no cycle can occur and the method terminates in a finite number of steps, even in the presence of degenerate solutions.

4. Can the Bland rule be applied to all linear programming problems?

Yes, the Bland rule can be applied to all linear programming problems. It is a general rule that can be used for any type of linear programming problem, regardless of the number of variables or constraints.

5. Are there any limitations to the Bland rule?

The Bland rule guarantees termination but is often slow in practice: it typically needs more iterations than greedier pivoting rules such as Dantzig's most-negative-coefficient rule. For that reason, implementations commonly use a faster rule by default and fall back to the Bland rule only when degeneracy or cycling is detected.
