Bland's rule proof (linear programming)

The discussion centers on Bland's rule in the simplex algorithm, specifically addressing the flexibility of the variable $$x_s$$ in proofs related to anticycling. Participants express confusion over why $$x_s$$ can take any value, including non-zero, when it seems that it should only be zero during degenerate iterations. The author of the proof introduces the concept of potentially infeasible solutions to demonstrate that the algorithm's dictionaries are equivalent, allowing for a broader understanding of variable behavior. This approach aims to reveal relationships between coefficients that would not be apparent if only feasible solutions were considered. Ultimately, the discussion highlights the complexity of understanding variable roles in linear programming proofs.
mertcan
<Moderator's note: Continued from a technical forum and thus no template. Re-opening has been approved by moderator.>

Hi, my question is related to the simplex algorithm's anticycling rule called Bland's rule. While I was working through the proof at https://www.math.ubc.ca/~anstee/math340/340blandsrule.pdf I got stuck on a very important part, and my question is quite simple: why can the variable ##x_s = q## take any value?
I believe ##x_s## can only be zero, because during degenerate iterations the entering and leaving variables must be 0...

Also, a similar proof (of the one in my previous link) can be found at http://profs.sci.univr.it/~rrizzi/classes/LP/homeworks/Bland/Bland.html. There is a sentence saying "it must admit any solution which is admitted from" the dictionary shown in the attached image (img7-gif.gif). "In particular, it must be satisfied by the following choice of values": ##x_s = y##.
Why must it admit any solution, and thus why must the choice of values be ##x_s = y##?
Again, I think it can only be 0; it cannot be an arbitrary choice of value...
 

Attachments

  • img7-gif.gif
The author is careful to point out that the hypothetical solution with ##x_s = q## is not necessarily feasible, implying that the restriction that ##x_s = 0## does not apply. This also implies that the solution with ##x_s = q## is not necessarily the actual solution at iteration ##i## of the simplex algorithm.
 
tnich said:
The author is careful to point out that the hypothetical solution with ##x_s = q## is not necessarily feasible, implying that the restriction that ##x_s = 0## does not apply. This also implies that the solution with ##x_s = q## is not necessarily the actual solution at iteration ##i## of the simplex algorithm.
Forgive me, I do not understand; could you be more explicit?
I think ##x_s = 0## must be the only solution, because in degenerate iterations the entering and leaving variables must be zero to keep the same optimal value. So if ##x_s## always equals 0, this proof must be totally WRONG, because the proof mainly depends on the fact that ##x_s## can take any value...
 
mertcan said:
Forgive me, I do not understand; could you be more explicit?
I think ##x_s = 0## must be the only solution, because in degenerate iterations the entering and leaving variables must be zero to keep the same optimal value. So if ##x_s## always equals 0, this proof must be totally WRONG, because the proof mainly depends on the fact that ##x_s## can take any value...
To quote the author in the first paragraph of the proof, "We already know in a cycle that each pivot will be a degenerate pivot (you should know why?) and hence, if xj is fickle, xj = 0 in each basic feasible solution during the cycle." Then he deliberately introduces a not-necessarily-feasible solution which will not be produced by the simplex algorithm. That means that ##x_s## in that solution does not have to be zero.
 
  • Like
Likes mertcan
mertcan said:
<Moderator's note: Continued from a technical forum and thus no template. Re-opening has been approved by moderator.>

Hi, my question is related to the simplex algorithm's anticycling rule called Bland's rule. While I was working through the proof at https://www.math.ubc.ca/~anstee/math340/340blandsrule.pdf I got stuck on a very important part, and my question is quite simple: why can the variable ##x_s = q## take any value?
I believe ##x_s## can only be zero, because during degenerate iterations the entering and leaving variables must be 0...

Also, a similar proof (of the one in my previous link) can be found at http://profs.sci.univr.it/~rrizzi/classes/LP/homeworks/Bland/Bland.html. There is a sentence saying "it must admit any solution which is admitted from" View attachment 221622 . "In particular, it must be satisfied by the following choice of values": ##x_s = y##.
Why must it admit any solution, and thus why must the choice of values be ##x_s = y##?
Again, I think it can only be 0; it cannot be an arbitrary choice of value...

I am not sure my answer here will clarify things for you sufficiently for you to then follow the proof to its end, but here goes, anyway.

Remember, a dictionary is a set of equations written so that some variables are functions of the others. The variables on the left of the "=" signs are called basic (or dependent), while those on the right are called non-basic (or independent). You get a basic solution by deliberately setting the non-basic variables to zero, but you are certainly allowed to set non-basic variables to non-zero values in a dictionary. The result will be a solution that is non-basic, and may even be "infeasible" (with some basic variables < 0).

Remember also, that different dictionaries are algebraically equivalent, so if we look at solutions with some non-basic variable ##x_n = q \neq 0## in some dictionary, the variable will be replaced by the value ##q## in all dictionaries. (I mean that if we eliminate the non-basic variable ##x_n## from a dictionary, and just replace it by a number ##q##, that number ##q## will migrate algebraically among all the dictionaries as you do pivoting.)
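To make this concrete, here is a tiny Python sketch of a hypothetical two-constraint dictionary (the numbers are made up for illustration, not taken from the linked proof). Setting the non-basic variable ##x_1## to a nonzero value ##q## still satisfies the dictionary equations, but can drive a basic variable negative, i.e. produce an infeasible solution:

```python
# Hypothetical dictionary (illustrative numbers only):
#   x3 = 4 - x1 - x2
#   x4 = 2 - x1
# Basic variables: x3, x4; non-basic variables: x1, x2.

def dictionary_solution(q):
    """Set non-basic x1 = q (x2 stays 0) and read off the basic variables."""
    x1, x2 = q, 0.0
    x3 = 4 - x1 - x2
    x4 = 2 - x1
    return x1, x2, x3, x4

# q = 0 gives the usual basic feasible solution:
print(dictionary_solution(0.0))   # (0.0, 0.0, 4.0, 2.0)

# q = 5 still satisfies the dictionary equations, but x4 < 0, so the
# solution is non-basic and infeasible -- exactly the kind of solution
# the proof is allowed to consider:
print(dictionary_solution(5.0))
```

The simplex algorithm would never visit the second point, but nothing stops us from plugging it into the equations.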
 
  • Like
Likes mertcan
tnich said:
To quote the author in the first paragraph of the proof, "We already know in a cycle that each pivot will be a degenerate pivot (you should know why?) and hence, if xj is fickle, xj = 0 in each basic feasible solution during the cycle." Then he deliberately introduces a not-necessarily-feasible solution which will not be produced by the simplex algorithm. That means that ##x_s## in that solution does not have to be zero.
According to Bland's rule, the simplex algorithm should not get into a cycle while FEASIBLE solutions are produced. But why does the author assume that there may be an infeasible solution? Or why does the author pretend that ##x_s## may be infeasible in order to carry out the proof? What does the assumption that ##x_s## may be infeasible contribute to the proof?

We already know that the simplex algorithm cannot accept an infeasible ##x_s##, so why does the author say that ##x_s## may be infeasible although ##x_s## must be feasible in the simplex method?
An infeasible ##x_s## does not make sense to me, because we cannot encounter such a weird situation...
 
Last edited:
mertcan said:
According to Bland's rule, the simplex algorithm should not get into a cycle while FEASIBLE solutions are produced. But why does the author assume that there may be an infeasible solution? Or why does the author pretend that ##x_s## may be infeasible in order to carry out the proof? What does the assumption that ##x_s## may be infeasible contribute to the proof?

We already know that the simplex algorithm cannot accept an infeasible ##x_s##, so why does the author say that ##x_s## may be infeasible although ##x_s## must be feasible in the simplex method?
The author is showing that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##. He introduced the not-necessarily-feasible solution to do that. Follow his reasoning and you will see that is true.
 
  • Like
Likes mertcan
mertcan said:
According to Bland's rule, the simplex algorithm should not get into a cycle while FEASIBLE solutions are produced. But why does the author assume that there may be an infeasible solution? Or why does the author pretend that ##x_s## may be infeasible in order to carry out the proof? What does the assumption that ##x_s## may be infeasible contribute to the proof?

We already know that the simplex algorithm cannot accept an infeasible ##x_s##, so why does the author say that ##x_s## may be infeasible although ##x_s## must be feasible in the simplex method?
An infeasible ##x_s## does not make sense to me, because we cannot encounter such a weird situation...

He does not say that infeasible solutions will ever be found during the sequence of pivots. He says that all the dictionaries are equivalent, so that you can look at how possibly-infeasible solutions are also propagated among the dictionaries. By allowing for non-basic, possibly infeasible solutions to the equations, you can get some extra information that would not be obvious from feasible solutions alone.
 
  • Like
Likes mertcan
tnich said:
The author is showing that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##. He introduced the not-necessarily-feasible solution to do that. Follow his reasoning and you will see that is true.
Ray Vickson said:
He does not say that infeasible solutions will ever be found during the sequence of pivots. He says that all the dictionaries are equivalent, so that you can look at how possibly-infeasible solutions are also propagated among the dictionaries. By allowing for non-basic, possibly infeasible solutions to the equations, you can get some extra information that would not be obvious from feasible solutions alone.
OK, you say that no matter what ##x_s## (##q##) is, feasible or infeasible, ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right) q = \sum_{x_k\in B_i} c^*_k b_k = 0##.
Also, @tnich says the author is showing that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##, and that he introduced the not-necessarily-feasible solution to do that. BUT let's say ##q## is feasible and equals 0; then ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right) q = 0##. So if ##q## is feasible (meaning zero), then ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right)## does not have to be zero, it can be any value, and we cannot conclude that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##. What do you think about that?

Also, @Ray Vickson, the author says ##q## can be anything INCLUDING zero, but as you can see above, we then cannot accurately say there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##.
You are right that by allowing non-basic, possibly infeasible solutions to the equations, the author tries to get some extra information and generalize the case, but we could also restrict to the real case (##q = 0##), and then ##c^*_k a_{ks}## could be anything?
 
  • #10
mertcan said:
OK, you say that no matter what ##x_s## (##q##) is, feasible or infeasible, ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right) q = \sum_{x_k\in B_i} c^*_k b_k = 0##.
Also, @tnich says the author is showing that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##, and that he introduced the not-necessarily-feasible solution to do that. BUT let's say ##q## is feasible and equals 0; then ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right) q = 0##. So if ##q## is feasible (meaning zero), then ##\left(c_s-c^*_s+\sum_{x_k\in B_i} c^*_k a_{ks}\right)## does not have to be zero, it can be any value, and we cannot conclude that there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##. What do you think about that?

Also, @Ray Vickson, the author says ##q## can be anything INCLUDING zero, but as you can see above, we then cannot accurately say there exists some variable ##x_r \in B_i## for which ##c_r^* a_{rs} < 0##.
You are right that by allowing non-basic, possibly infeasible solutions to the equations, the author tries to get some extra information and generalize the case, but we could also restrict to the real case (##q = 0##), and then ##c^*_k a_{ks}## could be anything?

No: he shows that
$$\left( c_s - c^*_s + \sum_{x_k \in B_i} c^*_k a_{ks} \right) q = \sum_{x_k \in B_i} c^*_k b_k \: \hspace{4em}(1)$$
for ALL ##q##. If the left-hand side of (1) varied with ##q## while the right-hand side did not, we would have a contradiction. Therefore, the left-hand side of (1) cannot vary with ##q##, and that means the coefficient of ##q## must equal zero. That also implies that the right-hand side equals 0 as well.
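The logic of that step can be sanity-checked numerically: if ##Aq = B## holds for every ##q##, with ##B## independent of ##q##, then necessarily ##A = 0## and hence ##B = 0##. A minimal Python check with hypothetical coefficients (purely illustrative, not from the proof):

```python
def holds_for_all_q(coeff, const, samples=(-3.0, 0.0, 1.0, 7.5)):
    """True iff coeff * q == const for every sampled value of q."""
    return all(abs(coeff * q - const) < 1e-12 for q in samples)

# Only the pair coeff == 0, const == 0 survives the "for ALL q" requirement:
print(holds_for_all_q(0.0, 0.0))  # True
print(holds_for_all_q(2.0, 0.0))  # False: left-hand side varies with q
print(holds_for_all_q(0.0, 3.0))  # False: 0 is never equal to 3
```

Checking only ##q = 0## (the feasible case) would accept any coefficient, which is exactly why the proof needs the full range of ##q##.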
 
  • Like
Likes mertcan and tnich
  • #11
Ray Vickson said:
No: he shows that
$$\left( c_s - c^*_s + \sum_{x_k \in B_i} c^*_k a_{ks} \right) q = \sum_{x_k \in B_i} c^*_k b_k \: \hspace{4em}(1)$$
for ALL ##q##. If the left-hand side of (1) varied with ##q## while the right-hand side did not, we would have a contradiction. Therefore, the left-hand side of (1) cannot vary with ##q##, and that means the coefficient of ##q## must equal zero. That also implies that the right-hand side equals 0 as well.
BUT ##\sum_{x_k \in B_i} c^*_k b_k## has to be zero anyway. For instance, if at the beginning of the degenerate iterations ##x_k## is basic with value 5 (##b_k##), then after some degenerate iterations this value stays the same and ##c^*_k## remains zero. Or, if at the beginning of the degenerate iterations ##x_k## is basic with value 0 (##b_k##), then after some degenerate iterations it may become non-basic, but it always has value zero anyway. What do you say about that situation? Can there be a situation in which ##\sum_{x_k \in B_i} c^*_k b_k## is not zero?
 
  • Like
Likes tnich
  • #12
mertcan said:
BUT ##\sum_{x_k \in B_i} c^*_k b_k## has to be zero anyway. For instance, if at the beginning of the degenerate iterations ##x_k## is basic with value 5 (##b_k##), then after some degenerate iterations this value stays the same and ##c^*_k## remains zero. Or, if at the beginning of the degenerate iterations ##x_k## is basic with value 0 (##b_k##), then after some degenerate iterations it may become non-basic, but it always has value zero anyway. What do you say about that situation? Can there be a situation in which ##\sum_{x_k \in B_i} c^*_k b_k## is not zero?

I don't say anything about such a situation: it does not arise. The quantity exhibited above is zero (provably so by more than one method), so speculating on what we ought to do if, somehow, it could be nonzero seems fruitless to me. It is zero---period.
 
Last edited:
  • Like
Likes mertcan
  • #13
Ray Vickson said:
I don't say anything about such a situation: it does not arise. The quantity exhibited above is zero (provably so by more than one method), so speculating on what we ought to do if, somehow, it could be nonzero seems fruitless to me. It is zero---period.
Forgive me, I still have some trouble. Let's say that, as the author does, ##x_s = q## can be either feasible or infeasible. BUT HOW does the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## still hold even when ##x_s## (##q##) is INFEASIBLE? I know it holds for a feasible ##x_s## or ##q## (when they are 0), but HOW do we prove that the equation always holds even in the infeasible case?

For instance, when ##x_s## enters the basis, we are sure ##c_s > 0##. In the feasible case ##x_s = q = 0##, so the change in the objective value is zero because ##c_s q = 0##. BUT in the INFEASIBLE case, say ##c_s = 10## and ##x_s = q = 5## (imagine ##x_s## takes a nonzero value when it enters the basis), then the objective value changes by 50 because ##c_s q = 50##. How is that change of 50 reflected in the expression ##\upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##?

Your reply to that question is really crucial for me to comprehend...
 
Last edited:
  • #14
mertcan said:
Forgive me, I still have some trouble. Let's say that, as the author does, ##x_s = q## can be either feasible or infeasible. BUT HOW does the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## still hold even when ##x_s## (##q##) is INFEASIBLE? I know it holds for a feasible ##x_s## or ##q## (when they are 0), but HOW do we prove that the equation always holds even in the infeasible case?

For instance, when ##x_s## enters the basis, we are sure ##c_s > 0##. In the feasible case ##x_s = q = 0##, so the change in the objective value is zero because ##c_s q = 0##. BUT in the INFEASIBLE case, say ##c_s = 10## and ##x_s = q = 5## (imagine ##x_s## takes a nonzero value when it enters the basis), then the objective value changes by 50 because ##c_s q = 50##. How is that change of 50 reflected in the expression ##\upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##?

Your reply to that question is really crucial for me to comprehend...

It holds because the author just put ##x_s = q## in one dictionary and then changed to another dictionary. All dictionaries are algebraically equivalent, in the sense that if some numbers ##(x_1, x_2, \ldots, x_n)## satisfy the equations in one dictionary they will also satisfy the equations in another dictionary, and if they fail to satisfy some of the equations in one dictionary they will also fail to satisfy some of the equations in another dictionary. The new equation is just obtained by going to a new dictionary but with the same values of all the ##x_j##s.

This is the absolutely crucial point that you keep missing: ALL DICTIONARIES ARE EQUIVALENT!
 
  • Like
Likes mertcan
  • #15
mertcan said:
Initially forgive me still I have some trouble, Let's say that like author xs=q can be both feasible and infeasible. BUT HOW does the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## still holds even xs(q) is INFEASIBLE?? I know this holds for feasible xs or q (while they are 0) but HOW do we prove that even in infeasible case previous equation always holds??

For instance while ##c_s## enters the basis, we are sure ##c_s## > 0. Actually in feasible case xs=q=0 so change in optimal value is zero due to cs*q=0. BUT in INFEASIBLE case let's say that cs=10 and xs=q=5 (think that xs will have non zero value when it enters the basis), so optimal value changes by 50 due to cs*q=50. Well how the change of optimal value by 50 is ensured in the equation ##\upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## ??

your return towards that question really crucial for me to comprehend...
I think you could look at it this way. Given a feasible basic solution at pivot ##i##, the author has perturbed the solution, adding a delta (called ##q##) to ##x_s##. The equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## shows what happens to the ##z## value as a result. He is saying, "What if we moved the solution at pivot ##i## off of the bounding constraint? How would it affect the solution at pivot ##f##?"
 
  • #16
tnich said:
I think you could look at it this way. Given a feasible basic solution at pivot ##i##, the author has perturbed the solution, adding a delta (called ##q##) to ##x_s##. The equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## shows what happens to the ##z## value as a result. He is saying, "What if we moved the solution at pivot ##i## off of the bounding constraint? How would it affect the solution at pivot ##f##?"
By the way, as @Ray Vickson points out, it is the same solution, just expressed in a different way.
 
  • Like
Likes mertcan
  • #17
Ray Vickson said:
It holds because the author just put ##x_s = q## in one dictionary and then changed to another dictionary. All dictionaries are algebraically equivalent, in the sense that if some numbers ##(x_1, x_2, \ldots, x_n)## satisfy the equations in one dictionary they will also satisfy the equations in another dictionary, and if they fail to satisfy some of the equations in one dictionary they will also fail to satisfy some of the equations in another dictionary. The new equation is just obtained by going to a new dictionary but with the same values of all the ##x_j##s.

This is the absolutely crucial point that you keep missing: ALL DICTIONARIES ARE EQUIVALENT!
tnich said:
I think you could look at it this way. Given a feasible basic solution at pivot i, the author has perturbed the solution, adding a delta (called ##q##) to ##x_s##. The equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## shows what happens to the z value as a result. He is saying, "What if we moved the solution at pivot ##i## off of the bounding constraint? How would it affect the solution at pivot f?"
tnich said:
By the way, as @Ray Vickson points out, it is the same solution, just expressed in a different way.
OK, I understand that the dictionaries in a cycle have the same set of solutions, so the objective value stays the same. Even if we change the solutions in a feasible or infeasible way, the objective again stays the same across all dictionaries in the cycle. Also, @tnich and @Ray Vickson, you express your NICE answers in different ways, so let me focus on the perturbation view (as @tnich says, we can think of a perturbed solution). On the left and right sides of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## the perturbation must be the same. In the normal simplex method the objective changes by the reduced cost of the entering variable times the value it takes after entering the basis. We can see that on the left side: the objective changes by ##c_s x_s## (##= c_s q##) when ##x_s## enters the basis and ##x_t## leaves, assuming ##x_s## takes the arbitrary value ##q##. BUT I do not understand how this perturbation is reflected on the right side of the equation. For instance, on the left side only ##c_s## appears to express the perturbation, so why do we use ##c^*_s## or ##c^*_k## on the right? Is there a formula for that situation (the perturbation of the objective when the solution deviates)?
 
  • #18
mertcan said:
OK, I understand that the dictionaries in a cycle have the same set of solutions, so the objective value stays the same. Even if we change the solutions in a feasible or infeasible way, the objective again stays the same across all dictionaries in the cycle. Also, @tnich and @Ray Vickson, you express your NICE answers in different ways, so let me focus on the perturbation view (as @tnich says, we can think of a perturbed solution). On the left and right sides of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## the perturbation must be the same. In the normal simplex method the objective changes by the reduced cost of the entering variable times the value it takes after entering the basis. We can see that on the left side: the objective changes by ##c_s x_s## (##= c_s q##) when ##x_s## enters the basis and ##x_t## leaves, assuming ##x_s## takes the arbitrary value ##q##. BUT I do not understand how this perturbation is reflected on the right side of the equation. For instance, on the left side only ##c_s## appears to express the perturbation, so why do we use ##c^*_s## or ##c^*_k## on the right? Is there a formula for that situation (the perturbation of the objective when the solution deviates)?
The author carefully sets this up by explaining:
##x_s = q##
##x_j = 0## for ##x_j \notin B_i## and ##j \neq s##
##x_k = b_k − a_{ks}q## for ##x_k \in B_i##
##z = v + c_sq##
Given this information, you can determine the value of ##z## in the dictionary ##D_f##. As the author explains, "The equation for ##z## in ##D_f## is obtained from the equation from ##D_i## by adding suitable multiples of equations of ##D_i## to the ##z## row." Since ##x_s## is not in the basis in ##D_i## but it is in ##D_f##, you will have to account for that.
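One way to see the mechanics is a toy Python example: the same objective, evaluated in two pivot-related dictionaries, agrees for every ##q##, including values that make the solution infeasible. The dictionaries and objective below are invented for illustration, not taken from the proof:

```python
# Toy system (hypothetical numbers):
#   D_i:  x3 = 4 - x1 - x2,  x4 = 2 - x1,  z = 3*x1 + x2   (v = 0, c_s = 3)
#   D_f (x1 pivoted into the basis via the x4 row):
#         x1 = 2 - x4,  x3 = 2 + x4 - x2,  z = 6 - 3*x4 + x2

def z_in_Di(q):
    """z read from D_i with x1 = q, x2 = 0: z = v + c_s*q."""
    return 3.0 * q

def z_in_Df(q):
    """z read from D_f at the SAME point: x1 = q forces x4 = 2 - q."""
    x4, x2 = 2.0 - q, 0.0
    return 6.0 - 3.0 * x4 + x2

# Agreement for every q, feasible (q = 0) or not (q = -2, q = 5):
for q in (-2.0, 0.0, 5.0):
    assert abs(z_in_Di(q) - z_in_Df(q)) < 1e-12
print("z agrees in both dictionaries for all tested q")
```

The starred coefficients on the right are just the ##D_f## representation of the same linear function of ##q##; that is why the "perturbation" matches on both sides automatically.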
 
  • Like
Likes mertcan
  • #19
tnich said:
The author carefully sets this up by explaining:
##x_s = q##
##x_j = 0## for ##x_j \notin B_i## and ##j \neq s##
##x_k = b_k − a_{ks}q## for ##x_k \in B_i##
##z = v + c_sq##
Given this information, you can determine the value of ##z## in the dictionary ##D_f##. As the author explains, "The equation for ##z## in ##D_f## is obtained from the equation from ##D_i## by adding suitable multiples of equations of ##D_i## to the ##z## row." Since ##x_s## is not in the basis in ##D_i## but it is in ##D_f##, you will have to account for that.
Initially, @tnich, ##x_s## does not have to be in the basis of ##D_f##, because the proof only says ##c^*_s \le 0##, which means ##x_s## can be either non-basic or basic. Also, @tnich and @Ray Vickson, can the RIGHT-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## be derived from the REVISED simplex method? Can the right side be updated according to the revised simplex method for the objective value?

For instance, in the normal simplex method we proceed one step at a time: in each iteration one new basic variable and one new non-basic variable emerge. BUT in the REVISED simplex we would be capable of creating 3, 4, 5... new basic variables at the same time and 3, 4, 5... new non-basic variables at the same time, so the change in the objective value equals the previous reduced cost coefficients of those variables times their new basis values. Am I RIGHT?
 
  • #20
mertcan said:
Initially, @tnich, ##x_s## does not have to be in the basis of ##D_f##, because the proof only says ##c^*_s \le 0##, which means ##x_s## can be either non-basic or basic.
I think you are right about that.
 
  • Like
Likes mertcan
  • #21
mertcan said:
RIGHT hand side of equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## can be stemmed from REVISED simplex method?? The right side can be updated according to revised simplex method for objective value??

For instance in normal simplex we proceed 1 by 1 in each iteration, 1 new basic variable emerges and 1 new non basic variable emerges. BUT in REVISED simplex we are capable of creating 3,4,5.. new basic variable at the same time and 3,4,5.. non basic variable at the same time, so the amount of change of objective value equals previous reduced cost coefficient of those variable*their last basis value. Am I RİGHT??[/USER]

I think that is a valid way of looking at it. Whether you do sequential pivots or change to a new set of basis variables all at once, you should get the same result.
 
  • Like
Likes mertcan
  • #22
mertcan said:
Initially @tnich ##x_s## does not not have to be in basis (##D_f##) because it says ##c^*_s## <=0 which means it can be both non basic and basic. Also @tnich and @Ray Vickson RIGHT hand side of equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## can be stemmed from REVISED simplex method?? The right side can be updated according to revised simplex method for objective value??

For instance in normal simplex we proceed 1 by 1 in each iteration, 1 new basic variable emerges and 1 new non basic variable emerges. BUT in REVISED simplex we are capable of creating 3,4,5.. new basic variable at the same time and 3,4,5.. non basic variable at the same time, so the amount of change of objective value equals previous reduced cost coefficient of those variable*their last basis value. Am I RİGHT??
@Ray Vickson, what are your opinions on my last question, which I quoted here? By the way, @tnich agrees with looking at it that way (as in the revised simplex) to update the objective values with the new set of solutions.
 
  • #23
mertcan said:
@Ray Vickson, what are your opinions on my last question, which I quoted here? By the way, @tnich agrees with looking at it that way (as in the revised simplex) to update the objective values with the new set of solutions.

Most implementations of the revised simplex method I have seen still do one-at-a-time changes, rather than your cited "block-pivoting" method. Basically, it is not easy to see whether changing several basic variables simultaneously can maintain feasibility or whether it will help or hurt the objective. When we change bases one variable at a time we are sure we maintain feasibility and are sure of eventual improvement (possibly after a number of degenerate pivots), or else termination at an optimum or revelation of an unbounded "solution".

Nevertheless, block-pivoting strategies and methods have been studied and developed; Google "block pivoting in linear programming" to see some studies.
 
  • #24
Ray Vickson said:
Most implementations of the revised simplex method I have seen still do one-at-a-time changes, rather than your cited "block-pivoting" method. Basically, it is not easy to see whether changing several basic variables simultaneously can maintain feasibility or whether it will help or hurt the objective. When we change bases one variable at a time we are sure we maintain feasibility and are sure of eventual improvement (possibly after a number of degenerate pivots), or else termination at an optimum or revelation of an unbounded "solution".

Nevertheless, block-pivoting strategies and methods have been studied and developed; Google "block pivoting in linear programming" to see some studies.
As you can see, we are sure that the author updates the objective value on the right-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## by changing the set of several basic variables at the same time (he employs the same solution set from dictionary ##D_i## in dictionary ##D_f##). For the left-hand side he only involves ##c_s## and updates the objective via this coefficient, which corresponds to the normal simplex method (one variable enters and one leaves in each iteration). In your view, we cannot explain the update of the objective value via the REVISED simplex (you may be right that it also proceeds one entering or leaving variable per iteration)? What method, then, does the author use to update the objective on the right side of the previous equation, involving MULTIPLE BASIS variable changes at the same time? (Are you saying the author uses block pivoting to update the basic variables in dictionary ##D_f##?)
Also, AM I WRONG in thinking that in dictionary ##D_f## a MULTIPLE BASIS change is created by employing the same set of solutions as in dictionary ##D_i##?

By the way, @tnich, @Ray Vickson does not fully agree that the REVISED simplex is a valid way of looking at updating the solution set all at once in dictionary ##D_f##, because that method also enters or leaves the basis one variable at a time.
 
Last edited:
  • #25
mertcan said:
As you can see, we are sure that the author updates the objective value on the right-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## by changing the set of several basic variables at the same time (he employs the same solution set from dictionary ##D_i## in dictionary ##D_f##). For the left-hand side he only involves ##c_s## and updates the objective via this coefficient, which corresponds to the normal simplex method (one variable enters and one leaves in each iteration). In your view, we cannot explain the update of the objective value via the REVISED simplex (you may be right that it also proceeds one entering or leaving variable per iteration)? What method, then, does the author use to update the objective on the right side of the previous equation, involving several basis-variable changes at the same time? (Are you saying the author uses block pivoting to update the basic variables in dictionary ##D_f##?)

By the way, @tnich, @Ray Vickson does not fully agree that the REVISED simplex is a valid way of looking at updating the solution set all at once in dictionary ##D_f##, because that method also enters or leaves the basis one variable at a time.
I think you have misunderstood both my comments and those of @Ray Vickson. I suggest that you do a little research to find out what is normally meant by "revised simplex method". The revised simplex method produces the same pivots as the simplex method with the same result at each pivot.
 
  • #26
mertcan said:
As you can see, the author updates the objective value on the right-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)## by changing a whole set of basic variables at the same time (he plugs the same solution set from dictionary ##D_i## into dictionary ##D_f##). On the left-hand side he only involves ##c_s## and updates the objective through that single coefficient, which matches the ordinary simplex method (one variable enters and one leaves in each iteration). In your view, can the update of the objective value not be explained via the REVISED simplex (you may be right that it, too, enters or leaves one variable per iteration)? Then what method does the author use to update the objective on the right-hand side of that equation, where several basic variables change at the same time? (Are you saying the author uses block pivoting to update the basic variables in dictionary ##D_f##?)

By the way, @tnich, @Ray Vickson, I am not entirely convinced that the REVISED simplex is a valid way to look at updating the whole solution set at once in dictionary ##D_f##, because that method also progresses one variable at a time when entering or leaving the basis.

You seem to be suggesting that the author is performing several basis changes at the same time. NOT TRUE: he introduces a single non-basic variable and then looks at some non-basic solutions where that variable takes a range of possible values ##q##, using that to gain some additional information about the structure of the dictionaries.
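To make this concrete, here is a minimal numerical sketch with made-up numbers (not the author's example): a dictionary stores the basics as ##x_B = b - A x_N## and the objective as ##z = \upsilon + c^\top x_N##, so letting the single non-basic ##x_s## take any value ##q## moves each basic variable linearly by ##-a_{ks}q## and the objective by ##c_s q##. Nothing requires ##q## to keep the basics non-negative, which is exactly the "possibly infeasible solutions" idea.

```python
import numpy as np

# Hypothetical small dictionary (made-up coefficients, two basics, two non-basics):
#   basics:    x_B = b - A @ x_N
#   objective: z   = v + c @ x_N
b = np.array([4.0, 2.0])           # current basic values
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])         # coefficients of the non-basics in each row
v, c = 0.0, np.array([3.0, 2.0])

s = 0                              # index of the non-basic x_s we let vary
for q in [0.0, 1.0, 2.5, -1.0]:    # q may make basics negative: infeasible is allowed
    x_N = np.zeros(2)
    x_N[s] = q
    x_B = b - A @ x_N              # each basic responds linearly: b_k - a_ks * q
    z = v + c @ x_N                # objective moves by c_s * q
    print(f"q={q:5.1f}  basics={x_B}  z={z}")
```

At ##q=2.5## the second basic value is already negative, so the point is infeasible, yet the dictionary equations (and hence the objective identity) still hold.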

Aside from that your message is unclear and I cannot figure out what you are trying to say. However, I think it goes beyond what PF allows in the way of a discussion. You had a specific question, and that was answered. Now, it seems, you want to go beyond the confines of that question and get into additional topics. That needs a new thread. Furthermore, as it is probably no longer a "homework" problem, it likely belongs in a different forum.

In any case I am now leaving this discussion, as I have nothing more to say that is pertinent to the original question.
 
  • #27
Ray Vickson said:
You seem to be suggesting that the author is performing several basis changes at the same time. NOT TRUE: he introduces a single non-basic variable and then looks at some non-basic solutions where that variable takes a range of possible values ##q##, using that to gain some additional information about the structure of the dictionaries.

Aside from that your message is unclear and I cannot figure out what you are trying to say. However, I think it goes beyond what PF allows in the way of a discussion. You had a specific question, and that was answered. Now, it seems, you want to go beyond the confines of that question and get into additional topics. That needs a new thread. Furthermore, as it is probably no longer a "homework" problem, it likely belongs in a different forum.

In any case I am now leaving this discussion, as I have nothing more to say that is pertinent to the original question.
tnich said:
I think you have misunderstood both my comments and those of @Ray Vickson. I suggest that you do a little research to find out what is normally meant by "revised simplex method". The revised simplex method produces the same pivots as the simplex method with the same result at each pivot.
Actually I am not trying to deviate or stray from the main discussion; I am just eager to understand the proof. Also, MENTORS of the forum (@fresh_42, @bhobba ...) suggested that I create a thread here about this question. I have only 1 QUESTION LEFT:

I can easily see that the objective change with the new solution set is ##c_s q## (with ##x_s=q##) on the left-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##. I got that. BUT I cannot understand why the objective change on the right-hand side of that equation is ##c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##. WHY do we multiply ##c^*_s## by ##x_s## (that is, ##q##), and ##c^*_k## by ##b_k-a_{ks}q##, TO UPDATE THE OBJECTIVE??
Please, @tnich, @Ray Vickson, I am waiting for your help...
 
  • #28
mertcan said:
Actually I am not trying to deviate or stray from the main discussion; I am just eager to understand the proof. Also, MENTORS of the forum (@fresh_42, @bhobba ...) suggested that I create a thread here about this question. I have only 1 QUESTION LEFT:

I can easily see that the objective change with the new solution set is ##c_s q## (with ##x_s=q##) on the left-hand side of the equation ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##. I got that. BUT I cannot understand why the objective change on the right-hand side of that equation is ##c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##. WHY do we multiply ##c^*_s## by ##x_s## (that is, ##q##), and ##c^*_k## by ##b_k-a_{ks}q##, TO UPDATE THE OBJECTIVE??
Please, @tnich, @Ray Vickson, I am waiting for your help...
I suggest that you follow the author's plan: write out a tableau with a degenerate solution, pivot to bring one of the fickle variables ##x_s## into the basis, set ##x_s=q##, and then perform a pivot to bring a new fickle variable into the basis. See what you get. Then compare your result with ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##.
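As a minimal sketch of what that comparison shows, with a made-up two-variable LP (not the author's example): two dictionaries related by a pivot describe the same system of equations, so they assign the same objective value to every solution of that system, feasible or not. That equality is exactly the shape of ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##.

```python
import numpy as np

# Made-up LP: maximize z = 3*x1 + 2*x2 subject to
#   x1 + x2 + x3 = 4   (x3 slack)
#   x1      + x4 = 2   (x4 slack)
# Dictionary D1, basis {x3, x4}:
#   x3 = 4 - x1 - x2,  x4 = 2 - x1,      z = 0 + 3*x1 + 2*x2
# Dictionary D2, after pivoting x1 in / x4 out:
#   x1 = 2 - x4,       x3 = 2 - x2 + x4,  z = 6 + 2*x2 - 3*x4

def z_from_D1(x1, x2):
    return 0 + 3 * x1 + 2 * x2

def z_from_D2(x2, x4):
    return 6 + 2 * x2 - 3 * x4

rng = np.random.default_rng(0)
for _ in range(1000):
    # Choose D2's non-basics freely, including values that make the
    # point infeasible (some basic variables go negative).
    x2, x4 = rng.uniform(-10, 10, size=2)
    x1 = 2 - x4                      # basic in D2
    assert np.isclose(z_from_D1(x1, x2), z_from_D2(x2, x4))
print("both dictionaries give the same objective at every solution")
```

The multiplication you are asking about is just this evaluation: the second dictionary's objective row is evaluated at the point ##x_s=q##, ##x_k=b_k-a_{ks}q##, so each variable's value is multiplied by its coefficient in that row.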
 
  • #29
tnich said:
I suggest that you follow the author's plan: write out a tableau with a degenerate solution, pivot to bring one of the fickle variables ##x_s## into the basis, set ##x_s=q##, and then perform a pivot to bring a new fickle variable into the basis. See what you get. Then compare your result with ##\upsilon+c_sq = \upsilon+c^*_sq + \sum_{x_k \in B_i} c^*_k\left(b_k-a_{ks}q\right)##.
You are suggesting that I do simplex iterations using infeasible solutions, I think. If I bring ##x_s## into the basis and set ##q=5##, for instance, then the other basic variables change to ##b_k-a_{ks}q##; after that, do you want me to continue the simplex method with those values? Am I right?
 
  • #30
mertcan said:
You are suggesting that I do simplex iterations using infeasible solutions, I think. If I bring ##x_s## into the basis and set ##q=5##, for instance, then the other basic variables change to ##b_k-a_{ks}q##; after that, do you want me to continue the simplex method with those values? Am I right?
I think you should first do the simplex pivots with ##x_s=0##. Then start again with ##x_s=q## to see how things change.
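A minimal sketch of the ##x_s=0## case, again with made-up numbers: the tableau below has a zero right-hand side in its first row, so the pivot is degenerate and the basis changes while the objective value stays put. Setting ##x_s=q## instead would shift each basic value by ##-a_{ks}q## and the objective by ##c_sq##, as in the equation under discussion.

```python
import numpy as np

def pivot(T, row, col):
    # One Gauss-Jordan pivot on tableau T (last row = objective, last col = RHS).
    T = T.astype(float).copy()
    T[row] /= T[row, col]
    for r in range(T.shape[0]):
        if r != row:
            T[r] -= T[r, col] * T[row]
    return T

# Made-up degenerate tableau for max 3*x1 + 2*x2 with slacks x3, x4:
#   x1 + x2 + x3 = 0   (zero RHS -> degenerate basic solution)
#   x1      + x4 = 2
T = np.array([[ 1.0,  1.0, 1.0, 0.0, 0.0],
              [ 1.0,  0.0, 0.0, 1.0, 2.0],
              [-3.0, -2.0, 0.0, 0.0, 0.0]])  # objective row: z - 3*x1 - 2*x2 = 0

# Bring x1 (col 0) into the basis; the ratio test picks row 0 (0/1 < 2/1).
T1 = pivot(T, row=0, col=0)
print("objective value before:", T[-1, -1], " after:", T1[-1, -1])
# A degenerate pivot: the basis changes but the objective value does not move.
```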
 
