Linear programming and resolution theorem

mertcan
Homework Statement
Show that there exists an optimal solution which is a convex combination of L+1 extreme points of P.
Relevant Equations
Minkowski/resolution theorem (maybe Carathéodory's theorem)
[Attached screenshot of the problem statement.]

Hi everyone, hope you are well. I would like to explain what I have done for this question:

By proving and employing Carathéodory's theorem, we can say that any point in a polyhedron can be expressed as a convex combination of at most n+1 points of the same polyhedron (where n is the space dimension), which also implies that at most n+1 extreme points can be used to express any point as a convex combination. As you see in the question, the optimal point should be an element of P and also satisfy the side constraints, but I really do not know how we can express an optimal solution as a convex combination of L+1 extreme points of P, even with the hint given in the question. Could you help me?
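
For reference, my reading of the attached problem (so the notation here is mine, not necessarily the book's): we optimize ##c \cdot x## over ##x \in P## subject to ##L## additional equality constraints, say ##a_i \cdot x = b_i## for ##i = 1, \dots, L##, where ##P## is a bounded polyhedron, and we must show that some optimal solution is a convex combination of ##L+1## extreme points of ##P##.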
 
What is the resolution theorem? It might help to consider what this is saying when ##L=0## to start, and then when ##L=1##.
 
Office_Shredder said:
What is the resolution theorem? It might help to consider what this is saying when ##L=0## to start, and then when ##L=1##.
[Attached screenshot: statement of the resolution theorem.]

That means every point in the polyhedron can be expressed as a convex combination of extreme points plus a conic combination of extreme rays belonging to the polyhedron.
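
In symbols (my notation, not necessarily the book's): every ##x## in the polyhedron can be written as

$$x = \sum_i \lambda_i x^i + \sum_j \theta_j w^j, \qquad \lambda_i \ge 0, \quad \sum_i \lambda_i = 1, \quad \theta_j \ge 0,$$

where the ##x^i## are the extreme points and the ##w^j## are the extreme rays of the polyhedron.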
 
Got it.

I still think trying to figure out what this statement is saying for ##L=0## and proving it, and at least roughly describing why it is true for ##L=1## as well will point you in the right direction for this problem. Can you do those?
 
Office_Shredder said:
Got it.

I still think trying to figure out what this statement is saying for ##L=0## and proving it, and at least roughly describing why it is true for ##L=1## as well will point you in the right direction for this problem. Can you do those?
Actually, I must confess that I have been struggling with this question for 10 days, but nothing comes to mind; I really do not know how to start or how to proceed...
 
Can you try drawing a picture in 2 dimensions? The problem says it's a bounded polyhedron, so it's a polygon. Draw a polygon. The function ##c \cdot x ## increases as you move in the direction of ##c##. Pick a random vector ##c##. Now can you just circle the point in your polygon where this function is maximized?

To think a bit more about the general case: if ##P## is bounded, it means that your representation of ##P## can be a bit simpler than what you posted. What can you eliminate?
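
If it helps to see this concretely, here is a minimal numerical sketch (assuming SciPy is available; the triangle and the cost vector are just made-up examples, not from the problem). It maximizes ##c \cdot x## over a triangle, and the optimizer lands on a vertex:

```python
# Maximize c.x over the triangle with vertices (0,0), (1,0), (0,1)
# and check that the optimum is attained at a vertex.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])           # an arbitrary cost vector
A_ub = np.array([[1.0, 1.0]])      # constraint x + y <= 1
b_ub = np.array([1.0])
bounds = [(0, None), (0, None)]    # x >= 0, y >= 0

# linprog minimizes, so pass -c to maximize c.x
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)                        # [0. 1.], i.e. the vertex (0, 1)
```

Here the optimum is the corner ##(0,1)##, exactly the point you would circle in the picture.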
 
Office_Shredder said:
Can you try drawing a picture in 2 dimensions? The problem says it's a bounded polyhedron, so it's a polygon. Draw a polygon. The function ##c \cdot x ## increases as you move in the direction of ##c##. Pick a random vector ##c##. Now can you just circle the point in your polygon where this function is maximized?

To think a bit more about the general case: if ##P## is bounded, it means that your representation of ##P## can be a bit simpler than what you posted. What can you eliminate?
I got what you mean. For instance, if ##L=0## then the optimal point would be a convex combination of just 1 point of ##P##, so the optimal point would actually be an extreme point of ##P##; and if ##L=1## then the optimal point may be a convex combination of 2 adjacent extreme points of ##P##. But how can I proceed using mathematics? Is there an induction proof?
 
How can you simplify your representation of ##P## since it's bounded?
 
Office_Shredder said:
How can you simplify your representation of ##P## since it's bounded?
I understand the question when we think of it in small dimensions, but how can we generalise it?
 
You said

every point in the polyhedron can be expressed as a convex combination of extreme points plus a conic combination of extreme rays belonging to the polyhedron

If the polyhedron is bounded, the representation is a little bit simpler. What part of this can you remove?
 
Office_Shredder said:
You said
If the polyhedron is bounded, the representation is a little bit simpler. What part of this can you remove?
we have no extreme rays
 
Great. So every point in the polyhedron is of the form

$$\sum_i \lambda_i x^i.$$
Let ##f## be the function which takes ##x## to ##c\cdot x##. This is linear, so

$$f\left( \sum_i \lambda_i x^i\right) = \sum_i \lambda_i f(x^i).$$

Any thoughts on what to do next?
 
mertcan said:
we have no extreme rays
Office_Shredder said:
Great. So every point in the polyhedron is of the form

$$\sum_i \lambda_i x^i.$$
Let ##f## be the function which takes ##x## to ##c\cdot x##. This is linear, so

$$f\left( \sum_i \lambda_i x^i\right) = \sum_i \lambda_i f(x^i).$$

Any thoughts on what to do next?
The last equation you just wrote implies that we have a convex combination of the values of the cost function at the extreme points of the region, which is the intersection of P and the additional equality constraints...
 
Right now we're pretending there are no equality constraints, L=0. So it's a convex combination of the extreme points of ##P##. How would you pick the ##\lambda_i## to make that sum as large as possible?
 
Office_Shredder said:
Right now we're pretending there are no equality constraints, L=0. So it's a convex combination of the extreme points of ##P##. How would you pick the ##\lambda_i## to make that sum as large as possible?
I would assign 1 to the ##\lambda_i## of the extreme point with the biggest cost function value.
 
Exactly! Do you see how that proves the result for ##L=0##?
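
To spell out what we have pieced together (roughly stated): since ##\lambda_i \ge 0## and ##\sum_i \lambda_i = 1##,

$$f\left( \sum_i \lambda_i x^i \right) = \sum_i \lambda_i f(x^i) \le \max_i f(x^i),$$

with equality when all of the weight is placed on a maximizing extreme point. So some single extreme point of ##P##, i.e. a convex combination of ##0+1## extreme points, is optimal.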
 
Office_Shredder said:
Exactly! Do you see how that proves the result for ##L=0##?
Yes, I got what we just did, but I am still at an impasse when the other equations are added...
 
Office_Shredder said:
Exactly! Do you see how that proves the result for ##L=0##?
Could we also proceed with the ##L=1## case to grasp the entire logic, if you do not mind?
 
For ##L=1## I would guess there are a couple of ways you can do this. Here's the one that is most obvious to me; it relies on a theorem about the faces of a polytope.

If you consider the subspace ##V## defined by the one linear constraint, then ##V\cap P## is a polytope in ##V##, and ##f## is a linear function on ##V##. You can apply the result you just proved to ##V\cap P##. Where does the point you get from that live in ##P##?
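
Loosely, the fact about faces that I have in mind (a sketch of the idea, not a full proof): an extreme point ##\hat{x}## of ##V \cap P## lies on a face of ##P## of dimension at most 1, i.e. it is either an extreme point of ##P## or a point on an edge of ##P##. Either way it can be written as

$$\hat{x} = \mu v^1 + (1-\mu) v^2, \qquad \mu \in [0,1],$$

with ##v^1, v^2## extreme points of ##P##, i.e. as a convex combination of ##1+1## extreme points of ##P##.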
 