Undergrad Linear Programming: how to identify conflicting equations

Summary
In discussing the Simplex method for solving linear programs, the focus is on identifying the conflicting constraints that make a program infeasible. Removing every equation whose artificial variable remains positive (a_n > 0) is suggested as one approach, although the resulting set is arbitrary and not unique. It is acknowledged that a conflicting equation can still end with a_n = 0, and that one equation may force two others into conflict, so a more strategic removal may be needed. Visualization packages in R and Python are mentioned as helpful for understanding how the equations interact, and for real-world applications it matters which constraints can reasonably be changed.
cjs94
I'm reading about the Simplex method to solve a Linear Program. If a program is rejected as being infeasible, is there a method to identify which equations are causing it to be infeasible and a technique for reducing the program in an optimal way? (I'm not sure what 'optimal' is exactly, but my guess would be removing the fewest equations.)

My understanding is that each equation can have an artificial variable a_n and that the test for feasibility is that all a_n must be zero. If so, that leads to some questions:
  1. It would seem obvious then that all I have to do is identify and remove all equations where a_n > 0. Is that fair?
  2. Is it possible for one or more conflicting equations to end up with a_n = 0?
  3. Is it possible that an equation, while not causing a direct conflict, constrains two other equations such that they conflict? In that case, removing the first, single equation would be a better solution.
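Question 3 is exactly why removing every violated equation at once can be overkill. A standard strategy for isolating one genuine conflict is a "deletion filter": drop each constraint in turn, and discard it permanently whenever the remainder is still infeasible; whatever survives is an irreducible infeasible subsystem. Below is a minimal sketch on single-variable bound constraints (the helper names `feasible` and `deletion_filter` are illustrative, not from any library):

```python
# A minimal sketch of a deletion filter for finding one irreducible
# infeasible subsystem (IIS).  Constraints are simple bounds on a single
# variable x, written as (kind, value) pairs with kind ">=" or "<=".

def feasible(constraints):
    """Return True if some x satisfies every bound in `constraints`."""
    lo, hi = float("-inf"), float("inf")
    for kind, value in constraints:
        if kind == ">=":
            lo = max(lo, value)
        else:  # "<="
            hi = min(hi, value)
    return lo <= hi

def deletion_filter(constraints):
    """Drop each constraint in turn; discard it permanently if the rest
    is still infeasible.  The survivors form one irreducible conflict."""
    iis = list(constraints)
    for c in list(iis):
        trial = [d for d in iis if d is not c]
        if not feasible(trial):
            iis = trial  # c was not needed for the conflict
    return iis

# x >= 5 conflicts with x <= 3; x <= 10 is innocent and gets filtered out.
system = [(">=", 5.0), ("<=", 3.0), ("<=", 10.0)]
print(deletion_filter(system))  # [('>=', 5.0), ('<=', 3.0)]
```

The same idea carries over to general LPs if `feasible` is replaced by a phase-1 Simplex solve; commercial solvers expose it directly as IIS computation.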
Thanks,
Chris
 
Physics news on Phys.org
There may be many combinations of equations whose removal would make the problem feasible. Just because one run of the Simplex method ends with a particular set of a_n > 0 does not mean that set is unique. If you are looking for any reduced set that would be feasible, removing the constraints with a_n > 0 should do it. But that is very arbitrary and may not be satisfying.
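The non-uniqueness is easy to see even in one dimension. This toy sketch (the names `box_feasible` and `minimal_removals` are made up for illustration) brute-forces every subset of a tiny bound system and returns all smallest removal sets that restore feasibility; the conflicting pair x >= 2, x <= 1 already has two equally small answers:

```python
# Hedged sketch: enumerate all minimum-size constraint removals that make a
# one-variable bound system feasible, showing the answer is not unique.
from itertools import combinations

def box_feasible(constraints):
    """Feasibility of one-variable bound constraints (kind, value)."""
    lo = max([v for k, v in constraints if k == ">="], default=float("-inf"))
    hi = min([v for k, v in constraints if k == "<="], default=float("inf"))
    return lo <= hi

def minimal_removals(constraints):
    """All smallest index sets whose removal restores feasibility."""
    n = len(constraints)
    for k in range(n + 1):
        hits = [set(idx) for idx in combinations(range(n), k)
                if box_feasible([c for i, c in enumerate(constraints)
                                 if i not in idx])]
        if hits:
            return hits
    return []

# x >= 2 conflicts with x <= 1: removing *either* one restores feasibility,
# so there are two different minimal removal sets.
print(minimal_removals([(">=", 2.0), ("<=", 1.0)]))  # [{0}, {1}]
```

Brute force is exponential, of course; it is only meant to show why "the" conflicting set does not exist.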
 
There may be packages in R and Python that can visualize what is happening between the equations. I am learning about optimization methods myself. Something to think about: if this is based on a real-world situation, then which constraints can be changed in a reasonable way?
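Even without a plotting package, you can get a rough picture of how two-variable constraints interact: intersect every pair of constraint boundaries a·x + b·y = r and keep the points that satisfy all constraints. Those are the corners of the feasible region, and an empty result for a bounded system signals infeasibility. A sketch with illustrative helper names (`intersect`, `vertices`):

```python
# Sketch: find the corner points of {(x, y) : a*x + b*y <= r for all (a, b, r)}
# by intersecting boundary lines pairwise and filtering by feasibility.
from itertools import combinations

def intersect(c1, c2):
    """Intersection of the boundary lines of two constraints, or None."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:  # parallel boundaries
        return None
    x = (r1 * b2 - r2 * b1) / det
    y = (a1 * r2 - a2 * r1) / det
    return (x, y)

def vertices(constraints, tol=1e-9):
    """Corner points of the feasible region of a*x + b*y <= r constraints."""
    pts = []
    for c1, c2 in combinations(constraints, 2):
        p = intersect(c1, c2)
        if p and all(a * p[0] + b * p[1] <= r + tol for a, b, r in constraints):
            pts.append(p)
    return pts

# x <= 2, y <= 2, -x <= 0, -y <= 0: the box [0, 2] x [0, 2].
box = [(1, 0, 2), (0, 1, 2), (-1, 0, 0), (0, -1, 0)]
print(sorted(vertices(box)))
# Adding the conflicting constraint x >= 3 (i.e. -x <= -3) empties the region:
print(vertices(box + [(-1, 0, -3)]))  # []
```

Feeding the vertex list to matplotlib (or plotting the boundary lines in R) then shows directly which constraint walls off the others.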

I came across this video describing how to use PuLP in Python to optimize some everyday decisions, such as a book reading list or vacation options. Pretty interesting. I downloaded her notebooks from GitHub and ran some for myself. I use Azure Notebooks.
 
