Why Linear Programming at all?

Discussion Overview

The discussion centers around the relevance and efficiency of linear programming (LP) methods compared to Lagrange multipliers for solving optimization problems with linear constraints and objective functions. Participants explore theoretical and practical aspects of LP, including its systematic approach and applications.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • One participant questions the necessity of LP theories, suggesting that Lagrange multipliers could suffice for problems with linear constraints.
  • Another participant argues that LP provides a systematic method for identifying active constraints, which is crucial for larger problems with many variables and constraints.
  • It is noted that LP is a significant success in Mathematical Optimization, with established theorems ensuring global solutions under certain conditions.
  • Efficiency is highlighted as a key reason for using LP, as it can potentially reduce computation time compared to other methods.
  • Some participants assert that LP solutions are always found at corner points or as combinations of corner points, contrasting with nonlinear programs that may have interior solutions.
  • A participant reiterates the question of whether using Lagrange multipliers could be faster than LP methods, specifically referencing the simplex method.

Areas of Agreement / Disagreement

Participants express differing views on the efficiency and applicability of LP versus Lagrange multipliers. There is no consensus on which method is superior, and the discussion remains unresolved regarding the comparative advantages of each approach.

Contextual Notes

Participants reference various mathematical theorems and concepts, such as the Weierstrass theorem and Kuhn-Tucker conditions, without fully resolving their implications for the discussion at hand. The complexity of real-world problems and the nature of constraints are also acknowledged but not definitively addressed.

flyingpig
I am taking Linear Programming and I haven't completed the course yet, but here is my question.

I've noticed that all of these problems could be solved just as easily with Lagrange multipliers.

We have a bunch of linear inequality constraints and an objective function, so we can use Lagrange multipliers.

Why do we have all of these weird theories about corner points and duals and all?
 
The basic idea of the LP method is a systematic way of finding which inequality constraints are really equalities (i.e. active constraints) and which are not. Because it is systematic, it is relatively easy to write a computer program to implement it.

If you try to solve the problem using Lagrange multipliers, you have to figure out which constraints are active for yourself, from the Lagrange equations. That might be OK for a problem with 2 or 3 variables and 2 or 3 constraints, but if you try it on a "real world" problem with say 20 variables and 100 constraints (almost all of which are inactive, but you don't know which ones), you need a systematic procedure.

Apart from that, ideas like duals, Kuhn-Tucker conditions, etc also apply to general (nonlinear) optimisation problems, not just to LP problems.
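To make the "active constraints" point concrete, here is a small sketch (the problem and its numbers are invented for illustration) using SciPy's `linprog`, which reports the slack in each inequality; a zero slack marks an active constraint, so the solver has done the bookkeeping that you would otherwise do by hand with the Lagrange equations:

```python
from scipy.optimize import linprog

# Hypothetical toy LP: maximize x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# linprog minimizes, so negate the objective coefficients.
c = [-1, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print(res.x)       # optimal point (3, 1)
print(-res.fun)    # optimal value 5
print(res.slack)   # slack per inequality; the zeros are the active constraints
```

Here both inequalities come back with zero slack, i.e. both are active at the optimum, and the solver discovered that for us rather than our having to guess which of the Lagrange multipliers are nonzero.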
 
Linear programming is one of the successes of Mathematical Optimization (some may say OR). If you combine the Weierstrass theorem with the local-global theorem, you get a very useful result for every LP problem whose constraint set (also called the feasible set or opportunity set) is bounded and non-empty. Thus, what you basically need is an algorithm to find a solution, and by the previous theorems you know it is global.

In terms of applications, there are many. You can probably google that; you'll find more than plenty.
 
It is about efficiency. Many areas of mathematics are about solving problems that are easy to solve in principle. There are theoretical and practical reasons for wanting to know whether a problem that can be solved in 10^6 hours on 10^3 computers could instead be solved in 10^2 hours on 10^1 computers.
 
But we are talking about linear constraints and a linear objective function here. Is it really faster?
 
flyingpig said:
But we are talking about linear constraints and a linear objective function here. Is it really faster?

YES, it is faster, because the preference direction (the gradient) of the objective function is constant, and thus the solution of an LP is always a corner, or a convex combination of corners. The algorithms only need to look at the boundary! In contrast, nonlinear programs can have interior solutions as well as boundary solutions.
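As a sketch of that corner-point fact (the tiny LP below is invented for illustration), one can brute-force a two-variable problem by intersecting every pair of constraint lines, keeping the intersections that are feasible, and evaluating the objective only at those corners; no interior point ever needs to be examined:

```python
from itertools import combinations

# Hypothetical LP: maximize 3x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# Every constraint written as a*x + b*y <= c.
constraints = [
    (1, 1, 4),
    (1, 3, 6),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def objective(x, y):
    return 3 * x + 2 * y

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Corner points are intersections of pairs of constraint boundary lines.
corners = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundary lines: no unique intersection
    x = (c1 * b2 - c2 * b1) / det  # Cramer's rule
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        corners.append((x, y))

best = max(corners, key=lambda p: objective(*p))
print(best, objective(*best))  # -> (4.0, 0.0) 12.0
```

This pair-enumeration approach explodes combinatorially as the number of variables and constraints grows, which is exactly why the simplex method's systematic walk along the corners matters.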
 
flyingpig said:
I am taking Linear Programming and I haven't completed the course yet, but here is my question.

I've noticed that all of these problems could be solved just as easily with Lagrange multipliers.

We have a bunch of linear inequality constraints and an objective function, so we can use Lagrange multipliers.

Why do we have all of these weird theories about corner points and duals and all?

The simplex method provides a fast way to solve a linear program on the computer. Do you know of a faster way using Lagrange multipliers?
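On the "duals" part of the original question: duality is not just abstract theory, it gives a certificate of optimality. A hedged sketch (the numbers are made up) with SciPy's `linprog`, solving a primal LP and its dual and checking that, by strong duality, the two optimal values agree:

```python
from scipy.optimize import linprog

# Hypothetical primal: maximize 3x + 2y
#   subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# linprog minimizes, so negate the objective.
primal = linprog([-3, -2], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6],
                 bounds=[(0, None), (0, None)])

# Its dual: minimize 4u + 6v
#   subject to u + v >= 3, u + 3v >= 2, u, v >= 0.
# Flip the >= rows to <= form for linprog.
dual = linprog([4, 6], A_ub=[[-1, -1], [-1, -3]], b_ub=[-3, -2],
               bounds=[(0, None), (0, None)])

print(-primal.fun)  # primal optimum: 12
print(dual.fun)     # dual optimum: 12 -- strong duality holds
```

Any feasible dual point bounds the primal from above, so matching values prove optimality; that is machinery Lagrange multipliers alone don't hand you in this systematic, computable form.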
 