Next: Random Number Generation Up: Numerical Problems Previous: Constrained and Unconstrained Optimization

Linear Programming

  


Input description: A set S of n linear inequalities on m variables X = (x_1, ..., x_m), each of the form a_{i1} x_1 + ... + a_{im} x_m >= b_i for 1 <= i <= n, and a linear optimization function f(X) = c_1 x_1 + ... + c_m x_m.

Problem description: Which variable assignment X' maximizes the objective function f while satisfying all inequalities S?
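
A concrete instance may clarify the input and output conventions. The Python sketch below uses inequalities and an objective invented for illustration (they are not from the text); it stores each inequality in "a . X <= b" form (a ">=" constraint is converted by negating both sides) and checks a candidate assignment:

```python
# A tiny concrete instance of the problem above. The numbers are invented
# for illustration; any choice of inequalities and objective works the same.
# Each inequality is stored as (a, b), meaning a . X <= b; lower bounds
# such as x1 >= 0 are rewritten as -x1 <= 0 so everything has one form.

S = [([1.0, 1.0], 4.0),    # x1 + x2  <= 4
     ([1.0, 3.0], 6.0),    # x1 + 3x2 <= 6
     ([-1.0, 0.0], 0.0),   # x1 >= 0
     ([0.0, -1.0], 0.0)]   # x2 >= 0
c = [3.0, 2.0]             # objective: f(X) = 3 x1 + 2 x2

def f(X, c):
    """Evaluate the linear objective function f(X)."""
    return sum(ci * xi for ci, xi in zip(c, X))

def feasible(X, S, eps=1e-9):
    """Check that assignment X satisfies every inequality in S."""
    return all(sum(ai * xi for ai, xi in zip(a, X)) <= b + eps
               for a, b in S)

X = [4.0, 0.0]                    # a candidate assignment
print(feasible(X, S), f(X, c))    # -> True 12.0
```

Here the candidate happens to be the optimal vertex; an assignment such as [5.0, 0.0] would fail the feasibility check.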

Discussion: Linear programming is the most important problem in mathematical optimization and operations research. Applications include resource allocation, production scheduling, logistics, and approximating the solutions of harder combinatorial optimization problems.

The standard algorithm for linear programming is called the simplex method.   Each constraint in a linear programming problem acts like a knife that carves away a region from the space of possible solutions. We seek the point within the remaining region that maximizes (or minimizes) f(X). By appropriately rotating the solution space, the optimal point can always be made to be the highest point in the region. Since the region (simplex) formed by the intersection of a set of linear constraints is convex, we can find the highest point by starting from any vertex of the region and walking to a higher neighboring vertex. When there is no higher neighbor, we are at the highest point.   
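
The geometric picture above can be checked by brute force in two dimensions. Because the region is convex, a bounded, feasible LP attains its maximum at a vertex, that is, at the intersection of two constraint boundaries. The sketch below (instance numbers are assumed for illustration) simply enumerates all such intersections and keeps the best feasible one; this is hopelessly inefficient in general, but it demonstrates why walking from vertex to vertex suffices:

```python
from itertools import combinations

def solve_2x2(a1, b1, a2, b2):
    """Intersect boundary lines a1 . X = b1 and a2 . X = b2 (Cramer's rule)."""
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:
        return None                          # parallel boundaries: no vertex
    x = (b1 * a2[1] - b2 * a1[1]) / det
    y = (a1[0] * b2 - a2[0] * b1) / det
    return (x + 0.0, y + 0.0)                # + 0.0 normalizes -0.0

def brute_force_lp(S, c):
    """Maximize c . X over the polygon {X : a . X <= b for all (a, b) in S}."""
    best, best_val = None, float("-inf")
    for (a1, b1), (a2, b2) in combinations(S, 2):
        v = solve_2x2(a1, b1, a2, b2)
        if v is None:
            continue
        if all(a[0] * v[0] + a[1] * v[1] <= b + 1e-9 for a, b in S):
            val = c[0] * v[0] + c[1] * v[1]
            if val > best_val:
                best, best_val = v, val
    return best, best_val

S = [([1.0, 1.0], 4.0), ([1.0, 3.0], 6.0),   # x1 + x2 <= 4, x1 + 3x2 <= 6
     ([-1.0, 0.0], 0.0), ([0.0, -1.0], 0.0)] # x1 >= 0, x2 >= 0
print(brute_force_lp(S, [3.0, 2.0]))         # -> ((4.0, 0.0), 12.0)
```

Simplex improves on this by visiting only a path of vertices with increasing objective value, rather than every intersection point.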

While the basic simplex algorithm is not too difficult to program, there is considerable art to producing an efficient implementation capable of solving large linear programs. For example, large programs tend to be sparse (meaning that most inequalities use few variables), so sophisticated data structures must be used. There are issues of numerical stability and robustness, as well as which neighbor we should walk to next (so-called pivoting rules). Finally, there exist sophisticated interior-point methods, which cut through the interior of the simplex instead of walking along the outside, and which beat simplex in many applications.
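
For the curious, here is a minimal dense-tableau simplex sketch in Python using Bland's smallest-index pivoting rule (one of the pivoting rules alluded to above; it guarantees termination). It assumes the problem is given as maximize c . x subject to Ax <= b with x >= 0 and b >= 0, so the origin is a feasible starting vertex and no phase-1 step is needed. Production solvers add phase-1 initialization, sparse data structures, and numerical safeguards, which is exactly why existing LP codes are preferable to writing your own:

```python
def simplex(c, A, b):
    """Maximize c . x subject to A x <= b, x >= 0, assuming b >= 0 so that
    the origin is a feasible starting vertex (no phase-1 step needed).
    Uses Bland's smallest-index pivoting rule, which prevents cycling."""
    m, n = len(A), len(c)
    # Build the tableau: each constraint row gets a slack variable; the
    # last row holds the negated objective coefficients.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))        # slack variables start in the basis
    while True:
        # Entering variable: smallest index with a negative reduced cost.
        try:
            col = next(j for j in range(n + m) if T[-1][j] < -1e-9)
        except StopIteration:
            break                        # optimal: no improving direction
        # Ratio test for the leaving row; ties broken by smallest basis
        # index, the other half of Bland's rule.
        ratios = [(T[i][-1] / T[i][col], basis[i], i)
                  for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, _, row = min(ratios)
        # Pivot on (row, col): normalize the row, eliminate the column.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                factor = T[i][col]
                T[i] = [v - factor * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, var in enumerate(basis):
        if var < n:
            x[var] = T[i][-1]
    return x, T[-1][-1]                  # optimal point and objective value

print(simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0]))
```

Note how little of the real difficulty this exposes: the tableau is dense, degeneracy is handled only by Bland's rule, and floating-point tolerances are fixed constants rather than scaled to the problem.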

The bottom line on linear programming is this: you are much better off using an existing LP code than writing your own. Further, you are much better off paying money than surfing the net.   Linear programming is one algorithmic problem of such economic importance that commercial implementations are far superior to free versions.

A number of issues arise in formulating linear programs in practice, most importantly whether any variables are restricted to integer values, which turns the problem into the much harder integer programming problem.

Implementations: A very useful resource on solving linear programs is the USENET frequently asked questions (FAQ) list, maintained by John W. Gregory. In particular, it provides a list of available codes with descriptions of experiences. Check out the plain-text version at ftp://rtfm.mit.edu/pub/usenet/sci.answers/linear-programming-faq or a slicker WWW version at http://www.skypoint.com/~ashbury/linear-programming-faq.html.

The noncommercial code of choice appears to be lp_solve,     written in ANSI C by Michel Berkelaar, who has solved problems as large as 30,000 variables and 50,000 constraints. Lp_solve can also handle (smaller) integer and mixed-integer problems. It is available by anonymous ftp from ftp://ftp.es.ele.tue.nl/pub/lp_solve but is not in the public domain. A user community for lp_solve exists, which has ported it to a variety of different platforms.

NEOS (Network-Enabled Optimization System) provides a unique service: the opportunity to solve your problem on computers and software at Argonne National Laboratory via the WWW. Linear programming and unconstrained optimization are both supported. This is worth checking out at http://www.mcs.anl.gov/home/otc/Server/ if you need an answer instead of a program.

If you are serious about solving large linear programs, you likely need a commercial implementation. The book [MW93] provides an overview of commercial linear programming systems, online at http://www.mcs.anl.gov/home/otc/Guide/SoftwareGuide/index.html. Surveys of commercial LP codes appear in [SR95, Sha93] and in the linear programming FAQ. I have heard good things from various people about CPLEX and AMPL, but do your own research before spending money.   

For low-dimensional linear programming problems, computational geometry algorithms can outperform more general LP codes.   See ftp://icemcfd.com/pub/linprog.a for a C language implementation of Seidel's randomized incremental LP algorithm, by Mike Hohmeyer.  

Algorithm 551 [Abd80] and Algorithm 552 [BR80] of the Collected Algorithms of the ACM are simplex-based Fortran codes for solving overdetermined systems of linear equations. See the corresponding section for details.

Pascal implementations of the revised and dual simplex methods for linear programming, as well as cutting-plane and explicit enumeration algorithms for integer programming, are provided in [SDK83]. See the corresponding section for details. These are likely to work only for small problems.

Sedgewick [Sed92] provides a bare-bones implementation of the simplex algorithm in C++. See the corresponding section for details.

Notes: Good expositions on the simplex and ellipsoid algorithms for linear programming include [PS82, Chv83].   Expositions on low-dimensional linear programming include [PS85]. For an implementation-oriented exposition on linear and integer programming, with references to experimental work, see [SDK83].

The need for optimization via linear programming arose in logistics problems in World War II. The simplex algorithm was invented by George Dantzig in 1947 [Dan63]. Klee and Minty [KM72] proved that the simplex algorithm is exponential in the worst case, although it is very efficient in practice. Khachian's ellipsoid algorithm [Kha79] showed in 1979 that linear programming can be solved in polynomial time. Karmarkar's algorithm [Kar84] is an interior-point method that has proven to be both a theoretical and practical improvement over the ellipsoid algorithm, as well as a challenge to the dominance of the simplex method.

Linear programming is P-complete under log-space reductions [DLR79]. This makes it unlikely to have an NC parallel algorithm, where a problem is in NC iff it can be solved on a PRAM in polylogarithmic time using a polynomial number of processors.   Any problem that is P-complete under log-space reduction cannot be in NC unless P=NC.   See [GHR95] for a thorough exposition of the theory of P-completeness, including an extensive list of P-complete problems.

Related Problems: Constrained and unconstrained optimization, network flow.

