
Limitations of Dynamic Programming

Dynamic programming can be applied to any problem that observes the principle of optimality. Roughly stated, this means that partial solutions can be optimally extended with regard to the state after the partial solution instead of the partial solution itself. For example, to decide whether to extend an approximate string matching by a substitution, insertion, or deletion, we do not need to know exactly which sequence of operations was performed to date. In fact, there may be several different edit sequences that achieve a cost of C on the first p characters of pattern P and t characters of string T. Future decisions will be made based on the consequences of previous decisions, not the actual decisions themselves.  
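
To make this concrete, here is a minimal sketch in C of the classic edit-distance recurrence between a pattern P and a text T (a close cousin of the approximate matching above; the function and variable names are illustrative, not from the text). Each table entry D[p][t] records only the cheapest cost of handling the first p characters of P and the first t characters of T; no entry remembers which edit sequence produced it, and that single number is all the principle of optimality requires.

#include <string.h>

/* Minimal edit-distance sketch; names are illustrative.  D[p][t] holds only
   the cheapest cost of matching the first p characters of P against the
   first t characters of T -- never which edit sequence achieved that cost. */
int edit_distance(const char *P, const char *T)
{
    int m = strlen(P), n = strlen(T);
    int D[m + 1][n + 1];                         /* C99 variable-length array */

    for (int p = 0; p <= m; p++) D[p][0] = p;    /* delete the first p characters */
    for (int t = 0; t <= n; t++) D[0][t] = t;    /* insert the first t characters */

    for (int p = 1; p <= m; p++)
        for (int t = 1; t <= n; t++) {
            int sub = D[p-1][t-1] + (P[p-1] != T[t-1]);   /* match or substitute */
            int ins = D[p][t-1] + 1;                      /* insert T[t-1] */
            int del = D[p-1][t] + 1;                      /* delete P[p-1] */
            int best = sub;
            if (ins < best) best = ins;
            if (del < best) best = del;
            D[p][t] = best;                      /* only the cost is recorded */
        }
    return D[m][n];
}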

Problems do not satisfy the principle of optimality if the actual operations matter, as opposed to just the cost of the operations. Consider a form of edit distance where we are not allowed to use combinations of operations in certain particular orders. The cost of a partial solution then no longer tells us everything we need to know about it, since which future operations remain legal depends on exactly which operations were already used. Properly formulated, however, most combinatorial problems respect the principle of optimality.

The biggest limitation on using dynamic programming is the number of partial solutions we must keep track of. For all of the examples we have seen, the partial solutions can be completely described by specifying the stopping places in the input. This is because the combinatorial objects being worked on (strings, numerical sequences, and polygons) all have an implicit order defined upon their elements. This order cannot be scrambled without completely changing the problem. Once the order is fixed, there are relatively few possible stopping places or states, so we get efficient algorithms. If the objects are not firmly ordered, however, we have an exponential number of possible partial solutions and are doomed to need an infeasible amount of memory.

To illustrate this, consider the following dynamic programming algorithm for the traveling salesman problem, discussed in greater detail in [RND77]. Recall that solving a TSP means finding the order in which to visit each site exactly once, while minimizing the total distance traveled or cost paid. Let $C(i,j)$ be the edge cost to travel directly from $i$ to $j$. Define $T(i;\, j_1, j_2, \ldots, j_k)$ to be the cost of the optimal tour from $i$ to $1$ that goes through each of the cities $j_1, j_2, \ldots, j_k$ exactly once, in any order. The cost of the optimal TSP tour is thus defined to be $T(1;\, 2, 3, \ldots, n)$ and can be computed recursively by identifying the first edge in this sequence:

\[
T(i;\, j_1, j_2, \ldots, j_k) \;=\; \min_{1 \le m \le k} \left[\, C(i, j_m) + T\bigl(j_m;\, \{j_1, j_2, \ldots, j_k\} - \{j_m\}\bigr) \,\right]
\]

using the basis cases

\[
T(i;\, j_1) \;=\; C(i, j_1) + C(j_1, 1)
\]

 

This recurrence, although somewhat complicated to understand, is in fact correct. However, each partial solution is now described by a vertex subset $\{j_1, j_2, \ldots, j_k\}$, not just a stopping position. Since there are $2^n$ subsets of $n$ vertices, we require at least $2^n$ time and space to evaluate this recurrence. Whenever the input objects do not have an inherent left-to-right order, we are typically doomed to an exponential-sized state space. Occasionally this is manageable; indeed, $2^n$ is a big improvement over enumerating all $O(n!)$ possible TSP tours. Still, dynamic programming is most effective on well-ordered objects.
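
As a concrete sketch of this recurrence (not code from the text), the memoized C program below encodes each vertex subset as a bit vector, the standard Held-Karp approach. Cities are numbered 0 through n-1, with city 0 playing the role of city 1 above, and the cost matrix is a made-up toy instance; the table memo[S][i], indexed by the current city and the subset still to be visited, makes the $2^n$ state space explicit.

#include <stdio.h>
#include <string.h>

#define MAXN 16                     /* the 2^n state space caps n quickly  */
#define INF  1000000000

int n;                              /* number of cities, numbered 0..n-1   */
int C[MAXN][MAXN];                  /* C[i][j] = edge cost from i to j     */
int memo[1 << MAXN][MAXN];          /* memo[S][i] = T(i; S); -1 if unknown */

/* T(i, S): cheapest way to start at city i, visit every city in the subset
   S (a bit vector) exactly once in some order, and end at city 0.  This is
   the recurrence above with city 0 standing in for city 1. */
int T(int i, int S)
{
    if (S == 0)                     /* basis: no cities left, return home   */
        return C[i][0];
    if (memo[S][i] >= 0)
        return memo[S][i];

    int best = INF;
    for (int j = 1; j < n; j++)     /* choose the first edge (i, j_m)       */
        if (S & (1 << j)) {
            int cost = C[i][j] + T(j, S & ~(1 << j));
            if (cost < best)
                best = cost;
        }
    return memo[S][i] = best;
}

int main(void)
{
    /* a made-up symmetric 4-city instance, purely for illustration */
    int example[4][4] = {
        { 0,  2,  9, 10},
        { 2,  0,  6,  4},
        { 9,  6,  0,  3},
        {10,  4,  3,  0}
    };
    n = 4;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            C[i][j] = example[i][j];

    memset(memo, -1, sizeof(memo));
    int everyone_else = (1 << n) - 2;        /* the subset {1, ..., n-1}    */
    printf("optimal tour cost = %d\n", T(0, everyone_else));   /* prints 18 */
    return 0;
}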

