Lect 38

(Guest post by Seon McDonald)

In Friday’s (Dec 3rd) lecture, the professor briefly recapped the previous lecture and noted


  • the recurrence for OPT: OPT(j) = max(vj + OPT(p(j)), OPT(j-1)).
  • recursion + memoization = an efficient way to compute OPT; that is, store the first computed value of OPT(j) and reuse it in later recursive calls.
  • by induction you can prove M[j] = OPT(j)
  • O(n) runtime
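The recapped recurrence can be sketched as memoized Python. The job values and the compatibility function p(j) below are a made-up toy instance, not from the lecture; `lru_cache` plays the role of the table M[j]:

```python
from functools import lru_cache

# Hypothetical instance: jobs 1..6 sorted by finish time, each with a value.
# p[j] = largest index i < j whose job is compatible with job j (0 = none).
values = [0, 2, 4, 4, 7, 2, 1]   # values[0] is a dummy entry
p      = [0, 0, 0, 1, 0, 3, 3]

@lru_cache(maxsize=None)          # memoization: each OPT(j) is computed once
def OPT(j):
    """Max total value achievable using jobs 1..j."""
    if j == 0:
        return 0
    # Either take job j (value vj plus OPT(p(j))) or skip it (OPT(j-1)).
    return max(values[j] + OPT(p[j]), OPT(j - 1))

print(OPT(6))   # 8
```

Since each OPT(j) is computed once and each call does O(1) work, the runtime is O(n), matching the recap above.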

Friday’s lecture then delved into the question “When to use Dynamic Programming?”

Roughly 3 step guideline

–          polynomially many subproblems: for OPT(j) there are at most n subproblems

–          the optimal solution can be computed from solutions to the subproblems

–          iterative solution: the subproblems can be ordered so that each one depends only on subproblems that come before it, never after

Unlike the previous shortest path problems, where we assumed that the graph G had no negative-cost edges, there exist problems where the costs can be negative.

Dynamic Programming for the Shortest Path Problem

INPUT: Directed Graph G = (V,E)

costs ce for all e ϵ E, and a destination node t ϵ V

OUTPUT: the shortest path from every node v to t

ASSUMPTION: No negative cycles.

*note* A cycle v1 , v2 , … , vk = v1 is a negative cycle if c(v1,v2) + c(v2,v3) + … + c(vk-1,vk) < 0

The reason we are using dynamic programming as an alternative way to solve the shortest path problem is that Dijkstra’s algorithm doesn’t work if even one edge is negative: with negative edge costs, Dijkstra’s algorithm can compute an incorrect shortest path.
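A tiny assumed example (not from the lecture) makes the failure concrete: Dijkstra permanently fixes a node’s distance the moment it pops it from the heap, so a cheaper path through a later negative edge is never seen.

```python
import heapq

# Toy graph: the direct edge s->t costs 2, but s->a->t costs 3 + (-2) = 1.
graph = {
    's': [('t', 2), ('a', 3)],
    'a': [('t', -2)],
    't': [],
}

def dijkstra(g, src):
    dist = {src: 0}
    done = set()
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)  # Dijkstra permanently finalizes u's distance here
        for v, c in g[u]:
            if v not in done and d + c < dist.get(v, float('inf')):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

print(dijkstra(graph, 's')['t'])   # 2, but the true shortest path s->a->t costs 1
```

Here t is finalized at distance 2 before a is processed, so the relaxation through the negative edge (a, t) is discarded and the answer is wrong.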

Dynamic programming is similar to divide and conquer in that both shrink the problem to easier subproblems; however, the subproblems in the dynamic programming approach may be stronger than the original problem, in the sense that we solve a more general subproblem whose solution gives enough information to solve the problem as a whole.

LEMMA: If G has no negative cycles, then there is a shortest s-t path that is simple (visits no node more than once).

Proof Idea: If a shortest path visits some node twice, it contains a cycle. Since there are no negative cycles, removing that cycle does not increase the path’s length, so there is always a simple shortest path, even with negative cost edges.

This ultimately implies that we only need to look for paths with ≤ n-1 edges (a simple path visits at most n nodes).

Recursive Definition of Optimal s-t path

What sub problems should we consider?

OPT(v, i) = length of the shortest path from v to t with ≤ (at most) i edges, where 0 ≤ i ≤ n-1

= ∞ if no such path exists

OPT(v, n-1) = length of the shortest v-t path, and there are at most n² values OPT(v, i) to compute

*note* there are n choices of v and n choices of i in OPT(v, i)

Next step: Express OPT in terms of the other sub problems

OPT(v,i)

Case 1: Shortest path from v to t has < i edges

OPT(v,i) = OPT(v, i-1)

Case 2: There exists a shortest path from v to t with exactly i edges

OPT(v,i) = min over edges (v,w) ϵ E of ( cv,w + OPT(w, i-1) )

Final Formula

OPT(v, i) = min(case 1, case 2)

= min( OPT(v, i-1) , min over edges (v,w) ϵ E of ( cv,w + OPT(w, i-1) ) )
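Filling in the OPT(v, i) table iteratively for i = 1, …, n-1 gives the Bellman-Ford algorithm. A minimal sketch (the function name and the sample graph are assumptions for illustration):

```python
INF = float('inf')

def shortest_paths_to(t, nodes, edges):
    """edges: dict (u, w) -> cost.  Returns M[v] = length of shortest v-t path."""
    n = len(nodes)
    # M[v] holds OPT(v, i): length of the shortest v-t path using <= i edges.
    M = {v: (0 if v == t else INF) for v in nodes}   # OPT(v, 0)
    for i in range(1, n):                            # i = 1 .. n-1
        M_new = {}
        for v in nodes:
            best = M[v]                              # Case 1: path with <= i-1 edges
            for (u, w), c in edges.items():          # Case 2: first edge (v, w),
                if u == v and M[w] != INF:           #         then OPT(w, i-1)
                    best = min(best, c + M[w])
            M_new[v] = best
        M = M_new
    return M

# Same toy graph as before: the direct edge s->t costs 2, s->a->t costs 1.
nodes = ['s', 'a', 't']
edges = {('s', 't'): 2, ('s', 'a'): 3, ('a', 't'): -2}
print(shortest_paths_to('t', nodes, edges)['s'])   # 1, via s -> a -> t
```

Each of the O(n²) table entries scans the edge set, so this sketch runs in O(n²·m) time; keeping adjacency lists per node brings the standard bound down to O(n·m).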

This entry was posted in fall 2010.
