Lecture 22 CSE 331 10/22/10

(Guest post by Alex Janikowski)
The main focus of Friday’s lecture was analyzing the Interval Scheduling problem. (Namely, proving the correctness of the greedy algorithm used to solve it.)
First we went over the problem as discussed during the previous lecture.

This problem has two main properties:
1: Our output must contain no interval conflicts. (That is, no two intervals in the output may run over the same time slot.)
2: Our output must be an optimal solution. (It must contain the maximum number of intervals possible.)

Next we defined our input/output:
Input: n intervals [s(i), f(i)] for 1 \leq i \leq n
(each interval starts at s(i) and ends before f(i))
Output: A schedule S of the n intervals with no two intervals in S conflicting, where |S| is maximized; that is, S contains the maximum number of intervals possible.
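
For example (a small illustration of my own, not from lecture): given the three intervals [1, 4], [3, 6], and [5, 8], the schedules \{[1, 4]\}, \{[3, 6]\}, and \{[1, 4], [5, 8]\} are all valid, and the last one is an optimal solution with |S| = 2, since [3, 6] conflicts with both of the other intervals.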

Formal definition of algorithm:
(I) A \Leftarrow \emptyset, R \Leftarrow \{1, …, n\}
(II) While R \neq \emptyset {
    Choose i \in R with the earliest finish time
    Add i to A
    Remove all intervals from R that conflict with i
}
(III) Return A
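
To make the pseudocode concrete, here is a minimal Python sketch of the algorithm exactly as written above. (This is my own illustration, not code from lecture; it assumes intervals are given as (s, f) pairs that start at s and end before f.)

def conflicts(a, b):
    # two intervals [s, f) conflict if they overlap in time
    s1, f1 = a
    s2, f2 = b
    return s1 < f2 and s2 < f1

def greedy_interval_scheduling(intervals):
    # intervals: list of (s, f) pairs; returns a conflict-free subset A
    A = []                                  # (I) A <- empty set
    R = set(range(len(intervals)))          # (I) R <- {1, ..., n}
    while R:                                # (II) while R is not empty
        # choose i in R with the earliest finish time
        i = min(R, key=lambda idx: intervals[idx][1])
        A.append(intervals[i])              # add i to A
        # remove every interval that conflicts with i (including i itself)
        R = {j for j in R if not conflicts(intervals[i], intervals[j])}
    return A                                # (III) return A

For instance, greedy_interval_scheduling([(1, 4), (3, 6), (5, 8)]) returns [(1, 4), (5, 8)], matching the optimal schedule from the small example above.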

Now by definition we know that A has no conflicts.
No interval is added to A twice: once i is added, it conflicts with itself by definition and is therefore removed from R.
However, we still need to prove that it produces an optimal solution.
(Every input has at least one optimal solution. To find an optimal solution, look at all valid schedules (those with no conflicts) and take one with the most intervals.)

So next we moved on to the meat of the day’s lecture:  Proving the correctness of our greedy algorithm.

Theorem:  The greedy algorithm returns an optimal solution.
(There could, however, be more than one optimal solution.)
We then went into the proof of this theorem.

Let \Theta be an optimal solution.

Proof idea 1: A = \Theta
This doesn’t work, as there can be multiple optimal solutions: A may be optimal and still differ from the particular \Theta we picked. So we came up with a new proof idea.

Proof idea 2: |A| = |\Theta|
Now this was more promising: since A is a valid schedule and \Theta is optimal, showing |A| = |\Theta| is enough to conclude that A is also an optimal solution.

So then, going with proof idea 2, we formally defined A and \Theta to use in the proof.
A = \{i_1, …, i_k\}
\Theta = \{j_1, …, j_m\}
(in both sets the intervals are listed in increasing order of finish time)
So what we want to prove is k = m.
We decided to use the “greedy stays ahead” paradigm.

We then made the claim: f(i_\ell) \leq f(j_\ell) for every 1 \leq \ell \leq k.
So, quite literally, we claim that A always “stays ahead” of \Theta.

Before proving this claim, however, we first used it to prove the theorem. We assumed k < m and derived a contradiction.
(Basically this boiled down to the fact that, by the claim, f(i_k) \leq f(j_k) \leq s(j_{k+1}), so the interval j_{k+1} conflicts with nothing in A and our algorithm could also have scheduled it, creating a contradiction.)

Next we proved the claim by induction on \ell.
Our base case was \ell = 1.  This holds as f(i_1) \leq f(j_1) by algorithm definition.
(as our algorithm always picks the interval with the earliest finish time for i_1)

Then we assumed the claim holds for some \ell with 1 \leq \ell < k.
Finally we proved the claim for \ell + 1.
This completed the proof.
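
Filling in that last step (my reconstruction of the standard argument, not a verbatim record of the lecture): by the inductive hypothesis f(i_\ell) \leq f(j_\ell), and since j_\ell and j_{\ell+1} do not conflict, f(j_\ell) \leq s(j_{\ell+1}). Chaining these gives

f(i_\ell) \leq f(j_\ell) \leq s(j_{\ell+1})

so j_{\ell+1} conflicts with none of i_1, …, i_\ell and is still in R when the algorithm makes its (\ell+1)-st choice. Since the algorithm always picks the remaining interval with the earliest finish time, f(i_{\ell+1}) \leq f(j_{\ell+1}).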

The lecture was wrapped up with a brief claim about the algorithm’s run time: O(n \log n).
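
The lecture did not go into the implementation details behind this bound, but one standard way to achieve it (sketched below as my own assumption, not as what was presented) is to sort the intervals by finish time once, which costs O(n \log n), and then make a single linear scan:

def greedy_interval_scheduling_fast(intervals):
    # O(n log n) variant: sort by finish time, then one linear scan
    A = []
    last_finish = float("-inf")
    for s, f in sorted(intervals, key=lambda iv: iv[1]):   # sort by f(i)
        if s >= last_finish:    # no conflict with the last scheduled interval
            A.append((s, f))
            last_finish = f
    return A

Because each interval is compared only against the finish time of the most recently scheduled one, the scan after sorting takes O(n) time.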
