Assessment schedule
Quiz 2 During tutorial time on 27 September
Assignment 2 11:59 PM Sun 29 September 2019
Note: Some MCQs with checkboxes may have multiple (or a single) correct choices. No partial marks are awarded.
The University of Sydney
Page 1
Dynamic Programming I
General techniques in this course
Greedy algorithms
Divide & Conquer algorithms
Fast Fourier Transform
Dynamic programming algorithms [today and next week]
Network flow algorithms
Algorithmic Paradigms
Greed. Build up a solution incrementally, myopically optimizing some local criterion.
Divide-and-conquer. Break up a problem into a number of sub-problems, solve each sub-problem independently, and combine solution to sub-problems to form solution to original problem.
Dynamic programming. Break up a problem into a series of overlapping sub-problems, and build up solutions to larger and larger sub-problems.
Dynamic Programming Applications
Areas.
Bioinformatics.
Control theory.
Information theory.
Operations research.
Computer science: theory, graphics, AI, systems, …
Some famous dynamic programming algorithms. Viterbi for hidden Markov models.
Unix diff for comparing two files.
Smith-Waterman for sequence alignment.
Bellman-Ford for shortest path routing in networks.
Cocke-Kasami-Younger for parsing context-free grammars.
The Fibonacci Sequence
n  | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7  | 8  | 9
Fn | 0 | 1 | 1 | 2 | 3 | 5 | 8 | 13 | 21 | 34

Fn = Fn-1 + Fn-2, where F0 = 0, F1 = 1
fib(n)
    if n is 1 or 2                  // base case
        return 1
    return fib(n-1) + fib(n-2)      // recursive case
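A direct Python transcription of this pseudocode (a sketch; the function name and the 1-based convention follow the slide):

```python
def fib(n):
    # Base case: fib(1) = fib(2) = 1, matching the pseudocode above.
    if n <= 2:
        return 1
    # Recursive case: exponential time, because the same
    # subproblems are recomputed over and over.
    return fib(n - 1) + fib(n - 2)
```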
Fibonacci: Recursive Call Tree
Memoization: Don't re-do unnecessary work!
Let's leverage all the repeats!
[Figure: the call tree for fib(6); the subtrees for fib(4), fib(3), and fib(2) appear repeatedly]
Memoization: Store previous results so that in future executions you don't have to recalculate them, a.k.a. remember what you have already done!
Memoization: Don't re-do unnecessary work!
Let's leverage all the repeats!
[Figure: the memoized call tree for fib(6); each repeated subproblem is looked up instead of recomputed]

fib(n)
    if n is 1 or 2
        return 1
    if M[n] is not zero
        return M[n]
    M[n] = fib(n-1) + fib(n-2)
    return M[n]
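The memoized version can be sketched in Python with a dictionary playing the role of the array M (names are illustrative):

```python
def fib(n, memo=None):
    if memo is None:
        memo = {}          # M in the pseudocode: stores previously computed values
    if n <= 2:
        return 1           # base case, as before
    if n in memo:
        return memo[n]     # reuse a stored result instead of recomputing it
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Each of fib(3), …, fib(n) is now computed once, so the running time drops from exponential to O(n).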
Weighted Interval Scheduling
Recall Interval Scheduling
Interval scheduling.
Input: Set of n jobs. Each job j starts at time sj and finishes at time fj.
Two jobs are compatible if they don't overlap in time.
Goal: find maximum subset of mutually compatible jobs.
[Figure: eight jobs a-h on a timeline from 0 to 11]
Recall Interval Scheduling
Theorem: The greedy algorithm [Earliest finish time] is optimal.
Proof: (by contradiction)
Assume greedy is not optimal, and let's see what happens.
Let i1, i2, …, ik denote the set of jobs selected by greedy.
Let J1, J2, …, Jm denote the set of jobs in an optimal solution with i1 = J1, i2 = J2, …, ir = Jr for the largest possible value of r.
Job ir+1 finishes before Jr+1.
[Figure: greedy's jobs i1, …, ir, ir+1 aligned above OPT's jobs J1, …, Jr, Jr+1]
Exchange argument! Replacing Jr+1 with ir+1 keeps the solution feasible and optimal, but contradicts the maximality of r.
Recall Interval Scheduling
There exists a greedy algorithm [Earliest finish time] that computes the optimal solution in O(n log n) time.
Weighted Interval Scheduling
Weighted interval scheduling problem.
Job j starts at sj, finishes at fj, and has weight or value vj .
Two jobs are compatible if they don't overlap.
Goal: find maximum weight subset of mutually compatible jobs.
[Figure: eight weighted jobs a-h on a timeline from 0 to 11]
Unweighted Interval Scheduling Review
Recall. Greedy algorithm works if all weights are 1.
Consider jobs in ascending order of finish time.
Add job to subset if it is compatible with previously chosen jobs.
Observation. Greedy algorithm can fail if arbitrary weights are allowed.
[Figure: two overlapping jobs a and b, one of weight 999 and one of weight 1; greedy by earliest finish time picks the wrong one, so it doesn't work]
Weighted Interval Scheduling
Notation. Label jobs by finishing time: f1 ≤ f2 ≤ … ≤ fn.
Def. p(j) = largest index i < j such that job i is compatible with j.
Ex: p(8) = 5, p(7) = 3, p(2) = 0.
[Figure: jobs 1-8 on a timeline from 0 to 11, illustrating p(j)]

Dynamic Programming: Binary Choice
Notation. OPT(j) = value of optimal solution to the problem consisting of job requests 1, 2, …, j.
Case 1: OPT selects job j.
can't use incompatible jobs { p(j)+1, p(j)+2, …, j-1 }
must include optimal solution to problem consisting of remaining compatible jobs 1, 2, …, p(j)
Case 2: OPT does not select job j.
must include optimal solution to problem consisting of remaining compatible jobs 1, 2, …, j-1

OPT(j) = 0                                   if j = 0
OPT(j) = max{ vj + OPT(p(j)), OPT(j-1) }     otherwise

[Example slides: computing OPT(j) by taking the maximum of the two cases]

Weighted Interval Scheduling: Brute Force
Brute force algorithm.

Input: n, s1,…,sn, f1,…,fn, v1,…,vn
Sort jobs by finish times so that f1 ≤ f2 ≤ … ≤ fn.
Compute p(1), p(2), …, p(n)

Compute-Opt(j) {
    if (j = 0)
        return 0
    else
        return max(vj + Compute-Opt(p(j)), Compute-Opt(j-1))
}

Weighted Interval Scheduling: Correctness
Theorem: Compute-Opt(j) correctly computes OPT(j).
Proof: By induction.
Base case: OPT(0) = 0.
Induction hypothesis: Compute-Opt(i) correctly computes OPT(i) for all i < j.

Finding the solution:
Find-Solution(j) {
    if (j = 0)
        output nothing
    else if (vj + M[p(j)] > M[j-1])
        print j
        Find-Solution(p(j))
    else
        Find-Solution(j-1)
}
# of recursive calls ≤ n, so O(n).
Weighted Interval Scheduling: Bottom-Up
Bottom-up dynamic programming: avoid recursion.

Input: n, s1,…,sn, f1,…,fn, v1,…,vn
Sort jobs by finish times so that f1 ≤ f2 ≤ … ≤ fn.
Compute p(1), p(2), …, p(n)

Iterative-Compute-Opt {
    M[0] = 0
    for j = 1 to n
        M[j] = max(vj + M[p(j)], M[j-1])
}
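The bottom-up algorithm can be sketched in runnable Python. Jobs are given as (start, finish, value) triples, and p(j) is computed by binary search over the sorted finish times; the function name is illustrative:

```python
from bisect import bisect_right

def weighted_interval_scheduling(jobs):
    """jobs: list of (start, finish, value) triples; returns max total value."""
    jobs = sorted(jobs, key=lambda j: j[1])   # sort by finish time
    finishes = [f for _, f, _ in jobs]
    n = len(jobs)
    # p[j] = number of jobs whose finish time is <= job j's start time,
    # i.e. the largest 1-based index i < j+1 compatible with job j.
    p = [bisect_right(finishes, jobs[j][0]) for j in range(n)]
    M = [0] * (n + 1)                         # M[j] = OPT(j)
    for j in range(1, n + 1):
        _, _, v = jobs[j - 1]
        M[j] = max(v + M[p[j - 1]], M[j - 1])
    return M[n]
```

On the slide's counterexample for greedy, a heavy job overlapping two light ones, this correctly picks the heavy job.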
Maximum-sum contiguous subarray
Maximum-sum contiguous subarray
Given an array A[] of n numbers, find the maximum sum found in any contiguous subarray.
A zero-length subarray has maximum 0.
Example: A = [1, -2, 7, 5, 6, -5, 5, 8, 1, -6]
Brute-force algorithm
All possible contiguous subarrays:
A[1..1], A[1..2], A[1..3], …, A[1..(n-1)], A[1..n]
A[2..2], A[2..3], …, A[2..(n-1)], A[2..n]
…
A[(n-1)..(n-1)], A[(n-1)..n]
A[n..n]
How many of them in total? O(n^2)
Algorithm: For each subarray, compute the sum (O(n) per subarray). Report the subarray that has the maximum sum.
Total time: O(n^3)
Divide-and-conquer algorithm for MCS
Maximum contiguous subarray (MCS) in A[1..n]
Three cases:
a) MCS in A[1..n/2]
b) MCS in A[n/2+1..n]
c) MCS that spans across A[n/2]
(a) & (b) can be found recursively
(c) can be found in two steps
Consider MCS in A[1..n/2] ending in A[n/2].
Consider MCS in A[n/2+1..n] starting at A[n/2+1]. Sum these two maxima
Idea of divide-and-conquer
Example: A = [10, 15, -3, -4, -2, -1, 8, 5]
max on L (recursion): 25; max on R (recursion): 13; mid extended to L (18) plus mid extended to R (10): 28.
Possible candidates: 25, 13, 28 (= 18 + 10), so the overall maximum is 28.
Idea of divide-and-conquer
Example:
[Figure: a second worked example, where extending across the middle takes nothing from one side]
max on L (recursion): 5; max on R (recursion): 3; mid extended to L plus mid extended to R: 4 (= 4 + 0, not taking any element on one side).
Possible candidates: 5, 3, 4 (= 4 + 0), so the overall maximum is 5.
Divide-and-conquer algorithm
Maximum contiguous subarray (MCS) in A[1..n]
Three cases:
a) MCS in A[1..n/2]
b) MCS in A[n/2+1..n]
c) MCS that spans across A[n/2]
2T(n/2)
(a) & (b) can be found recursively
(c) can be found in two steps
Consider MCS in A[1..n/2] ending in A[n/2].
Consider MCS in A[n/2+1..n] starting at A[n/2+1].
Sum these two maxima.
Divide-and-conquer algorithm: Step c
MCS in A[1..n/2] ending in A[n/2]: a single scan that extends the running sum one element at a time and remembers its maximum.
Example: A = [1, -2, 7, 5, 6, -5, 5, 8, 1, -6]
[Figure: running sums computed one position at a time: 20, 19, 21, 14, 9, 3, 8, 3, -5, -6]
Time: O(n)
Divide-and-conquer algorithm
Maximum contiguous subarray (MCS) in A[1..n]
Three cases:
a) MCS in A[1..n/2]
b) MCS in A[n/2+1..n]
c) MCS that spans across A[n/2]
(a) & (b) can be found recursively: 2T(n/2)
(c) can be found in two steps, O(n) in total:
Consider MCS in A[1..n/2] ending in A[n/2].
Consider MCS in A[n/2+1..n] starting at A[n/2+1].
Sum these two maxima.
Total time: T(n) = 2T(n/2) + O(n) = O(n log n)
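The three cases can be sketched as a recursive Python function (empty subarrays count as 0, per the problem statement; names are illustrative):

```python
def mcs(A, lo=0, hi=None):
    """Maximum contiguous-subarray sum in A[lo:hi]; the empty subarray gives 0."""
    if hi is None:
        hi = len(A)
    if hi - lo == 0:
        return 0
    if hi - lo == 1:
        return max(A[lo], 0)
    mid = (lo + hi) // 2
    # Case (c): best sum crossing the midpoint, found in O(n).
    best_left, total = 0, 0
    for i in range(mid - 1, lo - 1, -1):   # extend leftwards, ending at A[mid-1]
        total += A[i]
        best_left = max(best_left, total)
    best_right, total = 0, 0
    for i in range(mid, hi):               # extend rightwards, starting at A[mid]
        total += A[i]
        best_right = max(best_right, total)
    cross = best_left + best_right
    # Cases (a) and (b): recurse on each half; T(n) = 2T(n/2) + O(n).
    return max(mcs(A, lo, mid), mcs(A, mid, hi), cross)
```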
Dynamic programming algorithm
Example 1: A = [6, -3, -2, 3, -1, 2]
MaxHere[i] = optimal solution ending at i
MaxHere = 6, 3, 1, 4, 3, 5 (overall maximum 6)
Dynamic programming algorithm
Example 2: A = [2, -6, -1, 3, -1, 2, -2]
MaxHere[i] = optimal solution ending at i
MaxHere = 2, 0, 0, 3, 2, 4, 2 (overall maximum 4)
Dynamic programming algorithm
Example 3: A = [-2, 5, -1, -5, 3, -1, 2]
MaxHere[i] = optimal solution ending at i
MaxHere = 0, 5, 4, 0, 3, 2, 4 (overall maximum 5)
Dynamic programming algorithm
Example 3 (again): A = [-2, 5, -1, -5, 3, -1, 2]
OPT[i] = optimal solution ending at i
OPT[i] = max{ OPT[i-1] + A[i], 0 }
Pseudocode
MaxHere[i] = optimal solution ending at i

MaxHere[1] = max(A[1], 0)
for i = 2 to n do
    MaxHere[i] = max(MaxHere[i-1] + A[i], 0)
MaxSum = MaxHere[1]
for i = 2 to n do
    MaxSum = max(MaxSum, MaxHere[i])

Total time: O(n)
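The same pseudocode in runnable Python, folding the two loops into one (a sketch; the names follow the slides):

```python
def max_subarray_sum(A):
    max_here = 0   # MaxHere: best sum of a subarray ending at the current index
    max_sum = 0    # MaxSum: best sum seen anywhere so far (empty subarray gives 0)
    for x in A:
        max_here = max(max_here + x, 0)
        max_sum = max(max_sum, max_here)
    return max_sum
```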
Knapsack Problem
Knapsack Problem
A 1998 study of the Stony Brook University Algorithm Repository showed that, out of 75 algorithmic problems, the knapsack problem was the 18th most popular and the 4th most needed after kd-trees, suffix trees, and the bin packing problem.
First mentioned by Mathews in 1897; the name "knapsack problem" is due to Dantzig (1930).
Knapsack Problem
Knapsack problem.
Given n objects and a knapsack.
Item i weighs wi > 0 kilograms and has value vi > 0.
Knapsack has capacity of W kilograms.
Goal: fill knapsack so as to maximize total value.

Item  Value  Weight
1     1      1
2     6      2
3     18     5
4     22     6
5     28     7

W = 11

Greedy: repeatedly add item with maximum ratio vi / wi.
Example: greedy picks {5, 2, 1}, which achieves only value 35.
But {3, 4} has value 40, so greedy is NOT working.
Dynamic Programming: False Start
Definition. OPT(i) = max profit subset of items 1, …, i.
Case 1: OPT does not select item i.
OPT selects best of { 1, 2, …, i-1 }.
Case 2: OPT selects item i.
Accepting item i does not immediately imply that we will have to reject other items.
Without knowing what other items were selected before i, we don't even know if we have enough room for i.
Conclusion: Need more sub-problems!
Dynamic Programming: Adding a New Variable
Definition. OPT(i, w) = max profit subset of items 1, …, i with weight limit w.
Case 1: OPT does not select item i.
OPT selects best of { 1, 2, …, i-1 } using weight limit w.
Case 2: OPT selects item i.
New weight limit = w - wi.
OPT selects best of { 1, 2, …, i-1 } using this new weight limit.

OPT(i, w) = 0                                          if i = 0
OPT(i, w) = OPT(i-1, w)                                if wi > w
OPT(i, w) = max{ OPT(i-1, w), vi + OPT(i-1, w-wi) }    otherwise
Knapsack Problem: Bottom-Up
Knapsack. Fill up an n-by-W array.

Input: n, w1,…,wn, v1,…,vn

for w = 0 to W
    M[0, w] = 0
for i = 1 to n
    M[i, 0] = 0
    for w = 1 to W
        if (wi > w)
            M[i, w] = M[i-1, w]
        else
            M[i, w] = max{ M[i-1, w], vi + M[i-1, w-wi] }
return M[n, W]
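A runnable Python sketch of this table-filling algorithm (0-indexed lists; the slide's example data is used in the usage note):

```python
def knapsack(values, weights, W):
    """Max total value of a subset of items with total weight <= W."""
    n = len(values)
    # M[i][w] = max value using items 1..i with weight limit w.
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i - 1] > w:
                # Item i cannot fit under limit w.
                M[i][w] = M[i - 1][w]
            else:
                # Either skip item i, or take it and recurse on the rest.
                M[i][w] = max(M[i - 1][w],
                              values[i - 1] + M[i - 1][w - weights[i - 1]])
    return M[n][W]
```

With the slide's instance, knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11) returns 40, matching OPT = {3, 4}.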
Knapsack Algorithm

The filled (n+1)-by-(W+1) table M[i, w] for the example (rows: items considered; columns: weight limit w = 0..11):

             w=0   1   2   3   4   5   6   7   8   9  10  11
{}             0   0   0   0   0   0   0   0   0   0   0   0
{1}            0   1   1   1   1   1   1   1   1   1   1   1
{1,2}          0   1   6   7   7   7   7   7   7   7   7   7
{1,2,3}        0   1   6   7   7  18  19  24  25  25  25  25
{1,2,3,4}      0   1   6   7   7  18  22  24  28  29  29  40
{1,2,3,4,5}    0   1   6   7   7  18  22  28  29  34  35  40

Item  Value  Weight
1     1      1
2     6      2
3     18     5
4     22     6
5     28     7

W = 11
OPT: {4, 3}, value = 22 + 18 = 40
Knapsack Problem: Running Time
Running time: Θ(nW).
Not polynomial in input size!
Pseudo-polynomial.
Decision version of Knapsack is NP-complete. [Lecture 11]
Knapsack approximation algorithm. There exists a polynomial algorithm that produces a feasible solution that has value within 0.01% of optimum.
Longest increasing subsequence
Longest increasing subsequence
Given a sequence of numbers X[1..n], find the longest increasing subsequence (i1, i2, …, ik), that is, a subsequence in which the numbers increase.
Example: X = 5, 2, 8, 1, 3, 6, 9, 7
Longest increasing subsequence
Define a vector L[]:
L[i] = length of the longest increasing subsequence that ends at X[i].

Example: X = 5, 2, 8, 1, 3, 6, 9, 7
L[1] = 1, L[2] = 1, L[3] = 2, L[4] = 1, L[5] = 2, L[6] = 3, L[7] = 4, L[8] = 4
Longest increasing subsequence
Define a vector L[]:
L[i] = length of the longest increasing subsequence that ends at X[i].
Example: X = 5, 2, 8, 1, 3, 6, 9, 7

Dynamic programming formula:
L[i] = max{ L[j] + 1 | j < i and X[j] < X[i] } (and L[i] = 1 if no such j exists)

Running time: O(n^2), since computing each L[i] scans all earlier positions.
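The recurrence translates directly into an O(n^2) Python sketch (the function name is illustrative):

```python
def lis_length(X):
    n = len(X)
    L = [1] * n   # L[i] = length of longest increasing subsequence ending at X[i]
    for i in range(n):
        for j in range(i):
            if X[j] < X[i]:
                # Extend the best increasing subsequence ending at X[j].
                L[i] = max(L[i], L[j] + 1)
    return max(L, default=0)
```

On the slide's example X = 5, 2, 8, 1, 3, 6, 9, 7 the answer is 4 (for instance 2, 3, 6, 9).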