§1. Lecture V Page 1

“A nickel ain’t worth a dime anymore.”


“You know that I write slowly. This is chiefly because I am never satisfied until I
have said as much as possible in a few words, and writing briefly takes far more
time than writing at length.”

— Gauss
(1777–1855)

Lecture V
THE GREEDY APPROACH

¶1. An algorithmic approach is called “greedy” when it makes a decision at each step based on what seems best at the current step. Moreover, once a decision is made, it is never revoked. It may seem that this approach is rather limited. Nevertheless, many important problems have special features that allow optimal solutions using this approach. Since we do not revoke our greedy decisions, such algorithms tend to be simple and efficient.

To make this concept of “greedy decisions” concrete, suppose we have some “gain” function G(x) which quantifies the gain we expect with each possible decision x. View the algorithm as making a sequence x1, x2, . . . , xn of decisions, where each xi ∈ Xi for some set Xi of feasible choices at the ith step. Greediness amounts to choosing the xi ∈ Xi which maximizes the value G(xi).
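This decision framework can be sketched generically. The following Python sketch is an illustration, not an algorithm from the text; the gain function G and the feasible-choice sets are hypothetical inputs.

```python
def greedy(feasible_sets, G):
    """Generic greedy scheme: at step i, choose the x in X_i that
    maximizes the gain G(x); decisions are never revoked."""
    decisions = []
    for X_i in feasible_sets:
        x_i = max(X_i, key=G)   # the greedy choice at this step
        decisions.append(x_i)   # committed once and for all
    return decisions

# Toy usage: the gain of a choice is the choice itself.
print(greedy([{3, 1, 4}, {1, 5}, {9, 2}], G=lambda x: x))  # [4, 5, 9]
```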

Even when our simple greedy method is not optimal, we may be able to prove that it is only some factor off from optimal, say, a factor of 2. In this case, we say it has an “approximation ratio” of 2. This leads to another theme in this chapter: the design of greedy methods with good approximation ratios.

The greedy method is supposed to exemplify the idea of “local search”. But a closer examination of many greedy algorithms will reveal some global information being used. The global information is usually minimal or easily obtained, such as doing a global sort. Indeed, the preferred data structure for delivering this sorting information is the priority queue. The ith step in the above decision sequence follows this sorting order.

We begin with three classes of greedy problems: the bin packing problems, the coin changing problems and the interval selection problems. Next we discuss the Huffman coding problem and minimum spanning trees. An abstract setting for the minimum spanning tree problem is based on matroid theory and the associated maximum independent set problem. This abstract framework captures the essence of a large class of problems with greedy solutions.

§1. Joy Rides and Bin Packing

We start with an extremely simple problem called linear bin packing. But it will open the door to a major topic in combinatorial algorithms called bin packing. In fact, the general bin packing problem is an example of the NP-complete problems, which are not known to have polynomial-time solutions.

© Fun. Algorithms, Spring 2022: Basic Version November 4, 2022


¶2. Amusement Park Problem. Suppose we have a joy ride in an amusement park where riders arrive in a queue. We want to assign riders to cars; the cars are empty as they arrive, and we can only load one car at a time. There is a weight limit M > 0 for all cars. The number of riders in a car is immaterial, as long as their total weight is ≤ M pounds. We may assume that no rider has weight > M. The way that riders are put into cars is controlled by two policies:

(1) Online policy: we must make a decision for each rider at the moment that he or she arrives at the head of the queue. This decision may depend on earlier riders in the queue, but not on subsequent riders. In the current model, this decision is binary: “take the current car” or “take the next car”.

(2) First come, first ride (FCFR) policy: there is no “jumping the queue” in getting into a car.

Therefore, whenever we make the decision “take the next car”, we must immediately dispatch the current car and call for the next empty car in order to satisfy the FCFR policy. That is because our joy ride model assumes that there is only one car for loading riders. In the Exercises, we explore policies for a “two loading cars” model, and for models where we are allowed to peek ahead into the queue.

These two policies are independent. (1) Suppose we have the online policy without the FCFR policy: after we ask a rider to “take the next car”, we need not call for the new car! We can continue loading riders into the current car. In the meantime, we have a secondary queue of riders waiting to take the next car. Of course, this secondary queue must never exceed M in total weight. (2) We can also have the FCFR policy without the online policy. This setting becomes necessary with negative weights. See the Exercise below.

Example. Let the weight limit be M = 400 (in pounds) and the weights of the riders in the queue be

w = (30, 190, 80, 210, 100, 80, 50) (1)

where 30 represents the front of the queue. We can load the riders into 3 cars as follows:

Solution1 : (30, 190, 80; 210, 100, 80; 50)

where the semi-colons separate successive car loads. For easy reading, we henceforth drop the last digit 0 from these weights, and simply write

Solution1 : (3, 19, 8; 21, 10, 8; 5). (2)

What is our goal in this problem? It is to minimize the number of cars used. Solution1 uses three cars. We may verify that Solution1 conforms to the Online and FCFR policies. We call Solution1 the greedy solution for the instance (1): in the greedy solution, we always make the decision “take the current car” if it does not violate the weight limit. It is easy to see that the greedy solution always exists and is unique. But Solution1 is not the only one to satisfy our policies. Here are two others:

Solution2 : (3, 19; 8, 21; 10, 8, 5).

Solution3 : (3, 19; 8, 21; 10, 8; 5).
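We can mechanically check that each of these solutions respects the weight limit. The snippet below is a small sanity check (a sketch of ours, not part of the text), with weights written as in (2), so the limit is M = 40.

```python
def is_feasible(M, cars):
    """Each car load must have total weight at most M.  Any split of the
    queue into consecutive loads automatically satisfies FCFR."""
    return all(sum(car) <= M for car in cars)

M = 40  # weight limit, with the trailing 0 dropped as in (2)
solution1 = [(3, 19, 8), (21, 10, 8), (5,)]
solution2 = [(3, 19), (8, 21), (10, 8, 5)]
solution3 = [(3, 19), (8, 21), (10, 8), (5,)]
assert all(is_feasible(M, s) for s in (solution1, solution2, solution3))
print([len(s) for s in (solution1, solution2, solution3)])  # [3, 3, 4]
```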



They do not improve upon the greedy solution in terms of the number of cars. In fact, Solution3 is worse because it uses 4 cars. Nevertheless, we are prompted to ask whether there exist instances where a non-greedy solution is better than the greedy one.

Leaving this question aside for now, let us illustrate our introductory remark about global information in greedy algorithms. Suppose we place riders into the cars in two phases. In phase one, we sort the weights w in (1), giving us a new queue:

Sort(w) = (3, 5, 8, 8, 10, 19, 21). (3)

In phase two, we apply the greedy algorithm to Sort(w), giving us the solution

Solution4 : (3, 5, 8, 8, 10; 19, 21).

This uses only two cars, improving on our greedy algorithm. Decisions that exploit sorting use global information about the queue w. Thus Solution4 violates both our policies.

¶3. Linear Bin Packing. The solutions compliant with the FCFR policy can be formalized as “linear solutions”. Given an integer M and a sequence w = (w1, . . . , wn) of weights satisfying 0 < wi ≤ M (i = 1, . . . , n), a linear solution of (M, w) is given by a sequence

0 < t(1) < t(2) < · · · < t(k) = n (4)

of k breakpoints (for some k ≥ 1). These k breakpoints define k bins B1, . . . , Bk where Bi := (wt(i−1)+1, . . . , wt(i)), with t(0) = 0 by definition. We say the linear solution is feasible if each Bi contains a total weight of at most M. E.g., the greedy solution (2) is a feasible linear solution with three breakpoints: t(1) = 3, t(2) = 6, t(3) = 7. A feasible linear solution is optimal if k is minimum over all feasible solutions. The linear bin packing problem is to compute an optimal linear solution for any input (M, w).

¶4. Greedy Algorithm. It is instructive to write out the Greedy Algorithm for the linear bin packing problem (a.k.a. joy ride problem) in pseudo-code. An important criterion for pseudo-code is that it exposes the control-loop structures of the algorithm. Critical variables used in these control-loops should also be exposed. We are happy with English descriptions of variables, etc. Let w = (w1, w2, . . . , wn) be the input sequence of weights and C denote a container (or bin or car) that is being filled with elements of w. Let the variable W keep track of the sum of the weights in C.

Greedy Algorithm for Linear Bin Packing:
Input: (M, w) where w = (w1, . . . , wn), each wi ≤ M.
Output: Container sequence (C1; C2; · · · ; Cm) representing a linear solution.
    ▷ Initialization
    C ← ∅, W ← 0
    for i = 1 to n
        ▷ INVARIANT: M ≥ W = total weight in C
        if (M < W + wi)
            Append C as the next container in the output sequence
            C ← ∅, W ← 0
        W ← W + wi; C ← C ∪ {wi}
    Append C as the last container in the output sequence

Pseudo-code is for human understanding. It deliberately stops short of any actual programming language because programming languages are meant for computers (to be compiled). Our pseudo-code exploits the power of mathematical notation and the linguistic structures of English, which humans understand best.
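To illustrate the transcription goal (P1), here is one possible Python rendering of the pseudo-code (Python lists play the role of linked lists, so each append is O(1)); the invariant is checked with an assertion.

```python
def greedy_linear_bin_packing(M, w):
    """A transcription of the greedy pseudo-code for linear bin packing."""
    output = []          # the container sequence (C1; C2; ...; Cm)
    C, W = [], 0         # current container and its total weight
    for wi in w:
        assert W <= M    # INVARIANT: M >= W = total weight in C
        if M < W + wi:   # wi does not fit: dispatch the current container
            output.append(C)
            C, W = [], 0
        C.append(wi)
        W += wi
    output.append(C)     # dispatch the final container
    return output

print(greedy_linear_bin_packing(40, [3, 19, 8, 21, 10, 8, 5]))
# [[3, 19, 8], [21, 10, 8], [5]]
```

Each of the n iterations does O(1) work, so the transcription runs in O(n) time.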
Of course, English can be replaced by any other natural language. Also, there are many possible levels of detail, depending on the goals of the pseudo-code. The above pseudo-code achieves two informal goals:

(P1) It can be “directly” transcribed into common programming languages, assuming you know how to use common data types like arrays or linked lists.

(P2) It exposes enough details for its complexity to be analyzed up to Θ-order. The Θ-order complexity of common data types is assumed known.

We urge the student to check out (P1). To illustrate (P2), we prove that the complexity is O(n): there is a loop with n iterations, and inside the loop each step is O(1). For instance, we may assume that the container C is represented by a linked list, so that the step C ← C ∪ {wi} amounts to appending to a linked list.

We need an important assumption in (P2) which is often taken for granted: we assume the real RAM computational model of Chapter 1. In this model, we can compare and perform arithmetic operations on any two real numbers in constant time.

¶5. Optimality of Greedy Algorithm. It may not be obvious why the greedy algorithm produces an optimal linear solution. The proof is instructive.

Theorem 1 The greedy solution is optimal for linear bin packing.

Proof. Suppose the greedy algorithm outputs k bins as defined by the sequence of breakpoints

0 = t(0) < t(1) < t(2) < · · · < t(k) = n.

Let us compare the greedy solution with some optimal solution with breakpoints

0 = s(0) < s(1) < s(2) < · · · < s(ℓ) = n.

By way of contradiction, assume the greedy algorithm is not optimal, i.e.,

ℓ < k. (5)

Now we compare s(i) with t(i) for i = 1, . . . , ℓ. Note that:

(a) We have s(1) ≤ t(1), because t(1) is obtained by the greedy method.

(b) We have s(ℓ) > t(ℓ), because t(ℓ) < t(k) = n = s(ℓ).

It follows that ℓ > 1. So there is a smallest index i∗ (2 ≤ i∗ ≤ ℓ) such that s(i∗) > t(i∗). Then

s(i∗ − 1) ≤ t(i∗ − 1) < t(i∗) < s(i∗). (6)

The weights in the i∗-th bin of the optimal solution are given by the subsequence w(s(i∗ − 1) .. s(i∗)]. But according to (6), this subsequence contains

w(t(i∗ − 1) .. t(i∗) + 1]. (7)

This expression is well-defined since t(i∗) + 1 ≤ s(i∗) ≤ n. By definition of the greedy algorithm, the total weight in (7) exceeds M (otherwise the greedy algorithm would have added wt(i∗)+1 to its i∗-th car). This is our desired contradiction. Q.E.D.

¶6. Global Bin Packing. The joy ride problem is a restricted version of the following

(Global) Bin Packing Problem:
Given a multiset S = {w1, . . . , wn} of weights where each wi ∈ (0, 1], find a partition¹

S = S1 ⊎ S2 ⊎ · · · ⊎ Sm (8)

of S into the minimum number m ≥ 1 of multisets where each Si has a total weight of at most 1.

Call (8) an optimal solution to the Bin Packing instance S, and denote the minimum m by Opt(S). Note that we have departed from the Linear Bin Packing problem in two ways:

(1) The bin capacity is now M = 1. This is not a significant departure, since we can divide the original input weights by the capacity M to obtain wi ∈ (0, 1]. This process is called normalization, and the wi are the normalized weights. In fact, we will often write {w1, . . . , wn} where the wi are the original weights.

(2) The input weights are not ordered. This is a more profound change because we are no longer dealing with a “linear” problem. Alternatively, if the input is given as a list (w1, . . . , wn), we are free to reorder it any way we like.

Figure 1: Bin packing solution.

E.g., if S = {1, 1, 1, 3, 2, 2, 1, 3, 1} (with bin capacity 5 before normalization), then one global solution is {3, 2}, {2, 3}, {1, 1, 1, 1, 1}, illustrated in Figure 1.
This solution uses 3 bins, and it is clearly optimal since each bin is filled to capacity.

Suppose that instead of computing the global solution (8), we only want to compute the value of Opt(S). That is, we only want to know the optimal number of bins, not how the weights are put into bins. We may call this the #Bin Packing Problem; it is a simple but useful surrogate for Bin Packing. Of course, if you can compute (8), you can get Opt(S). The converse is less clear.

As far as we know, one cannot do much better than the brute force method for computing Opt(S). What do we mean by “brute force”? It is to try all possibilities. But what are these possibilities? First, we must give concrete representations of S and its solutions. We represent the set S by any listing of its elements in an arbitrary order:

w = (w1, . . . , wn). (9)

If Sn denotes the set of all permutations of [1..n] and σ ∈ Sn, then σ(w) := (wσ(1), . . . , wσ(n)). Likewise, we represent any solution B1, . . . , Bm of (8) for S by any permutation π ∈ Sn in which

π(w) = (wπ(1), . . . , wπ(n)) (10)

lists the weights in B1 first (in any order), followed by the weights in B2, etc. It is clear that there are many representations of any solution; that is OK since Sn is a very large set!

OBSERVATION: if (10) represents an optimal global solution to the bin packing instance (9), then

Grd(π(w)) = Opt(w)

where Grd(w′) is the number of bins used by the greedy algorithm on any input w′. It follows that

Opt(w) = min{Grd(σ(w)) : σ ∈ Sn}.

The “brute force” algorithm for Opt(w) uses the greedy algorithm for linear bin packing as a subroutine: it cycles through each of the n! permutations in Sn and, for each σ, computes Grd(σ(w)) in O(n) time. The minimum of these values is Opt(w).

¹Recall that ⊎ is a way of annotating set union to say that the sets are disjoint.
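The brute force search just described can be sketched directly (the function names are ours):

```python
from itertools import permutations

def grd_count(M, w):
    """Number of bins used by the greedy linear packer on the sequence w."""
    bins, W = 1, 0
    for x in w:
        if W + x > M:
            bins += 1
            W = 0
        W += x
    return bins

def brute_force_opt(M, w):
    """Opt(w): minimize Grd over all n! orderings of w."""
    return min(grd_count(M, s) for s in permutations(w))

w = (3, 19, 8, 21, 10, 8, 5)
print(grd_count(40, w), brute_force_opt(40, w))  # 3 2
```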
We conclude that the brute force method has a complexity of

Θ(n · n!) = Θ((n/e)^(n+(3/2))),

using the Θ-form of Stirling’s approximation. Here, we assume that we can generate all n! permutations in O(n!) time. This is a nontrivial assumption; see §8 for details on how to do this.

Karp and Held noted that we can improve the preceding algorithm by a factor of n, since without loss of generality we may restrict to permutations that begin with an arbitrary fixed element, say w1. Since there are (n − 1)! such permutations, we obtain:

Lemma 2 (Karp-Held) The global bin packing problem can be optimally solved in O(n!) = O((n/e)^(n+(1/2))) time in the real RAM model.

See the Exercise where we further exploit this idea to improve the brute force algorithm by any polynomial factor. The relation between global bin packing and linear bin packing is worth noting: by imposing restrictions on the space of possible solutions, we have turned a difficult problem like global bin packing into a feasible one like linear bin packing. The latter problem is in turn useful as a subroutine for the original problem.

¶7. How good is the Greedy Solution? How good is the greedy algorithm Grd when viewed as an algorithm for global bin packing? We are not interested in goodness in terms of computational complexity. Instead, we are interested in the quality of the output of Grd, namely the number of bins Grd(w). We shall compare Grd(w) to Opt(w), the optimal number of bins. Since Grd(w)/Opt(w) ≥ 1, we want to upper bound the ratio Grd(w)/Opt(w).

Theorem 3 For any unit weight sequence w,

Opt(w) ≥ 1 + ⌊Grd(w)/2⌋. (11)

Moreover, for each n, there are weight sequences w with Opt(w) = n for which (11) is an equality.

Proof. Suppose Grd(w) = k. Let the weight of the ith output bin be Wi, for i = 1, . . . , k. The following inequality holds for i = 1, . . . , k − 1:

Wi + Wi+1 > 1. (12)

To see this, note that the first weight v to be put into the (i + 1)st bin by the greedy algorithm must satisfy Wi + v > 1. This implies (12) since Wi+1 ≥ v. Summing up (12) for i = 1, 3, 5, . . . , 2⌊k/2⌋ − 1, we obtain

W1 + W2 + · · · + Wk > ⌊k/2⌋.

The last inequality implies that the optimal solution needs more than ⌊k/2⌋ bins, i.e., Opt(w) ≥ 1 + ⌊k/2⌋. This proves (11). To see that the inequality (11) is sharp, consider the following² unit weight sequence of length n:

w = (1/n, 1, 1/n, 1, . . . , 1/n, 1) if n is even,
w = (1/n, 1, 1/n, 1, . . . , 1, 1/n) if n is odd.

Since every two consecutive weights sum to more than 1, the greedy solution uses n bins; but clearly Opt(w) = 1 + ⌊n/2⌋: each weight 1 needs its own bin, while all the weights 1/n fit together into a single bin. Q.E.D.
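We can verify sharpness numerically. The sketch below uses exact rational arithmetic and checks, for one family of alternating instances (a small weight 1/n followed by a full weight 1), that greedy uses n bins while the optimum, found by brute force over all orderings, is 1 + ⌊n/2⌋.

```python
from fractions import Fraction
from itertools import permutations

def grd_count(M, w):
    """Bins used by the greedy packer on sequence w (exact arithmetic)."""
    bins, W = 1, Fraction(0)
    for x in w:
        if W + x > M:
            bins += 1
            W = Fraction(0)
        W += x
    return bins

def opt(M, w):
    """Optimal bin count, by brute force over all orderings of w."""
    return min(grd_count(M, s) for s in permutations(w))

for n in (3, 4, 5, 6, 7):
    # alternate 1/n with 1; consecutive weights always sum to more than 1
    w = [Fraction(1, n) if i % 2 == 0 else Fraction(1) for i in range(n)]
    assert grd_count(1, w) == n       # greedy: one weight per bin
    assert opt(1, w) == 1 + n // 2    # equality in (11)
print("equality verified for n = 3..7")
```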

¶8. Absolute and Asymptotic Approximation Ratios. We said that global bin packing is an important problem for which there are no known polynomial-time algorithms. So it is essential to seek good approximations to global bin packing. Let A be any bin packing algorithm. To evaluate the performance of A, consider the simplest definition that comes to mind: define the absolute approximation ratio of A as

α0(A) := sup_w A(w)/Opt(w) (13)

where w ranges over all non-empty weight sequences. By definition, α0(A) ≥ 1. Our goal is to design algorithms A with small α0(A).

What is not to like about α0? Let us consider the absolute approximation ratio of Grd. The problem is that α0(A) may be determined by a single input w, rather than by those w with arbitrarily large Opt(w). For instance, suppose A1 has the property that

A1(w) = 2 · Opt(w) + 1 (14)

for all w. For each n ∈ N, there is some weight sequence wn such that Opt(wn) = n. We would conclude that α0(A1) = 3 since

α0(A1) = sup_w (2 · Opt(w) + 1)/Opt(w) = 2 + sup_w 1/Opt(w)

since sup_w 1/Opt(w) = 1. The worst case is controlled by the “trivial” input w1. This conclusion seems artificial. Intuitively, a correct definition of approximation ratio ought to yield the conclusion “α(A1) = 2”. Here is the definition that achieves this: define the (asymptotic) approximation ratio of algorithm A as

α(A) := lim sup_{n→∞} a_n

where a_n := sup{ A(w)/Opt(w) : Opt(w) = n }. Recall from mathematical analysis that the limit superior of an infinite sequence of numbers (x1, x2, . . .) is given by

lim sup_{n→∞} x_n := lim_{n→∞} sup{ x_i : i ≥ n }.

For the algorithm A1, we have a_n = sup{ A1(w)/Opt(w) : Opt(w) = n } = (2n + 1)/n. Then α(A1) = lim sup_{n→∞} a_n = 2. The behavior of our hypothetical A1 is rather like Grd, as seen in Theorem 3.

²Thanks to Lee (2008) for this simplification.

Lemma 4 The greedy algorithm Grd for Linear Bin Packing has (asymptotic) approximation ratio

α(Grd) = 2. (15)

Proof. Theorem 3 implies that Opt(w) ≥ 1 + ⌊Grd(w)/2⌋ > Grd(w)/2, and hence, for each n,

a_n = sup{ Grd(w)/Opt(w) : Opt(w) = n } < 2. (16)

This implies α(Grd) ≤ 2 (careful: we are not entitled to say “α(Grd) < 2”). Moreover, Theorem 3 also shows that for each n there are inputs w with Opt(w) = n and Grd(w) = 2n − 1, so a_n ≥ (2n − 1)/n = 2 − (1/n). Letting n → ∞, this implies α(Grd) ≥ 2, and hence α(Grd) = 2. Q.E.D.
