
We must all hang together, gentlemen,
or else we shall most assuredly hang separately.
Benjamin Franklin, at the signing of the Declaration of Independence (July 4, 1776)

I remember seeking advice from someone (who could it have been?) about whether this work was worth submitting for publication; the reasoning it uses is so very simple. . . . Fortunately he advised me to go ahead, and many years passed before another of my publications became as well-known as this very simple one.
Joseph Kruskal, describing his shortest-spanning-subtree algorithm

Clean ALL the things!
Allie Brosh, "This is Why I'll Never be an Adult", Hyperbole and a Half, June 17, 2010

Minimum Spanning Trees
Suppose we are given a connected, undirected, weighted graph. This is a graph G = (V, E) together with a function w: E → ℝ that assigns a real weight w(e) to each edge e, which may be positive, negative, or zero. This chapter describes several algorithms to find the minimum spanning tree of G, that is, the spanning tree T that minimizes the function

    w(T) := Σ_{e ∈ T} w(e).
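On tiny graphs, this definition can be checked directly by exhaustive search. Below is a minimal Python sketch (the `(weight, u, v)` edge-list representation and all names are my own, not from the text) that finds the minimum spanning tree by trying every subset of V − 1 edges:

```python
from itertools import combinations

def mst_brute_force(n, edges):
    """Minimum spanning tree straight from the definition: try every subset
    of n-1 edges, keep the spanning trees, return the one minimizing w(T).
    edges is a list of (weight, u, v) triples over vertices 0..n-1.
    Exponential time; for sanity-checking faster algorithms on tiny inputs."""
    def is_spanning_tree(subset):
        # A set of n-1 edges spans iff it creates no cycle (union-find check).
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                v = parent[v]
            return v
        for _, u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False        # cycle found
            parent[ru] = rv
        return True

    best = min((t for t in combinations(edges, n - 1) if is_spanning_tree(t)),
               key=lambda t: sum(w for w, _, _ in t))
    return list(best)
```

Running this on a small connected graph returns an edge set of total weight w(T); every algorithm later in the chapter should agree with its answer.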
. Distinct Edge Weights
An annoying subtlety in the problem statement is that weighted graphs can have more than one spanning tree with the same minimum weight; in particular, if every edge in G has weight 1, then every spanning tree of G is a minimum spanning tree, with weight V − 1. See Figure . for an example. This ambiguity complicates the development of our algorithms; everything would be much simpler if we could simply assume that minimum spanning trees are unique.

Figure .. A weighted graph and its minimum spanning tree.
Fortunately, there is an easy condition that implies the uniqueness we want.
Lemma .. If all edge weights in a connected graph G are distinct, then G has a unique minimum spanning tree.
Proof: Let G be an arbitrary connected graph with two minimum spanning trees T and T′; we need to prove that some pair of edges in G have the same weight. The proof is essentially a greedy exchange argument.

Each of our spanning trees must contain an edge that the other tree omits. Let e be a minimum-weight edge in T \ T′, and let e′ be a minimum-weight edge in T′ \ T (breaking ties arbitrarily). Without loss of generality, suppose w(e) ≤ w(e′).

The subgraph T′ + e contains exactly one cycle C, which passes through the edge e. Let e″ be any edge of this cycle that is not in T. (At least one such edge must exist, because T is a tree. We may or may not have e″ = e′.) Because e ∈ T, we immediately have e″ ≠ e and therefore e″ ∈ T′ \ T. It follows that w(e″) ≥ w(e′) ≥ w(e).

Now consider the spanning tree T″ = T′ + e − e″. (This new tree T″ might be equal to T.) We immediately have w(T″) = w(T′) + w(e) − w(e″) ≤ w(T′). But T′ is a minimum spanning tree, so we must have w(T″) = w(T′); in other words, T″ is also a minimum spanning tree. We conclude that w(e) = w(e″), which completes the proof. □
If we already have an algorithm that assumes distinct edge weights, we can still run it on graphs where some edges have equal weights, as long as we have a consistent method for breaking ties. One such method uses the following ShorterEdge algorithm in place of simple weight comparisons. ShorterEdge takes as input four integers i, j, k, l, representing four (not necessarily distinct) vertices, and decides which of the two edges (i, j) and (k, l) has smaller weight. Because the input graph is undirected, the pairs (i, j) and (j, i) represent the same edge.

    ShorterEdge(i, j, k, l):
        if w(i, j) < w(k, l):        return (i, j)
        if w(i, j) > w(k, l):        return (k, l)
        if min(i, j) < min(k, l):    return (i, j)
        if min(i, j) > min(k, l):    return (k, l)
        if max(i, j) < max(k, l):    return (i, j)
        ⟨⟨if max(i, j) > max(k, l)⟩⟩ return (k, l)

In light of Lemma . and this tie-breaking rule, we will safely assume for the rest of this chapter that edge weights are always distinct, and therefore minimum spanning trees are always unique. In particular, we can freely discuss the minimum spanning tree with no confusion.

(The converse of this lemma is false; a connected graph with repeated edge weights can still have a unique minimum spanning tree. As a trivial example, suppose G is a tree!)

. The Only Minimum Spanning Tree Algorithm

There are many algorithms to compute minimum spanning trees, but almost all of them are instances of the following generic strategy. The situation is similar to graph traversal, where several different algorithms are all variants of the generic traversal algorithm whatever-first search.

The generic minimum spanning tree algorithm maintains an acyclic subgraph F of the input graph G, which we will call the intermediate spanning forest. At all times, F satisfies the following invariant:

    F is a subgraph of the minimum spanning tree of G.

Initially, F consists of V one-vertex trees. The generic algorithm connects trees in F by adding certain edges between them. When the algorithm halts, F consists of a single spanning tree; our invariant implies that this must be the minimum spanning tree of G. Obviously, we have to be careful about which edges we add to the evolving forest, because not every edge is in the minimum spanning tree.

At any stage of its evolution, the intermediate spanning forest F induces two special types of edges in the rest of the graph.

- An edge is useless if it is not an edge of F, but both its endpoints are in the same component of F.
- An edge is safe if it is the minimum-weight edge with exactly one endpoint in some component of F.
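The ShorterEdge tie-breaking rule translates into a single lexicographic comparison. A Python sketch (the weight dictionary `w`, keyed by canonically ordered vertex pairs, is an assumed representation, not from the text):

```python
def shorter_edge(w, i, j, k, l):
    """Tie-breaking comparator: order edges first by weight, then by their
    smaller endpoint, then by their larger endpoint, so that two *distinct*
    edges never compare equal, even when their weights are equal."""
    a = (min(i, j), max(i, j))      # canonical name for edge (i, j)
    b = (min(k, l), max(k, l))      # canonical name for edge (k, l)
    # Lexicographic tuple comparison implements the cascade of ifs above.
    return (i, j) if (w[a], a) < (w[b], b) else (k, l)
```

For example, with two edges of equal weight 5 incident to vertex 1, the edge with the smaller second endpoint is consistently declared "shorter", regardless of the order in which the endpoints are listed.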

(The same edge could be safe for two different components of F.) Some edges of G \ F are neither safe nor useless; we call these edges undecided.
All minimum spanning tree algorithms are based on two simple observations. The first observation was proved by Robert Prim in 1957 (although it is implicit in several earlier algorithms), and the second is immediate.
Lemma . (Prim). The minimum spanning tree of G contains every safe edge.
Proof: In fact we prove the following stronger statement: For any subset S of the vertices of G, the minimum spanning tree of G contains the minimum-weight edge with exactly one endpoint in S. Like the previous lemma, we prove this claim using a greedy exchange argument.

Let S be an arbitrary subset of vertices of G, and let e be the lightest edge with exactly one endpoint in S. (Our assumption that all edge weights are distinct implies that e is unique.) Let T be an arbitrary spanning tree that does not contain e; we need to prove that T is not the minimum spanning tree of G.

Because T is connected, it contains a path from one endpoint of e to the other. Because this path starts at a vertex of S and ends at a vertex not in S, it must contain at least one edge with exactly one endpoint in S; let e′ be any such edge. Because T is acyclic, removing e′ from T yields a spanning forest with exactly two components, one containing each endpoint of e. Thus, adding e to this forest gives us a new spanning tree T′ = T − e′ + e. The definition of e implies w(e′) > w(e), which implies that T′ has smaller total weight than T. Thus, T is not the minimum spanning tree of G, which completes the proof. □
Figure .. Every safe edge is in the minimum spanning tree. Black vertices are in the subset S.
Lemma .. The minimum spanning tree contains no useless edge.
Proof: Adding any useless edge to F would introduce a cycle. □
Our generic minimum spanning tree algorithm repeatedly adds safe edges to the evolving forest F. If F is not yet connected, there must be at least one safe edge, because the input graph G is connected. Thus, no matter which safe edges we add in each iteration, our generic algorithm eventually connects F. By induction, Lemma . implies that the resulting tree is in fact the minimum spanning tree. Whenever we add new edges to F, some undecided edges may become safe, and other undecided edges may become useless. (Once an edge becomes useless, it stays useless forever.) To fully specify a particular algorithm, we must describe which safe edges to add in each iteration, and how to find those edges.
. Borůvka's Algorithm
The oldest and arguably simplest minimum spanning tree algorithm was discovered by the Czech mathematician Otakar Borůvka in 1926, about a year after Jindřich Saxel asked him how to construct an electrical network connecting several cities using the least amount of wire. The algorithm was rediscovered by Gustav Choquet in 1938, rediscovered again by a team of Polish mathematicians led by Józef Łukaszewicz in 1951, and rediscovered again by George Sollin in 1961. Although Sollin never published his rediscovery, it was carefully described and credited in one of the first textbooks on graph algorithms; as a result, this algorithm is sometimes called "Sollin's algorithm".
The Borůvka/Choquet/Florek-Łukasiewicz-Perkal-Steinhaus-Zubrzycki/Prim/Sollin/Brosh algorithm can be summarized in one line:

    Borůvka: Add ALL the safe edges and recurse.

Figure .. Borůvka's algorithm run on the example graph. Thick red edges are in F; dashed edges are useless. Arrows point along each component's safe edge. The algorithm ends after just two iterations.

Here is Borůvka's algorithm in more detail. The algorithm calls the CountAndLabel algorithm from Chapter (on page ) to count the components of F and label each vertex v with an integer comp(v) indicating its component.

    Borůvka(V, E):
        F ← (V, ∅)
        count ← CountAndLabel(F)
        while count > 1
            AddAllSafeEdges(E, F, count)
            count ← CountAndLabel(F)
        return F

Saxel was an employee of the West Moravian Power Company, described by Borůvka as "very talented and hardworking", who was later executed by the Nazis as a person of Jewish descent.

Go read everything in Hyperbole and a Half. And then go buy the book. And an extra copy for your cat. What's that? You don't have a cat? What kind of a monster are you? Go get a cat, and then buy it an extra copy of Hyperbole and a Half.
It remains only to describe how to identify and add all the safe edges to F. (Suppose F has more than one component, since otherwise we're already done.) The following subroutine computes an array safe[1 .. V] of safe edges, where safe[i] is the minimum-weight edge with one endpoint in the ith component of F, by a brute force examination of every edge in G. For each edge uv, if u and v are in the same component, then uv is either useless or already an edge in F. Otherwise, we compare the weight of uv to the weights of safe[comp(u)] and safe[comp(v)] and update those array entries if necessary. Once we have identified all the safe edges, we add each edge safe[i] to F.
    AddAllSafeEdges(E, F, count):
        for i ← 1 to count
            safe[i] ← Null
        for each edge uv ∈ E
            if comp(u) ≠ comp(v)
                if safe[comp(u)] = Null or w(uv) < w(safe[comp(u)])
                    safe[comp(u)] ← uv
                if safe[comp(v)] = Null or w(uv) < w(safe[comp(v)])
                    safe[comp(v)] ← uv
        for i ← 1 to count
            add safe[i] to F
Each call to CountAndLabel runs in O(V) time, because the forest F has at most V − 1 edges. AddAllSafeEdges runs in O(V + E) time, because we spend constant time on each vertex, each edge of G, and each component of F. Because the input graph is connected, we have E ≥ V − 1. It follows that each iteration of the while loop of Borůvka takes O(E) time.
Each iteration reduces the number of components of F by at least a factor of two; in the worst case, the components of F coalesce in pairs. Because F initially has V components, the while loop iterates at most O(log V) times. We conclude that the overall running time of Borůvka's algorithm is O(E log V).
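The pseudocode above translates almost line for line into Python. A sketch, assuming the graph is given as a `(weight, u, v)` edge list with distinct weights over vertices `0..n-1` (the `relabel` helper plays the role of CountAndLabel; all names are mine):

```python
def boruvka_mst(n, edges):
    """MST of a connected graph via Borůvka's algorithm.
    edges is a list of (weight, u, v) triples with distinct weights.
    Returns the list of MST edges."""
    comp = list(range(n))              # comp[v] = label of v's component

    def relabel():
        # Flood-fill component labels of the current forest (whatever-first
        # search over the MST edges found so far); returns the component count.
        adj = [[] for _ in range(n)]
        for _, u, v in mst:
            adj[u].append(v)
            adj[v].append(u)
        seen, count = [False] * n, 0
        for s in range(n):
            if not seen[s]:
                stack, seen[s] = [s], True
                while stack:
                    x = stack.pop()
                    comp[x] = count
                    for y in adj[x]:
                        if not seen[y]:
                            seen[y] = True
                            stack.append(y)
                count += 1
        return count

    mst = []
    count = relabel()
    while count > 1:
        safe = [None] * count          # safe[i] = lightest edge leaving comp i
        for w, u, v in edges:
            if comp[u] != comp[v]:
                for c in (comp[u], comp[v]):
                    if safe[c] is None or w < safe[c][0]:
                        safe[c] = (w, u, v)
        for e in set(safe):            # one edge may be safe for two components
            mst.append(e)
        count = relabel()
    return mst
```

Because edge weights are distinct, adding all safe edges at once never creates a cycle, so the forest grows exactly as the analysis above describes.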
This is the MST Algorithm You Want
Despite its relatively obscure origin, early Western algorithms researchers were aware of Borůvka's algorithm, but dismissed it as being "too complicated".

As a result, despite its simplicity and efficiency, most algorithms and data structures textbooks unfortunately do not even mention Borůvka's algorithm. This omission is a serious mistake; Borůvka's algorithm has several distinct advantages over other classical MST algorithms.
Borůvka's algorithm often runs faster than its O(E log V) worst-case running time. The number of components in F can drop by significantly more than a factor of 2 in a single iteration, reducing the number of iterations below the worst-case ⌈log₂ V⌉.
A slight reformulation of Borůvka's algorithm (actually closer to Borůvka's original presentation) runs in O(E) time for a broad class of interesting graphs, including graphs that can be drawn in the plane without edge crossings. In contrast, the O(E log V) time analysis for the other two algorithms applies to all graphs.
Borůvka's algorithm allows for significant parallelism; in each iteration, each component of F can be handled in a separate independent thread. This implicit parallelism allows for even faster performance on multicore or distributed systems. In contrast, the other two classical MST algorithms are intrinsically serial.
Several more recent minimum-spanning-tree algorithms are faster even in the worst case than the classical algorithms described here. All of these faster algorithms are generalizations of Borůvka's algorithm.
In short, if you ever need to implement a minimum-spanning-tree algorithm, use Borůvka's. On the other hand, if you want to prove things about minimum spanning trees effectively, you really need to know the next two algorithms as well.
. Jarník's ("Prim's") Algorithm
The next oldest minimum spanning tree algorithm was first described by the Czech mathematician Vojtěch Jarník in a 1929 letter to Borůvka; Jarník published his discovery the following year. The algorithm was independently rediscovered by Joseph Kruskal in 1956, (arguably) by Robert Prim in 1957, by Harry Loberman and Arnold Weinberger in 1957, and finally by Edsger Dijkstra in 1958. Prim, Loberman and Weinberger, and Dijkstra all eventually knew of and even cited Kruskal's paper, but since Kruskal also described two other minimum-spanning-tree algorithms in the same paper, this algorithm is usually called "Prim's algorithm", or sometimes "the Prim/Dijkstra algorithm", even though by then Dijkstra already had another algorithm inappropriately named after him.
In Jarník's algorithm, the intermediate forest F has only one nontrivial component T; all the other components are isolated vertices. Initially, T consists of a single arbitrary vertex of the graph. The algorithm repeats the following step until T spans the whole graph:

    Jarník: Repeatedly add T's safe edge to T.

Figure .. Jarník's algorithm run on the example graph, starting with the bottom vertex. At each stage, thick red edges are in T, an arrow points along T's safe edge, and dashed edges are useless.
To implement Jarník's algorithm, we keep all the edges adjacent to T in a priority queue. When we pull the minimum-weight edge out of the priority queue, we first check whether both of its endpoints are in T. If not, we add the edge to T and then add the new neighboring edges to the priority queue. (In other words, Jarník's algorithm is a variant of best-first search, as described at the end of Chapter !) If we implement the underlying priority queue using a standard binary heap, Jarník's algorithm runs in O(E log E) = O(E log V) time.
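With Python's `heapq` standing in for the binary heap, the edge-based version just described can be sketched as follows (adjacency-list representation and names are my own assumptions; the graph must be connected with distinct weights):

```python
import heapq

def jarnik_mst(adj, s=0):
    """MST via Jarník's algorithm with a binary heap.
    adj[u] is a list of (weight, v) pairs for each vertex u in 0..n-1.
    Useless edges are discarded lazily when popped from the heap."""
    n = len(adj)
    in_tree = [False] * n
    in_tree[s] = True
    heap = [(w, s, v) for w, v in adj[s]]   # edges leaving the start vertex
    heapq.heapify(heap)
    mst = []
    while len(mst) < n - 1:
        w, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue                        # both endpoints in T: useless edge
        in_tree[v] = True
        mst.append((w, u, v))
        for w2, x in adj[v]:                # new edges now adjacent to T
            if not in_tree[x]:
                heapq.heappush(heap, (w2, v, x))
    return mst
```

Each edge is pushed and popped at most twice, giving the O(E log E) = O(E log V) bound quoted above.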
Improving Jarník's Algorithm
We can improve Jarník's algorithm using a more complex priority queue data structure called a Fibonacci heap, first described by Michael Fredman and Robert Tarjan in 1984. Just like binary heaps, Fibonacci heaps support the standard priority queue operations Insert, ExtractMin, and DecreaseKey. However, unlike standard binary heaps, which require O(log n) time for every operation, Fibonacci heaps support Insert and DecreaseKey in constant amortized time; the amortized cost of ExtractMin is still O(log n).
Amortized time is an accounting trick that allows us to ignore infrequent fluctuations in the time for a single data structure operation. A Fibonacci heap can execute any intermixed sequence of I Inserts, D DecreaseKeys, and X ExtractMins in O(I + D + X log n) time, in the worst case. So the average Insert and the average DecreaseKey each take constant time, and the average ExtractMin takes O(log n) time; however, some individual operations may take

To apply this faster data structure, we keep the vertices of G in the priority queue instead of edges, where the priority of each vertex v is either the minimum weight of an edge between v and the evolving tree T, or ∞ if there is no such edge. We can Insert all the vertices into the priority queue at the beginning of the algorithm; then, whenever we add a new edge to T, we may need to decrease the priorities of some neighboring vertices.
To make the description easier, we break the algorithm into two parts. JarníkInit initializes the priority queue; JarníkLoop is the main algorithm. The input consists of the vertices and edges of the graph, along with the start vertex s. For each vertex v, we maintain both its priority priority(v) and the incident edge edge(v) such that w(edge(v)) = priority(v).
    Jarník(V, E, s):
        JarníkInit(V, E, s)
        JarníkLoop(V, E, s)

    JarníkInit(V, E, s):
        for each vertex v ∈ V \ {s}
            if vs ∈ E
                edge(v) ← vs
                priority(v) ← w(vs)
            else
                edge(v) ← Null
                priority(v) ← ∞
            Insert(v)

    JarníkLoop(V, E, s):
        T ← ({s}, ∅)
        for i ← 1 to V − 1
            v ← ExtractMin()
            add v and edge(v) to T
            for each neighbor u of v
                if u ∉ T and priority(u) > w(uv)
                    edge(u) ← uv
                    DecreaseKey(u, w(uv))

Figure .. Jarník's minimum spanning tree algorithm, ready to be used with a Fibonacci heap.
The operations Insert and ExtractMin are each called O(V) times, once for each vertex except s, and DecreaseKey is called O(E) times, at most twice for each edge. Thus, if we use a Fibonacci heap, the improved algorithm runs in O(E + V log V) time, which is faster than Borůvka's algorithm unless E = O(V).
In practice, however, this improvement is rarely faster than the naive implementation using a binary heap, unless the graph is extremely large and dense. The Fibonacci heap algorithms are quite complex, and the hidden constants in both the running time and space are significant: not outrageous, but certainly bigger than the hidden constant 1 in the O(log n) time bound for binary heap operations.
. Kruskal's Algorithm
The last minimum spanning tree algorithm we'll consider was first described by Joseph Kruskal in 1956, in the same paper where he rediscovered Jarník's algorithm. Kruskal was motivated by "a typewritten translation (of obscure origin)" of Borůvka's original paper that had been floating around the Princeton math department. Kruskal found Borůvka's algorithm "unnecessarily elaborate". The same algorithm was rediscovered in 1957 by Harry Loberman and Arnold Weinberger, but somehow avoided being renamed after them.

longer in the worst case. Amortization uses statistical averaging over the sequence of operations; there is no assumption of randomness here, either in the input data or in the algorithm.
Like our earlier minimum-spanning-tree algorithms, Kruskal's algorithm has a memorable one-line description:

    Kruskal: Scan all edges by increasing weight; if an edge is safe, add it to F.
Figure .6. Kruskal's algorithm run on the example graph. Thick red edges are in F; thin dashed edges are useless.
The simplest method to scan the edges in increasing weight order is to sort the edges by weight, in O(E log E) time, and then use a simple for-loop over the sorted edge list. As we will see shortly, this preliminary sorting dominates the running time of the algorithm.
Because we examine the edges in order from lightest to heaviest, any edge we examine is safe if and only if its endpoints are in different components of the forest F. (Suppose we encounter an edge e that joins two components A and B but is not safe. Then there must be a lighter edge e′ with exactly one endpoint in A. But this is impossible, because inductively every previously examined edge has both endpoints in the same component of F.)
Just as in Borůvka's algorithm, each vertex of F needs to know which component of F contains it. Unlike Borůvka's algorithm, however, we do not recompute all component labels from scratch every time we add an edge. Instead, when two components are joined by an edge, the smaller component inherits the label of the larger component; that is, we traverse the smaller component (via whatever-first search). This traversal requires O(1) time for each vertex in the smaller component. Each time the component label of a vertex changes, the component of F containing that vertex grows by at least a factor of 2; thus, each vertex label changes at most O(log V) times. It follows that the total time spent updating vertex labels is only O(V log V).

To be fair, Borůvka's first paper was unnecessarily elaborate, in part because it was written for mathematicians in the formal language of linear algebra, rather than in the language of graphs. Borůvka's follow-up paper, also published in 1926 but in an electrotechnical journal, was written in plain language for a much broader audience, essentially in its current modern form. Kruskal was apparently unaware of Borůvka's second paper. Stupid Iron Curtain.
More generally, Kruskal's algorithm maintains a partition of the vertices of G into disjoint subsets (in our case, the components of F), using a data structure that supports the following operations:

- MakeSet(v): Create a set containing only the vertex v.
- Find(v): Return an identifier unique to the set containing v.
- Union(u, v): Replace the sets containing u and v with their union. (This operation decreases the number of sets.)
Here's a complete description of Kruskal's algorithm in terms of these operations:

    Kruskal(V, E):
        sort E by increasing weight
        F ← (V, ∅)
        for each vertex v ∈ V
            MakeSet(v)
        for i ← 1 to E
            uv ← ith lightest edge in E
            if Find(u) ≠ Find(v)
                Union(u, v)
                add uv to F
        return F
After the initial sort, the algorithm performs exactly V MakeSet operations (one for each vertex), 2E Find operations (two for each edge), and V − 1 Union operations (one for each edge in the minimum spanning tree). We just described a disjoint-set data structure for which MakeSet and Find require O(1) time, and Union runs in O(log V) amortized time. Using this implementation, the total time spent maintaining the set partition is O(E + V log V).
But recall that we already need O(E log E) = O(E log V) time just to sort the edges. Because this is larger than the time spent maintaining the Union-Find data structure, the overall running time of Kruskal's algorithm is O(E log V), exactly the same as Borůvka's algorithm, or Jarník's algorithm with a normal (non-Fibonacci) heap.

A different disjoint-set data structure, which uses a strategy called union-by-rank with path compression, performs each Union or Find in O(α(V)) amortized time, where α is the almost-but-not-quite-constant inverse Ackermann function. If you don't feel like consulting Wikipedia, just think of α(V) as 4. Using this implementation, the total time spent maintaining the set partition is O(E α(V)), which is slightly faster when V is large and E is very close to V.
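Putting the pieces together in Python, with union-by-size and path compression standing in for the disjoint-set structure (a sketch under the same assumed `(weight, u, v)` edge-list representation; names are mine):

```python
def kruskal_mst(n, edges):
    """MST via Kruskal's algorithm: scan edges by increasing weight and
    keep each edge whose endpoints lie in different components.
    edges is a list of (weight, u, v) triples over vertices 0..n-1."""
    parent = list(range(n))
    size = [1] * n

    def find(v):
        # First pass finds the root; second pass compresses the path.
        root = v
        while parent[root] != root:
            root = parent[root]
        while parent[v] != root:
            parent[v], v = root, parent[v]
        return root

    def union(u, v):
        # Union by size: the smaller tree hangs below the larger root.
        u, v = find(u), find(v)
        if size[u] < size[v]:
            u, v = v, u
        parent[v] = u
        size[u] += size[v]

    mst = []
    for w, u, v in sorted(edges):       # the O(E log E) sort dominates
        if find(u) != find(v):          # safe edge: joins two components
            union(u, v)
            mst.append((w, u, v))
    return mst
```

As the analysis above predicts, everything after the sort is nearly linear, so the sort dominates the running time.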
Exercises
. Let G = (V, E) be an arbitrary connected graph with weighted edges.
(a) Prove that for any cycle in G, the minimum spanning tree of G excludes the maximum-weight edge in that cycle.

(b) Prove or disprove: The minimum spanning tree of G includes the minimum-weight edge in every cycle in G.
. Throughout this chapter, we assumed that no two edges in the input graph have equal weights, which implies that the minimum spanning tree is unique. In fact, a weaker condition on the edge weights implies MST uniqueness.
(a) Describe an edge-weighted graph that has a unique minimum spanning tree, even though two edges have equal weights.

(b) Prove that an edge-weighted graph G has a unique minimum spanning tree if and only if the following conditions hold:
- For any partition of the vertices of G into two subsets, the minimum-weight edge with one endpoint in each subset is unique.
- The maximum-weight edge in any cycle of G is unique.

(c) Describe and analyze an algorithm to determine whether or not a graph has a unique minimum spanning tree.
. Most classical minimum-spanning-tree algorithms use the notions of safe and useless edges described in the text, but there is an alternate formulation. Let G be a weighted undirected graph, where the edge weights are distinct. We say that an edge e is dangerous if it is the longest edge in some cycle in G, and useful if it does not lie in any cycle in G.
(a) Prove that the minimum spanning tree of G contains every useful edge.

(b) Prove that the minimum spanning tree of G does not contain any dangerous edge.

(c) Describe and analyze an efficient implementation of the following algorithm, first described by Joseph Kruskal in the same 1956 paper where he proposed Kruskal's algorithm: Examine the edges of G in decreasing order; if an edge is dangerous, remove it from G. (Hint: It won't be as fast as Kruskal's usual algorithm.)
. (a) Describe and analyze an algorithm to compute the maximum-weight spanning tree of a given edge-weighted graph.

(b) A feedback edge set of an undirected graph G is a subset F of the edges such that every cycle in G contains at least one edge in F. In other words, removing every edge in F makes the graph G acyclic. Describe and analyze a fast algorithm to compute the minimum-weight feedback edge set of a given edge-weighted graph.
. Suppose we are given both an undirected graph G with weighted edges and a minimum spanning tree T of G.
(a) Describe an algorithm to update the minimum spanning tree when the weight of a single edge e is decreased.

(b) Describe an algorithm to update the minimum spanning tree when the weight of a single edge e is increased.
In both cases, the input to your algorithm is the edge e and its new weight; your algorithms should modify T so that it is still a minimum spanning tree. (Hint: Consider the cases e ∈ T and e ∉ T separately.)
. (a) Describe and analyze an algorithm to find the second smallest spanning tree of a given graph G, that is, the spanning tree of G with smallest total weight except for the minimum spanning tree.
(b) Describe and analyze an efficient algorithm to compute, given a weighted undirected graph G and an integer k, the k spanning trees of G with smallest weight.
. A graph G = (V, E) is dense if E = Θ(V²). Describe a modification of Jarník's minimum-spanning-tree algorithm that runs in O(V²) time (independent of E) when the input graph is dense, using only elementary data structures; in particular, without using Fibonacci heaps. This variant of Jarník's algorithm was first described by Edsger Dijkstra in 1958.
. Minimum-spanning-tree algorithms are often formulated using an operation called edge contraction. To contract the edge uv, we insert a new node, redirect any edge incident to u or v (except uv itself) to this new node, and then delete u and v. After contraction, there may be multiple parallel edges between the new node and other nodes in the graph; we remove all but the lightest edge between any two nodes.
The three classical minimum-spanning-tree algorithms described in this chapter can all be expressed cleanly in terms of contraction as follows. All three algorithms start by making a clean copy G′ of the input graph G and then repeatedly contract safe edges in G′; the minimum spanning tree consists of the contracted edges.
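The contraction step just described can be sketched in Python. (The dictionary-of-dictionaries adjacency representation is my own assumption, and for simplicity the merged node reuses u's name instead of being a fresh node; parallel edges keep only the lighter weight, as the text specifies.)

```python
def contract(adj, u, v):
    """Contract edge uv in a weighted undirected graph, in place.
    adj maps each vertex to a dict {neighbor: weight}. The merged
    vertex keeps u's name; of any parallel edges created, only the
    lightest survives."""
    for x, w in adj.pop(v).items():     # redirect v's edges to u
        if x == u:
            continue                    # drop the contracted edge itself
        adj[x].pop(v)                   # x no longer neighbors v
        # keep only the lighter of the (possibly) parallel edges ux
        if x not in adj[u] or w < adj[u][x]:
            adj[u][x] = w
            adj[x][u] = w
    adj[u].pop(v, None)                 # remove u's copy of edge uv
```

For example, contracting an edge whose endpoints share a neighbor leaves a single edge to that neighbor, carrying the smaller of the two original weights.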

Figure .. Contracting an edge and removing redundant parallel edges.
Borůvka: Mark the lightest edge leaving each vertex, contract all marked edges, and recurse.

Jarník: Repeatedly contract the lightest edge incident to some fixed root vertex.

Kruskal: Repeatedly contract the lightest edge in the graph.
(a) Describe an algorithm to execute a single pass of Borůvka's contraction algorithm in O(V + E) time. (The input graph is represented by an adjacency list.)
(b) Consider an algorithm that first performs k passes of Borůvka's contraction algorithm, and then runs Jarník's algorithm (with a Fibonacci heap) on the resulting contracted graph.
i. What is the running time of this hybrid algorithm, as a function of V, E, and k?
ii. For which value of k is this running time minimized? What is the resulting running time?
(c) Call a family of graphs nice if it has the following properties:
- Contracting an edge of a nice graph yields another nice graph.
- Every nice graph with V vertices has only O(V) edges.
For example, planar graphs (graphs that can be drawn in the plane with no crossing edges) are nice. Contracting any edge of a planar graph leaves a smaller planar graph, and Euler's formula implies that every planar graph with V vertices has at most 3V − 6 edges.
Prove that Borůvka's contraction algorithm computes the minimum spanning tree of any nice graph in O(V) time.
. Consider a path between two vertices s and t in an undirected weighted graph G. The width of this path is the minimum weight of any edge in the path. The bottleneck distance between s and t is the width of the widest path from s to t. (If there are no paths from s to t, the bottleneck distance is −∞; on the other hand, the bottleneck distance from s to itself is ∞.)
(a) Prove that the maximum spanning tree of G contains widest paths between every pair of vertices.
The bottleneck distance between s and t is 9.
(b) Describe an algorithm to solve the following problem in O(V + E) time: Given an undirected weighted graph G, two vertices s and t, and a weight W, is the bottleneck distance between s and t at most W?
(c) Suppose B is the bottleneck distance between s and t.
i. Prove that deleting any edge with weight less than B does not change the bottleneck distance between s and t.

ii. Prove that contracting any edge with weight greater than B does not change the bottleneck distance between s and t. (If contraction creates parallel edges, delete all but the heaviest edge between each pair of nodes.)
(d) Describe an algorithm to compute a minimum-bottleneck path between s and t in O(V + E) time. (Hint: Start by finding the median-weight edge in G.)
. Borůvka's algorithm can be reformulated to use a standard disjoint-set data structure to identify safe edges, just like Kruskal's algorithm, instead of explicitly counting and labeling components of the evolving spanning forest F in each iteration.
In this variant, each component of F is represented by an up-tree; each vertex v stores a pointer parent(v) to its parent, or to v itself if v is the root of its up-tree. The subroutine Find(v) returns the root of v's up-tree, but also applies path compression, reassigning all parent pointers from v to the root to point directly to the root, to speed up future Find operations. The subroutine Union combines two up-trees into one by making one of the two root nodes the parent of the other.
Path compression is a form of memoization!
Normally, Union is implemented more carefully to ensure that the root of the larger or older up-tree does not change; however, those details don't matter here.

    Find(v):
        if parent(v) = v
            return v
        else
            r ← Find(parent(v))
            parent(v) ← r
            return r
In the modified version of Borůvka's algorithm, in addition to the parent pointers, the root vertex r of each component of F maintains an edge safe(r), which at the end of FindSafeEdges is the lightest edge with one endpoint in that component.
Prove that each call to FindSafeEdges and AddSafeEdges requires only O(E) time. (Hint: What is the depth of the up-trees when FindSafeEdges ends?) It follows that this variant of Borůvka's algorithm also runs in O(E log V) time.
    Union(u, v):
        u ← Find(u)
        v ← Find(v)
        either
            parent(u) ← v
        or
            parent(v) ← u

    FindSafeEdges(V, E):
        for each vertex v ∈ V
            safe(v) ← Null
        found ← False
        for each edge uv ∈ E
            u ← Find(u)
            v ← Find(v)
            if u ≠ v
                if safe(u) = Null or w(uv) < w(safe(u))
                    safe(u) ← uv
                if safe(v) = Null or w(uv) < w(safe(v))
                    safe(v) ← uv
                found ← True
        return found

    AddSafeEdges(V, E, F):
        for each vertex v ∈ V
            if safe(v) ≠ Null
                xy ← safe(v)
                if Find(x) ≠ Find(y)
                    Union(x, y)
                    add xy to F

    Borůvka(V, E):
        F ← ∅
        for each vertex v ∈ V
            parent(v) ← v
        while FindSafeEdges(V, E)
            AddSafeEdges(V, E, F)
        return F
