==============================================================================
CSC 263        Lecture Summary for Week 1        Fall 2019
==============================================================================

READINGS: Chapters 2, 3; Sections 4.5, 5.1, 5.2.

-----
Admin
-----

*Land acknowledgement
I wish to acknowledge this land on which the University of Toronto operates. For thousands of years it has been the traditional land of the Huron-Wendat, the Seneca, and most recently, the Mississaugas of the Credit River. Today, this meeting place is still the home to many Indigenous people from across Turtle Island and I am grateful to have the opportunity to work on this land.
*Pronouns
*Accessibility
*Course information sheet
*Office hours, help centre
*Piazza
*Registered study groups & Sydney Smith Commons
*Learning strategists
*How to ask questions

---------------
Data Structures
---------------

“Abstract Data Type” (ADT) = set of objects together with set of operations
on these objects, e.g.:

1. Objects: integers
Operations: ADD(x,y), MULTIPLY(x,y), etc.

2. Stack
Objects: lists (or sequences)
Operations: PUSH(S,v), POP(S), ISEMPTY(S)

ADTs are important for specification; they provide modularity and reuse, since
usage is independent of implementation.
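
For instance, the Stack ADT can be written down as an interface, with no
commitment to an implementation. A minimal Python sketch (the class name Stack
and the use of abc are illustrative, not part of the notes):

    from abc import ABC, abstractmethod

    class Stack(ABC):
        # The ADT view: which operations exist, not how they are carried out.

        @abstractmethod
        def push(self, v): ...        # PUSH(S, v)

        @abstractmethod
        def pop(self): ...            # POP(S)

        @abstractmethod
        def is_empty(self): ...       # ISEMPTY(S)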

“Data Structure” = implementation of an ADT: a way to represent objects, and
algorithms for every operation.

Example: stack implementations. [INTERACTIVE DISCOVERY]

(a) Array with counter (size of stack). Seems simple but what about overflow?

(b) Linked list (keep pointer to head).
ISEMPTY: test head == None.
PUSH: insert at front of list.
POP: remove front of list (if not empty).
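
A minimal Python sketch of implementation (b), assuming illustrative names
(_Node, LinkedStack) that are not part of the notes:

    class _Node:
        def __init__(self, value, next=None):
            self.value = value            # stored element
            self.next = next              # next node, or None

    class LinkedStack:
        def __init__(self):
            self.head = None              # pointer to head = top of stack

        def is_empty(self):               # ISEMPTY: test head == None
            return self.head is None

        def push(self, v):                # PUSH: insert at front of list
            self.head = _Node(v, self.head)

        def pop(self):                    # POP: remove front of list (if not empty)
            if self.is_empty():
                raise IndexError("pop from empty stack")
            v = self.head.value
            self.head = self.head.next
            return v

Unlike the array version, there is no overflow to worry about, at the cost of
one pointer per element.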

In general, ADT describes _what_ (data and operations); data structure
describes _how_ (storing data, performing operations).

In this course: many ADTs, and many data structures for these ADTs. Use
careful analysis (of correctness and complexity) to compare possibilities.

---------------------------
Algorithm Analysis (Review)
---------------------------

“Complexity” = amount of resources required by an algorithm, measured as a
function of input size.

Why analyze complexity? For comparison, e.g., choose between different
implementations.

“Resource” = running time or memory (space), usually.

“Input size” is problem-dependent. Examples:
– For numbers, “size” = number of bits.
– For lists, “size” = number of elements.
– For graphs, “size” = number of vertices.
Important: measure must be roughly proportional to true bit-size (no. of bits
required to fully encode input). In practice, allow ourselves to ignore log
factors, e.g., we use size([a_1,a_2,…,a_n]) = n, when it is really
size(a_1) + … + size(a_n) — proportional to n only if each a_i has constant
size.

“Running time” usually measured at high-level also, using asymptotic notation
(big-Oh, big-Omega, big-Theta).

-----------------------
Worst-Case running time
-----------------------

For algorithm A, t(x) represents number of “steps” executed by A on input x.

_Worst-case_ running time of A on inputs of size n is denoted T_A(n) (omit
subscript “_A” when context is clear):

T(n) = max{ t(x) : x is an input of size n }

Example:

LinkedSearch(L, k):
    z = L.head
    while z != None and z.key != k:
        z = z.next
    return z

Input size? L.length (no. of elements in list).
Worst-case runtime? [BRIEF INTERACTIVE DISCOVERY]
Worst case: k does not occur in L, so the loop runs all n iterations.
T(n) = an + b for constants a, b, so T(n) in Theta(n).
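
A runnable Python sketch of the same search, assuming a minimal ListNode class
(illustrative, not from the notes) and passing the head pointer directly:

    class ListNode:
        def __init__(self, key, next=None):
            self.key = key
            self.next = next

    def linked_search(head, k):
        # Walk the list until k is found or the list ends.
        z = head
        while z is not None and z.key != k:
            z = z.next
        return z                          # None when k does not occur

For example, linked_search(ListNode(1, ListNode(2)), 3) examines both nodes and
returns None.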

-----------------------------
Lower Bounds vs. Upper Bounds
-----------------------------

Upper bound: usually expressed using big-Oh notation, e.g., T(n) in O(g(n)).
Can be applied to any runtime function (worst-case, best-case, average-case).
For worst-case, want to prove:

T(n) = max{ t(x) : x has size n } <= c g(n), for all n >= B.

Twist: usually, exact expression for t(x) or T(n) is unknown. In practice,
argue that algorithm executes no more than c g(n) steps on _every_ input of
size n. (In particular, _cannot_ prove upper bound by arguing about only one
input, unless we also prove this input is worst — back to proving something
for every input!)
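
Example: for LinkedSearch, _every_ input of size n causes at most n loop
iterations, each taking constantly many steps, so t(x) <= an + b for every such
input and the worst case T(n) is in O(n).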

Lower bound: usually expressed using Omega notation — T(n) in Omega(f(n)).
Can be applied to any runtime function (worst-case, best-case, average-case).
For worst-case, want to prove:

T(n) = max{ t(x) : x has size n } >= c f(n), for all n >= B.

In practice, exhibit _one_ input on which algorithm executes at least c f(n)
steps. (Not necessary to argue every input takes at least f(n) time.)
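
Example: for LinkedSearch, the single input of size n in which k does not occur
in L forces all n loop iterations, so T(n) >= n and T(n) is in Omega(n); no
claim about the other inputs of size n is needed.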

[WRITE DEFINITION OF BIG OH, WORK OUT THE CONSTANTS]

[WRITE DEFINITION OF BIG OMEGA]

[EXPLAIN BIG THETA IN TERMS OF BIG OH AND BIG OMEGA]

MENTION THAT CLRS DOES NOT MAKE IT CLEAR THAT BIG OH IS A SET
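
For reference, the standard definitions that the bracketed items above refer
to, with constants named to match the bounds written earlier:

    f(n) in O(g(n))     iff  there are constants c > 0 and B >= 0 such that
                             f(n) <= c g(n) for all n >= B.

    f(n) in Omega(g(n)) iff  there are constants c > 0 and B >= 0 such that
                             f(n) >= c g(n) for all n >= B.

    f(n) in Theta(g(n)) iff  f(n) in O(g(n)) and f(n) in Omega(g(n)).

Working out the constants for LinkedSearch: T(n) = an + b <= (a + b) n for all
n >= 1, so T(n) in O(n) with c = a + b and B = 1. Note that O(g(n)) is a set of
functions; writing "T(n) = O(g(n))" really means T(n) in O(g(n)).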

-------------------------
Average-Case running time
-------------------------

For algorithm A, consider S_n = sample space of all inputs of size n. To
define “average”, require probability distribution over S_n — specifying
likelihood of each input.

Let t_n(x) = number of steps executed by A on input x in S_n.
Note: t_n is a random variable (assigns numerical value to each element of
probability space).

“Average” value of t_n(x) is E[t_n] (expected number of steps executed by A on
inputs of size n):

E[t_n] =    sum    t_n(x) * Pr(x)
          x in S_n

where Pr(x) = probability of x (according to probability distribution).

Definition: average case running time of A is T'(n) = E[t_n].

Example: LinkedSearch.
Sample space S_n?
– Problem: infinitely many inputs!
Solution: behaviour of LinkedSearch determined by only one factor —
position of k inside L. Leaves exactly n+1 possibilities:
    k occurs at position 0 in L,
    k occurs at position 1 in L,
    ...
    k occurs at position n-1 in L,
    k does not occur in L.

So pick “representative” inputs, e.g.,
S_n = { (L,k) : L = [1,2,…,n] and k = 0,1,2,…,n }.

Done? No: need probability distribution!
Temptation: uniform (each input equally likely, in this case, with probability
1/(n+1)).
Really, impossible to tell: maybe k not in L is more likely, maybe not,
depends entirely on application. In general, could leave some of this as
parameter.

Now, compute E[t_n].
Need exact expression for t_n. Trick: pick “representative” operation to
count, instead of counting everything. For example, count comparisons: number
of comparisons within constant factor of number of operations, so answer is
correct in big-Oh terms.

           { 2k      if 1 <= k <= n  (then k occurs in position k)
t_n(L,k) = {
           { 2n+1    if k = 0

(2 comparisons for every iteration, k iterations for value k, extra comparison
when k = 0 because of last test for z != None.)

T'(n) = E[t_n] =      sum       ( t_n(L,k) * Pr[L,k] )
                 (L,k) in S_n

                                     n
       = t_n(L,0) * 1/(n+1)  +      sum  ( t_n(L,k) * 1/(n+1) )
                                    k=1
                                    n
       = (2n+1)/(n+1)  +  1/(n+1) * sum 2k
                                    k=1

       = (2n+1)/(n+1)  +  2/(n+1) * n(n+1)/2

       = (2n+1)/(n+1)  +  n.

Notice: worst-case T(n) = 2n+1; average-case T'(n) approx. n+2, half of
worst-case. Consistent with intuition: when elements equally likely to be
anywhere in a list, on average examine roughly half the list to find element.
Average slightly higher because of case when element not in list.

----------------------
Best-case running time
----------------------

Last possibility, just for completeness:

    min{ t(x) : x is an input of size n }

Example: Theta(1) for LinkedSearch.
Mostly useless.

-----------------
Optional exercise
-----------------

Function nothing(A)
    A[1] = -1
    n = A.size
    for i = 1 to n
        for j = 1 to n - 1
            if A[j] != 1 - A[j + 1] then return
            if i + j > n - 1 then return
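
A direct Python transcription of the optional exercise, useful for
experimenting with its running time (the pseudocode is 1-indexed, so index 0 of
the Python list is left unused; A needs at least two elements):

    def nothing(A):
        # Index 0 is unused padding so indices match the 1-indexed pseudocode.
        A[1] = -1
        n = len(A) - 1
        for i in range(1, n + 1):
            for j in range(1, n):          # j = 1 to n-1
                if A[j] != 1 - A[j + 1]:
                    return
                if i + j > n - 1:
                    return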
