Teaching material based on Distributed Systems: Concepts and Design, Edition 4, Addison-Wesley 2005.
Chapter 14: Distributed transactions
Copyright © George Coulouris, Jean Dollimore, Tim Kindberg 2001
email: authors@cdk2.net
This material is made available for private study and for direct use by individual teachers. It may not be included in any product or employed in any service without the written permission of the authors.
Viewing: These slides must be viewed in slide show mode.
Coulouris, Dollimore and Kindberg
Distributed Systems: Concepts and Design
Edition 4, Addison-Wesley 2005

Distributed transactions
14.1 Introduction
14.2 Flat and nested distributed transactions
14.3 Atomic commit protocols
14.4 Concurrency control in distributed transactions
14.5 Distributed deadlocks
14.6 Transaction recovery
14.7 Summary

14.1 Introduction
In Chapter 13 we discussed flat and nested transactions that access objects on a single server.
In the distributed case, where multiple servers are involved, the atomicity property requires that either all of the servers involved commit the transaction or all of them abort it.
To realize this, one of the servers takes on a coordinator role.
The most commonly used protocol is two-phase commit (2PC).
This protocol allows the servers to communicate to reach a joint decision as to whether to commit or to abort.

14.1 Introduction
Concurrency control in distributed transactions is based on the methods already discussed.
Each server applies local concurrency control to its own objects, so transactions are serialized locally.
A distributed transaction must also be serialized globally.
There is a variety of approaches to ensuring global serializability.
Transaction recovery is concerned with ensuring that all objects involved in the transaction are recoverable.

14.2 Flat and nested distributed transactions
A client transaction becomes distributed if it invokes operations on many servers.
Let's look at some definitions and consequences of this situation.

Commitment of distributed transactions: introduction
- a distributed transaction refers to a flat or nested transaction that accesses objects managed by multiple servers
- when a distributed transaction comes to an end, either all of the servers commit the transaction or all of them abort it
- one of the servers acts as coordinator; it must ensure the same outcome at all of the servers
- the two-phase commit protocol is the most commonly used protocol for achieving this

Figure 14.1
Distributed transactions: (a) flat transaction, (b) nested transactions.
(a) The client's flat transaction T invokes operations on servers X, Y and Z.
(b) The top-level transaction T opens subtransactions T1 (at server X) and T2 (at server Y); T1 opens T11 (at M) and T12 (at N); T2 opens T21 (at N) and T22 (at P).

A flat client transaction completes each of its requests before going on to the next one; therefore each transaction accesses the servers' objects sequentially.
In a nested transaction, the top-level transaction can open subtransactions, and each subtransaction can open further subtransactions down to any depth of nesting.
In the nested case, subtransactions at the same level can run concurrently, so T1 and T2 are concurrent, and as they invoke objects in different servers, they can run in parallel.

14.2 Flat and nested distributed transactions
Consider a distributed transaction in which a client transfers $10 from account A to account C and then transfers $20 from account B to account D.
Accounts A and B are at separate servers X and Y, and accounts C and D are at server Z.
If this transaction is structured as a set of four nested transactions, the four requests can run in parallel, giving better performance.

Figure 14.2
Nested banking transaction: the client transfers $10 from A to C and then transfers $20 from B to D.

T = openTransaction
    openSubTransaction
        a.withdraw(10);     T1 at server X (account A)
    openSubTransaction
        b.withdraw(20);     T2 at server Y (account B)
    openSubTransaction
        c.deposit(10);      T3 at server Z (account C)
    openSubTransaction
        d.deposit(20);      T4 at server Z (account D)
closeTransaction

Because the requests can be run in parallel with several servers, the nested transaction is more efficient.

14.2.1 The coordinator of a distributed transaction
Let's see why and how this server can play its role.

The coordinator of a flat distributed transaction
Why might a participant abort a transaction?
Servers execute requests in a distributed transaction; when it commits they must communicate with one another to coordinate their actions:
- a client starts a transaction by sending an openTransaction request to a coordinator in any server (next slide)
- the coordinator returns a TID unique in the distributed system (e.g. server ID + local transaction number)
- at the end, the coordinator will be responsible for committing or aborting the transaction
- each server managing an object accessed by the transaction is a participant; it joins the transaction (next slide)
- a participant keeps track of objects involved in the transaction
- at the end it cooperates with the coordinator in carrying out the commit protocol
- note that a participant can call abortTransaction in the coordinator

14.2.1 The coordinator of a distributed transaction
The next figure shows a client whose flat banking transaction involves accounts A, B, C and D at servers BranchX, BranchY and BranchZ.
The client transaction T transfers $4 from account A to account C and then transfers $3 from account B to account D.

Figure 14.3
A flat distributed banking transaction
openTransaction goes to the coordinator. The client's flat banking transaction involves accounts A, B, C and D at servers BranchX, BranchY and BranchZ.

T = openTransaction
    a.withdraw(4);
    c.deposit(4);
    b.withdraw(3);
    d.deposit(3);
closeTransaction

Each server is shown with a participant, which joins the transaction by invoking the join method in the coordinator.
Note: the coordinator is in one of the servers, e.g. BranchX.
Note that the TID T is passed with each request, e.g. b.withdraw(T, 3).

The join operation
The interface for Coordinator is shown in Figure 13.3 (next slide):
- it has openTransaction, closeTransaction and abortTransaction
- openTransaction returns a TID, which is passed with each operation so that servers know which transaction is accessing their objects
The Coordinator interface provides an additional method, join, which is used whenever a new participant joins the transaction:
join(Trans, reference to participant)
- informs a coordinator that a new participant has joined the transaction Trans
- the coordinator records the new participant in its participant list
- the fact that the coordinator knows all the participants, and each participant knows the coordinator, will enable them to collect the information that will be needed at commit time
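The openTransaction/join bookkeeping above can be sketched in a few lines. This is a minimal illustration, not the book's implementation: the class and method names are assumptions, and the "participants" are just placeholder strings rather than remote references.

```python
import itertools

class Coordinator:
    """Minimal sketch of a coordinator's openTransaction/join bookkeeping."""
    _counter = itertools.count(1)   # local transaction numbers

    def __init__(self, server_id):
        self.server_id = server_id
        self.participants = {}      # TID -> list of participant references

    def open_transaction(self):
        # TID = (server id, local transaction number): unique system-wide
        tid = (self.server_id, next(Coordinator._counter))
        self.participants[tid] = []
        return tid

    def join(self, trans, participant):
        # a new participant announces itself; the coordinator records it so
        # it can run the commit protocol with all participants at the end
        self.participants[trans].append(participant)

coord = Coordinator("BranchX")
tid = coord.open_transaction()
coord.join(tid, "participant-at-BranchY")
coord.join(tid, "participant-at-BranchZ")
```

At commit time the coordinator consults `self.participants[tid]` to know exactly whom to involve in the commit protocol.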

Figure 13.3
Operations in Coordinator interface
openTransaction() -> trans;
  starts a new transaction and delivers a unique TID trans. This identifier will be used in the other operations in the transaction.
closeTransaction(trans) -> (commit, abort);
  ends a transaction: a commit return value indicates that the transaction has committed; an abort return value indicates that it has aborted.
abortTransaction(trans);
  aborts the transaction.

14.3 Atomic commit protocols
transaction atomicity requires that at the end, either all of its operations are carried out or none of them.
in a distributed transaction, the client has requested the operations at more than one server.
one-phase atomic commit protocol:
- the coordinator tells the participants whether to commit or abort
- what is the problem with that?
- it does not allow one of the servers to decide to abort: a server may have discovered a deadlock, or it may have crashed and been restarted
two-phase atomic commit protocol:
- is designed to allow any participant to choose to abort a transaction
- phase 1: each participant votes. If it votes to commit, it is prepared; it cannot change its mind. In case it crashes, it must save its updates in permanent store
- phase 2: the participants carry out the joint decision
The decision could be commit or abort; participants record it in permanent store.

Failure model for the commit protocols
Recall the failure model for transactions in Chapter 13; this applies to the two-phase commit protocol.
Commit protocols are designed to work in an asynchronous system (e.g. messages may take a very long time):
- servers may crash
- messages may be lost
- assume corrupt and duplicated messages are removed
- no byzantine faults: servers either crash or they obey their requests
2PC is an example of a protocol for reaching a consensus:
- Chapter 11 says consensus cannot be reached in an asynchronous system if processes sometimes fail
- however, 2PC does reach consensus under those conditions
- because crash failures of processes are masked by replacing a crashed process with a new process whose state is set from information saved in permanent storage and information held by other processes

The two-phase commit protocol
Why does a participant record updates in permanent storage at this stage? How many messages are sent between the coordinator and each participant?
During the progress of a transaction, the only communication between coordinator and participant is the join request.
The client request to commit or abort goes to the coordinator:
- if the client or a participant requests abort, the coordinator informs the participants immediately
- if the client asks to commit, the 2PC comes into use
2PC:
- voting phase: the coordinator asks all participants if they can commit; if yes, a participant records its updates in permanent storage and then votes
- completion phase: the coordinator tells all participants to commit or abort
The next slide shows the operations used in carrying out the protocol.

14.3.1 The two-phase commit protocol
The coordinator in a distributed transaction communicates with the participants to carry out the 2PC by means of the following operations:
- canCommit?, doCommit, doAbort: these are methods in the interface of the participant
- haveCommitted, getDecision: these methods are in the coordinator interface

Figure 14.4
Operations for two-phase commit protocol
canCommit?(trans) -> Yes / No
  Call from coordinator to participant to ask whether it can commit a transaction. Participant replies with its vote.
doCommit(trans)
  Call from coordinator to participant to tell participant to commit its part of a transaction.
doAbort(trans)
  Call from coordinator to participant to tell participant to abort its part of a transaction.
haveCommitted(trans, participant)
  Call from participant to coordinator to confirm that it has committed the transaction.
getDecision(trans) -> Yes / No
  Call from participant to coordinator to ask for the decision on a transaction after it has voted Yes but has still had no reply after some delay. Used to recover from server crash or delayed messages.

Operations for two-phase commit protocol: notes on Figure 14.4
- canCommit? is a request with a reply
- doCommit, doAbort and haveCommitted are asynchronous requests, to avoid delays
- participant interface: canCommit?, doCommit, doAbort
- coordinator interface: haveCommitted, getDecision

Figure 14.5
The two-phase commit protocol
Phase 1 (voting phase):
1. The coordinator sends a canCommit? request to each of the participants in the transaction.
2. When a participant receives a canCommit? request it replies with its vote (Yes or No) to the coordinator. Before voting Yes, it prepares to commit by saving objects in permanent storage. If the vote is No the participant aborts immediately.
Phase 2 (completion according to outcome of vote):
3. The coordinator collects the votes (including its own).
   (a) If there are no failures and all the votes are Yes, the coordinator decides to commit the transaction and sends a doCommit request to each of the participants.
   (b) Otherwise the coordinator decides to abort the transaction and sends doAbort requests to all participants that voted Yes.
4. Participants that voted Yes are waiting for a doCommit or doAbort request from the coordinator. When a participant receives one of these messages it acts accordingly and, in the case of commit, makes a haveCommitted call as confirmation to the coordinator.
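The decision logic of Figure 14.5 can be sketched as plain function calls. This is a simplified illustration under stated assumptions: messages become direct method calls, there is no real permanent storage, and the class names are invented for the sketch.

```python
class Participant:
    """Toy participant: votes, then waits for the joint decision."""
    def __init__(self, name, will_vote_yes=True):
        self.name = name
        self.will_vote_yes = will_vote_yes
        self.state = "active"

    def can_commit(self, trans):
        if self.will_vote_yes:
            self.state = "prepared"   # real code saves objects to permanent storage here
            return "Yes"
        self.state = "aborted"        # a No vote means the participant aborts immediately
        return "No"

    def do_commit(self, trans):
        self.state = "committed"

    def do_abort(self, trans):
        self.state = "aborted"

def two_phase_commit(trans, participants):
    # Phase 1: collect votes from every participant
    votes = {p: p.can_commit(trans) for p in participants}
    # Phase 2: commit only if every vote was Yes
    if all(v == "Yes" for v in votes.values()):
        for p in participants:
            p.do_commit(trans)
        return "committed"
    # otherwise abort; only Yes-voters need a doAbort (No-voters already aborted)
    for p, v in votes.items():
        if v == "Yes":
            p.do_abort(trans)
    return "aborted"
```

A single No vote is enough to abort the whole transaction, which is exactly the property the one-phase protocol lacked.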


Figure 14.6
Communication in two-phase commit protocol

Coordinator                              Participant
step 1: send canCommit?                  step 2: reply Yes
  status: prepared to commit               status: prepared to commit
          (waiting for votes)                      (uncertain)
step 3: send doCommit                    step 4: send haveCommitted
  status: committed                        status: committed
then: done

Think about the coordinator in step 1: what is the problem? Think about step 2: what is the problem for the participant? Think about the participant before step 2: what is the problem? (See Figure 14.6.)

Timeout actions in the 2PC
to avoid blocking forever when a process crashes or a message is lost:
- an uncertain participant (step 2) has voted Yes; it can't decide on its own, so it uses the getDecision method to ask the coordinator about the outcome
- a participant that has carried out client requests, but has not had a canCommit? from the coordinator, can abort unilaterally
- a coordinator delayed in waiting for votes (step 1) can abort and send doAbort to the participants
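The uncertain participant's timeout action can be sketched as a retry loop around getDecision. The function name, the `None`-means-no-reply convention, and the retry count are assumptions made for this illustration.

```python
def await_decision(get_decision, trans, retries=3):
    """Poll the coordinator for the outcome of trans.

    get_decision(trans) models the getDecision call: it returns 'commit',
    'abort', or None when no reply has arrived yet."""
    for _ in range(retries):
        decision = get_decision(trans)
        if decision is not None:
            return decision
    # Still no answer: a participant that voted Yes may NOT abort
    # unilaterally, so it stays uncertain and must keep trying later.
    return "uncertain"

# simulated coordinator: two lost/delayed replies, then the decision arrives
replies = iter([None, None, "commit"])
decision = await_decision(lambda t: next(replies), "T1")
```

The key point the code makes explicit: after voting Yes, the only exit from the uncertain state is an answer from the coordinator (or from a recovered replacement of it).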

Performance of the two-phase commit protocol
if there are no failures, 2PC involving N participants requires:
- N canCommit? messages and replies, followed by N doCommit messages
- the cost in messages is proportional to 3N, and the cost in time is three rounds of messages
- the haveCommitted messages are not counted
there may be arbitrarily many server and communication failures:
- 2PC is guaranteed to complete eventually, but it is not possible to specify a time limit within which it will be completed
- there are delays to participants in the uncertain state
- some three-phase commit (3PC) protocols are designed to alleviate such delays, but they require more messages and more rounds for the normal case

14.3.2 Two-phase commit protocol for nested transactions
Recall Figure 14.1b: top-level transaction T and subtransactions T1, T2, T11, T12, T21, T22.
A subtransaction starts after its parent and finishes before it.
When a subtransaction completes, it makes an independent decision either to commit provisionally or to abort.
A provisional commit is not the same as being prepared: it is a local decision and is not backed up in permanent storage.
If the server crashes subsequently, its replacement will not be able to carry out a provisional commit.
A two-phase commit protocol is needed for nested transactions: it allows servers of provisionally committed transactions that have crashed to abort them when they recover.

Figure 14.7
Operations in coordinator for nested transactions
openSubTransaction(trans) -> subTrans
  Opens a new subtransaction whose parent is trans and returns a unique subtransaction identifier.
getStatus(trans) -> committed, aborted, provisional
  Asks the coordinator to report on the status of the transaction trans. Returns values representing one of the following: committed, aborted, provisional.

The TID of a subtransaction is an extension of its parent's TID, so that a subtransaction can work out the TID of the top-level transaction. The client finishes a set of nested transactions by calling closeTransaction or abortTransaction in the top-level transaction. (Figure 14.7)

This is the interface of the coordinator of a subtransaction:
- it allows it to open further subtransactions
- it allows its subtransactions to enquire about its status
The client starts by using openTransaction to open a top-level transaction:
- this returns a TID for the top-level transaction
- the TID can be used to open a subtransaction
- the subtransaction automatically joins the parent, and a TID is returned
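The "TID extends its parent's TID" idea can be sketched with tuples. The representation (a tuple of path components) is an assumption of this sketch, not the book's concrete encoding.

```python
def open_sub_transaction(parent_tid, child_index):
    """A child's TID is its parent's TID extended by one component,
    e.g. ('T',) -> ('T', 1) -> ('T', 1, 2)."""
    return parent_tid + (child_index,)

def top_level(tid):
    # any subtransaction can recover the top-level TID from its own
    return tid[:1]

t = ("T",)
t1 = open_sub_transaction(t, 1)    # stands for T1
t12 = open_sub_transaction(t1, 2)  # stands for T12
```

Because every TID carries its whole ancestry, a server holding only `t12` can still address the top-level transaction at commit time.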

Figure 14.8
Transaction T decides whether to commit

T (top level)
  T1: provisional commit at X
    T11: abort at M
    T12: provisional commit at N
  T2: aborted at Y
    T21: provisional commit at N
    T22: provisional commit at P

In Figure 14.8, each subtransaction has either provisionally committed or aborted.
T12 has provisionally committed and T11 has aborted, but the fate of T12 depends on its parent T1 and eventually on the top-level transaction, T.
Although T21 and T22 have both provisionally committed, T2 has aborted, and this means that T21 and T22 must also abort.
Suppose that T decides to commit although T2 has aborted, and also that T1 decides to commit although T11 has aborted.
Recall that:
1. A parent can commit even if a subtransaction aborts
2. If a parent aborts, then its subtransactions must abort

Figure 14.9
Information held by coordinators of nested transactions

Coordinator of  Child          Participant          Provisional    Abort list
transaction     transactions                        commit list
T               T1, T2         yes                  T1, T12        T11, T2
T1              T11, T12       yes                  T1, T12        T11
T2              T21, T22       no (aborted)                        T2
T11                            no (aborted)                        T11
T12, T21                       T12 but not T21      T21, T12
T22                            no (parent aborted)

Information held by coordinators of nested transactions (Figure 14.9)
When a top-level transaction commits, it carries out a 2PC:
- each coordinator has a list of its subtransactions
- at provisional commit, a subtransaction reports its status and the status of its descendents to its parent
- if a subtransaction aborts, it tells its parent
When T2 is aborted it tells T (with no information about its descendents).
T12 and T21 share a coordinator, as they both run at server N.
A subtransaction such as T21 or T22 whose ancestor has aborted is an orphan: it can use getStatus to ask its parent about the outcome, and it should abort if its parent has aborted.

Figure 14.10
canCommit? for hierarchic two-phase commit protocol
canCommit?(trans, subTrans) -> Yes / No
  Call from a coordinator to the coordinator of a child subtransaction to ask whether it can commit a subtransaction subTrans. The first argument, trans, is the transaction identifier of the top-level transaction. Participant replies with its vote, Yes or No.

canCommit? for hierarchic two-phase commit protocol (Figure 14.10)
The 2PC may be performed in a hierarchic or a flat manner.
Hierarchic 2PC: the top-level transaction is the coordinator of the 2PC; T asks canCommit? of T1, and T1 asks canCommit? of T12.
participant list:
- the coordinators of all the subtransactions that have provisionally committed but do not have an aborted ancestor
- e.g. T, T1 and T12 in Figure 14.8
- if they vote Yes, they prepare to commit by saving state in permanent store; the state is marked as belonging to the top-level transaction
The trans argument is used when saving the objects in permanent storage.
The subTrans argument is used to find the subtransaction to vote on; if it is absent, vote No.

Figure 14.11
canCommit? for flat two-phase commit protocol
canCommit?(trans, abortList) -> Yes / No
  Call from coordinator to participant to ask whether it can commit a transaction. Participant replies with its vote, Yes or No.

Compare the advantages and disadvantages of the flat and nested approaches.
canCommit? for flat two-phase commit protocol (Figure 14.11)
Flat 2PC:
- the coordinator of the top-level transaction sends canCommit? messages to the coordinators of all of the subtransactions in the provisional commit list
- in our example, T sends to the coordinators of T1 and T12
- the trans argument is the TID of the top-level transaction
- the abortList argument gives all aborted subtransactions (e.g. server N has T12 provisionally committed and T21 aborted)
On receiving canCommit?, a participant:
- looks in its list of transactions for any that match trans (e.g. T12 and T21 at N)
- prepares any that have provisionally committed and are not in abortList, and votes Yes
- if it can't find any, it votes No
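The participant's handling of canCommit?(trans, abortList) above can be sketched directly, reusing the tuple-TID convention (a sketch assumption) so that "descendent of" and "aborted ancestor" become prefix tests.

```python
def can_commit(trans, abort_list, local_provisional):
    """Flat-2PC participant logic (sketch).

    trans: top-level TID, e.g. ('T',)
    abort_list: TIDs of aborted subtransactions
    local_provisional: TIDs this server has provisionally committed."""
    def has_aborted_ancestor(tid):
        # an aborted TID is an ancestor iff it is a prefix of tid
        return any(tid[:len(a)] == a for a in abort_list)

    to_prepare = [tid for tid in local_provisional
                  if tid[:len(trans)] == trans and not has_aborted_ancestor(tid)]
    for tid in to_prepare:
        pass  # real code saves the subtransaction's objects to permanent storage here
    return "Yes" if to_prepare else "No"

# server N in the example: T12 provisionally committed, T21's parent T2 aborted
trans = ("T",)
abort_list = [("T", 1, 1), ("T", 2)]        # T11 and T2 aborted
local = [("T", 1, 2), ("T", 2, 1)]          # T12 and T21 at server N
vote = can_commit(trans, abort_list, local)
```

Here only T12 is prepared (T21 is excluded by its aborted ancestor T2), so the participant votes Yes.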

Timeout actions in nested 2PC
With nested transactions, delays can occur in the same three places as before:
- when a participant is prepared to commit
- when a participant has finished but has not yet received canCommit?
- when a coordinator is waiting for votes
There is a fourth place:
- provisionally committed subtransactions of aborted subtransactions, e.g. T22, whose parent T2 has aborted
- use getStatus on the parent, whose coordinator should remain active for a while
- if the parent does not reply, then abort

Summary of 2PC
- a distributed transaction involves several different servers
- a nested transaction structure allows additional concurrency and independent committing by the servers in a distributed transaction
- atomicity requires that the servers participating in a distributed transaction either all commit it or all abort it
- atomic commit protocols are designed to achieve this effect, even if servers crash during their execution
- the 2PC protocol allows a server to abort unilaterally
- it includes timeout actions to deal with delays due to servers crashing
- the 2PC protocol can take an unbounded amount of time to complete, but is guaranteed to complete eventually

14.4 Concurrency control in distributed transactions
Each server manages a set of objects and is responsible for ensuring that they remain consistent when accessed by concurrent transactions:
- therefore, each server is responsible for applying concurrency control to its own objects
- the members of a collection of servers of distributed transactions are jointly responsible for ensuring that the transactions are performed in a serially equivalent manner
- therefore, if transaction T is before transaction U in their conflicting access to objects at one of the servers, then they must be in that order at all of the servers whose objects are accessed in a conflicting manner by both T and U

14.4.1 Locking
In a distributed transaction, the locks on an object are held by the server that manages it:
- the local lock manager decides whether to grant a lock or make the requesting transaction wait
- it cannot release any locks until it knows that the transaction has been committed or aborted at all the servers involved in the transaction
- the objects remain locked and are unavailable for other transactions during the atomic commit protocol
- an aborted transaction releases its locks after phase 1 of the protocol

Interleaving of transactions T and U at servers X and Y
in the example on page 579, we have T before U at server X and U before T at server Y
different orderings lead to cyclic dependencies and distributed deadlock; detection and resolution of distributed deadlock are in the next section

T: Write(A) at X    locks A
U: Write(B) at Y    locks B
T: Read(B) at Y     waits for U
U: Read(A) at X     waits for T

14.4.2 Timestamp ordering concurrency control
Single-server transactions:
- the coordinator issues a unique timestamp to each transaction before it starts
- serial equivalence is ensured by committing objects in order of timestamps
Distributed transactions:
- the first coordinator accessed by a transaction issues a globally unique timestamp
- as before, the timestamp is passed with each object access
- the servers are jointly responsible for ensuring serial equivalence; that is, if T accesses an object before U, then T is before U at all objects
- coordinators agree on timestamp ordering:
  - a timestamp consists of a pair (local timestamp, server-id)
  - the agreed ordering of pairs of timestamps is based on a comparison in which the server-id part is less significant; the local timestamps should relate to time

Can the same ordering be achieved at all servers without clock synchronization? Why is it better to have roughly synchronized clocks?
Timestamp ordering concurrency control (continued)
- the same ordering can be achieved at all servers even if their clocks are not synchronized
- for efficiency it is better if local clocks are roughly synchronized; then the ordering of transactions corresponds roughly to the real-time order in which they were started
Timestamp ordering:
- conflicts are resolved as each operation is performed
- if this leads to an abort, the coordinator will be informed, and it will abort the transaction at the participants
- any transaction that reaches the client request to commit should always be able to do so
- a participant will normally vote Yes, unless it has crashed and recovered during the transaction

14.4.3 Optimistic concurrency control
Recall the validation rules: 1. write/read, 2. read/write, 3. write/write. With backward validation, rule 1 is satisfied and rule 2 is checked; with parallel validation, rule 3 must be checked as well.
- each transaction is validated before it is allowed to commit; transaction numbers are assigned at the start of validation
- transactions are serialized according to transaction numbers
- validation takes place in phase 1 of the 2PC protocol
consider the following interleavings of T and U: T before U at X, and U before T at Y

T: Read(A) at X;  Write(A);  Read(B) at Y;  Write(B)
U: Read(B) at Y;  Write(B);  Read(A) at X;  Write(A)

X does T first and Y does U first (backward validation). Suppose T and U start validation at about the same time: without parallel validation, a commitment deadlock results.

Commitment deadlock in optimistic concurrency control
- servers of distributed transactions do parallel validation; therefore rule 3 must be validated as well as rule 2
- the write set of Tv is checked for overlaps with the write sets of earlier transactions
- this prevents commitment deadlock
- it also avoids delaying the 2PC protocol
another problem: independent servers may schedule transactions in different orders
- e.g. T before U at X and U before T at Y
- this must be prevented; some hints as to how are on page 531
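The overlap checks above can be sketched with sets: during parallel validation, transaction Tv is compared against the write sets of overlapping earlier transactions under both rule 2 (read/write) and rule 3 (write/write). The function shape and set representation are assumptions of this sketch.

```python
def validate(tv_read, tv_write, earlier_writes):
    """Parallel validation of Tv (sketch).

    tv_read, tv_write: Tv's read and write sets (sets of object ids)
    earlier_writes: write sets of overlapping transactions validated
    before Tv."""
    for w in earlier_writes:
        if tv_read & w:    # rule 2: Tv read an object an earlier txn wrote
            return False
        if tv_write & w:   # rule 3: write/write overlap, also checked so that
            return False   # validations can proceed in parallel
    return True
```

Checking rule 3 too is what lets several transactions sit in validation at once without the commitment deadlock described above.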

14.5 Distributed deadlocks
Single-server transactions can experience deadlocks:
- prevent, or detect and resolve
- use of timeouts is clumsy; detection is preferable
- detection uses wait-for graphs
Distributed transactions lead to distributed deadlocks:
- in theory, a global wait-for graph can be constructed from the local ones
- a cycle in the global wait-for graph that is not in any local one is a distributed deadlock

Figure 14.12
Interleavings of transactions U, V and W
(objects A and B are managed by X and Y; C and D by Z; the next slide has the global wait-for graph)

U: d.deposit(10)    lock D at Z
   a.deposit(20)    lock A at X
   b.withdraw(30)   wait at Y    (U -> V at Y)
V: b.deposit(10)    lock B at Y
   c.withdraw(20)   wait at Z    (V -> W at Z)
W: c.deposit(30)    lock C at Z
   a.withdraw(20)   wait at X    (W -> U at X)

Figure 14.13
Distributed deadlock
A deadlock cycle has alternate edges showing "waits for" and "held by"; the wait-for edges were added in the order U -> V at Y, V -> W at Z, and W -> U at X.
At Y: U waits for B, held by V. At Z: V waits for C, held by W. At X: W waits for A, held by U.

Deadlock detection: local wait-for graphs
Local wait-for graphs can be built, e.g.:
- server Y: U -> V added when U requests b.withdraw(30)
- server Z: V -> W added when V requests c.withdraw(20)
- server X: W -> U added when W requests a.withdraw(20)
to find a global cycle, communication between the servers is needed
centralized deadlock detection:
- one server takes on the role of global deadlock detector
- the other servers send it their local graphs from time to time
- it detects deadlocks, makes decisions about which transactions to abort, and informs the other servers
- usual problems of a centralized service: poor availability, lack of fault tolerance and no ability to scale
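The global detector's job, merging the local graphs and looking for a cycle, can be sketched with a depth-first search. The edge-list representation and function names are assumptions of this sketch.

```python
def find_cycle(local_graphs):
    """Merge local wait-for graphs (lists of (waiter, holder) edges) and
    return one cycle as a list of transactions, or None."""
    graph = {}
    for g in local_graphs:
        for t, u in g:
            graph.setdefault(t, set()).add(u)

    def dfs(node, path, on_path):
        for nxt in graph.get(node, ()):
            if nxt in on_path:                 # back-edge: a cycle closes here
                return path[path.index(nxt):] + [nxt]
            cycle = dfs(nxt, path + [nxt], on_path | {nxt})
            if cycle:
                return cycle
        return None

    for start in graph:
        cycle = dfs(start, [start], {start})
        if cycle:
            return cycle
    return None

# the three local edges from the previous slide form a global cycle
cycle = find_cycle([[("U", "V")], [("V", "W")], [("W", "U")]])
```

Each local graph is acyclic on its own; only the merged graph reveals the U, V, W cycle, which is exactly why the servers must communicate.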

Figure 14.14
Local and global wait-for graphs
Servers X and Y each hold a local wait-for graph involving T, U and V; the global deadlock detector combines them and finds the cycle T -> U -> V -> T.

Phantom deadlocks (Figure 14.14)
- a phantom deadlock is one that is detected but is not really a deadlock
- it happens when there appears to be a cycle, but one of the transactions has already released a lock, because of time lags in distributing the local graphs
- in the figure, suppose U releases the object at X and then waits for V at Y, and the global detector gets Y's graph before X's: it will detect a cycle T -> U -> V -> T that no longer exists

Edge chasinga distributed approach to deadlock detection
a global graph is not constructed, but each server knows about some of the edges
servers try to find cycles by sending probes which follow the edges of the graph through the distributed system
when should a server send a probe go back to Fig 13.13
edges were added in order UV at Y; VW at Z and WU at X when WU at X was added, U was waiting, but
when VW at Z, W was not waiting
send a probe when an edge T1T2 when T2 is waiting
each coordinator records whether its transactions are active or waiting
the local lock manager tells coordinators if transactions startstop waiting
when a transaction is aborted to break a deadlock, the coordinator tells the participants, locks are removed and edges taken from waitfor graphs

Edgechasing algorithms
Three steps. Initiation:
When a server notes that T starts waiting for U, where U is waiting at another server, it initiates detection by sending a probe containing the edge T→U to the server where U is blocked.
If U is sharing a lock, probes are sent to all the holders of the lock.
Detection:
Detection consists of receiving probes and deciding whether deadlock has occurred and whether to forward the probes.
e.g. when a server receives the probe T→U it checks whether U is waiting, say U→V; if so it forwards T→U→V to the server where V waits
when a server adds a new edge to a probe, it checks whether a cycle has formed
Resolution:
When a cycle is detected, a transaction in the cycle is aborted to break the deadlock.
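The three steps can be illustrated as follows. This is only a sketch: the class and method names are invented, and message passing between servers is simulated by direct method calls. A probe is a list of transactions [T1, ..., Tn] standing for T1→...→Tn.

```python
class Server:
    def __init__(self, name):
        self.name = name
        self.edges = {}     # local wait-for edges: waiter -> holder
        self.remote = {}    # transaction -> Server where it is blocked

    def add_edge(self, waiter, holder, holder_server=None):
        self.edges[waiter] = holder
        if holder_server is not None:
            self.remote[holder] = holder_server

    def initiate(self, waiter):
        """Initiation: waiter starts waiting for a remotely blocked holder."""
        holder = self.edges[waiter]
        return self.remote[holder].receive([waiter, holder])

    def receive(self, probe):
        """Detection: extend the probe if its last transaction waits here."""
        last = probe[-1]
        if last not in self.edges:
            return None                        # `last` is active: drop probe
        nxt = self.edges[last]
        if nxt in probe:                       # cycle: deadlock detected
            return probe + [nxt]               # (resolution would abort one)
        target = self.remote.get(nxt)
        if target is None:
            return None
        return target.receive(probe + [nxt])   # forward the extended probe

# The slides' scenario: W→U at X, U→V at Y, V→W at Z
x, y, z = Server("X"), Server("Y"), Server("Z")
x.add_edge("W", "U", holder_server=y)
y.add_edge("U", "V", holder_server=z)
z.add_edge("V", "W", holder_server=x)
print(x.initiate("W"))  # → ['W', 'U', 'V', 'W']
```

Resolution is not modelled here; a real system would choose a victim from the returned cycle and abort it.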

Figure 14.15
Probes transmitted to detect deadlock
[Figure: servers X, Y and Z hold objects A, B and C; the probe W→U is initiated at X, Y forwards W→U→V, and Z forwards W→U→V→W, at which point the deadlock is detected]

Figure 13.15
Probes transmitted to detect deadlock
example of edge chasing: it starts with X sending W→U, then Y sends W→U→V, then Z sends W→U→V→W and the deadlock is detected
[Figure as above: the probe travels X → Y → Z, each server appending the transaction its local edge points to]

Edge chasing conclusion
a probe to detect a cycle with N transactions will require 2(N − 1) messages; studies of databases show that the average deadlock involves 2 transactions
the above algorithm detects deadlock provided that:
waiting transactions do not abort
no process crashes, no lost messages
to be realistic it would need to allow for the above failures
refinements of the algorithm (p 5867) avoid more than one transaction causing detection to start, and then more than one transaction being aborted
not time to study these now

Figure 14.16
Two probes initiated
(a) initial situation
(b) detection initiated at object requested by T
(c) detection initiated at object requested by W
[Figure: two probes traverse the same deadlock cycle involving T, U, V and W; one probe is initiated at the object requested by T, the other at the object requested by W, so the same cycle can be detected twice]

Summary of concurrency control for distributed transactions
each server is responsible for the serializability of transactions that access its own objects.
additional protocols are required to ensure that transactions are serializable globally.
timestamp ordering requires a globally agreed timestamp ordering
optimistic concurrency control requires global validation or a means of forcing a global ordering on transactions
two-phase locking can lead to distributed deadlocks
distributed deadlock detection looks for cycles in the global wait-for graph
edge chasing is a non-centralized approach to the detection of distributed deadlocks

Atomicity properties of transactions
Definitions
durability and failure atomicity
durability requires that objects are saved in permanent storage and will be available indefinitely
failure atomicity requires that effects of transactions are atomic even when the server crashes

13.6 More on recovery
What is meant by failure atomicity?
Recovery is concerned with:
ensuring that a server's objects are durable and
that the service provides failure atomicity.
for simplicity we assume that when a server is running, all of its objects are in volatile memory
and all of its committed objects are in a recovery file in permanent storage.
recovery consists of restoring the server with the latest committed versions of all of its objects from its recovery file

Recovery manager
The task of the Recovery Manager (RM) is:
to save objects in permanent storage (in a recovery file) for committed transactions;
to restore the server's objects after a crash;
to reorganize the recovery file to improve the performance of recovery;
to reclaim storage space in the recovery file.
media failures
i.e. disk failures affecting the recovery file
need another copy of the recovery file on an independent disk, e.g. implemented as stable storage or using mirrored disks
we deal with recovery of 2PC separately at the end; we study logging (13.6.1) but not shadow versions (13.6.2)

Recoveryintentions lists
Each server records an intentions list for each of its currently active transactions
an intentions list contains the object references and the values of all the objects that are altered by a transaction
when a transaction commits, the intentions list is used to identify the objects affected:
the committed version of each object is replaced by the tentative one
the new value is written to the server's recovery file
in 2PC, when a participant says it is ready to commit, its RM must record its intentions list and its objects in the recovery file
it will then be able to commit later on, even if it crashes
when a client has been told a transaction has committed, the recovery files of all participating servers must show that the transaction is committed, even if they crash between prepare to commit and commit
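A minimal sketch of the intentions-list idea (the variable and function names are assumptions, not the book's API): writes go to tentative versions, and commit replaces each committed version with the tentative one.

```python
committed = {"A": 100, "B": 200, "C": 300}   # committed versions
tentative = {}                               # (tid, obj) -> tentative value

def write(tid, obj, value):
    """A transaction's writes go to tentative versions, not committed ones."""
    tentative[(tid, obj)] = value

def intentions_list(tid):
    """The objects altered by tid, together with their tentative values."""
    return [(obj, val) for (t, obj), val in tentative.items() if t == tid]

def commit(tid):
    """At commit, each committed version is replaced by the tentative one."""
    for obj, val in intentions_list(tid):
        committed[obj] = val
        del tentative[(tid, obj)]

write("T", "A", 80)
write("T", "B", 220)
commit("T")
print(committed)  # → {'A': 80, 'B': 220, 'C': 300}
```

In the real scheme the intentions list and the new values are also written durably to the recovery file before commit, which the following slides cover.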

Figure 14.18
Types of entry in a recovery file
Type of entry: description of contents of entry
Object: a value of an object.
Transaction status: transaction identifier, transaction status (prepared, committed or aborted) and other status values used for the two-phase commit protocol.
Intentions list: transaction identifier and a sequence of intentions, each of which consists of ⟨identifier of object, position in recovery file of value of object⟩.

Types of entry in a recovery file (Figure 14.18, annotated)
Object: a value of an object (the object state flattened to bytes).
Transaction status: transaction identifier, transaction status (prepared, committed or aborted) and other status values used for the two-phase commit protocol; the first status entry for a transaction says prepared.
Intentions list: transaction identifier and a sequence of intentions, each of which consists of ⟨identifier of object, position in recovery file of value of object⟩.
For distributed transactions we need information relating to 2PC as well as object values, that is:
transaction status (committed, prepared or aborted)
intentions list
Note that the objects need not be next to one another in the recovery file. Why is that a good idea?


Logging: a technique for the recovery file
the recovery file represents a log of the history of all the transactions at a server
it includes objects, intentions lists and transaction status
in the order that transactions prepared, committed and aborted
i.e. a recent snapshot plus a history of transactions after the snapshot
during normal operation the RM is called whenever a transaction prepares, commits or aborts
prepare: RM appends to the recovery file all the objects in the intentions list, followed by the status (prepared) and the intentions list
commit/abort: RM appends to the recovery file the corresponding status
we assume the append operation is atomic; if the server fails, only the last write will be incomplete
to make efficient use of the disk, writes are buffered; note that sequential writes are more efficient than writes to random locations
the committed status is forced to the log, in case the server crashes
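The two RM calls can be sketched like this (names are illustrative; a Python list stands in for the append-only recovery file, and list indices stand in for positions):

```python
log = []   # stand-in for the append-only recovery file

def rm_prepare(tid, tentative):
    """prepare: append the tentative objects, then a prepared status entry
    whose intentions list records the position of each value."""
    intentions = []
    for obj, val in tentative.items():
        log.append(("object", obj, val))
        intentions.append((obj, len(log) - 1))   # position of the value
    log.append(("prepared", tid, intentions))

def rm_commit(tid):
    """commit: append only a status entry (forced to disk in a real RM)."""
    log.append(("committed", tid))

rm_prepare("T", {"A": 80, "B": 220})
rm_commit("T")
print(log[-1])  # → ('committed', 'T')
```

Because every call only appends, all disk writes are sequential, which is the efficiency argument made above.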

Figure 14.19
Log for banking service
P0: Object A = 100; Object B = 200; Object C = 300 (checkpoint)
P1: Object A = 80
P2: Object B = 220
P3: Trans T: prepared; intentions list ⟨A, P1⟩ ⟨B, P2⟩; previous status P0
P4: Trans T: committed; previous status P3
P5: Object C = 278
P6: Object B = 242
P7: Trans U: prepared; intentions list ⟨C, P5⟩ ⟨B, P6⟩; previous status P4 (end of log)

Log for banking service
(the committed status entry is forced to the log)
[Figure 14.19 repeated: the log as shown above]
logging mechanism for Fig 12.7 (there would really be other objects in the log file)
initial balances of A, B and C: 100, 200, 300
T sets A and B to 80 and 220; U sets B and C to 242 and 278
the prepared status entries carry the intentions lists
entries to the left of the line represent a snapshot (checkpoint) of the values of A, B and C before T started; T has committed, but U is only prepared
the RM gives each object a unique identifier (A, B, C in the diagram)
each status entry contains a pointer to the previous status entry, and ultimately to the checkpoint
so transactions can be followed backwards through the file


Recovery of objectswith logging
When a server is replaced after a crash:
it first sets default initial values for its objects
and then hands over to its recovery manager.
The RM restores the server's objects to include:
all the effects of all the committed transactions, in the correct order, and
none of the effects of incomplete or aborted transactions
it reads the recovery file backwards (following the pointers)
it restores the values of objects with values from committed transactions
continuing until all of the objects have been restored
if it started at the beginning, there would generally be more work to do
to recover the effects of a transaction, use the intentions list to find the values of the objects
e.g. look at the previous slide, assuming the server crashed before T committed
the recovery procedure must be idempotent
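The backward scan can be sketched as follows. This is a simplified assumption-laden model (list indices as positions, plain tuples as entries): values written by uncommitted transactions are skipped, and each object keeps only the most recent committed value found.

```python
def recover(log):
    """Scan the log backwards; keep each object's most recent value that
    was written by a committed transaction (or by the checkpoint)."""
    committed_tids = set()
    skip = set()       # positions of tentative values of uncommitted transactions
    restored = {}
    for pos in range(len(log) - 1, -1, -1):
        entry = log[pos]
        if entry[0] == "committed":
            committed_tids.add(entry[1])
        elif entry[0] == "prepared":
            tid, intentions = entry[1], entry[2]
            if tid in committed_tids:
                for obj, vpos in intentions:
                    if obj not in restored:          # keep newest value only
                        restored[obj] = log[vpos][2]
            else:
                skip.update(vpos for _, vpos in intentions)
        else:   # ("object", obj, value): a checkpoint or tentative value
            obj = entry[1]
            if obj not in restored and pos not in skip:
                restored[obj] = entry[2]
    return restored

# Roughly the banking example: checkpoint, T (committed), U (only prepared).
# Positions here are plain list indices, so they differ from the figure.
log = [("object", "A", 100), ("object", "B", 200), ("object", "C", 300),
       ("object", "A", 80), ("object", "B", 220),
       ("prepared", "T", [("A", 3), ("B", 4)]), ("committed", "T"),
       ("object", "C", 278), ("object", "B", 242),
       ("prepared", "U", [("C", 7), ("B", 8)])]
print(recover(log))  # → {'A': 80, 'B': 220, 'C': 300}
```

U's tentative values (C = 278, B = 242) are ignored because no committed entry for U is found; the function is a pure function of the log, so running it twice gives the same result, i.e. it is idempotent.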

Loggingreorganising the recovery file
RM is responsible for reorganizing its recovery file:
to make the process of recovery faster, and
to reduce its use of space
checkpointing
the process of writing the following to a new recovery file:
the current committed values of a server's objects,
transaction status entries and intentions lists of transactions that have not yet been fully resolved,
including information related to the two-phase commit protocol (see later)
checkpointing makes recovery faster and saves disk space
done after recovery and from time to time
can use the old recovery file until the new one is ready: add a mark to the old file,
do as above, and then copy items appended after the mark to the new recovery file
replace the old recovery file by the new recovery file
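A toy sketch of the checkpoint step (names and log format are assumptions; in a real implementation the intentions-list positions would have to be remapped to positions in the new file, which is omitted here):

```python
def checkpoint(committed_values, old_log):
    """Write the committed values, plus the entries of transactions that
    are prepared but not yet resolved, to a new recovery file."""
    new_log = [("object", obj, val) for obj, val in committed_values.items()]
    resolved = {e[1] for e in old_log if e[0] in ("committed", "aborted")}
    for entry in old_log:
        if entry[0] == "prepared" and entry[1] not in resolved:
            new_log.append(entry)   # unresolved transactions survive
    return new_log

# T has committed (its entries can be dropped); U is still unresolved.
old = [("prepared", "T", []), ("committed", "T"), ("prepared", "U", [])]
print(checkpoint({"A": 80}, old))
```

The shrinkage is the point: everything a committed transaction wrote is now summarised by the single committed value per object.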

Recovery of the twophase commit protocol
The above recovery scheme is extended to deal with transactions performing the 2PC protocol when a server fails
it uses new transaction status values: done, uncertain (see Fig 14.6)
the coordinator uses committed when the result is Yes, and done when 2PC is complete; if a transaction is done, its information may be removed when reorganising the recovery file
the participant uses uncertain when it has voted Yes, and committed when it is told the result; uncertain entries must not be removed from the recovery file
It also requires two additional types of entry:
Coordinator: transaction identifier, list of participants (added by RM when the coordinator prepares)
Participant: transaction identifier, coordinator (added by RM when the participant votes Yes)

Figure 14.21
Log with entries relating to twophase commit protocol
⟨Trans: T, prepared, intentions list⟩
⟨Coord'r: T, part'pant list: . . .⟩
⟨Trans: T, committed⟩
⟨Trans: U, prepared, intentions list⟩
⟨Part'pant: U, Coord'r: . . .⟩
⟨Trans: U, uncertain⟩
⟨Trans: U, committed⟩

Log with entries relating to two-phase commit protocol (Figure 14.21, repeated)
the server shown acts as coordinator for T and as participant for U; by the time of the last entry it has recorded committed for T, and prepared and then uncertain for U
entries in the log for T (where the server is coordinator): the prepared entry comes first, followed by the coordinator entry, then committed (done is not shown)
and for U (where the server is participant): the prepared entry comes first, followed by the participant entry, then uncertain and finally committed
these entries will be interspersed with values of objects
recovery must deal with 2PC entries as well as restoring objects:
where the server was coordinator, find the coordinator entry and the status entries
where the server was participant, find the participant entry and the status entries

Recovery of the twophase commit protocol
the RM action for each transaction depends on whether the server was coordinator or participant, and on the status
the most recent entry in the recovery file determines the status of the transaction at the time of failure
Figure 14.22 (Role / Status / Action of recovery manager):
Coordinator, prepared: no decision had been reached before the server failed. It sends abortTransaction to all the servers in the participant list and adds the transaction status aborted to its recovery file. The same action is taken for state aborted. If there is no participant list, the participants will eventually time out and abort the transaction.
Coordinator, committed: a decision to commit had been reached before the server failed. It sends a doCommit to all the participants in its participant list (in case it had not done so before) and resumes the two-phase protocol at step 4 (Fig 13.5).
Coordinator, done: no action is required.
Participant, committed: the participant sends a haveCommitted message to the coordinator (in case this was not done before it failed). This will allow the coordinator to discard information about this transaction at the next checkpoint.
Participant, uncertain: the participant failed before it knew the outcome of the transaction. It cannot determine the status of the transaction until the coordinator informs it of the decision. It will send a getDecision to the coordinator; when it receives the reply it will commit or abort accordingly.
Participant, prepared: the participant has not yet voted and can abort the transaction.
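Because the recovery action is determined entirely by the (role, status) pair, it can be summarised as a dispatch table. A sketch (the strings merely paraphrase the table; none of this is a real API):

```python
def recovery_action(role, status):
    """Map a server's role and last-logged 2PC status to its recovery action."""
    actions = {
        ("coordinator", "prepared"):  "send abortTransaction to participants; log aborted",
        ("coordinator", "aborted"):   "send abortTransaction to participants; log aborted",
        ("coordinator", "committed"): "resend doCommit; resume 2PC at step 4",
        ("coordinator", "done"):      "no action required",
        ("participant", "committed"): "send haveCommitted to the coordinator",
        ("participant", "uncertain"): "send getDecision; then commit or abort accordingly",
        ("participant", "prepared"):  "abort the transaction (not yet voted)",
    }
    return actions[(role, status)]

print(recovery_action("participant", "uncertain"))
```

The table makes the asymmetry visible: only the participant's uncertain state requires contacting another server before a decision can be made.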


Summary of transaction recovery
Transactionbased applications have strong requirements for the long life and integrity of the information stored.
Transactions are made durable by performing checkpoints and logging in a recovery file, which is used for recovery when a server is replaced after a crash.
Users of a transaction service would experience some delay during recovery.
It is assumed that the servers of distributed transactions exhibit crash failures and run in an asynchronous system,
but they can reach consensus about the outcome of transactions because crashed servers are replaced with new processes that can acquire all the relevant information from permanent storage or from other servers

