
Prof Philippas Tsigas

Distributed Computing and Systems Research Group

DISTRIBUTED SYSTEMS II
FAULT-TOLERANT BROADCAST (CNT.)
3rd lecture

Total, FIFO and causal ordering of multicast messages

Notice the consistent ordering of totally ordered messages T1 and T2:
they are opposite to real time, and the order can be arbitrary - it need
not be FIFO or causal.
Note the FIFO-related messages F1 and F2,
and the causally related messages C1 and C3.

Figure 11.12 (messages T1, T2; F1, F2, F3; C1, C2, C3 delivered over time at processes P1, P2 and P3)
2

Atomic Broadcast
Requires that all correct processes deliver all
messages in the same order.
Implies that all correct processes see the same view of
the world.

Atomic Broadcast
Theorem: Atomic broadcast is impossible in
asynchronous systems.
Proof:
Equivalent to consensus problem.

Review of Consensus
3

What is Consensus?

N processes
Each process p has

input variable xp (v) : initially either 0 or 1


output variable yp (d) : initially b (b=undecided)

v single value for process p; d decision value

A process is non-faulty in a run provided that it takes infinitely many


steps, and it is faulty otherwise

Consensus problem: design a protocol so that either

1. all non-faulty processes set their output variables to 0, or
2. all non-faulty processes set their output variables to 1,
and (to rule out trivial solutions)
3. there is at least one initial state that leads to each of outcomes 1
and 2 above

Consensus (II)
All correct processes propose a value, and must agree on a value related
to the proposed values!
Definition: The Consensus problem is specified as follows:
Termination: Every correct process eventually decides some value.
Validity: If all processes that propose a value propose v, then all
correct processes eventually decide v.
Agreement: If a correct process decides v, then all correct processes
eventually decide v.
Integrity: Every process decides at most once, and if it decides on v
(not NU), then some process must have proposed it. (NU is a
special value which stands for "no unanimity".)

FLP
Theorem: Consensus is impossible in any asynchronous system
if one process can halt. [Fischer, Lynch, Paterson 1985]
Impossibility of distributed consensus with one faulty process
(the original paper)
http://dl.acm.org/citation.cfm?id=214121
A Brief Tour of FLP Impossibility
http://the-paper-trail.org/blog/a-brief-tour-of-flp-impossibility/
Possible Homework Assignment Area
8

Atomic Broadcast
Theorem 1: Any atomic broadcast algorithm solves
consensus.
Everybody does an Atomic Broadcast
Decides first value delivered
Theorem 2: Atomic broadcast is impossible in any
asynchronous system if one process can halt.
Proof: By contradiction using FLP and Theorem 1
9
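
The reduction behind Theorem 1 is small enough to sketch. A minimal Python sketch (an illustration added here, not the lecture's code), assuming an atomic-broadcast service with an ato_broadcast operation and an on_ato_deliver callback that fires in the same order at every correct process:

# Sketch: consensus on top of atomic broadcast (Theorem 1).
# `abcast` is an assumed atomic-broadcast service: because every correct
# process delivers the same messages in the same order, all correct
# processes see the same first delivery and decide the same value.

class ConsensusViaAtomicBroadcast:
    def __init__(self, abcast):
        self.abcast = abcast
        self.decision = None                   # undecided ("b") until first delivery

    def propose(self, value):
        self.abcast.ato_broadcast(value)       # everybody atomically broadcasts its input

    def on_ato_deliver(self, value):
        if self.decision is None:
            self.decision = value              # decide the first value delivered

Because all correct processes deliver the same messages in the same order, they all see the same first delivery and therefore decide the same value.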

Total ordering using a sequencer


A process wishing to TO-multicast m to g attaches a unique id,
id(m), and sends it to the sequencer and the members.
The sequencer keeps a sequence number sg for group g;
when it B-delivers the message it multicasts an order
message (carrying the message id and sg) to the members of g and increments sg.
Other processes: B-deliver <m, i> and
put <m, i> in the hold-back queue;
B-deliver the order message, get g, S
and i from the order message,
wait till <m, i> is in the queue and S = rg,
then TO-deliver m and set rg to S + 1.

Figure 11.14
10

Atomic Broadcast
Consensus is solvable in:
Synchronous systems (we will discuss such an
algorithm that works in f+1 rounds) [We will come
back to that!!!]
Certain semi-synchronous systems
Consensus is also solvable in
Asynchronous systems with randomization
Asynchronous systems with failure-detectors [We will
come back to that!!!]
11

SLIDES FROM THE BOOK TO HAVE A LOOK AT


Please also check the slides from your book.
I have appended them here.

12

Teaching material based on Distributed Systems: Concepts and Design, Edition 3, Addison-Wesley 2001.

Distributed Systems Course

Coordination and Agreement

Copyright George Coulouris, Jean Dollimore, Tim Kindberg 2001
email: [email protected]
This material is made available for private study and for direct use by
individual teachers. It may not be included in any product or employed in any
service without the written permission of the authors.

Viewing: These slides must be viewed in slideshow mode.

11.4

Multicast communication

this chapter covers other types of coordination and


agreement such as mutual exclusion, elections and
consensus. We will study only multicast.
But we will study the two-phase commit protocol for
transactions in Chapter 12, which is an example of
consensus
We also omit the discussion of failure detectors
which is relevant to replication

Revision of IP multicast (section 4.5.1 page 154)

How can you restrict multicast to a local area network?
Give two reasons for restricting the scope of a multicast message.

IP multicast - an implementation of group communication
built on top of IP (note IP packets are addressed to computers)
allows the sender to transmit a single IP packet to a set of computers that form a
multicast group (a class D internet address with first 4 bits 1110)
Dynamic membership of groups. Can send to a group with or without joining it
To multicast, send a UDP datagram with a multicast address
To join, make a socket join a group (s.joinGroup(group) - Fig 4.17) enabling it to
receive messages to the group

Multicast routers
Local messages use local multicast capability. Routers make it efficient by
choosing other routers on the way.

Failure model
Omission failures - some but not all members may receive a message.
e.g. a recipient may drop the message, or a multicast router may fail

IP packets may not arrive in sender order, so group members can receive
messages in different orders
14
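
For concreteness, the same join-and-send idea in Python's socket API rather than the Java joinGroup call mentioned above (a minimal sketch; the group address 224.1.1.1 and port 5007 are arbitrary examples):

import socket
import struct

MCAST_GRP, MCAST_PORT = "224.1.1.1", 5007       # example class D address and port

# Receiver: bind to the port and make the socket join the multicast group.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", MCAST_PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Sender: an ordinary UDP socket; the datagram is addressed to the group.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # TTL 1: stay on the local network
tx.sendto(b"hello group", (MCAST_GRP, MCAST_PORT))

print(rx.recvfrom(1024))                        # blocks until a group message arrives

Setting IP_MULTICAST_TTL to a small value (here 1) is one way to restrict the scope of a multicast message to the local network.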

Introduction - multicast

What is meant by [the term] broadcast?
Many projects - Amoeba, Transis, Isis, Horus (refs p436)

Multicast communication requires coordination and
agreement. The aim is for members of a group to
receive copies of messages sent to the group
Many different delivery guarantees are possible
e.g. agree on the set of messages received or on delivery ordering

A process can multicast by the use of a single
operation instead of a send to each member
For example in IP multicast aSocket.send(aMessage)
The single operation allows for:
efficiency, i.e. send once on each link, using hardware multicast when
available, e.g. multicast from a computer in London to two in Beijing
delivery guarantees, e.g. can't make a guarantee if multicast is
implemented as multiple sends and the sender fails. Can also do ordering
15

System model
The system consists of a collection of processes which can
communicate reliably over 1-1 channels
Processes fail only by crashing (no arbitrary failures)
Processes are members of groups - which are the
destinations of multicast messages
In general process p can belong to more than one group
Operations
multicast(g, m) sends message m to all members of process group g
deliver (m) is called to get a multicast message delivered. It is different from
receive as it may be delayed to allow for ordering or reliability.

Multicast message m carries the id of the sending process


sender(m) and the id of the destination group group(m)
We assume there is no falsification of the origin and
destination of messages
16

Does IP multicast support open and closed groups?

Open and closed groups

Closed groups
only members can send to the group; a member delivers to itself
they are useful for coordination of groups of cooperating servers
Open groups
non-members may also send to the group
they are useful for notification of events to groups of interested processes

Figure 11.9 (a closed group and an open group)
17

Reliability of one-to-one communication (Ch.2 page 57)

How do we achieve validity? How do we achieve integrity?

The term reliable 1-1 communication is defined in
terms of validity and integrity as follows:
validity:
any message in the outgoing message buffer is eventually delivered to
the incoming message buffer;
integrity:
the message received is identical to one sent, and no messages are
delivered twice.
validity - by use of acknowledgements and retries
integrity - by use of checksums, reject duplicates (e.g. due to retries).
If allowing for malicious users, use security techniques
18

11.4.1 Basic multicast

What are ack-implosions?

A correct process will eventually deliver the message
provided the multicaster does not crash
note that IP multicast does not give this guarantee

The primitives are called B-multicast and B-deliver

A straightforward but ineffective method of implementation:
use a reliable 1-1 send (i.e. with integrity and validity as above)
To B-multicast(g, m): for each process p ∈ g, send(p, m);
On receive(m) at p: B-deliver(m) at p

Problem
if the number of processes is large, the protocol will suffer from ack-implosion

A practical implementation of Basic Multicast may be
achieved over IP multicast (on next slide, but not shown)
19
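
A minimal sketch of this straightforward implementation (illustration only; send_1to1 stands for the assumed reliable 1-1 send):

# B-multicast over reliable 1-1 sends: one send per group member.
# send_1to1(p, m) is the assumed reliable one-to-one primitive
# (with the validity and integrity properties above).

def b_multicast(group, m, send_1to1):
    for p in group:              # the group includes the sender itself
        send_1to1(p, m)

def on_receive(m, b_deliver):
    b_deliver(m)                 # B-deliver: deliver whatever is received

With per-recipient acknowledgements on the 1-1 sends, the sender receives one ack per group member for every message, which is the ack-implosion problem referred to above.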

11.4.2 Reliable multicast


The protocol is correct even if the multicaster crashes
it satisfies criteria for validity, integrity and agreement
it provides operations R-multicast and R-deliver
Integrity - a correct process p delivers m at most once.
Also p ∈ group(m) and m was supplied to a multicast
operation by sender(m)
Validity - if a correct process multicasts m, it will eventually
deliver m
Agreement - if a correct process delivers m then all correct
processes in group(m) will eventually deliver m

integrity as for 1-1 communication


validity - simplify by choosing sender as the one process
agreement - all or nothing - atomicity, even if multicaster crashes
21

Reliable multicast algorithm over basic multicast

primitives R-multicast and R-deliver
processes can belong to several closed groups
to R-multicast a message, a process B-multicasts it to the
processes in the group, including itself
when a message is B-delivered, the recipient B-multicasts
it to the group, then R-delivers it. Duplicates are detected.

Figure 11.10

Validity - because a correct process will B-deliver to itself
Integrity - because the reliable 1-1 channels used for B-multicast guarantee integrity
Agreement - every correct process B-multicasts the message to the others. If
p does not R-deliver then this is because it didn't B-deliver - because no
others did either.

What can you say about the performance of this algorithm?
Is this algorithm correct in an asynchronous system?

Reliable multicast can be implemented efficiently over IP multicast
by holding back messages until every member can receive them. We skip that.
22
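
A minimal sketch of this algorithm (an illustration, not the book's Figure 11.10 code); messages are assumed to carry "id" and "sender" fields, and b_multicast / r_deliver are the callbacks of the layers below and above:

# Sketch of R-multicast over B-multicast: every process re-multicasts each
# message the first time it B-delivers it, so if any correct process delivers
# a message, all correct group members eventually do (agreement).

class ReliableMulticast:
    def __init__(self, me, b_multicast, r_deliver):
        self.me = me
        self.b_multicast = b_multicast
        self.r_deliver = r_deliver
        self.received = set()                   # ids of messages already seen

    def r_multicast(self, group, m):
        self.b_multicast(group, m)              # the group includes the sender itself

    def on_b_deliver(self, group, m):
        if m["id"] in self.received:
            return                              # duplicate: detected and dropped
        self.received.add(m["id"])
        if m["sender"] != self.me:
            self.b_multicast(group, m)          # re-multicast before delivering
        self.r_deliver(m)

Note that every message is B-multicast by every group member, so each process receives each message about |g| times; this is the performance question raised above.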

Reliable multicast over IP multicast (page 440)

This protocol assumes groups are closed. It uses:
piggybacked acknowledgement messages
negative acknowledgements when messages are missed
(the piggybacked values in a message allow recipients to learn about
messages they have not yet received)

Process p maintains:
Spg - a message sequence number for each group g it belongs to, and
Rqg - the sequence number of the latest message received from process q to g

For process p to R-multicast message m to group g
piggyback Spg and +ve acks for messages received, in the form <q, Rqg>
IP-multicast the message to g, increment Spg by 1

On receipt of a message to g with sequence number S from p, a process
if S = Rpg + 1, R-delivers the message and increments Rpg by 1
if S <= Rpg, discards the message
if S > Rpg + 1, or if R > Rqg for an enclosed ack <q, R>,
then it has missed messages and requests them with negative acknowledgements,
and puts the new message in the hold-back queue for later delivery
23
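
A sketch of the receive-side rule only (illustration; r_deliver and send_nack are assumed helpers, and messages are dicts with "sender", "seq" and "acks" fields):

# R[q] is the sequence number of the latest message R-delivered from process q
# to this group; holdback collects messages that arrived out of order.

def on_ip_deliver(m, R, holdback, r_deliver, send_nack):
    q, S = m["sender"], m["seq"]
    expected = R.get(q, 0) + 1
    if S == expected:
        r_deliver(m)                       # next in sequence: deliver it
        R[q] = S                           # (a fuller version would now re-check
                                           #  holdback for deliverable successors)
    elif S < expected:
        pass                               # duplicate: discard
    else:
        holdback.append(m)                 # gap detected: hold the message back
        send_nack(q, expected)             # and request the missing ones from q
    for r, Rr in m["acks"]:                # piggybacked acks <r, Rr>
        if Rr > R.get(r, 0):
            send_nack(r, R.get(r, 0) + 1)  # we have also missed messages from r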

The hold-back queue for arriving multicast messages

The hold-back queue is not necessary for reliability, as in the implementation using
IP multicast, but it simplifies the protocol, allowing sequence numbers to represent
sets of messages. Hold-back queues are also used for ordering protocols.

Figure 11.11 (incoming messages enter the hold-back queue via message processing,
and move to the delivery queue for delivery when the delivery guarantees are met)
24

Reliability properties of reliable multicast over IP


Integrity - duplicate messages detected and rejected.
IP multicast uses checksums to reject corrupt messages
Validity - due to IP multicast in which sender delivers to itself
Agreement - processes can detect missing messages. They
must keep copies of messages they have delivered so that
they can re-transmit them to others.
discarding of copies of messages that are no longer needed :
when piggybacked acknowledgements arrive, note which processes have
received messages. When all processes in g have the message, discard it.
problem of a process that stops sending - use heartbeat messages.

This protocol has been implemented in a practical way in


Psynch and Trans (refs. on p442)
25

11.4.3 Ordered multicast

The basic multicast algorithm delivers messages to processes in an
arbitrary order. A variety of orderings may be implemented:
FIFO ordering
If a correct process issues multicast(g, m) and then multicast(g, m'), then
every correct process that delivers m' will deliver m before m'.

Causal ordering
If multicast(g, m) → multicast(g, m'), where → is the happened-before relation
between messages in group g, then any correct process that delivers m' will
deliver m before m'.

Total ordering
If a correct process delivers message m before it delivers m', then any other
correct process that delivers m' will deliver m before m'.

Ordering is expensive in delivery latency and bandwidth consumption

26

Total, FIFO and causal ordering of multicast messages

Notice the consistent ordering of totally ordered messages T1 and T2:
they are opposite to real time, and the order can be arbitrary - it need
not be FIFO or causal.
Note the FIFO-related messages F1 and F2,
and the causally related messages C1 and C3.

Ordered multicast delivery is expensive in bandwidth and latency.
Therefore the less expensive orderings (e.g. FIFO or causal) are chosen
for applications for which they are suitable.

These definitions do not imply reliability, but we can define
atomic multicast - reliable and totally ordered.

Figure 11.12 (messages T1, T2; F1, F2, F3; C1, C2, C3 delivered over time at processes P1, P2 and P3)
27

Display from a bulletin board program

Users run bulletin board applications which multicast messages

One multicast group per topic (e.g. os.interesting)
Require reliable multicast - so that all members receive messages
Ordering:
total (makes the numbers the same at all sites)
FIFO (gives sender order)
causal (makes replies come after the original message)

Bulletin board: os.interesting
Item   From          Subject
23     A.Hanlon      Mach
24     G.Joseph      Microkernels
25     A.Hanlon      Re: Microkernels
26     T.L'Heureux   RPC performance
27     M.Walker      Re: Mach
end

Figure 11.13
28

Implementation of FIFO ordering over basic multicast


We discuss FIFO ordered multicast with operations
FO-multicast and FO-deliver for non-overlapping groups. It can
be implemented on top of any basic multicast
Each process p holds:
Spg a count of messages sent by p to g and
Rqg the sequence number of the latest message to g that p delivered from q

For p to FO-multicast a message to g, it piggybacks Spg on the


message, B-multicasts it and increments Spg by 1
On receipt of a message from q with sequence number S, p
checks whether S = Rqg + 1. If so, it FO-delivers it.
if S > Rqg + 1 then p places the message in the hold-back queue until
the intervening messages have been delivered. (Note that B-multicast does
eventually deliver messages unless the sender crashes.)
29
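
A minimal per-process sketch of FO-multicast / FO-deliver (illustration only; b_multicast and fo_deliver are the assumed lower and upper layer callbacks):

# Sketch of FIFO-ordered multicast for one process.

from collections import defaultdict

class FifoMulticast:
    def __init__(self, me, b_multicast, fo_deliver):
        self.me = me
        self.b_multicast = b_multicast
        self.fo_deliver = fo_deliver
        self.S = defaultdict(int)                              # S[g]: messages I sent to g
        self.R = defaultdict(lambda: defaultdict(lambda: -1))  # R[g][q]: latest delivered seq
        self.holdback = defaultdict(list)

    def fo_multicast(self, g, payload):
        m = {"sender": self.me, "group": g, "seq": self.S[g], "payload": payload}
        self.b_multicast(g, m)          # piggyback S[g] on the message ...
        self.S[g] += 1                  # ... then increment it

    def on_b_deliver(self, m):
        g = m["group"]
        self.holdback[g].append(m)
        delivered = True
        while delivered:                # FO-deliver every held message whose
            delivered = False           # predecessor (same sender) is delivered
            for msg in list(self.holdback[g]):
                q, S = msg["sender"], msg["seq"]
                if S == self.R[g][q] + 1:
                    self.fo_deliver(msg)
                    self.R[g][q] = S
                    self.holdback[g].remove(msg)
                    delivered = True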

Implementation of totally ordered multicast

The general approach is to attach totally ordered identifiers


to multicast messages

each receiving process makes ordering decisions based on the identifiers


similar to the FIFO algorithm, but processes keep group specific sequence
numbers
operations TO-multicast and TO-deliver

we present two approaches to implementing totally ordered
multicast over basic multicast
1. using a sequencer (only for non-overlapping groups)
2. the processes in a group collectively agree on a sequence number for each
message

30

Total ordering using a sequencer


A process wishing to TO-multicast m to g attaches a unique id,
id(m), and sends it to the sequencer and the members.
The sequencer keeps a sequence number sg for group g;
when it B-delivers the message it multicasts an order
message (carrying the message id and sg) to the members of g and increments sg.
Other processes: B-deliver <m, i> and
put <m, i> in the hold-back queue;
B-deliver the order message, get g, S
and i from the order message,
wait till <m, i> is in the queue and S = rg,
then TO-deliver m and set rg to S + 1.

Figure 11.14
31
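
A minimal sketch of the sequencer scheme (illustration only; the message formats and callback names are assumptions, not the book's code):

# Total ordering via a sequencer: the sequencer assigns consecutive sequence
# numbers; other members hold messages back until their number is next.

class SequencerTotalOrder:
    def __init__(self, me, group, b_multicast, to_deliver, is_sequencer):
        self.me, self.group = me, group
        self.b_multicast = b_multicast
        self.to_deliver = to_deliver
        self.is_sequencer = is_sequencer
        self.s_g = 0                      # sequencer: next sequence number for g
        self.r_g = 0                      # member: next sequence number expected
        self.pending = {}                 # id(m) -> message (the hold-back queue)
        self.orders = {}                  # id(m) -> assigned sequence number

    def to_multicast(self, m, msg_id):
        self.b_multicast(self.group, ("data", msg_id, m))    # to sequencer + members

    def on_b_deliver(self, packet):
        kind, msg_id, body = packet
        if kind == "data":
            self.pending[msg_id] = body
            if self.is_sequencer:
                # the sequencer assigns the next number and announces it
                self.b_multicast(self.group, ("order", msg_id, self.s_g))
                self.s_g += 1
        else:                             # "order": body is the sequence number S
            self.orders[msg_id] = body
        self._try_deliver()

    def _try_deliver(self):
        while True:
            ready = [i for i, S in self.orders.items()
                     if S == self.r_g and i in self.pending]
            if not ready:
                return
            i = ready[0]
            self.to_deliver(self.pending.pop(i))              # deliver in sequence order
            del self.orders[i]
            self.r_g += 1

The single sequencer is a potential bottleneck and single point of failure, which is one of the problems with this scheme discussed on the next slide.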

Discussion of sequencer protocol

Since sequence numbers are defined by a sequencer, we have total ordering.
Like B-multicast, if the sender does not crash, all members receive the message.

What are the potential problems with using a single sequencer?

Kaashoek's protocol uses hardware-based multicast
The sender transmits one message to the sequencer, then
the sequencer multicasts the sequence number and the message
but IP multicast is not as reliable as B-multicast, so the sequencer stores
messages in its history buffer for retransmission on request
members notice messages are missing by inspecting sequence numbers

What can the sequencer do about its history buffer becoming full?
Members piggyback on their messages the latest sequence number they have seen.
What happens when some member stops multicasting?
Members that do not multicast send heartbeat messages (with a sequence number).
32

The ISIS algorithm for total ordering

this protocol is for open or closed groups

1. the process P1 B-multicasts a message to members of the group
2. the receiving processes propose numbers and return them to the sender
3. the sender uses the proposed numbers to generate an agreed number

Figure 11.15 (P1 sends "1 Message" to P2, P3 and P4; they return "2 Proposed Seq"; P1 then B-multicasts "3 Agreed Seq")
33

ISIS total ordering - agreement of sequence numbers


Each process, q keeps:
Aqg the largest agreed sequence number it has seen and
Pqg its own largest proposed sequence number

1. Process p B-multicasts <m, i> to g, where i is a unique


identifier for m.
2. Each process q replies to the sender p with a proposal for
the message's agreed sequence number of
Pqg := Max(Aqg, Pqg) + 1.
It assigns the proposed sequence number to the message and places it in its
hold-back queue

3. p collects all the proposed sequence numbers and selects


the largest as the next agreed sequence number, a.
It B-multicasts <i, a> to g. Recipients set Aqg := Max(Aqg, a ) ,
attach a to the message and re-order hold-back queue.
34
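
A sketch of the three-step exchange (illustration only; b_multicast, send and to_deliver are assumed callbacks, and tie-breaking of equal proposals by process id is omitted):

class IsisTotalOrder:
    def __init__(self, me, group, b_multicast, send, to_deliver):
        self.me, self.group = me, group
        self.b_multicast, self.send = b_multicast, send
        self.to_deliver = to_deliver
        self.A = 0                            # largest agreed number seen (Aqg)
        self.P = 0                            # largest number I have proposed (Pqg)
        self.holdback = {}                    # id -> [message, seq, agreed?]
        self.proposals = {}                   # id -> proposals collected (sender side)

    def to_multicast(self, m, i):             # 1. B-multicast <m, i> to the group
        self.proposals[i] = []
        self.b_multicast(self.group, ("msg", i, m))

    def on_msg(self, i, m, sender):           # 2. propose Max(A, P) + 1, hold back
        self.P = max(self.A, self.P) + 1
        self.holdback[i] = [m, self.P, False]
        self.send(sender, ("propose", i, self.P))

    def on_propose(self, i, p):               # 3. sender picks the largest proposal
        self.proposals[i].append(p)
        if len(self.proposals[i]) == len(self.group):
            self.b_multicast(self.group, ("agreed", i, max(self.proposals[i])))

    def on_agreed(self, i, a):
        self.A = max(self.A, a)
        self.holdback[i][1:] = [a, True]      # attach the agreed number
        # deliver from the front of the hold-back queue (smallest number first)
        for j, (m, seq, agreed) in sorted(self.holdback.items(), key=lambda kv: kv[1][1]):
            if not agreed:
                break                         # an undecided message blocks delivery
            self.to_deliver(m)
            del self.holdback[j]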

Discussion of ordering in ISIS protocol


Hold-back queue

proof of total ordering on page 448

ordered with the message with the smallest sequence number at


the front of the queue
when the agreed number is added to a message, the queue is reordered
when the message at the front has an agreed id, it is transferred
to the delivery queue
even if agreed, those not at the front of the queue are not transferred

every process agrees on the same order and delivers messages


in that order, therefore we have total ordering.

Latency
3 messages are sent in sequence, therefore it has a higher latency than sequencer
method
this ordering may not be causal or FIFO
35

Causally ordered multicast


We present an algorithm of Birman 1991 for causally
ordered multicast in non-overlapping, closed groups.
It uses the happened before relation (on multicast
messages only)
that is, ordering imposed by one-to-one messages is not taken into
account

It uses vector timestamps - that count the number of


multicast messages from each process that
happened before the next message to be multicast

36

Causal ordering using vector timestamps


each process has its own
vector timestamp

To CO-multicast m to g, a process adds 1 to its


entry in the vector timestamp and
B-multicasts m and the vector timestamp
When a process B-delivers m, it places it in a
hold-back queue until messages earlier in the
causal ordering have been delivered:
a) earlier messages from the same sender have been delivered
b) any messages that the sender had delivered when it
sent the multicast message have been delivered

Figure 11.16
Note: a process can immediately CO-deliver to
itself its own messages (not shown)
37

then it CO-delivers the message and


updates its timestamp
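
A minimal sketch of this algorithm (illustration only; b_multicast and co_deliver are assumed callbacks, and the n processes are indexed 0..n-1):

# Sketch of CO-multicast with vector timestamps.

class CausalOrderMulticast:
    def __init__(self, me, n, group, b_multicast, co_deliver):
        self.me, self.n, self.group = me, n, group
        self.b_multicast = b_multicast
        self.co_deliver = co_deliver
        self.V = [0] * n                      # V[j]: messages from j delivered here
        self.holdback = []

    def co_multicast(self, m):
        self.V[self.me] += 1                  # add 1 to my own entry ...
        self.b_multicast(self.group, (self.me, list(self.V), m))
        self.co_deliver(m)                    # ... and CO-deliver to myself at once

    def on_b_deliver(self, msg):
        if msg[0] == self.me:
            return                            # own message: already delivered above
        self.holdback.append(msg)
        self._try_deliver()

    def _try_deliver(self):
        changed = True
        while changed:
            changed = False
            for (j, Vj, m) in list(self.holdback):
                ready = (Vj[j] == self.V[j] + 1 and              # (a) next from sender j
                         all(Vj[k] <= self.V[k]                  # (b) all messages j had
                             for k in range(self.n) if k != j))  # delivered are here too
                if ready:
                    self.co_deliver(m)
                    self.V[j] = Vj[j]                            # update my timestamp
                    self.holdback.remove((j, Vj, m))
                    changed = True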

Comments
after delivering a message from pj, process pi
updates its vector timestamp
by adding 1 to the jth element of its timestamp

compare the vector clock rule where


Vi[j] := max(Vi[j], t[j]) for j=1, 2, ...N
in this algorithm we know that only the jth element will increase

for an outline of the proof see page 449


if we use R-multicast instead of B-multicast then the
protocol is reliable as well as causally ordered.
If we combine it with the sequencer algorithm we get
total and causal ordering
38

Comments on multicast protocols


we need to have protocols for overlapping groups
because applications do need to subscribe to
several groups
definitions of global FIFO ordering etc on page 450
and some references to papers on them
multicast in synchronous and asynchronous systems
all of our algorithms do work in both

reliable and totally ordered multicast


can be implemented in a synchronous system
but is impossible in an asynchronous system (reasons discussed in
consensus section - paper by Fischer et al.)
39

Summary
Multicast communication can specify requirements for reliability and ordering, in
terms of integrity, validity and agreement
B-multicast
a correct process will eventually deliver a message provided the multicaster does not
crash

reliable multicast
in which the correct processes agree on the set of messages to be delivered;
we showed two implementations: over B-multicast and IP multicast

delivery ordering
FIFO, total and causal delivery ordering.
FIFO ordering by means of senders' sequence numbers
total ordering by means of a sequencer or by agreement of sequence numbers
between processes in a group
causal ordering by means of vector timestamps

the hold-back queue is a useful component in implementing


multicast protocols
40

Prof Philippas Tsigas


Distributed Computing and Systems Research Group

DISTRIBUTED SYSTEMS II
FAULT-TOLERANT AGREEMENT

Teaching material based on Distributed Systems: Concepts and Design, Edition 3, Addison-Wesley 2001.

Distributed Systems Course

Coordination and Agreement

Copyright George Coulouris, Jean Dollimore, Tim Kindberg 2001
email: [email protected]
This material is made available for private study and for direct use by
individual teachers. It may not be included in any product or employed in any
service without the written permission of the authors.

Viewing: These slides must be viewed in slideshow mode.

11.5 Consensus and Related problems

Consensus - Agreement
All correct processes propose a value, and must agree on a value related
to the proposed values!
Definition: The Consensus problem is specified as follows:
Termination: Every correct process eventually decides some value.
Validity: If all processes that propose a value propose v, then all
correct processes eventually decide v.
Agreement: If a correct process decides v, then all correct processes
eventually decide v.
Integrity: Every process decides at most once, and if it decides on v
(not NU), then some process must have proposed it. (NU is a
special value which stands for "no unanimity".)

43

The one general problem (Trivial!)

(figure: a single general G commands the troops on the battlefield)

44

The two generals problem:

(figure: the Blue army and the Red army face the Enemy; the Blue general (Blue G)
and the Red general (Red G) coordinate by messengers)
45

Rules:
Blue and red army must attack at the same time
Blue and red generals synchronize
through messengers
Messengers (messages) can be lost

46

How Many Messages Do We Need?


assume blue starts...
BG -> RG: "attack at 9am"
Is this enough??

47

How Many Messages Do We Need?


assume blue starts...
BG -> RG: "attack at 9am"
RG -> BG: "ack (red goes at 9am)"
Is this enough??

48

How Many Messages Do We Need?


assume blue starts...
BG -> RG: "attack at 9am"
RG -> BG: "ack (red goes at 9am)"
BG -> RG: "got ack"
Is this enough??

49

Stated problem is Impossible!


Theorem: There is no protocol that uses a finite number of messages that solves the two-generals problem (as
stated here)
Proof: Consider the shortest such protocol (execution)

Consider its last message

The protocol must work if the last message never arrives
So don't send it
But now you have a shorter protocol (execution) - contradiction

50

Stated problem is Impossible!


Theorem: There is no protocol that uses a finite
number of messages that solves the two-generals
problem (as stated here)

Alternatives??

51

Probabilistic Approach?
Send as many messages as possible, hope one
gets through...
assume blue starts...
BG -> RG: "attack at 9am"
BG -> RG: "attack at 9am"
BG -> RG: "attack at 9am"
BG -> RG: "attack at 9am"

52

Eventual Commit
Eventually both sides attack...
assume blue starts...
BG -> RG: "attack ASAP" (retransmits, retransmits)
RG -> BG: "on my way!"

53

2-Phase Eventual Commit


Eventually both sides attack...
assume blue starts...
phase 1:
BG -> RG: "ready to attack?" (retransmits)
RG -> BG: "yes, at your disposal"
phase 2:
BG -> RG: "attack ASAP" (retransmits)
RG -> BG: "ack"

54

Chalmers surrounded by army units
Armies have to attack simultaneously in order to conquer Chalmers
Communication between generals by means of messengers
Some generals of the armies are traitors
55

The Byzantine agreement problem

One process (the source or commander) starts with a binary value


Each of the remaining processes (the lieutenants) has to decide on a
binary value such that:
Agreement: all non-faulty processes agree on the same value
Validity: if the source is non-faulty, then all non-faulty processes agree
on the initial value of the source
Termination: all processes decide within finite time
So if the source is faulty, the non-faulty processes can agree on any
value
It is irrelevant what value a faulty process decides

56

Byzantine Empire

Conditions for a solution for Byzantine faults

Number of processes: n
Maximum number of possibly failing processes: f
Necessary and sufficient condition for a solution to Byzantine
agreement:
f<n/3
Minimal number of rounds in a deterministic solution:
f+1
There exist randomized solutions with a lower expected number of
rounds

58

Scenario 1

59

Scenario 2

60

The Byzantine agreement problem (general


formulation)
All processes start with a binary value
They have to decide on a binary value such that:
Agreement: all non-faulty processes agree on the same value
Validity: if all non-faulty processes start with the same input value, they decide that value
Termination: all correct processes decide within finite time
It is irrelevant what value a faulty process decides

61

Impossibility of 1-resilient 3-processor Agreement


(figure: scenario E1, shown over the processor/value labels C:VC=1, A:VA=0, B:VB=1, B:VB=0, C:VC=0, A:VA=1)
62

Impossibility of 1-resilient 3-processor Agreement


(figure: scenario E0, shown over the processor/value labels C:VC=1, A:VA=0, B:VB=1, B:VB=0, C:VC=0, A:VA=1)
63

Impossibility of 1-resilient 3-processor Agreement


(figure: scenario E1, shown over the processor/value labels C:VC=1, A:VA=0, B:VB=1, B:VB=0, C:VC=0, A:VA=1)
64

Impossibility of 1-resilient 3-processor Agreement


(figure: scenario E2, shown over the processor/value labels C:VC=1, A:VA=0, B:VB=1, B:VB=0, C:VC=0, A:VA=1)
65

Proof
In E0 A and B decide 0
In E1 B and C decide 1
In E2 C has to decide 1 and A has to decide 0,
contradiction!

66
