
Computer Networks
Lecture 36: QoS, Priority Queueing, VC, WFQ

Circuit Switching
Network resources (e.g., bandwidth) divided into "pieces"
• pieces allocated to and reserved for calls
• resource idle if not used by owner (no sharing)
Ways to divide link bandwidth into "pieces":
• frequency division multiplexing (FDM)
• time division multiplexing (TDM)
[Figure: FDM vs. TDM for an example with 4 users, plotted as frequency vs. time]

Packet Switching
Each end-to-end data stream divided into packets
• packets from multiple users share network resources
• each packet uses full link bandwidth
• resources used as needed
• unlike circuit switching: no bandwidth division into "pieces", no dedicated allocation, no resource reservation
Resource contention:
• aggregate resource demand can exceed amount available
• congestion: packets queue, wait for link use
• store and forward: packets move one hop at a time
• each node receives the complete packet before forwarding

Packet Switching: Statistical Multiplexing
[Figure: A and B send over a 10 Mbps Ethernet into a router; packets queue for the 1.5 Mbps output link toward C, D, and E]
The sequence of A's and B's packets does not have a fixed pattern: statistical multiplexing
Packet vs. Circuit Switching
Packet switching allows more users to use the network!
For example:
• 1 Mbps link
• each user:
  • sends 100 kbps when "active"
  • active 10% of the time
• circuit switching: supports 10 users
• packet switching: with 35 users, the probability that more than 10 are active at the same time is about .0004 (checked numerically below)

Pros and Cons of Packet Switching
Advantages: great for bursty data
• resource sharing
• simpler, no call setup
Disadvantages: excessive congestion, packet delay and loss
• protocols needed for reliable data transfer
• congestion control
• no service guarantee: "best-effort" service
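
The 35-user claim above can be checked with a short binomial computation (a minimal sketch in plain Python; modeling each user as independently active with probability 0.1 is implied by the slide's setup):

```python
from math import comb

n, p = 35, 0.10   # 35 users, each independently active 10% of the time
# P(more than 10 users active at once) = sum over k = 11..35
p_overload = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(11, n + 1))
print(f"{p_overload:.6f}")   # 0.000424: roughly 4 in 10,000, as on the slide
```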

Better than Best-Effort Service
Approach: deploy enough link capacity that congestion doesn't occur, and traffic flows without queueing delay or buffer-overflow loss
• advantage: low complexity in network mechanisms
• disadvantage: high bandwidth costs; most of the time the bandwidth is under-utilized (e.g., 2% average utilization)
Alternative: multiple classes of service
• partition traffic into classes (not individual connections)
• network treats different classes of traffic differently

Example: HTTP vs. VoIP Traffic
1 Mbps VoIP shares a 1.5 Mbps link with HTTP
• HTTP bursts can congest the router and cause audio loss
• want to give priority to audio over HTTP
• packets can be differentiated by port number, or can be marked as belonging to different classes
[Figure: a phone sends 1 Mbps of VoIP through routers R1 and R2, sharing a 1.5 Mbps link with HTTP traffic]
Priority Queueing
Send highest-priority queued packet first
• multiple classes, with different priorities
• fairness: gives priority to some connections
• delay bound: higher-priority connections have lower delay
• but within the same priority, the queue still operates as FIFO, hence delay is not bounded
• relatively cheap to operate: O(log N), N the number of packets in queue (see the sketch after the next slide)
[Figure: a classifier sorts arrivals into a high-priority and a low-priority queue feeding one server; arrival order 1 2 3 4 5 departs as 1 3 2 4 5]

Traffic Metering/Policing
What if applications misbehave (VoIP sends higher than its declared rate)?
Marking and/or policing:
• force sources to adhere to bandwidth allocations
• provide protection (isolation) for one class from others
• done at network ingress
[Figure: packet marking and/or policing applied at the ingress of R1, before the 1 Mbps VoIP stream and HTTP traffic share the 1.5 Mbps link to R2]
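
A minimal sketch of the priority-queueing discipline above (plain Python; the class name, the two-class setup, and the example packets are mine, not from the slides):

```python
import heapq
from itertools import count

class PriorityScheduler:
    """Serve the highest-priority queued packet first; FIFO within a class."""
    def __init__(self):
        self._heap = []        # entries: (priority, arrival_seq, packet)
        self._seq = count()    # arrival sequence breaks ties FIFO within a class

    def enqueue(self, packet, priority):
        # lower number = higher priority (e.g., 0 = VoIP, 1 = HTTP)
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def dequeue(self):
        # heap operations are O(log N) in the number of queued packets,
        # the cost noted on the slide
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityScheduler()
for pkt, prio in [("p1", 0), ("p2", 1), ("p3", 0), ("p4", 1), ("p5", 1)]:
    sched.enqueue(pkt, prio)
print([sched.dequeue() for _ in range(5)])
# -> ['p1', 'p3', 'p2', 'p4', 'p5']: high-priority p3 departs before the
#    earlier-arriving low-priority p2, while p2, p4, p5 stay in FIFO order
```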

Policing Mechanisms
Goal: limit traffic to not exceed declared parameters
Three commonly used criteria:
1. average rate: how many packets can be sent per averaging time interval
• crucial question: what is the averaging interval length?
• 100 packets per sec and 6,000 packets per min have the same average!
2. peak rate: packets sent at link speed, with the inter-packet gap equal to the transmission delay
• e.g., 6,000 packets per min (ppm) average; 1,500 packets per sec peak
3. (max.) burst size: maximum number of packets allowed to be sent at peak rate without an intervening idle period

Token-Bucket Filter
Limit packet stream to specified burst size and average rate
• bucket can hold at most b tokens
• new tokens generated at the rate of r tokens/sec
• new tokens dropped once the bucket is full
• a packet can be sent only if there are enough tokens in the bucket to cover it
• assuming 1 token is needed per packet, over an interval of length t the number of packets metered out is ≤ rt + b
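
A minimal token-bucket sketch under these rules (plain Python; the class name and the monotonic-clock refill are mine):

```python
import time

class TokenBucket:
    """Rate r tokens/sec, capacity b tokens, 1 token per packet."""
    def __init__(self, r, b):
        self.r, self.b = r, b
        self.tokens = b                  # start with a full bucket
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        # tokens accrue at rate r; any excess beyond b is dropped
        self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
        self.last = now

    def allow(self):
        """True if a packet may be sent now (conforming)."""
        self._refill()
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                     # non-conforming: queue, mark, or drop

# Over any interval t, at most r*t + b packets pass: the slide's bound.
tb = TokenBucket(r=100, b=10)            # 100 pkts/sec average, bursts of 10
print(sum(tb.allow() for _ in range(20)))  # 10: only the burst passes at once
```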
Circuit vs. Packet Switching
Circuit switching: dedicated circuit per call
• end-to-end resources reserved for the call
• link bandwidth, switch capacity
• call setup required
• dedicated resources: no sharing
• guaranteed performance
• resource idle if not used by owner
Packet switching: data sent through the network in discrete "chunks"

Packet-Switched Networks
No call setup at the network layer
No state to support end-to-end connections at routers
• no network-level concept of "connection"
• route may change during a session
Packets forwarded using the destination host address
• packets between the same source-destination pair may take different paths
[Figure: two protocol stacks (application, transport, network, data link, physical); the source simply sends data (1) and the destination receives it (2), with no call setup in between]

Pros and Cons of Packet Switching
Advantages: great for bursty data
• resource sharing
• simpler, no call setup
Disadvantages: excessive congestion, packet delay and loss
• protocols needed for reliable data transfer
• congestion control
• no service guarantee of any kind
How to provide circuit-like quality of service?
• bandwidth and delay guarantees needed for multimedia apps

Virtual Circuits (VC)
A datagram network provides network-layer connectionless service
A VC network provides network-layer connection-oriented service
Analogous to the transport-layer services, but:
• service is host-to-host, as opposed to socket-to-socket
• implementation is in the network core
The source-to-destination path behaves much like a telephone circuit:
• in terms of performance, and
• in the network actions along the path
Virtual Circuits
A VC comprises:
1. path from source to destination
• fixed path determined at call setup time, remains fixed throughout the call
• every router on the path maintains state for each passing connection/flow
• link and router resources (bandwidth, buffers) may be allocated to the VC
2. VC numbers, one number for each link along the path
• each packet carries a VC identifier (not the destination host address)
3. entries in forwarding tables in routers along the path

Virtual Circuits
Signalling protocol:
• used to set up, maintain, and tear down a VC
• each call must be set up before data can flow, which requires a signalling protocol
• e.g., ReSource reserVation Protocol (RSVP)
[Figure: VC call setup between end hosts: 1. initiate call, 2. incoming call, 3. accept call, 4. call connected, 5. data flow begins, 6. receive data]

VC Forwarding Table
A packet belonging to a VC carries a VC number
The VC number must be changed for each link
The new VC number is obtained from the forwarding table
Examples: MPLS, Frame Relay, ATM, PPP
Forwarding table on router NW (see the lookup sketch after the next slide):

incoming interface | incoming VC# | outgoing interface | outgoing VC#
         1         |      12      |          2         |      22
         2         |      63      |          1         |      18
         3         |       7      |          2         |      17
         1         |      97      |          3         |      87
         …         |      …       |          …         |      …

Routers maintain connection state information!
[Figure: router NW with numbered interfaces; a packet's VC number is rewritten hop by hop, e.g., 12 → 22 → 32]

Per-VC Resource Isolation
To provide circuit-like quality of service, resources allocated to a VC must be isolated from other traffic
Bit-by-bit Round Robin:
• cyclically scan per-VC queues, sending one bit from each VC (if present)
• 1 round, R(·), is defined as all non-empty queues having been served 1 quantum (here, 1 bit)
• in the figure, R(t5) = 2; at what time do Round 3 and Round 4 complete?
A.k.a. Generalized Processor Sharing (GPS)
[Figure: per-VC queues served one bit at a time in round-robin order; bit arrivals labeled t0 through t11]
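
A toy rendition of the lookup described by the table above (plain Python; the dict layout and function name are mine, with keys/values mirroring the table rows):

```python
# (incoming interface, incoming VC#) -> (outgoing interface, outgoing VC#)
forwarding_table = {
    (1, 12): (2, 22),
    (2, 63): (1, 18),
    (3, 7):  (2, 17),
    (1, 97): (3, 87),
}

def forward(in_if, in_vc):
    """Look up the output interface and rewrite the VC number for the next link."""
    return forwarding_table[(in_if, in_vc)]

print(forward(1, 12))   # (2, 22): leaves on interface 2, relabeled as VC 22
```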
Fluid-Flow Approximation
A continuous service model:
• instead of thinking of each quantum as serving discrete bits in a given order,
• think of each connection as a fluid stream, described by the speed and volume of flow
At each quantum, the same amount of fluid from each (non-empty) stream flows out concurrently
[Figure: per-flow streams draining concurrently into the link across rounds R1 through R5]

Packetized Scheduling
Packet-by-packet Round Robin:
• cyclically scan per-flow queues, sending one packet from each flow (if present)
• problem: gives a bigger share to flows with big packets
Packet-by-packet Fair Queueing:
• compute F, the finish round: the round in which a packet finishes service
• simulates fluid-flow RR in the computation of the F's
• serve packets with the smallest F first
[Figure: two flows under RR vs. FQ; each queued packet is stamped with a finish round F and transmitted in increasing-F order]

Start and Finish Rounds
When does packet i of flow α finish service?
F^α_i = S^α_i + P^α_i
where P^α_i is the service time (in rounds) of packet i and S^α_i is its service start round
At what round does packet i of flow α start seeing service?
S^α_i = MAX(F^α_{i–1}, A^α_i)
• S^α_i = F^α_{i–1} if there is a queue, A^α_i otherwise
• A^α_i = R(t^α_i): the round number at the time packet i arrives

Round# vs. Wall-Clock Time
Let:
• time: wall-clock time; round: virtual-clock time
• t^α_i: arrival time of packet i of flow α
• Nac(t): number of active flows at time t
• µ = 1 unit
Computing the rate of change, ∂R/∂t = µ/Nac(t):
• a: Nac = 1, ∂R/∂t = 1
• b: Nac = 2, ∂R/∂t = ½, so δ2 = 2·δ1
• c: at the beginning, Nac = 1, ∂R/∂t = 1; halfway through serving packet i, a packet belonging to another flow arrives, so Nac = 2 and ∂R/∂t = ½
As Nac(t) changes, a packet's finish round stays the same, but the wall-clock time it takes stretches
[Figure: Round# vs. wall-clock time; segments a, b, c have slope µ/Nac(t), so an equal round increment takes δ2 = 2·δ1 of wall-clock time once Nac doubles]
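
A minimal sketch of this bookkeeping (plain Python; the function names are mine, and I assume µ = 1 so that a packet's service time in rounds equals its size):

```python
# Fair-queueing round bookkeeping, following the slides:
#   A_i = R(t_i)              arrival round
#   S_i = max(F_{i-1}, A_i)   service start round
#   F_i = S_i + P_i           finish round (P_i = packet size when mu = 1)
#   dR/dt = mu / Nac(t)       round number vs. wall-clock time

def advance_round(R, dt, n_active, mu=1.0):
    """Advance R over a wall-clock interval dt in which Nac is constant."""
    return R + mu * dt / n_active if n_active else R

def stamp(last_finish, R_now, P):
    """Stamp an arriving packet; last_finish is F of the flow's previous packet."""
    S = max(last_finish, R_now)
    return S, S + P

# The example on the next slide: 3 flows active in [t0, t3), then 2 in [t3, t4)
R = advance_round(0.0, dt=3, n_active=3)   # R(t3) = 1.0 (A's size-1 packet done)
R = advance_round(R, dt=1, n_active=2)     # R(t4) = 1.5
print(stamp(last_finish=1.0, R_now=R, P=2))  # A's 2nd packet: (S, F) = (1.5, 3.5)
```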
Round Computation Example
Scenario:
• flow A has 1 packet of size 1 arriving at time t0
• flows B and C each have 1 packet of size 2 arriving at time t0
• flow A has another packet of size 2 arriving at time t4
Slope (∂R/∂t) along each segment: a = ⅓, b = ½, c = ⅓, d = 1
What is the arrival round of A's 2nd packet?
R(t^A_2) = 1.5, assuming the fluid-flow approximation, and S^A_2 = A^A_2
[Figure: Round# vs. wall-clock time; F^A_1 = 1, F^B_1 = F^C_1 = 2, F^A_2 = 3.5, with slope changes at wall-clock times 0, 3, 4, 5.5, and 7]

Arrival Round Computation
When packet i of an active flow arrives, its finish round is computed as F^α_i = F^α_{i–1} + P^α_i, where F^α_{i–1} is the finish round of the last packet in α's queue
If flow α is inactive, there's no packet in its queue, so F^α_i = A^α_i + P^α_i; but how do we compute A^α_i?
If flow α has been inactive for Δt time and there have been Nac active flows during that whole time, we can perform round catch-up:
A^α_i = F^α_{i–1} + Δt(1/Nac)
Iterated deletion: if Nac has changed one or more times over Δt, round catch-up must be computed piecewise, once for every change of Nac: expensive

Weighted Fair Queueing
Weighted Fair Queueing (WFQ):
• generalized Round Robin
• each VC/flow/class gets a weighted amount of service in each cycle
• P^α_i = L^α_i/(ωµ), where L^α_i is the size of packet i (see the sketch below)
[Figure: WFQ over two flows with unequal weights (ω = ⅔ vs. ω = ⅓, i.e., 2:1); the finish rounds F reflect the weights]

(Weighted) Fair Queueing
Credit accumulation:
• allows a flow to have a bigger share if it has been idle
• discouraged because it can be abused: accumulate credits for a long time, then send a big burst of data
Characteristics of (W)FQ:
• max-min fair
• bounded delay
• expensive to implement
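
The weighted variant changes only the per-packet service time, P^α_i = L^α_i/(ωµ); a small sketch extending the earlier stamping code (plain Python, names mine):

```python
# WFQ finish-round stamping: P_i = L_i / (w * mu), per the slide.
# Packets across all flows are then served in increasing-F order.
def wfq_stamp(last_finish, R_now, L, w, mu=1.0):
    """Return (S, F) for a packet of size L on a flow with weight w."""
    S = max(last_finish, R_now)     # S_i = max(F_{i-1}, A_i)
    F = S + L / (w * mu)            # F_i = S_i + P_i
    return S, F

# Two equal-size packets arriving together: the weight-2 flow's packet
# gets the smaller finish round, so it is served first.
print(wfq_stamp(0.0, 0.0, L=1.0, w=2))  # (0.0, 0.5)
print(wfq_stamp(0.0, 0.0, L=1.0, w=1))  # (0.0, 1.0)
```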
Max-Min Fair
In words: a max-min fair share maximizes the minimum share of flows whose demands have not been fully satisfied
1. no flow gets more than its request
2. no other allocation satisfying condition 1 has a higher minimum allocation
3. condition 2 remains true as we remove the flow with the minimal request

Max-Min Fair
Let:
• µtotal: total resource (e.g., bandwidth) available
• µi: total resource given to (flow) i
• µfair: fair share of the resource
• ρi: request for resource by (flow) i
Max-min fair share is µi = MIN(ρi, µfair), with µtotal = ∑ µi, i = 1 to n
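
µfair itself can be computed by iteratively redistributing capacity left unused by satisfied flows; a minimal sketch under the slide's definitions (plain Python, names mine):

```python
def max_min_shares(total, demands):
    """Allocate `total` among flows so that mu_i = min(rho_i, mu_fair),
    redistributing capacity unused by already-satisfied flows."""
    alloc = {i: 0.0 for i in demands}
    remaining = total
    unsatisfied = set(demands)
    while unsatisfied and remaining > 1e-9:
        fair = remaining / len(unsatisfied)        # current mu_fair
        for i in list(unsatisfied):
            give = min(demands[i] - alloc[i], fair)
            alloc[i] += give
            remaining -= give
            if alloc[i] >= demands[i]:             # demand fully met
                unsatisfied.discard(i)
    return alloc

# The example on the next slide: total = 30, demands A=12, B=11, C=8
print(max_min_shares(30, {"A": 12, "B": 11, "C": 8}))
# -> {'A': 11.0, 'B': 11.0, 'C': 8.0}
```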

Max-Min Fair Share Example
Let µtotal = 30

i | ρi | µi
A | 12 | 11
B | 11 | 11
C |  8 |  8

Initially µfair = 10
ρC = 8, so the unused resource (10 – 8 = 2) is divided evenly between the flows whose demands have not been fully met
Thus, µfair for A and B = 10 + 2/2 = 11

Providing Delay Guarantee
A token-bucket filter and WFQ combined provide a guaranteed upper bound on delay:
arriving traffic → token-bucket filter (token rate r, bucket size b) → WFQ (per-flow rate µf)
Dmax = b/µf: a QoS guarantee!
Same inefficiency issue as with circuit switching: allocating non-sharable bandwidth to a flow leads to low utilization if flows don't use their allocations
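
As a worked instance (numbers mine, not from the slides): if a flow is policed with bucket size b = 10,000 bits and guaranteed a WFQ share of µf = 100 kbps, then at most a full bucket of b bits can be backlogged ahead of any arrival, and it drains at rate µf, so Dmax = b/µf = 10,000/100,000 = 0.1 s.
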
Limitations of (W)FQ
Round computation is expensive:
• must re-compute R every time the number of active flows changes
Unless packet transmission can be pre-empted, fairness is "quantized" by the minimum packet size:
• once a big packet starts transmission, newly arriving packets with smaller finish times must wait for completion of the transmission
• flows with relatively smaller packets suffer this more than flows with larger packets
[Figure: the FQ example again; a packet already in service cannot be pre-empted even if a smaller-F packet arrives]

Work Conservation
Work-conserving schedulers:
• never go idle while there is a packet in queue
• make traffic burstier
• could require more buffer space downstream
Non-work-conserving schedulers:
• only serve packets whose service times have arrived
• more work to determine whether packets' service times have arrived
• smooth out traffic by idling the link and pacing out packets
