Traffic Models in High-Speed Networks

This document summarizes key points from a lecture on traffic modeling and engineering for computer networks:
- Traffic models are needed to predict network performance and enable traffic engineering. Users are often classified based on applications, which can have different bandwidth and latency requirements.
- Examples of traffic include videoconferencing, streaming media, email, and voice calls. Proper traffic management is necessary to provide quality of service for these diverse applications.
- Traffic management operates on multiple time scales, from milliseconds (to handle bursts) up to months (for capacity planning). Both increasing capacity and tailoring services to user needs can improve welfare, and the best approach depends on technology and user behavior.


CS 5224
High Speed Networks and Multimedia Networking

Traffic Model and Engineering

Dr. Chan Mun Choon
School of Computing, National University of Singapore
August 17, 2005 (Week 2/3)

Acknowledgement/Reference

Slides are taken from the following source:
- S. Keshav, An Engineering Approach to Computer Networking, Chapter 14: Traffic Management

Motivation for Traffic Models

- In order to predict the performance of a network system, we need to be able to describe the behavior of the input traffic
- Often, in order to reduce the complexity, we classify the user behavior into classes, depending on the applications
- Sometimes, we may even be able to restrict or shape the users' behavior so that they conform to some specifications
- Only when there is a traffic model is traffic engineering possible

An example

- Executive participating in a worldwide videoconference
- Proceedings are videotaped and stored in an archive
- Edited and placed on a Web site
- Accessed later by others
- During the conference
  - sends email to an assistant
  - breaks off to answer a voice call

What this requires

- For video
  - sustained bandwidth of at least 64 kbps
  - low loss rate
- For voice
  - sustained bandwidth of at least 8 kbps
  - low loss rate
- For interactive communication
  - low delay (< 100 ms one-way)
- For playback
  - low delay jitter
- For email and archiving
  - reliable bulk transport

Traffic management

- Set of policies and mechanisms that allow a network to efficiently satisfy a diverse range of service requests
- The tension is between diversity and efficiency
- Traffic management is necessary for providing Quality of Service (QoS)
- Subsumes congestion control (congestion == loss of efficiency)

Time Scale of Traffic Management

- Less than one round-trip time (cell-level)
  - performed by end-points and switching nodes
  - scheduling and buffer management
  - regulation and policing
  - policy routing (datagram networks)
- One or more round-trip times (burst-level)
  - performed by the end-points
  - feedback flow control
  - retransmission
  - renegotiation

Time Scale (cont.)

- Session (call-level)
  - end-points interact with network elements
  - signaling
  - admission control
  - service pricing
  - routing (connection-oriented networks)
- Day
  - human intervention
  - peak load pricing
- Weeks or months
  - human intervention
  - capacity planning

Some economic principles

- A single network that provides heterogeneous QoS is better than separate networks for each QoS
  - unused capacity is available to others
- Lowering the delay of delay-sensitive traffic increases welfare
  - can also increase welfare by matching the service menu to user requirements
  - BUT we need to know what users want (signaling)
- Better to give 5% of the traffic low delay than to give all traffic low delay
  - should somehow mark and isolate low-delay traffic

Principles applied

- A single wire that carries both voice and data is more efficient than separate wires for voice and data
  - ADSL
  - IP Phone
- Moving from a 20% loaded 10 Mbps Ethernet to a 20% loaded 100 Mbps Ethernet will still improve social welfare
  - increase capacity whenever possible

The two camps

- Can increase welfare either by
  - matching services to user requirements, or
  - increasing capacity blindly
- Which is cheaper?
  - depends on technology advancement
  - depends on user behavior/expectation/tolerance
  - "small and smart" vs. "big and dumb"
- It seems that smarter ought to be better
  - otherwise, to get low delays for some traffic, we need to give all traffic low delay, even if it doesn't need it
- But, perhaps, we can use the money spent on traffic management to increase capacity instead
- We will study traffic management, assuming that it matters!

Telephone traffic models (Call)

- How are calls placed?
  - call arrival model
  - studies show that the time between calls is drawn from an exponential distribution
  - the call arrival process is therefore Poisson
  - memoryless: the fact that a certain amount of time has passed since the last call gives no information about the time to the next call
- How long are calls held?
  - usually modeled as exponential
  - however, measurement studies (in the mid-90s) show that call holding times are heavy-tailed
    - a small number of calls last a very long time
  - Why?

Exponential/Heavy Tail Distribution

- Exponential distribution: P(X > x) = e^(-x/3)
- Pareto distribution: P(X > x) = x^(-1.5)
- The means of both distributions are 3

Packet Traffic Model for Voice

- A single voice source is well represented by a two-state process: an alternating sequence of active (talk spurt) and silence periods
- Talk spurts typically average 0.4 - 1.2 s
- Silence periods average 0.6 - 1.8 s
- Talk spurt intervals are well approximated by an exponential distribution, but this is not true for silence periods
- Silence periods allow voice packets to be multiplexed
- For a more detailed description, see Chapter 3 of Broadband Integrated Networks, by Mischa Schwartz, 1996.
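
To make the difference between the two distributions on the Exponential/Heavy Tail slide concrete, here is a minimal sketch that evaluates both tails; the parameter choices (mean 3 for the exponential, Pareto with x_m = 1 and shape 1.5) follow the slide, everything else is illustrative.

```python
# Sketch: compare the tails of the two distributions above. Both have mean 3,
# but the Pareto (heavy) tail decays much more slowly than the exponential one.
import math

def exp_tail(x, mean=3.0):
    """P(X > x) for an exponential distribution with the given mean."""
    return math.exp(-x / mean)

def pareto_tail(x, alpha=1.5):
    """P(X > x) = x^(-alpha) for x >= 1 (Pareto with x_m = 1, mean alpha/(alpha-1) = 3)."""
    return 1.0 if x < 1 else x ** (-alpha)

for x in (3, 10, 30, 100):
    print(f"x = {x:>3}: exponential tail = {exp_tail(x):.2e}, Pareto tail = {pareto_tail(x):.2e}")
```

At x = 100 the exponential tail is about 3e-15 while the Pareto tail is still 1e-3, which is why a small number of heavy-tailed calls can last a very long time.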

Internet traffic modeling

- A few apps account for most of the traffic
  - WWW, FTP, E-mail
  - P2P
- A common approach is to model apps (this ignores the distribution of destinations!)
  - time between app invocations
  - connection duration
  - # bytes transferred
  - packet inter-arrival distribution
- Little consensus on models
- But there are two important features

Internet traffic models: features

- LAN connections differ from WAN connections
  - higher bandwidth usage (more bytes/call)
  - longer holding times
- Many parameters are heavy-tailed
  - examples: # bytes in a call (e.g. file size of a web download), call duration
  - this means that a few calls are responsible for most of the traffic
    - these calls must be well-managed
  - it also means that even aggregates with many calls may not be smooth

Traffic classes

- Networks should match the offered service to source requirements (corresponds to utility functions)
- The telephone network offers one single traffic class
- The Internet offers little restriction on traffic behavior
- Example: telnet requires low bandwidth and low delay
  - utility increases with a decrease in delay
  - the network should provide a low-delay service
  - or: telnet belongs to the low-delay traffic class

Traffic classes - details

- A basic division: guaranteed service and best effort
  - like flying with a reservation or flying standby
- Guaranteed-service
  - utility is zero unless the app gets a minimum level of service quality: bandwidth, delay, loss
  - open-loop flow control (e.g. do not send more than x Mbps) with admission control
  - e.g. telephony, remote sensing, interactive multiplayer games
- Best-effort
  - send and pray
  - closed-loop flow control (e.g. TCP)
  - e.g. email, ftp

GS vs. BE (cont.)

- Degree of synchrony
  - the time scale at which peer endpoints interact
  - GS apps are typically synchronous or interactive
    - interact on the timescale of a round-trip time
    - e.g. telephone conversation or telnet
  - BE apps are typically asynchronous or non-interactive
    - interact on longer time scales
    - e.g. email
- Sensitivity to time and delay
  - GS apps are real-time
    - performance depends on the wall clock
  - BE apps are typically indifferent to real time
    - automatically scale back during overload

Example of Traffic Classes

- ATM Forum (based on sensitivity to bandwidth)
  - GS: CBR, VBR
  - BE: ABR, UBR
- IETF (based on sensitivity to delay)
  - GS: intolerant, tolerant
  - BE: interactive burst, interactive bulk, asynchronous bulk

ATM Forum GS subclasses

- Constant Bit Rate (CBR)
  - constant, cell-smooth traffic
  - mean and peak rate are the same
  - e.g. a telephone call, evenly sampled and uncompressed
  - constant bandwidth, variable quality
- Variable Bit Rate (VBR)
  - long-term average with occasional bursts
  - try to minimize delay
  - can tolerate loss and higher delays than CBR
  - e.g. compressed video or audio with constant quality, variable bandwidth

ATM Forum BE subclasses

- Available Bit Rate (ABR)
  - users get whatever is available
  - zero loss if network signals (in RM cells) are obeyed
  - no guarantee on delay or bandwidth
- Unspecified Bit Rate (UBR)
  - like ABR, but no feedback
  - no guarantee on loss
  - presumably cheaper

IETF GS subclasses

- Tolerant GS
  - nominal mean delay, but can tolerate occasional variation
  - not specified what this means exactly
  - uses controlled-load service
    - the book uses older terminology (predictive)
  - even at high loads, admission control assures a source that its service does not suffer
  - it really is this imprecise!
- Intolerant GS
  - needs a worst-case delay bound
  - equivalent to CBR+VBR in the ATM Forum model

IETF BE subclasses

- Interactive burst
  - bounded asynchronous service, where the bound is qualitative, but pretty tight
  - e.g. paging, messaging, email
- Interactive bulk
  - bulk, but a human is waiting for the result
  - e.g. FTP
- Asynchronous bulk
  - bulk traffic
  - e.g. P2P

Some points to ponder

- The only thing out there is CBR (example?) and asynchronous bulk (example?)!
- These are application requirements. There are also organizational requirements (how to provision QoS end-to-end)
- Users need QoS for other things too!
  - billing
  - reliability and availability

Reading

- Reference: Bertsekas and Gallager, Data Networks, 2nd Edition, Chapter 3: Delay Models in Data Networks, Prentice Hall

Motivation for Traffic Engineering

- Traffic engineering for a wide range of traffic models and classes is difficult, even for a single networking node
- However, if we restrict ourselves to a small set of traffic models, we can get some good intuition
- For example, traffic engineering in the telephone network has been effective
- The M/M/* queuing analysis is a simple and elegant way to perform basic traffic engineering

A Question

- Waiting time at two fast-food stores, MD and BK
- In MD, a queue is formed at each of the m servers (assume a customer chooses a queue independently and does not change queue once he/she joins it)
- In BK, all customers wait in a single queue and are served by m servers
- Which one is better? (see the simulation sketch below)
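
The question above can be explored numerically. Below is a minimal event-driven simulation sketch, assuming Poisson arrivals and exponential service times; the arrival rate, service rate and number of servers are illustrative choices, not values from the slides.

```python
# Sketch: compare average waiting time for the two stores in the question above.
# "MD": arrivals split randomly across m separate FIFO queues (m independent M/M/1 queues).
# "BK": one shared FIFO queue feeding m servers (an M/M/m queue).
import heapq
import random

def simulate(lam, mu, m, n_customers, shared, seed=1):
    rng = random.Random(seed)
    t = 0.0
    total_wait = 0.0
    free_at = [0.0] * m                 # time at which each server (or queue) becomes free
    if shared:
        heapq.heapify(free_at)
    for _ in range(n_customers):
        t += rng.expovariate(lam)       # next Poisson arrival
        if shared:
            earliest = heapq.heappop(free_at)
            start = max(t, earliest)    # wait for the first server to free up
            heapq.heappush(free_at, start + rng.expovariate(mu))
        else:
            q = rng.randrange(m)        # customer picks one of the m queues at random
            start = max(t, free_at[q])
            free_at[q] = start + rng.expovariate(mu)
        total_wait += start - t
    return total_wait / n_customers

lam, mu, m = 4.0, 1.0, 5                # total utilization rho = lam / (m * mu) = 0.8
print("MD (m separate queues): average wait ~", round(simulate(lam, mu, m, 200_000, shared=False), 2))
print("BK (one shared queue):  average wait ~", round(simulate(lam, mu, m, 200_000, shared=True), 2))
```

At the same utilization, the shared queue gives a much smaller average wait, because a customer is never stuck behind a long job while another server sits idle.
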
Multiplexing of Traffic

- Traffic engineering involves the sharing of a resource/link by several traffic streams
- Time-Division Multiplexing (TDM)
  - divide the transmission into time slots
- Frequency-Division Multiplexing (FDM)
  - divide the transmission into frequency channels
- For TDM/FDM, if there is no traffic in a data stream, bandwidth is wasted
- In statistical multiplexing, data from all traffic streams are merged into a single queue and transmitted in a FIFO manner
- Statistical multiplexing
  - has a smaller delay per packet than TDM/FDM
  - can have a larger delay variance
- These results can be shown using queuing analysis

Little's Theorem

- Given the customer arrival rate (λ) and service rate (μ), what is the average number of customers (N) in the system, and what is the average delay per customer (T)?
- Let
  - N(t) = # of customers at time t
  - α(t) = # of customers arrived in the interval [0, t]
  - T_i = time spent in the system by the i-th customer
- N_t, the typical # of customers up to time t, is N_t = (1/t) ∫_0^t N(τ) dτ
- N = lim_{t→∞} N_t,   λ = lim_{t→∞} α(t)/t,   T = lim_{t→∞} (Σ_{i=1}^{α(t)} T_i) / α(t)

Little's Theorem

- Little's Theorem: N = λT
- Average # of customers = average arrival rate × average delay per customer
- Crowded systems (large N) are associated with long customer delays, and vice versa

Derivation of Little's Theorem

[Figure: the arrival curve α(τ) and the departure curve β(τ) plotted against time; the vertical distance between the curves is the number of customers in the system N(τ), and the horizontal distances are the individual customer delays T1, T2, ...]
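
Little's Theorem can also be checked empirically. The following sketch simulates a simple single-server FIFO queue with Poisson arrivals and exponential service (parameter values are illustrative) and compares λT with the time-average number of customers in the system.

```python
# Sketch: verify Little's Theorem, N = lambda * T, on a simulated single-server queue.
import random

rng = random.Random(0)
lam, mu, n = 0.8, 1.0, 500_000          # arrival rate, service rate, number of customers

t, server_free = 0.0, 0.0
arrivals, departures = [], []
for _ in range(n):
    t += rng.expovariate(lam)                   # Poisson arrivals
    start = max(t, server_free)                 # FIFO, single server
    server_free = start + rng.expovariate(mu)   # exponential service time
    arrivals.append(t)
    departures.append(server_free)

total_sojourn = sum(d - a for a, d in zip(arrivals, departures))
T = total_sojourn / n                   # average time in system per customer
lam_hat = n / arrivals[-1]              # observed arrival rate
N = total_sojourn / departures[-1]      # time-average number in system (area under N(t))
print(f"lambda * T = {lam_hat * T:.3f}, time-average N = {N:.3f}")   # the two agree
```
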
Little's Theorem (cont'd)

- Little's Theorem is very general and holds for almost every queuing system that reaches statistical equilibrium in the limit

Example

- BG, Example 3.1
  - λ is the arrival rate on a transmission line
  - N_Q is the average # of packets in the queue (not under transmission)
  - W is the average time spent by a waiting packet (excluding the packet being transmitted)
  - From Little's Theorem, N_Q = λW
  - Furthermore, if X is the average transmission time, ρ = λX, where ρ is the line's utilization factor (the proportion of time the line is busy)
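
A tiny numeric illustration of the two relations in Example 3.1; the packet rate, waiting time and transmission time below are made-up numbers, not values from the text.

```python
# Sketch: the relations from Example 3.1 with made-up numbers.
lam = 50.0       # packets/sec arriving on the line
W = 0.004        # average time a packet waits in queue (sec), excluding transmission
X = 0.010        # average transmission time per packet (sec)

N_Q = lam * W    # Little's Theorem applied to the waiting room alone
rho = lam * X    # utilization: the fraction of time the line is busy
print(f"N_Q = {N_Q:.2f} packets waiting on average, utilization rho = {rho:.2f}")
```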

Example

- BG, Example 3.2
  - A network of transmission lines where packets arrive at n different nodes with rates λ1, λ2, ..., λn
  - N is the total number of packets in the network
  - The average delay per packet is T = N / (Σ_{i=1}^{n} λi)
  - This holds independent of the packet length distribution (service rate) and the routing

What is a Poisson Process?

- A Poisson process A(t):
  1. A(t) is a counting process that represents the total number of arrivals that have occurred from 0 to t; A(t) − A(s) equals the number of arrivals in the interval (s, t]
  2. The numbers of arrivals that occur in disjoint intervals are independent
  3. The number of arrivals in any interval of length τ is Poisson distributed with parameter λτ:

     P{A(t + τ) − A(t) = n} = e^(−λτ) (λτ)^n / n!

Inter-arrival Time

- Based on the definition of a Poisson process, what is the inter-arrival time between arrivals?
- The distribution of the inter-arrival time τ can be computed as P{τ > t} = P{A(t) = 0}
- Using only Property 2, it can be shown that inter-arrival times are independent and exponentially distributed with parameter λ

Exponential Distribution

- The number of events in a time interval t has a Poisson distribution
- The inter-arrival time τ is exponentially distributed:
  - probability density function: p(τ) = λ e^(−λτ)
  - cumulative distribution function: P{τ ≤ s} = 1 − e^(−λs)
  - mean: E{τ} = 1/λ
  - variance: Var{τ} = 1/λ²
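
The two views above (Poisson counts and exponential inter-arrival times) describe the same process, which the following sketch checks by simulation; λ, the interval length and the sample size are illustrative.

```python
# Sketch: generate arrivals with exponential inter-arrival times (rate lambda) and check
# that the number of arrivals in a fixed interval is close to Poisson(lambda * tau).
import math
import random

rng = random.Random(0)
lam, tau, trials = 2.0, 3.0, 100_000

counts = []
for _ in range(trials):
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)      # exponential inter-arrival time
        if t > tau:
            break
        n += 1
    counts.append(n)

mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
print(f"empirical mean = {mean:.2f}, variance = {var:.2f} (Poisson predicts both = {lam * tau})")

# P{A(t + tau) - A(t) = n} = e^(-lam*tau) * (lam*tau)^n / n!
for k in (4, 6, 8):
    empirical = counts.count(k) / trials
    poisson = math.exp(-lam * tau) * (lam * tau) ** k / math.factorial(k)
    print(f"P(count = {k}): empirical {empirical:.3f} vs Poisson {poisson:.3f}")
```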

Poisson Process

- Merging: if two or more independent Poisson processes are merged into a single process, the merged process is a Poisson process with a rate equal to the sum of the rates
- Splitting: if a Poisson process is split probabilistically into two processes, the two processes obtained are also Poisson

Memoryless Property

- For a service time with an exponential distribution, the additional time needed to complete a customer's service in progress is independent of when the service started:

  P{τ_n > r + t | τ_n > t} = P{τ_n > r}

- The inter-arrival time of buses arriving at a bus stop has an exponential distribution. A random observer arrives at the bus stop, and a bus left just t seconds ago. How long should the observer expect to wait? (see the sketch below)
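
A small sketch of the memoryless property and the bus-stop question above: conditioning on the bus having left t seconds ago does not change the expected remaining wait, which stays at 1/λ. The rate and the value of t are illustrative.

```python
# Sketch: P(tau > r + t | tau > t) = P(tau > r) for the exponential distribution,
# illustrated through the bus-stop question.
import random

rng = random.Random(0)
lam, t = 0.5, 4.0                        # mean inter-arrival time 1/lam = 2 s; bus left 4 s ago

remaining = []
for _ in range(1_000_000):
    x = rng.expovariate(lam)             # full inter-arrival time between two buses
    if x > t:                            # keep only cases where no bus came in the first t seconds
        remaining.append(x - t)          # what the observer still has to wait

print(f"E[wait | bus left {t} s ago] ~ {sum(remaining) / len(remaining):.2f} s (theory: 1/lambda = {1 / lam} s)")
```
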
Applications of Poisson Process

- The Poisson process has a number of nice properties that make it very useful for analytical and probabilistic analysis
- It has been used to model a large number of physical occurrences [KLE75]
  - number of soldiers killed by their horses (1928)
  - sequence of gamma rays emitted from a radioactive particle
  - call holding times of telephone calls
- In many cases, the sum of a large number of independent stationary renewal processes will tend to be a Poisson process

[KLE75] L. Kleinrock, Queueing Systems, Vol. I, 1975.

Basic Queuing Model

- The M/M/1 notation describes a queue by its arrival process, its service process, and its number of servers:
  - arrival process: memoryless (a Poisson process with rate λ)
  - departure (service) process: exponential with mean 1/μ
  - number of servers: 1
  - the default system capacity N is infinite
  - other codes: D - deterministic, G - General

Birth-Death Process

[State-transition diagram: states 0, 1, 2, ..., n-1, n, n+1, with arrivals (rate λ) moving one state to the right and departures (rate μ) moving one state to the left]

- Model the queue as a discrete-time Markov chain
- Let Pn be the steady-state probability that there are n customers in the queue
- Balance equation: at equilibrium, the probability of a transition out of a state is equal to the probability of a transition into the same state

Derivation of M/M/1 Model

- Balance equations:
  λP0 = μP1, λP1 = μP2, ..., λPn-1 = μPn
- Let ρ = λ/μ:
  ρP0 = P1, ρP1 = P2, ..., ρPn-1 = Pn
- Therefore Pn = ρ^n P0

Derivation of M/M/1 Model

- Pn = ρ^n P0
- Σ_n Pn = Σ_n ρ^n P0 = P0 / (1 − ρ) = 1   (ρ < 1)
- P0 = 1 − ρ
- Pn = ρ^n (1 − ρ)
- Average number of customers in the system:
  N = Σ_n n Pn = ρ / (1 − ρ) = λ / (μ − λ)

Properties of M/M/1 Queue

- N = ρ / (1 − ρ) = λ / (μ − λ)
- ρ can be interpreted as the utilization of the queue
- The system is unstable if ρ > 1 (i.e. λ > μ), as N is not bounded
- In an M/M/1 queue there is no blocking/dropping, so the waiting time can increase without any limit
- The buffer space is infinite, so customers are not rejected
  - but there can be an infinite number of customers in front of an arrival

M/M/1

- From Little's Theorem:

  T = N/λ = 1 / (μ(1 − ρ)) = 1 / (μ − λ)

  W = T − 1/μ = ρ / (μ − λ)

More properties of M/M/1

[Figure: average delay plotted against utilization ρ; the delay grows without bound as the utilization approaches 1]
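
A short sketch that evaluates the closed-form M/M/1 quantities above for a few loads; μ is fixed at 1 and the arrival rates are illustrative. It makes the blow-up near ρ = 1 (the behavior the figure above illustrates) easy to see.

```python
# Sketch: closed-form M/M/1 quantities for a few utilization levels.
def mm1(lam, mu):
    rho = lam / mu
    assert rho < 1, "unstable: lambda must be less than mu"
    N = rho / (1 - rho)       # average number in the system
    T = 1 / (mu - lam)        # average time in the system (Little: T = N / lambda)
    W = rho / (mu - lam)      # average waiting time (T minus the service time 1/mu)
    return rho, N, T, W

mu = 1.0
for lam in (0.5, 0.8, 0.9, 0.99):
    rho, N, T, W = mm1(lam, mu)
    print(f"rho = {rho:.2f}: N = {N:6.2f}, T = {T:7.2f}, W = {W:7.2f}")
```
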
Example

- BG, Example 3.8 (Statistical Multiplexing vs. TDM)
- Allocate each Poisson stream its own queue (λ, μ), or share a single faster queue (kλ, kμ)?
- Increase λ and μ of a queue by a constant k > 1
  - ρ = kλ/(kμ) = λ/μ (no change in utilization)
  - N = ρ/(1 − ρ) (no change)
- What changes?
  - T = 1/(k(μ − λ))
  - the average delay decreases by a factor of k
  - Why?

Example

- BG, Example 3.9
- Consider k TDM/FDM channels
- From the previous example, merging the k channels into a single (k times faster) channel will keep N the same but reduce the average delay by a factor of k
- So why use TDM/FDM?
  - Some traffic is not Poisson. For example, voice traffic is regular, with one voice packet every 20 ms
  - Multiplexing traffic streams into a single channel incurs buffering, queuing delay and jitter
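
A short numeric check of Example 3.8, reusing the M/M/1 formulas above; λ = 0.8 and μ = 1 are illustrative values.

```python
# Sketch: scaling lambda and mu by k keeps rho and N unchanged but cuts the delay T by k.
def mm1_stats(lam, mu):
    rho = lam / mu
    return rho, rho / (1 - rho), 1 / (mu - lam)     # rho, N, T

lam, mu = 0.8, 1.0
for k in (1, 2, 10):
    rho, N, T = mm1_stats(k * lam, k * mu)
    print(f"k = {k:2}: rho = {rho:.2f}, N = {N:.2f}, T = {T:.3f}")
```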

Extension to M/M/m Queue

- There are m servers; a customer is served by one of the servers
- Transition relations:
  - λ p_{n-1} = nμ p_n   (n <= m)
  - λ p_{n-1} = mμ p_n   (n > m)

[State-transition diagram: states 0, 1, 2, ..., m-1, m, m+1, ...; arrivals at rate λ move right, departures move left at rates μ, 2μ, 3μ, ..., (m−1)μ, mμ, mμ, ...]

Derivation of M/M/m Model

- Balance equations: λP0 = μP1, λP1 = 2μP2, ..., λPn-1 = nμPn
- Let ρ = λ/(mμ)
- p_n = p0 (mρ)^n / n!,   n <= m
- p_n = p0 m^m ρ^n / m!,   n > m

Derivation of M/M/m Model

- Σ_{n=0}^{∞} p_n = 1
- In order to compute p_n, p0 must be computed first

Extension to M/M/m/m Queue

- There are m servers and the system holds at most m calls
  - that is, there is no buffering
- Calls are either served or rejected; rejected calls are lost
- A common model for telephone switching

[State-transition diagram: states 0, 1, 2, ..., m-1, m; arrivals at rate λ move right, departures move left at rates μ, 2μ, 3μ, ..., (m−1)μ, mμ]

M/M/m/m Queue

- Balance equations:
  λP0 = μP1, λP1 = 2μP2, ..., λPn-1 = nμPn
- Pn = P0 (λ/μ)^n / n!
- Σ_{n=0}^{m} Pn = Σ_{n=0}^{m} P0 (λ/μ)^n / n! = 1
- P0 = ( Σ_{n=0}^{m} (λ/μ)^n / n! )^(-1)
- When does loss happen?
  - loss happens when a customer arrives and sees m customers in the system

M/M/m/m Queue

- PASTA: Poisson Arrivals See Time Averages
  - Pm is a time average
  - use time averages to compute the loss rate
- The loss probability for the M/M/m/m queue is computed as the probability that there are m customers in the system:

  Pm = ((λ/μ)^m / m!) ( Σ_{n=0}^{m} (λ/μ)^n / n! )^(-1)

- The above equation is known as the Erlang B formula and is widely used to evaluate blocking probability
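
The Erlang B formula above can be evaluated directly, but in practice it is usually computed with the standard recurrence B(0) = 1, B(n) = a·B(n−1) / (n + a·B(n−1)), which is algebraically equivalent and avoids large factorials. A minimal sketch (the offered load a = λ/μ and the 100-server example are illustrative):

```python
# Sketch: Erlang B blocking probability, computed with the standard recurrence.
def erlang_b(a, m):
    """Blocking probability for offered load a (in Erlangs) on m servers."""
    b = 1.0                          # B(0) = 1
    for n in range(1, m + 1):
        b = a * b / (n + a * b)      # B(n) = a*B(n-1) / (n + a*B(n-1))
    return b

print(f"{erlang_b(100.0, 100):.3f}")   # ~0.076: 100 Erlangs offered to 100 trunks
```
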
What is an Erlang?

- An Erlang is a unit of telecommunications traffic measurement and represents the continuous use of one voice path
- Average number of calls in progress
- Computing Erlangs
  - call arrival rate: λ; call departure rate: μ; call holding time: 1/μ
  - the system load in Erlangs is λ/μ
- Example:
  - λ = 1 call/sec, 1/μ = 100 sec, load = 1/0.01 = 100 Erlangs
  - λ = 10 calls/sec, 1/μ = 10 sec, load = 10/0.1 = 100 Erlangs
- Load is a function of the ratio of the arrival rate to the departure rate, independent of the specific rates

Erlang B Table

Capacity (Erlangs) for a grade of service of:

  # of servers (N)    P=0.02    P=0.01    P=0.005    P=0.001
          1             0.02      0.01      0.005      0.001
          5             1.66      1.36      1.13       0.76
         10             5.08      4.46      3.96       3.09
         20            13.19     12.03     11.1        9.41
         40            31.0      29.0      27.3       24.5
        100            87.97     84.1      80.9       75.2

- For a given grade of service, a larger-capacity system is more efficient (statistical multiplexing)
- A larger system incurs larger changes in blocking probability when the system load changes

Example

- If there are 40 servers and the target blocking rate is 2%, what is the largest load supported?
  - P = 0.02, N = 40
  - load supported = 31 Erlangs
- Calls arrive at a rate of 1 call/sec and the average holding time is 12 sec. How many trunks are needed to maintain a call blocking probability of less than 1%?
  - load = 1 × 12 = 12 Erlangs
  - from the Erlang B table, if P = 0.01, then N >= 20

Multi-Class Queue

- We can extend the Markov chain for M/M/m/n to multi-class queues
- Such queues can be useful, for example, in cases where there is preferential treatment for one class over another
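
The two worked examples above can be reproduced with the Erlang B recurrence sketched earlier (repeated here so the snippet is self-contained); the search steps are illustrative.

```python
# Sketch: checking the two Erlang B examples above.
def erlang_b(a, m):
    b = 1.0
    for n in range(1, m + 1):
        b = a * b / (n + a * b)
    return b

# Example 1: largest load a 40-server system supports at 2% blocking (expect ~31 Erlangs).
a = 0.0
while erlang_b(a + 0.1, 40) <= 0.02:
    a += 0.1
print(f"40 servers at 2% blocking: about {a:.1f} Erlangs")

# Example 2: trunks needed for 12 Erlangs (1 call/sec * 12 sec holding time) at < 1% blocking.
m = 1
while erlang_b(12.0, m) >= 0.01:
    m += 1
print(f"12 Erlangs at < 1% blocking: {m} trunks")
```
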
Network of Queues

- In a network, the departing traffic from a queue is strongly correlated with packet lengths beyond the first queue; this traffic is the input to the next queue
- Analysis using M/G/1 is affected
- Kleinrock Independence Approximation
  - Poisson arrivals at entry points
  - densely connected network
  - moderate to heavy traffic load
- Networks with Product Form Solutions
