
MODULE 3

NETWORK LAYER
Syllabus
Network layer design issues. Routing algorithms - The Optimality
Principle, Shortest path routing, Flooding, Distance Vector Routing, Link
State Routing, Multicast routing, Routing for mobile hosts. Congestion
control algorithms. Quality of Service (QoS) - requirements, Techniques
for achieving good QoS.
Network layer
• The network layer is responsible for host-to-host delivery and for
routing the packets through the routers or switches.
• Other functions include congestion control and quality of service
Network Layer Design Issues

1. Store-and-Forward Packet Switching
2. Services Provided to the Transport Layer
3. Implementation of Connectionless Service
4. Implementation of Connection-Oriented Service
Store-and-Forward Packet Switching
• A host with a packet to send transmits it to the nearest router, either
on its own LAN or over a point-to-point link to the carrier.
• The packet is stored there until it has fully arrived so the checksum
can be verified.
• Then it is forwarded to the next router along the path until it reaches
the destination host, where it is delivered.
• This mechanism is store-and-forward packet switching.
Store-and-Forward Packet Switching

Fig. 5-1. The environment of the network layer protocols.


Store-and-Forward Packet Switching
• The major components of the system are the carrier's equipment
(routers connected by transmission lines), shown inside the shaded
oval, and the customers' equipment, shown outside the oval.
• Host H1 is directly connected to one of the carrier's routers, A, by a
leased line.
• In contrast, H2 is on a LAN with a router, F, owned and operated by
the customer.

Services Provided to the Transport Layer

The network layer services have been designed with the following goals:

1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number,
type, and topology of the routers present.
3. The network addresses made available to the transport
layer should use a uniform numbering plan, even across
LANs and WANs.
Types of services by network layer
• Two different services are possible
1. Connection Oriented
2. Connectionless
• If connectionless service is offered, packets are injected into the
subnet individually and routed independently of each other.
• No advance setup is needed.
• In this context, the packets are frequently called datagrams.
• If connection-oriented service is used, a path from the source router
to the destination router must be established before any data packets
can be sent.
• This connection is called a VC (virtual circuit), in analogy with the
physical circuits set up by the telephone system.
a. Datagram Approach: Connectionless Service
• The network-layer protocol treats each packet independently, with
each packet having no relationship to any other packet.
• The idea is that the network layer is only responsible for delivery
of packets from the source to the destination.
• In this approach, the packets in a message may or may not travel
the same path to their destination.
A connectionless packet-switched network
•When the network layer provides a connectionless service, each
packet traveling in the Internet is an independent entity; there is no
relationship between packets belonging to the same message.
•The switches in this type of network are called routers
•Each packet is routed based on the information contained in its
header: source and destination addresses.
•The destination address defines where it should go; the source
address defines where it comes from. The router in this case routes
the packet based only on the destination address. The source
address may be used to send an error message to the source if the
packet is discarded
•The algorithm that manages the tables and makes the routing
decisions is called the routing algorithm
Forwarding process in a router when used in a Datagram
approach
Virtual-Circuit Approach: Connection-Oriented Service

•In a connection-oriented service (also called the virtual-circuit
approach), there is a relationship between all packets belonging to a
message.
•Before all datagrams in a message can be sent, a virtual connection
should be set up to define the path for the datagrams.
•After connection setup, the datagrams can all follow the same path.
•In this type of service, not only must the packet contain the source
and destination addresses, it must also contain a flow label, a virtual
circuit identifier (VCI) that defines the virtual path the packet should
follow.
•Each packet is forwarded based on the label in the packet.
A virtual-circuit packet-switched network
Forwarding process in a router when used in a
virtual-circuit network
•To create a connection-oriented service, a three-phase process is used:
1.Setup
2.Data transfer
3.Teardown
• In the setup phase, the source and destination addresses of the sender and
receiver are used to make table entries for the connection-oriented service.
• In the teardown phase, the source and destination inform the router to delete
the corresponding entries.
•Data transfer occurs between these two phases.
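The three phases above can be sketched as operations on a switch's VC table (a minimal illustration, not a prescribed implementation; the port and VCI numbers in the usage example are made up):

```python
def vc_setup(table, in_port, in_vci, out_port, out_vci):
    """Setup phase: install a table entry for the new virtual circuit."""
    table[(in_port, in_vci)] = (out_port, out_vci)

def vc_forward(table, in_port, in_vci):
    """Data transfer phase: forward based only on the label in the packet."""
    return table[(in_port, in_vci)]

def vc_teardown(table, in_port, in_vci):
    """Teardown phase: delete the corresponding entry."""
    del table[(in_port, in_vci)]
```

For example, after `vc_setup(table, 1, 14, 3, 22)`, a packet arriving on port 1 with VCI 14 is switched to port 3 with its label rewritten to 22; teardown removes the entry again.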
Comparison of Virtual-Circuit and Datagram Subnets

Routing Algorithms
Routing Algorithms

▪Routing algorithm is that part of the network layer software
responsible for deciding which output line an incoming packet should
be transmitted on.
▪If the subnet uses datagrams internally, this decision must be made
anew for every arriving data packet, since the best route may have
changed since last time.
▪If the subnet uses virtual circuits internally, routing decisions are
made only when a new virtual circuit is being set up.
▪ Thereafter, data packets just follow the previously-established route.
▪ This is sometimes called session routing because a route remains in force for
an entire user session (e.g., a login session at a terminal or a file transfer).

Properties of routing algorithm
1. correctness
2. simplicity
3. robustness
4. stability
5. fairness and
6. optimality
Routing Algorithms

▪Routing algorithms can be grouped into two major classes:

1. Nonadaptive or Static Routing
2. Adaptive or Dynamic Routing
Adaptive algorithms

•An adaptive routing algorithm is also known as a dynamic
routing algorithm.
•This algorithm makes the routing decisions based on the
topology and network traffic.
•Adaptive algorithms differ in where they get their information, when
they change the routes, and what metric is used for optimization.
•The main parameters related to this algorithm are hop count,
distance, and estimated transit time.
Nonadaptive algorithms

•A non-adaptive routing algorithm is also known as a static
routing algorithm.
•The routing information is loaded into the routers when the
network is booted.
•Non-adaptive routing algorithms do not take the routing decision
based on the network topology or network traffic.
The Optimality Principle
•The purpose of a routing algorithm at a router is to decide which
output line an incoming packet should be sent on. The optimal path from a
particular router to another may be the least-cost path, the least-distance
path, the least-time path, the least-hops path, or a
combination of any of the above.
• The set of optimal routes from all sources to a given destination forms a
tree rooted at the destination. Such a tree is called a sink tree.
• The goal of all routing algorithms is to discover and use the sink trees for
all routers.
The Optimality Principle

•The optimality principle states that if router J is on the optimal path from
router I to router K, then the optimal path from J to K also falls along the
same route.

•The optimality principle can be logically proved as follows:
suppose a better route existed from J to K than the portion of the optimal
I-to-K path from J onwards. Then the path from I to K could be improved by
combining the optimal route from I to J with this better route from J to K,
contradicting the assumption that the original I-to-K path was optimal.
The Optimality Principle

(a) A subnet. (b) A sink tree for router B.
Shortest Path Routing
•It is one of the simple routing algorithms that are widely used for
routing in the network.
•The basic idea of it is to build a graph with each node representing a
router and each line representing a communication link.
•To choose a route between any two nodes in the graph the
algorithm simply finds the shortest path between the nodes.
• Shortest path means the path in which any one or more metrics
is minimized. The metric may be distance, bandwidth, average traffic,
communication cost, mean queue length, measured delay, or any other
factor.
Shortest Path Routing
• One way of measuring path length is the number of hops.
Using this metric, the paths ABC and ABE in Fig. are equally
long.

• Another metric is the geographic distance in kilometers, in
which case ABC is clearly much longer than ABE (assuming
the figure is drawn to scale).
Dijkstra’s algorithm
•Dijkstra’s algorithm is a single-source shortest path algorithm: an
algorithm to find the shortest path between two nodes of a graph.
• Here, single-source means that only one source is given, and we
have to find the shortest path from the source to all the nodes.
Dijkstra’s shortest path routing Algorithm
• Each node is labeled (in parentheses) with its distance from the
source node along the best known path.
• Initially, no paths are known, so all nodes are labeled with infinity.
• As the algorithm proceeds and paths are found, the labels may
change, reflecting better paths.
• A label may be either tentative or permanent.
• Initially, all labels are tentative.
• When it is discovered that a label represents the shortest possible
path from the source to that node, it is made permanent and never
changed thereafter.

The rule for updating the distance label of a vertex:
if d(u) + c(u, v) < d(v) then
    d(v) = d(u) + c(u, v)
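The labeling procedure can be sketched in Python (a minimal sketch, not the textbook's code); the adjacency map encodes the undirected edge costs as reconstructed from the worked example that follows:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths; graph maps node -> {neighbour: cost}."""
    dist = {node: float("inf") for node in graph}  # all labels start tentative, at infinity
    dist[source] = 0
    permanent = set()
    heap = [(0, source)]
    while heap:
        d_u, u = heapq.heappop(heap)
        if u in permanent:
            continue
        permanent.add(u)                  # smallest tentative label becomes permanent
        for v, cost in graph[u].items():
            if d_u + cost < dist[v]:      # relaxation test: d(u) + c(u, v) < d(v)
                dist[v] = d_u + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

# Undirected edge costs reconstructed from the worked example below.
example = {
    0: {1: 4, 4: 8},
    1: {0: 4, 2: 8, 4: 11},
    2: {1: 8, 3: 7, 6: 4, 8: 2},
    3: {2: 7, 6: 14, 7: 9},
    4: {0: 8, 1: 11, 5: 1, 8: 7},
    5: {4: 1, 6: 2, 8: 15},
    6: {2: 4, 3: 14, 5: 2, 7: 10},
    7: {3: 9, 6: 10},
    8: {2: 2, 4: 7, 5: 15},
}
```

Running `dijkstra(example, 0)` reproduces the labels derived step by step in the example: d(1)=4, d(4)=8, d(5)=9, d(6)=11, d(2)=12, d(8)=14, d(3)=19.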
• Let's understand the working of Dijkstra's algorithm. Consider the
below graph.
• First, we have to consider any vertex as a source vertex.
• Here we assume that 0 as a source vertex, and distance to all the
other vertices is infinity.
• Initially, we do not know the distances. First, we will find out the
vertices which are directly connected to the vertex 0.
• As we can observe in the below graph that two vertices are directly
connected to vertex 0.
Let's assume that the vertex 0 is represented by 'x' and the vertex 1 is
represented by 'y'. The distance between the vertices can be calculated by
using the below formula:
d(x, y) = d(x) + c(x, y) < d(y)
= (0 + 4) < ∞
= 4 < ∞. Since 4 < ∞, we update d(y) from ∞ to 4.
Now we consider vertex 0 same as 'x' and vertex 4 as 'y’.
d(x, y) = d(x) + c(x, y) < d(y)
= (0 + 8) < ∞
=8<∞
• Therefore, the value of d(y) is 8. We replace the infinity value of
vertices 1 and 4 with the values 4 and 8 respectively.
• Now, we have found the shortest path from the vertex 0 to 1 and 0
to 4. Therefore, vertex 0 is selected. Now, we will compare all the
vertices except the vertex 0.
• Since vertex 1 has the lowest value, i.e., 4; therefore, vertex 1 is
selected.
•Since vertex 1 is selected, we consider the paths from 1 to 2 and from 1
to 4. First, we calculate the distance between the vertex 1 and 2. Consider
the vertex 1 as 'x', and the vertex 2 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (4 + 8) < ∞
• = 12 < ∞
•Since 12<∞ so we will update d(2) from ∞ to 12.
•Now, we calculate the distance between the vertex 1 and vertex 4.
Consider the vertex 1 as 'x' and the vertex 4 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (4 + 11) < 8
• = 15 < 8
• Since 15 is not less than 8, we will not update the value d(4) from 8 to 15
•Till now, two nodes have been selected, i.e., 0 and 1.
•Now we have to compare the nodes except the node 0 and 1.
•The node 4 has the minimum distance, i.e., 8. Therefore, vertex 4 is
selected.
•Since vertex 4 is selected, so we will consider all the direct paths
from the vertex 4.
•The direct paths from vertex 4 are 4 to 0, 4 to 1, 4 to 8, and 4 to 5.
• Since the vertices 0 and 1 have already been selected so we will not
consider the vertices 0 and 1.
•We will consider only two vertices, i.e., 8 and 5.
•First, we consider the vertex 8. First, we calculate the distance
between the vertex 4 and 8.
•Consider the vertex 4 as 'x', and the vertex 8 as 'y’.
•d(x, y) = d(x) + c(x, y) < d(y)
• = (8 + 7) < ∞
• = 15 < ∞

•Since 15 is less than the infinity so we update d(8) from infinity to 15.
•Now, we consider the vertex 5. First, we calculate the distance
between the vertex 4 and 5.
•Consider the vertex 4 as 'x', and the vertex 5 as 'y’.
•d(x, y) = d(x) + c(x, y) < d(y)
• = (8 + 1) < ∞
• =9<∞

•Since 9 is less than the infinity, we update d(5) from infinity to 9.


•The node 5 has the minimum value, i.e., 9. Therefore, vertex 5 is
selected.
• Since the vertex 5 is selected, so we will consider all the direct paths
from vertex 5. The direct paths from vertex 5 are 5 to 8, and 5 to 6.
• First, we consider the vertex 8. First, we calculate the distance
between the vertex 5 and 8. Consider the vertex 5 as 'x', and the
vertex 8 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (9 + 15) < 15
• = 24 < 15
• Since 24 is not less than 15 so we will not update the value d(8) from
15 to 24.
• Now, we consider the vertex 6. First, we calculate the distance between
the vertex 5 and 6. Consider the vertex 5 as 'x', and the vertex 6 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (9 + 2) < ∞
• = 11 < ∞
• Since 11 is less than infinity, we update d(6) from infinity to 11
• Till now, nodes 0, 1, 4 and 5 have been selected.
• We will compare the nodes except the selected nodes.
• The node 6 has the lowest value as compared to other nodes. Therefore,
vertex 6 is selected.
• Since vertex 6 is selected, we consider all the direct paths from
vertex 6. The direct paths from vertex 6 are 6 to 2, 6 to 3, and 6 to7.
• First, we consider the vertex 2. Consider the vertex 6 as 'x', and the
vertex 2 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (11 + 4) < 12
• = 15 < 12
• Since 15 is not less than 12, we will not update d(2) from 12 to 15
• Now we consider the vertex 3. Consider the vertex 6 as 'x', and the
vertex 3 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (11 + 14) < ∞
• = 25 < ∞
• Since 25 is less than ∞, so we will update d(3) from ∞ to 25.
• Now we consider the vertex 7. Consider the vertex 6 as 'x', and the
vertex 7 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (11 + 10) < ∞
• = 21 < ∞
• Since 21 is less than ∞, we will update d(7) from ∞ to 21.
•Till now, nodes 0, 1, 4, 5, and 6 have been selected. Now we have to
compare all the unvisited nodes, i.e., 2, 3, 7, and 8. Node 2 has the
minimum value, i.e., 12, among all the unvisited nodes. Therefore,
node 2 is selected.
•Since node 2 is selected, so we consider all the direct paths from node 2. The
direct paths from node 2 are 2 to 8, 2 to 6, and 2 to 3.
•First, we consider the vertex 8. Consider the vertex 2 as 'x' and 8 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (12 + 2) < 15
• = 14 < 15
•Since 14 is less than 15, we will update d(8) from 15 to 14.
•Now, we consider the vertex 6. Consider the vertex 2 as 'x' and 6 as 'y'.
d(x, y) = d(x) + c(x, y) < d(y)
• = (12 + 4) < 11
• = 16 < 11
•Since 16 is not less than 11 so we will not update d(6) from 11 to 16.
•Now, we consider the vertex 3. Consider the vertex 2 as 'x' and 3 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (12 + 7) < 25
• = 19 < 25
•Since 19 is less than 25, we will update d(3) from 25 to 19.
• Till now, nodes 0, 1, 2, 4, 5, and 6 have been selected. We compare
all the unvisited nodes, i.e., 3, 7, and 8. Among nodes 3, 7, and 8, node 8 has the
minimum value.
• The nodes which are directly connected to node 8 are 2, 4, and 5.
• Since all the directly connected nodes have already been selected, we will not
consider any node for updating.
• The unvisited nodes are 3 and 7. Among the nodes 3 and 7, node 3 has the
minimum value, i.e., 19.
• Therefore, the node 3 is selected.
• The nodes which are directly connected to the node 3 are 2, 6, and7.
• Since the nodes 2 and 6 have already been selected, we will not consider these two nodes.
•Now, we consider the vertex 7. Consider the vertex 3 as 'x' and 7 as 'y'.
• d(x, y) = d(x) + c(x, y) < d(y)
• = (19 + 9) < 21
• = 28 < 21
•Since 28 is not less than 21, we will not update d(7) from 21 to 28.
Find the shortest path from A to D using
Dijkstra’s algorithm
▪Figure shows a weighted undirected graph where the weight represents distance
▪Let us start by marking node A as permanent, indicated by a filled-in circle
Shortest Path Routing

The first five steps used in computing the shortest path from A to D.
The arrows indicate the working node.
Dijkstra’s shortest path routing
• After making node A permanent, examine each of the nodes adjacent to A (the working node), relabeling
each one with the distance to A.
• Whenever a node is relabeled, we also label it with the node from which the probe was made so that we can
reconstruct the final path later.
• If the network had more than one shortest path from A to D and we wanted to find all of them, we would
need to remember all of the probe nodes that could reach a node with the same distance.
• Having examined each of the nodes adjacent to A, we examine all the tentatively labeled nodes in the whole
graph and make the one with the smallest label permanent, as shown in Fig. 5-7(b).
• This one becomes the new working node.
• We now start at B and examine all nodes adjacent to it. If the sum of the label on B and the distance from B
to the node being considered is less than the label on that node, we have a shorter path, so the node is
relabeled.
• After all the nodes adjacent to the working node have been inspected and the tentative labels changed if
possible, the entire graph is searched for the tentatively labeled node with the smallest value.
• This node is made permanent and becomes the working node for the next round.
• This process is continued until all the nodes in the graph have been made permanent.
Flooding:
• Every incoming packet is sent out on every outgoing line except the
one it arrived on.
• Flooding obviously generates vast (infinite) numbers of duplicate
packets
• some measures are taken to damp the process.
• One such measure is
• to have a hop counter contained in the header of each packet,
which is decremented at each hop, with the packet being
discarded when the counter reaches zero.
• Ideally, the hop counter should be initialized to the length of the
path from source to destination.
• If the sender does not know how long the path is, it can initialize
the counter to the worst case, namely, the full diameter of the
subnet.
Flooding
• An alternative technique is to keep track of which packets have been
flooded, to avoid sending them out a second time.
• This is achieved by having the source router put a sequence number
in each packet it receives from its hosts.
• Each router then needs a list per source router telling which sequence
numbers originating at that source have already been seen.
• If an incoming packet is on the list, it is not flooded.
• When a packet comes in, it is easy to check if the packet is a
duplicate;
• if so, it is discarded.

Flooding:
• To prevent the list from growing without bound, each list
should be augmented by a counter, k, meaning that all
sequence numbers through k have been seen.
• Furthermore, the full list below k is not needed, since k
effectively summarizes it.
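The per-source duplicate check with the summarizing counter k can be sketched as follows (a minimal illustration; the data structure is an assumption, not a prescribed format):

```python
def should_flood(seen, source, seq):
    """Return True if packet (source, seq) has not been seen before.

    seen maps source -> (k, extras): all sequence numbers up to and
    including k have been seen, plus a set of higher numbers that
    arrived out of order.
    """
    k, extras = seen.get(source, (-1, set()))
    if seq <= k or seq in extras:
        return False              # duplicate: discard, do not flood
    extras.add(seq)
    while k + 1 in extras:        # advance the summary counter k
        k += 1
        extras.discard(k)         # the full list below k is not needed
    seen[source] = (k, extras)
    return True
```

Once packets 0, 1, and 2 from a source have been seen (in any order), k becomes 2 and the individual numbers at or below it can be forgotten.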

Variant: Selective flooding:
• Routers do not send every incoming packet out on every line,
• It will send only on those lines that are going approximately in
the right direction
• There is usually little point in sending a westbound packet on an
eastbound line unless the topology is extremely peculiar and the
router is sure of this fact.

Applications of flooding
• In military applications, where large numbers of routers may
be blown to bits at any instant, the tremendous robustness
of flooding is highly desirable.
• In distributed database applications, it is sometimes
necessary to update all the databases concurrently, in which
case flooding can be useful.
• In wireless networks, all messages transmitted by a station
can be received by all other stations within its radio range.
• Flooding can be used as a metric against which other routing
algorithms can be compared.
Advantages of Flooding:
▪Flooding always chooses the shortest path, because it tries
every possible path in parallel.
▪Consequently, no other algorithm can produce a shorter
delay (if we ignore the overhead generated by the
flooding process itself).
Distance Vector Routing
Dynamic routing algorithm

Two dynamic algorithms are the most popular:

1. Distance vector routing
2. Link state routing
Distance Vector Routing (DVR)
• Distance vector algorithms operate by having each router maintain a
table (i.e., a vector) giving the best-known distance to each destination
and which line to use to get there.
• These tables are updated by exchanging information with the
neighbors.
• Other names:
• distributed Bellman-Ford routing algorithm
• Ford-Fulkerson algorithm
• original ARPANET routing algorithm used in Internet under the
name RIP (Routing Information Protocol)

Routing Algorithms - DVR
• In DVR, each router maintains a routing table containing one entry for
each router in the subnet.
• This entry contains two parts:
• the preferred outgoing line to use for that destination
• an estimate of the time or distance to that destination
• The metric used might be
• number of hops,
• time delay in milliseconds,
• total number of packets queued along the path, or
• something similar
Distance vector Routing
• The starting assumption for distance vector routing is that each node
knows the cost of the link to each of its directly connected
neighbours.
• Distances to all other nodes are initially assigned an infinite cost.
Steps
Step-01:

Each router prepares its routing table. From its local knowledge, each router knows:
• All the routers present in the network
• The distance to its neighboring routers

Step-02:

•Each router exchanges its distance vector with its neighboring routers.
•Each router prepares a new routing table using the distance vectors it has obtained
from its neighbors.
•This step is repeated for (n-2) times if there are n routers in the network.
•After this, the routing tables converge / become stable, i.e., there is no
more change to their estimated shortest-path distances.
Distance Vector Routing Example-

• Consider-
• There is a network consisting of 4 routers.
• The weights are mentioned on the edges.
• Weights could be distances or costs or delays.
Step-01:
Each router prepares its routing table using its local knowledge.

• At Router A-
Step-02:

• Each router exchanges its distance vector obtained in Step-01 with its
neighbors.
• After exchanging the distance vectors, each router prepares a new
routing table by updating the distance based on the following
equation
• Let d_x(y) be the cost of the least-cost path from node x to node y.
• The least costs are related by the Bellman-Ford equation:
• d_x(y) = min_v { c(x, v) + d_v(y) }
• where,
• d_x(y) = the least distance from x to y
c(x, v) = node x's cost of the link to each of its neighbours v
d_v(y) = distance from neighbour v to node y
min_v = taking the minimum over all neighbours v
• Cost of reaching destination B from router A = min { 2+0 , 1+7 } = 2 via B.
• Cost of reaching destination C from router A = min { 2+3 , 1+11 } = 5 via B.
• Cost of reaching destination D from router A = min { 2+7 , 1+0 } = 1 via D.
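One round of this update at a single router can be written directly from the Bellman-Ford equation (a minimal sketch; the costs and advertised vectors below are taken from the four-router example, with router A's links to B and D):

```python
def dv_update(costs, neighbour_vectors):
    """One Bellman-Ford round at router x.

    costs:             c(x, v) for each directly connected neighbour v
    neighbour_vectors: v -> {destination y: d_v(y)} as advertised by v
    Returns {destination: (least cost, preferred outgoing line)}.
    """
    table = {}
    for v, vector in neighbour_vectors.items():
        for y, d_vy in vector.items():
            d = costs[v] + d_vy                    # c(x, v) + d_v(y)
            if y not in table or d < table[y][0]:  # minimum over neighbours v
                table[y] = (d, v)
    return table

# Router A's view from the example: links to B (cost 2) and D (cost 1),
# and the distance vectors B and D have just advertised.
costs_at_A = {"B": 2, "D": 1}
advertised = {"B": {"B": 0, "C": 3, "D": 7},
              "D": {"B": 7, "C": 11, "D": 0}}
new_table = dv_update(costs_at_A, advertised)
# new_table == {'B': (2, 'B'), 'C': (5, 'B'), 'D': (1, 'D')}
```

This reproduces the three computations above: B reachable at cost 2 via B, C at cost 5 via B, D at cost 1 via D.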
New routing table at router B is-
Step-03:

• Each router exchanges its distance vector obtained in Step-02 with its
neighboring routers.
• After exchanging the distance vectors, each router prepares a new
routing table.
Important Notes-
• In Distance Vector Routing,
• Only distance vectors are exchanged.
• “Next hop” values are not exchanged.
• While preparing a new routing table-
• A router takes into consideration only the distance vectors it has obtained from its neighboring
routers.
• It does not take into consideration its old routing table.
• The algorithm is called so because-
• It involves exchanging of distance vectors between the routers.
• A distance vector is nothing but an array of distances.
• Routing tables are prepared total (n-1) times if there are n routers in the given network.
• This is because shortest path between any 2 nodes contains at most n-1 edges if there are n
nodes in the graph.
• Distance Vector Routing suffers from count to infinity problem.
• For more details:
• https://www.gatevidyalay.com/distance-vector-routing-routing-algorithms/
Count to infinity problem
•Counting to infinity is just another name for a routing loop.
•In distance vector routing, routing loops usually occur when an
interface goes down.
•It can also occur when two routers send updates to each other at the
same time.
•Imagine a network with a graph as shown above in figure .
•As you see in this graph, there is only one link between A and the other
parts of the network.
•Now imagine that the link between A and B is cut. At this time, B corrects
its table.
•After a specific amount of time, routers exchange their tables, and so B
receives C's routing table.
•Since C doesn't know what has happened to the link between A and B, it
says that it has a link to A with the weight of 2 (1 for C to B, and 1 for B to
A -- it doesn't know B has no link to A).
•B receives this table and thinks there is a separate link between C and A,
so it corrects its table and changes infinity to 3 (1 for B to C, and 2 for C to
A, as C said).
•Once again, routers exchange their tables.
•When C receives B's routing table, it sees that B has changed the
weight of its link to A from 1 to 3, so C updates its table and changes
the weight of the link to A to 4 (1 for C to B, and 3 for B to A, as B said).
• This process loops until all nodes find out that the weight of link to A is
infinity.
•This situation is shown in the table below.
•In this way, Distance Vector Algorithms have a slow convergence rate.
•One way to solve this problem is for routers to send information only
to the neighbors that are not exclusive links to the destination.
•For example, in this case, C shouldn't send any information to B
about A, because B is the only way to A.
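The bad-news loop described above can be simulated in a few lines (a toy model of just routers B and C and destination A, with the B–C link cost of 1 from the example):

```python
# After the A-B link fails, B's distance to A is infinity, while C still
# advertises its stale route to A with cost 2 (C -> B -> A).
d_c = 2
history = []
for _ in range(4):
    d_b = 1 + d_c      # B believes C: cost 1 (B->C) plus C's advertised distance
    d_c = 1 + d_b      # C then believes B: cost 1 (C->B) plus B's new distance
    history.append((d_b, d_c))
print(history)         # estimates climb 3, 4, 5, 6, ... toward infinity
```

This matches the narrative above: B first changes infinity to 3, C then moves to 4, and so on without bound. Real protocols such as RIP cap the metric (16 hops counts as infinity) for exactly this reason.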
• ANS: Going via B gives (11, 6, 14, 18, 12, 8).
• Going via D gives (19, 15, 9, 3, 9, 10).
• Going via E gives (12, 11, 8, 14, 5, 9).
• Taking the minimum for each destination except C gives (11, 6, 0, 3, 5,
8).
• The outgoing lines are (B, B, -, D, E, B).
• Going via B gives (14, 9, 17, 21, 15, 11).
• Going via D gives (22, 18, 12, 6, 15, 16).
• Going via E gives (10, 9, 6, 12, 3, 7).
• Taking the minimum for each destination except F gives (10, 9, 6, 6, 3, -).
• The outgoing lines are (E, B, E, D, E, -).
Link State Routing
Link State Routing
• Distance vector routing was used in the ARPANET until 1979, when it
was replaced by link state routing.
• Two primary problems with distance vector routing are that:
1. It did not take line bandwidth into account when choosing routes.
2. It suffered from the count-to-infinity problem.
Link State Routing-steps
• Each router must do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.
• The complete topology and all delays are experimentally measured and
distributed to every router.
• Then Dijkstra's algorithm can be run to find the shortest path to every
other router.
1.Discover its neighbors and learn their network
addresses
• When a router is booted, its first task is to learn who its neighbors are.
• It accomplishes this goal by sending a special HELLO packet on each
point-to-point line.
• The router on the other end is expected to send back a reply giving its
name.
• These names must be globally unique because when a distant router
later hears that three routers are all connected to F, it is essential that
it can determine whether all three mean the same F.
2.Measure the delay or cost to each of its neighbors.

• This algorithm requires each router to know an estimate of the delay to
each of its neighbours.
• The most direct way to determine this delay is to send over the line a
special ECHO packet that the other side is required to send back
immediately.
• By measuring the round-trip time and dividing it by two, the sending
router can get a reasonable estimate of the delay.
3.Construct a packet telling all it has just learned.
• Once the information needed for the exchange has been collected, the next step is
for each router to build a packet containing all the data.
• The packet starts with the identity of the sender, followed by a sequence number
and age and a list of neighbors.
• The cost to each neighbor is also given
• Building the link state packets is easy.
• The hard part is determining when to build them.
• One possibility is to build them periodically, that is, at regular intervals.
• Another possibility is to build them when some significant event occurs, such as a
line or neighbor going down or coming back up again or changing its properties
appreciably.
3.Construct a packet telling all it has just learned.

(a) A network. (b) The link state packets for this network.
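The packet layout described above (sender identity, then a sequence number and age, then the neighbour list with costs) can be sketched as a small data class (illustrative field types; the real on-the-wire encoding is protocol-specific):

```python
from dataclasses import dataclass, field

@dataclass
class LinkStatePacket:
    sender: str        # identity of the originating router
    seq: int           # incremented for each new packet built
    age: int           # decremented once per second by every holder
    neighbours: dict = field(default_factory=dict)  # neighbour -> link cost

def build_lsp(router, seq, age, link_costs):
    """Build the link state packet this router will flood to all others."""
    return LinkStatePacket(router, seq, age, dict(link_costs))
```

For instance, a router A with links to B (cost 4) and E (cost 5) would flood `build_lsp("A", seq, age, {"B": 4, "E": 5})`.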
4.Send this packet to all other routers
The next step is distributing the link state packets
• The fundamental idea is to use flooding to distribute the link state
packets to all routers.
• Every incoming packet is sent out on every outgoing line except the one
it arrived on.
• Flooding obviously generates vast (infinite) numbers of duplicate
packets.
• To keep the flood in check, each packet contains a sequence number
that is incremented for each new packet sent.
• Routers keep track of all the (source router, sequence) pairs they see.
• When a new link state packet comes in, it is checked against the list of
packets already seen.
5.Computing the New Routes

• Once a router has accumulated a full set of link state packets, it can
construct the entire network graph because every link is represented.
• Every link is, in fact, represented twice, once for each direction.
• Now Dijkstra’s algorithm can be run locally to construct the shortest
paths to all possible destinations.
• The results of this algorithm tell the router which link to use to reach
each destination.
• This information is installed in the routing tables, and normal
operation is resumed.
Problems with this algorithm
• This algorithm has a few problems, but they are manageable.
• First, if the sequence numbers wrap around, confusion will reign. (Reusing the
sequence numbers from the beginning once all of them have been used up, in order to
keep data flowing, is called wraparound.)
• The solution here is to use a 32-bit sequence number. With one link state packet
per second, it would take 137 years to wrap around, so this possibility can be
ignored.
• Second, if a router ever crashes, it will lose track of its sequence number.
• If it starts again at 0, the next packet it sends will be rejected as a duplicate.
Problems with this algorithm
• Third, if a sequence number is ever corrupted and 65,540 is received instead of 4 (a 1-bit error), packets 5 through 65,540 will be rejected, since the current sequence number will be thought to be 65,540.
• The solution to all these problems is to include the age of each packet after the
sequence number and decrement it once per second.
• When the age hits zero, the information from that router is discarded.
Multicast routing
• Some applications require that widely-separated processes work
together in groups, for example, a group of processes implementing a
distributed database system.
• In these situations, it is frequently necessary for one process to send a
message to all the other members of the group.
• If the group is small, it can just send each other member a point-to-point message.
• If the group is large, this strategy is expensive.
Multicast routing
• Sometimes broadcasting can be used, but using broadcasting to
inform 1000 machines on a million-node network is inefficient
because most receivers are not interested in the message (or worse
yet, they are definitely interested but are not supposed to see it).
• Thus, we need a way to send messages to well-defined groups
Multicast routing
• Sending messages to well-defined groups is called multicasting, and
the routing algorithm used is called multicast routing.
• All multicasting schemes require some way to create and destroy
groups and to identify which routers are members of a group.
• Each group is identified by a multicast address and that routers know
the groups to which they belong.
Multicast routing
• Multicasting requires group management. Some way is needed to
create and destroy groups, and to allow processes to join and leave
groups.
• When a process joins a group, it informs its host of this fact.
• It is important that routers know which of their hosts belong to which
groups.
• Either hosts must inform their routers about changes in group
membership, or routers must query their hosts periodically.
• Either way, routers learn about which of their hosts are in which groups.
• Routers tell their neighbors, so the information propagates through the
subnet.
Multicast routing
• To do multicast routing, each router computes a spanning tree covering all other
routers.
• For example, in Fig.(a) we have two groups, 1 and 2.
• Some routers are attached to hosts that belong to one or both of these groups, as
indicated in the figure.
• A spanning tree for the leftmost router is shown in Fig. (b).
• When a process sends a multicast packet to a group, the first router examines its
spanning tree and prunes it, removing all lines that do not lead to hosts that are
members of the group.
• In our example, Fig.(c) shows the pruned spanning tree for group 1.
• Similarly, Fig. (d) shows the pruned spanning tree for group 2.
• Multicast packets are forwarded only along the appropriate spanning tree.
(a) A network. (b) A spanning tree for the leftmost router.
(c) A multicast tree for group 1. (d) A multicast tree for group 2.
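The pruning step described above can be sketched as a recursion over the spanning tree: a branch survives only if it contains at least one group member. The tree and the member set below are hypothetical, chosen only to illustrate the idea.

```python
# Sketch (not from the slides): pruning a spanning tree so that only
# branches leading to members of a multicast group remain.

def prune(tree, node, members):
    """tree: {node: [children]}. Keep a subtree only if it contains a member."""
    kept = [prune(tree, c, members) for c in tree.get(node, [])]
    kept = [c for c in kept if c is not None]
    if kept or node in members:
        return (node, kept)   # node stays: it is a member or leads to one
    return None               # dead branch: removed from the multicast tree

# Spanning tree rooted at A; E and F host group members.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": []}
print(prune(tree, "A", {"E", "F"}))
# → ('A', [('B', [('E', [])]), ('C', [('F', [])])])  -- D's branch is pruned
```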
Applications
• Multimedia
• Teleconferencing
• Database
• Distributed computations
• Real time workshop
• Files, graphics, and messages are exchanged among active group members in real time
CONGESTION CONTROL
CONGESTION CONTROL
• Too many packets present in (a part of) the network causes packet
delay and loss that degrades performance.
• This situation is called congestion.
Factors that Cause Congestion
1. Packet arrival rate exceeds the outgoing link capacity.
2. Insufficient memory to store arriving packets
3. Bursty traffic
4. Slow processor
If the routers' CPUs are slow at the bookkeeping required (queueing packets in buffers, updating tables, etc.), queues can build up, even though there is excess line capacity
Packet arrival rate exceeds the outgoing link capacity.
• If, suddenly, streams of packets begin arriving on three or four input lines and all need the same output line, a queue will build up.
• If there is insufficient memory to hold all the packets, packets will be lost.
• Increasing the memory to an unlimited size does not solve the problem.
• This is because, by the time packets reach the front of the queue, they have already timed out (while they waited in the queue). When the sender's timer goes off, it transmits duplicates, which are also added to the queue.
• Thus the same packets are added again and again, increasing the load all the way to the destination.
Congestion

When too much traffic is offered, congestion sets in and performance degrades sharply.
Congestion
• When the number of packets hosts send into the network is well within its carrying capacity, the number delivered is proportional to the number sent.
• If twice as many are sent, twice as many are delivered.
• However, as the offered load approaches the carrying capacity, bursts of traffic occasionally fill up the buffers inside routers and some packets are lost.
• These lost packets consume some of the capacity, so the number of delivered packets falls below the ideal curve. The network is now congested.
Congestion Control vs Flow Control
• Congestion control is a global issue – involves every router and host
within the subnet
• Flow control – scope is point-to-point; involves just the sender and receiver.
Types of Congestion Control
In general, we can divide congestion control mechanisms into two broad categories:
• Open-loop congestion control
• Closed-loop congestion control
Open-loop congestion control
In open-loop congestion control, policies are applied to prevent congestion before it happens.
In these mechanisms, congestion control is handled by either the source or the destination.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
open-loop congestion control
• Open loop solutions attempt to solve the problem by good
design, to make sure congestion does not occur in the first
place.
• Once the system is up and running, midcourse corrections
are not made
• Tools for doing open-loop control include
• deciding when to accept new traffic
• deciding when to discard packets
• which ones to discard
• making scheduling decisions at various points in the network
Closed-Loop Congestion Control
⮚ Closed-loop solutions are based on the concept of a feedback loop.
⮚ This approach has three parts when applied to congestion control:
1. Monitor the system and detect when and where congestion
occurs.
2. Pass information to where action can be taken.
3. Adjust system operation to correct the problem.
General Principles of Congestion Control
• open loop algorithms are divided into
• ones that act at the source
• ones that act at the destination
• closed loop algorithms are divided into
• explicit feedback
• implicit feedback
• In explicit feedback algorithms, packets are sent back from the point of
congestion to warn the source.
• In implicit algorithms, the source deduces the existence of congestion by making local observations, such as the time needed for acknowledgements to come back.
Congestion Prevention Policies

Open-loop policies that affect congestion (Fig. 5-26).
Congestion Prevention Policies
Policies considered in the Data link layer
• Retransmission policy:
• how fast a sender times out and what it transmits upon timeout
• go back n will put a heavier load on the system than selective
repeat
• Out-of-order caching policy:
• selective repeat is clearly better than go back n
• Acknowledgement policy:
• Piggybacking onto reverse traffic may help
• But extra timeouts and retransmissions may happen.
• Flow control Policy:
• a small window reduces the data rate and thus helps fight congestion
Congestion Prevention Policies
Policies considered in the Network layer
• Choice of virtual circuits vs datagrams:
• affects congestion since many congestion control algorithms
work only with virtual-circuit subnets
• Packet queuing and service policy:
• relates to whether routers have one queue per input line, one
queue per output line, or both
• also relates to the order in which packets are processed (e.g.,
round robin or priority based)
• Packet Discard policy :
• rule telling which packet is dropped when there is no space
• Routing algorithm:
• A good algorithm can help avoid congestion by spreading the traffic over all the lines
Congestion Prevention Policies
• Packet lifetime management:
• deals with how long a packet may live before being discarded
• If it is too long, lost packets may block up the works for a long
time
• If it is too short, packets may sometimes time out before
reaching their destination, thus inducing retransmissions
Policies considered in the Transport layer
• Same issues as in data link layer plus
• Timeout determination:
• determining the timeout interval is harder because the transit
time across the network is less predictable
• If the timeout interval is too short, extra packets will be sent
unnecessarily.
• If it is too long, congestion will be reduced but the response
time will suffer whenever a packet is lost.
Congestion Control in Virtual-Circuit Subnets and Datagram Subnets
• Congestion Control in Virtual-Circuit Subnets
⮚ Admission control
⮚ Allowing new virtual circuits to avoid congested routers
• Congestion Control in Datagram Subnets
⮚ Warning bit
⮚ Choke packets
⮚ Load shedding
⮚ Random early discard
Congestion Control in Virtual-Circuit Subnets
• One technique is admission control.
• The idea of admission control is: once congestion has been signaled, no more virtual circuits are set up until the problem is solved.
Congestion Control in Virtual-Circuit Subnets: second approach
• An alternative approach is to allow new virtual circuits but route them around congested routers.
• E.g.: Suppose that a host attached to router A wants to set up a connection to a host attached to router B. Fig. (a)
• Normally, this connection would pass through one of the congested routers.
• To avoid this situation, redraw the network as shown in Fig. (b), omitting the congested routers and all of their lines.
• The dashed line shows a possible route for the virtual circuit that avoids the congested routers.
Congestion Control in Virtual-Circuit Subnets: second approach

(a) A congested subnet. (b) A redrawn subnet that eliminates congestion, and a virtual circuit from A to B.
Congestion Control in Datagram Subnets
• Each router can easily monitor the utilization of its output lines and other resources.
• For example, it can associate with each line a real variable, u, whose value is between 0.0 and 1.0, which reflects the recent utilization of that line.
• To maintain a good estimate of u, a sample of the instantaneous line utilization, f (either 0 or 1), can be made periodically and u updated according to

u_new = a · u_old + (1 − a) · f

• where the constant a determines how fast the router forgets recent history.
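The utilization estimate u_new = a · u_old + (1 − a) · f is an exponentially weighted moving average, and can be sketched as below. The value of a (0.9) and the warning threshold are assumptions for illustration only.

```python
# Sketch (not from the slides): exponentially weighted moving average of
# line utilization, u_new = a*u_old + (1 - a)*f, with samples f in {0, 1}.

def update_utilization(u, f, a=0.9):
    """a close to 1.0 means the router forgets recent history slowly."""
    return a * u + (1 - a) * f

u = 0.0
for f in [1, 1, 1, 0, 1]:       # periodic samples of instantaneous utilization
    u = update_utilization(u, f)
print(round(u, 3))              # → 0.32

WARNING_THRESHOLD = 0.25        # assumed value for illustration
print(u > WARNING_THRESHOLD)    # → True: the line would enter the warning state
```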
Congestion Control in Datagram Subnets
• When u moves above the threshold, the output line enters a warning state.
• Each newly arriving packet is checked to see if its output line is in the warning state.
• If it is, some action is taken. The action taken can be one of the following:
1. Warning bit
2. Choke packets
3. Load shedding
4. Random early discard
Warning Bit
• A special bit in the packet header is set by the router to warn the source when
congestion is detected.
• The bit is copied and piggy-backed on the ACK and sent to the sender.
• The sender monitors the number of ACK packets it receives with the warning bit
set and adjusts its transmission rate accordingly.
• As long as the warning bits continued to flow in, the source continued to decrease its transmission rate.
• When they slowed to a trickle, it increased its transmission rate.
Choke Packets
• A choke packet is a control packet generated at a congested router and
transmitted to source to inform that there is congestion
• A more direct way of telling the source to slow down.
• The source, on receiving the choke packet, must reduce its transmission rate by a certain percentage.
• The original packet is tagged (a header bit is turned on) so that it will not generate any more choke packets further along the path, and is then forwarded in the usual way.
• An example of a choke packet is the ICMP Source Quench Packet.
Working of choke packet
• When the source host gets the choke packet, it is required to reduce
the traffic sent to the specified destination by X percent.
• The host should ignore choke packets referring to that destination for
a fixed interval.
• After that period has expired, the host listens for more choke packets
for another interval.
– If one arrives, the line is still congested. The host reduces the flow
still more and begins ignoring choke packets again.
– If no choke packets arrive during the listening period, the host may increase the flow again.
Working of choke packet
• Hosts can reduce traffic by adjusting their policy parameters, for example, their window size.
• Typically, the first choke packet causes the data rate to be reduced to 0.50 of its previous rate.
• The next one causes a reduction to 0.25, and so on.
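The host's behavior above — halve the rate on a choke packet, then ignore further choke packets for a fixed interval — can be sketched as follows. The 2-unit ignore interval is an assumed value for illustration.

```python
# Sketch (not from the slides): a host halving its rate on each choke packet
# and ignoring further choke packets for a fixed interval afterwards.

def make_host(rate):
    return {"rate": rate, "ignore_until": 0.0}

def on_choke_packet(host, now, ignore_interval=2.0):
    """Reduce the rate by 50% unless we are still inside the ignore window."""
    if now < host["ignore_until"]:
        return                      # a recent reduction is already in effect
    host["rate"] *= 0.5
    host["ignore_until"] = now + ignore_interval

h = make_host(rate=8.0)
on_choke_packet(h, now=0.0)   # rate: 8.0 -> 4.0
on_choke_packet(h, now=1.0)   # inside ignore window: no change
on_choke_packet(h, now=3.0)   # rate: 4.0 -> 2.0
print(h["rate"])              # → 2.0
```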
A choke packet that affects only the source.
Hop-by-Hop Choke Packets
• Over long distances or at high speeds, choke packets are not very effective because the reaction is slow.
• A more efficient method is to send choke packets hop-by-hop.
• This requires each hop to reduce its transmission even before the choke packet arrives at the source.
A choke packet that affects each hop it passes through.
Load Shedding
• Load shedding is a fancy way of saying that when routers
are being flooded by packets that they cannot handle,
they just throw them away
• When buffers become full, routers simply discard packets
• Methods for dropping:
• can just pick packets at random to drop
• discard may depend on the applications running
⮚ For file transfer, an old packet is worth more than a
new one
⮚ For multimedia, a new packet is more important
than an old one.
• To implement an intelligent discard policy, applications must mark their packets in priority classes to indicate how important they are.
Load Shedding
• senders might be allowed to send high-priority packets
under conditions of light load
• but as the load increased they would be discarded, thus
encouraging the users to stop sending them
• Another option is to allow hosts to exceed the limits
specified in the agreement negotiated when the virtual
circuit was set up (e.g., use a higher bandwidth than
allowed)
• but subject to the condition that all excess traffic be marked as low priority.
Random Early Discard (RED)
• This is a proactive approach in which the router discards
one or more packets before the buffer becomes completely
full.
• To determine when to start discarding, routers maintain a
running average of their queue lengths
• Each time a packet arrives, the RED algorithm computes
the average queue length, avg.
• If avg is lower than some lower threshold, congestion is assumed to be minimal or non-existent and the packet is queued.
RED, cont.
• If avg is greater than some upper threshold, congestion is
assumed to be serious and the packet is discarded.
• If avg is between the two thresholds, this might indicate the onset of congestion.
• The packet is then discarded with a probability computed from avg: the closer avg is to the upper threshold, the higher the probability.
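The three-way decision above can be sketched as follows. The threshold values, the averaging weight, and the linear drop-probability curve between the thresholds are assumptions for illustration; real RED implementations tune these parameters.

```python
# Sketch (not from the slides): Random Early Discard drop decision based on
# a running average of the queue length.
import random

MIN_TH, MAX_TH, W = 5.0, 15.0, 0.2   # assumed thresholds and averaging weight

def red_decision(avg, rnd=random.random):
    """Return True if the arriving packet should be dropped."""
    if avg < MIN_TH:
        return False                         # little or no congestion: enqueue
    if avg > MAX_TH:
        return True                          # serious congestion: drop
    p = (avg - MIN_TH) / (MAX_TH - MIN_TH)   # onset: drop with probability p
    return rnd() < p

def update_avg(avg, queue_len, w=W):
    """Running average of the queue length, updated on each packet arrival."""
    return (1 - w) * avg + w * queue_len

print(red_decision(2.0))                     # → False (below lower threshold)
print(red_decision(20.0))                    # → True  (above upper threshold)
print(red_decision(10.0, rnd=lambda: 0.3))   # → True  (p = 0.5, 0.3 < 0.5)
```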
Quality of Service
• QoS is an overall performance measure of the computer network.
• Quality of Service (QoS) refers to the capability of a network to provide better service to selected network traffic.
• A stream of packets from a source to a destination is called a flow. Quality of Service is defined as something a flow seeks to attain.
• The needs of each flow can be characterized by four primary parameters:
1. Bandwidth
2. Delay
3. Jitter
4. Reliability
Parameters of flow
• Reliability: a lack of reliability means losing a packet or acknowledgement, which entails retransmission.
• Delay: an increase in delay means the destination will receive the packet later than expected.
• Jitter: variation of the delay. If the delay is not at a constant rate, it may result in poor quality.
• Bandwidth: an increase in bandwidth means an increase in the amount of data which can be transferred in a given amount of time.
Jitter Control
• The variation (i.e., standard deviation) in the packet arrival times is called jitter.
• High jitter, for example, having some packets taking 20 msec and others taking 30 msec to arrive, will give an uneven quality to the sound or movie.
Jitter Control

(a) High jitter. (b) Low jitter.
Jitter Control
• The jitter can be controlled by computing the expected transit
time for each hop along the path.
• When a packet arrives at a router, the router checks to see how
much the packet is behind or ahead of its schedule.
• This information is stored in the packet and updated at each
hop.
• If the packet is ahead of schedule, it is held just long enough to
get it back on schedule.
• If it is behind schedule, the router tries to send it out quickly.
• In both cases, the amount of jitter is reduced.
• In applications such as video on demand, jitter can be eliminated
by buffering at the receiver and then fetching data for display
from the buffer instead of from the network in real time.
• In real-time applications like Internet telephony and videoconferencing, the delay inherent in buffering is not acceptable.
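The per-hop schedule check described above can be sketched as follows: a router holds packets that are ahead of schedule and forwards late packets immediately. The millisecond figures are assumed values for illustration.

```python
# Sketch (not from the slides): per-hop jitter control. Each router compares
# a packet's actual arrival time against its expected transit schedule and
# holds packets that are ahead of schedule. Times are in milliseconds.

def hold_time(expected_arrival, actual_arrival):
    """How long the router should hold the packet before forwarding it."""
    if actual_arrival < expected_arrival:
        return expected_arrival - actual_arrival   # ahead of schedule: delay it
    return 0.0                                     # behind schedule: send now

print(hold_time(expected_arrival=30.0, actual_arrival=24.0))  # → 6.0
print(hold_time(expected_arrival=30.0, actual_arrival=33.0))  # → 0.0
```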
Techniques of achieving QoS
1. Over Provisioning
2. Buffering
3. Traffic Shaping
4. Resource Reservation
5. Admission Control
6. Proportional Routing
7. Packet Scheduling
Techniques of achieving QoS
• Overprovisioning –
The logic of overprovisioning is to provide greater router capacity, buffer space, and bandwidth.
• It is an expensive technique, as the resources are costly. E.g., the telephone system.

• Buffering –
Flows can be buffered on the receiving side before being delivered.
• Buffering does not affect reliability or bandwidth, but helps to smooth out jitter.
• Buffered packets can then be delivered at uniform intervals.
Traffic Shaping
• It is one of the open-loop mechanisms to manage congestion.
• This mechanism converts an uneven flow of packets into an even flow.
• It "shapes" the traffic before it enters the network.
• Used in ATM and Integrated Services networks.
• Two traffic shaping algorithms are:
• Leaky Bucket
• Token Bucket
The Leaky Bucket Algorithm
• The Leaky Bucket Algorithm is used to control the rate of traffic entering a network.
• It is implemented as a single-server queue with constant service time.
• If the bucket (buffer) overflows, then packets are discarded.
The Leaky Bucket Algorithm

(a) A leaky bucket with water. (b) A leaky bucket with packets.
Leaky Bucket Algorithm, cont.
• Imagine a bucket with a small hole in the bottom, as illustrated in the figure.
• No matter the rate at which water enters the bucket, the outflow is at a constant rate, R, when there is any water in the bucket, and zero when the bucket is empty.
• Also, once the bucket is full to capacity B, any additional water entering it spills over the sides and is lost.
• The same concept can be applied to a network.
Leaky Bucket Algorithm, cont.
• Each host is connected to the network by an interface containing a leaky bucket, that is, a finite internal queue.
• If a packet arrives at the queue when it is full, the packet is discarded.
• The host injects one packet per unit time onto the network.
• This results in a uniform flow of packets, smoothing out bursts and reducing congestion.
Leaky Bucket Algorithm, cont.
• The leaky bucket enforces a constant output rate (the average rate) regardless of the burstiness of the input.
• It does nothing when the input is idle.
• When packets are all the same size (as in ATM cells), one packet per unit time is fine.
• For variable-length packets, though, it is better to allow a fixed number of bytes per unit time.
• E.g., 1024 bytes per unit time will allow one 1024-byte packet, two 512-byte packets, or four 256-byte packets.
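The byte-counting variant just described can be sketched as follows: each tick drains up to a fixed byte budget, and packets that would overflow the finite buffer are dropped. The buffer capacity of 4096 bytes is an assumed value.

```python
# Sketch (not from the slides): a byte-counting leaky bucket. Each tick, up
# to `rate` bytes may leave; packets that would overflow the buffer are dropped.
from collections import deque

def make_leaky_bucket(capacity):
    return {"queue": deque(), "used": 0, "capacity": capacity}

def arrive(bucket, size):
    """Queue the packet, or drop it if the bucket would overflow."""
    if bucket["used"] + size > bucket["capacity"]:
        return False                      # bucket full: packet discarded
    bucket["queue"].append(size)
    bucket["used"] += size
    return True

def tick(bucket, rate=1024):
    """Drain whole packets as long as the per-tick byte budget allows."""
    sent = []
    while bucket["queue"] and bucket["queue"][0] <= rate:
        size = bucket["queue"].popleft()
        rate -= size
        bucket["used"] -= size
        sent.append(size)
    return sent

lb = make_leaky_bucket(capacity=4096)
for size in [512, 512, 1024, 1024]:
    arrive(lb, size)
print(tick(lb))   # → [512, 512] (the 1024-byte budget fits two 512-byte packets)
print(tick(lb))   # → [1024]
```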
Token Bucket
• The leaky bucket algorithm enforces an output pattern at the average rate, no matter how bursty the traffic is.
• So, to deal with bursty traffic, we need a more flexible algorithm so that data is not lost.
• One such approach is the token bucket algorithm.
Token bucket algorithm steps
• Step 1 − At regular intervals, tokens are added to the bucket.
• Step 2 − The bucket has a maximum capacity; tokens arriving when the bucket is full are discarded.
• Step 3 − If a packet is ready, a token is removed from the bucket, and the packet is sent.
• Step 4 − If there is no token in the bucket, the packet cannot be sent.
• In Fig. (a) there is a bucket holding three tokens, with five packets
waiting to be transmitted.
• For a packet to be transmitted, it must capture and destroy one
token.
• In Fig. (b) three of the five packets have been transmitted, but the
other two are stuck waiting for two more tokens to be generated
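The capture-and-destroy behavior above can be sketched as follows. The bucket capacity and initial token count mirror the figure's three-token example; the rest is an illustrative assumption.

```python
# Sketch (not from the slides): a token bucket. Tokens accumulate at a fixed
# rate up to the bucket capacity; each packet consumes one token to be sent.

def make_token_bucket(capacity, tokens=0):
    return {"tokens": tokens, "capacity": capacity}

def add_token(bucket):
    """Called at regular intervals; extra tokens beyond capacity are lost."""
    bucket["tokens"] = min(bucket["tokens"] + 1, bucket["capacity"])

def try_send(bucket):
    """A packet captures and destroys one token, or waits if none is left."""
    if bucket["tokens"] > 0:
        bucket["tokens"] -= 1
        return True
    return False

tb = make_token_bucket(capacity=3, tokens=3)
results = [try_send(tb) for _ in range(5)]   # five packets, three tokens
print(results)        # → [True, True, True, False, False]
add_token(tb)         # one interval passes: one more token arrives
print(try_send(tb))   # → True
```

Unlike the leaky bucket, an idle host accumulates tokens, so a burst of up to `capacity` packets can later be sent back to back.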
Difference between leaky bucket and token bucket
• The leaky bucket sends packets at a constant (average) rate; the token bucket permits bursts, up to the number of tokens saved in the bucket.
• When the host is idle, the leaky bucket gains nothing; the token bucket accumulates tokens for later use.
• When the bucket fills up, the leaky bucket discards packets; the token bucket discards tokens, never packets.
RESOURCE RESERVATION
• A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on.
• The quality of service is improved if these resources are reserved beforehand.
• Three different kinds of resources can potentially be reserved:
1. Bandwidth
2. Buffer space
3. CPU cycles
Admission Control
• Admission control refers to the mechanism used by a router, or a switch, to accept or reject a flow based on predefined parameters called flow specifications.
• Before a router accepts a flow for processing, it checks the flow specifications to see if its capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows can handle the new flow.
Packet Scheduling
• Packets from different flows arrive at a switch or router for
processing.
• A good scheduling technique treats the different flows in a fair and
appropriate manner.
• Several scheduling techniques are designed to improve the quality of
service.
• Three of them are:
1. FIFO queuing
2. Priority queuing
3. Weighted fair queuing.
FIFO Queuing
• In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue)
until the node (router or switch) is ready to process them.
• If the average arrival rate is higher than the average processing rate,
the queue will fill up and new packets will be discarded
Priority Queuing
• In priority queuing, packets are first assigned to a priority class.
• Each priority class has its own queue.
• The packets in the highest-priority queue are processed first.
• Packets in the lowest- priority queue are processed last.
Priority Queuing
Weighted Fair Queuing
• In this technique, the packets are still assigned to different classes and
admitted to different queues.
• The queues, however, are weighted based on the priority of the
queues; higher priority means a higher weight.
• The system processes packets in each queue in a round-robin fashion
with the number of packets selected from each queue based on the
corresponding weight.
• For example, if the weights are 3, 2, and 1, three packets are
processed from the first queue, two from the second queue, and one
from the third queue.
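The weights-3, 2, and 1 example above can be sketched as a weighted round-robin over three queues. The packet labels are hypothetical; a real WFQ implementation serves by finish time rather than whole packets per round, so this is a simplification.

```python
# Sketch (not from the slides): weighted round-robin service of three queues
# with weights 3, 2, and 1 -- per round, that many packets leave each queue.
from collections import deque

def weighted_round_robin(queues, weights):
    """Serve packets from each queue according to its weight, round-robin."""
    served = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    served.append(q.popleft())
    return served

q1 = deque(["A1", "A2", "A3", "A4"])   # weight 3 (highest priority)
q2 = deque(["B1", "B2"])               # weight 2
q3 = deque(["C1", "C2"])               # weight 1
print(weighted_round_robin([q1, q2, q3], [3, 2, 1]))
# → ['A1', 'A2', 'A3', 'B1', 'B2', 'C1', 'A4', 'C2']
```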
Weighted Fair Queuing
CONGESTION CONTROL
•When too many packets are present in (a part of) the subnet,
performance degrades. This situation is called congestion
•Congestion control refers to techniques and mechanisms that can
either prevent congestion, before it happens, or remove
congestion, after it has happened.
•In general, we can divide congestion control mechanisms into two
broad categories:
Open-loop congestion control (prevention)
Closed-loop congestion control (removal)
Open-Loop Congestion Control
• In open-loop congestion control, policies are applied to prevent congestion before it happens.
• In these mechanisms, congestion control is handled by either the source or the destination.
1.Retransmission Policy
•Retransmission is sometimes unavoidable. If the sender feels that a
sent packet is lost or corrupted, the packet needs to be
retransmitted.
• Retransmission in general may increase congestion in the network.
However, a good retransmission policy can prevent congestion.
•The retransmission policy and the retransmission timers must be
designed to optimize efficiency and at the same time prevent
congestion.
• For example, the retransmission policy used by TCP is designed to
prevent or alleviate congestion.
2.Window Policy
•The type of window at the sender may also affect congestion.
•The Selective Repeat window is better than the Go-Back-N window for
congestion control.
•In the Go-Back-N window, when the timer for a packet times out,
several packets may be resent, although some may have arrived safe
and sound at the receiver. This duplication may make the congestion
worse.
•The Selective Repeat window, on the other hand, tries to send the
specific packets that have been lost or corrupted.
3.Acknowledgment Policy
•The acknowledgment policy imposed by the receiver may also affect
congestion.
•If the receiver does not acknowledge every packet it receives, it may
slow down the sender and help prevent congestion.
•Several approaches are used in this case. A receiver may send an
acknowledgment only if it has a packet to be sent or a special timer
expires.
•A receiver may decide to acknowledge only N packets at a time. We need
to know that the acknowledgments are also part of the load in a
network.
•Sending fewer acknowledgments means imposing less load on the
network.
4.Discarding Policy
•A good discarding policy by the routers may prevent congestion
and at the same time may not harm the integrity of the
transmission.
•For example in audio transmission, if the policy is to discard less
sensitive packets when congestion is likely to happen, the quality of
sound is still preserved and congestion is prevented or
alleviated.
5.Admission Policy
•An admission policy, which is a quality-of-service mechanism, can
also prevent congestion in virtual-circuit networks.
•Switches in a flow first check the resource requirement of a flow
before admitting it to the network.
•A router can deny establishing a virtual circuit connection if there is
congestion in the network or if there is a possibility of future
congestion.
Closed-Loop Congestion Control
•Closed-loop congestion control mechanisms try to alleviate
congestion after it happens. Several mechanisms have been used by
different protocols.
1. Backpressure
•Node 3 in the figure has more input data than it can handle. It drops
some packets in its input buffer and informs node 2 to slow down.
•Node 2, in turn, may be congested because it is slowing down the
output flow of data. If node 2 is congested, it informs node 1 to slow
down, which in turn may create congestion.
•If so, node 1 informs the source of data to slow down. This, in time,
alleviates the congestion.
•The pressure on node 3 is moved backward to the source to
remove the congestion.
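The node-by-node propagation just described can be sketched as follows: starting at the congested node, each hop warns its upstream neighbor until the warning reaches the source. The path names are hypothetical, matching the figure's node 1 → node 2 → node 3 layout.

```python
# Sketch (not from the slides): backpressure propagating hop by hop from a
# congested node back toward the source along the path source -> 1 -> 2 -> 3.

def propagate_backpressure(path, congested):
    """Return the nodes told to slow down, from the congested node backwards."""
    i = path.index(congested)
    slowed = []
    for node in reversed(path[:i]):   # upstream neighbors, nearest first
        slowed.append(node)           # each congested hop warns the previous one
    return slowed

path = ["source", "node1", "node2", "node3", "dest"]
print(propagate_backpressure(path, "node3"))
# → ['node2', 'node1', 'source']
```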
2.Choke Packet
•A choke packet is a packet sent by a node to the source to inform it of
congestion.
•In backpressure, the warning is from one node to its upstream
node, although the warning may eventually reach the source
station.
•In the choke packet method, the warning is from the router, which
has encountered congestion, to the source station directly.
•The intermediate nodes through which the packet has traveled are
not warned.
3.Implicit Signaling
•In implicit signaling, there is no communication between the
congested node or nodes and the source. The source guesses that
there is a congestion somewhere in the network from other
symptoms.
•For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is
congested.
•The delay in receiving an acknowledgment is interpreted as
congestion in the network; the source should slow down.
4.Explicit Signaling
•The node that experiences congestion can explicitly send a signal to
the source or destination.
•In the choke packet method, a separate packet is used for this
purpose; in the explicit signaling method, the signal is included in the
packets that carry data.
• Explicit signaling can occur in either the forward or the backward
direction.
•Backward Signaling - A bit can be set in a packet moving in the
direction opposite to the congestion. This bit can warn the source
that there is congestion and that it needs to slow down to avoid the
discarding of packets.
•Forward Signaling - A bit can be set in a packet moving in the
direction of the congestion. This bit can warn the destination that
there is congestion. The receiver in this case can use policies, such as
slowing down the acknowledgments, to alleviate the congestion.
Congestion Control in Virtual-Circuit
Subnets and datagram subnet
• Congestion Control in Virtual-Circuit Subnets

⮚ Admission control
⮚ Allow new virtual circuit avoiding congested routers

• Congestion Control in Datagram Subnets


⮚ Warning bit
⮚ Choke packets
⮚ Load shedding
⮚ Random early discard
199
Congestion Control in Virtual-Circuit
Subnets
• Techniques is
• Admission control
• Idea of admission control is :

• once congestion has been signaled ,no more virtual circuit are set up
until the problem is solved

200
Congestion Control in Virtual-Circuit Subnets: second approach

• Alternative approach to allow new virtual circuit avoiding congested,


routers
• Eg: Suppose that a host attached to router A wants to set up a connection
to a host attached to router B. Fig. (a)
• Normally this connection would pass through one of the congested
routers.
• To avoid this situation, redraw the network as shown in Fig. (b), omitting
the congested routers and all of their lines.
• The dashed line shows a possible route for the virtual circuit that avoids
the congested routers.
Congestion Control in Virtual-Circuit
Subnets: second approach

(a) A congested subnet. (b) A redrawn subnet, eliminates congestion and a virtual
circuit from A to B.
Congestion Control in Datagram
Subnets:
• When u moves above the threshold the output line enters a warning
state
• Each newly-arriving packet is checked to see if its output line is in
warning state.
• If it is, some action is taken.
• The action taken can be one of the following
1. Warning bit
2. Choke packets
3. Load shedding
4. Random early discard

203
1.Warning Bit
• A special bit in the packet header is set by the router to warn the source
when congestion is detected.
• The bit is copied and piggy-backed on the ACK and sent to the sender.
• The sender monitors the number of ACK packets it receives with the
warning bit set and adjusts its transmission rate accordingly.
• As long as the warning bits continued to flow in, the source continued to
decrease its transmission rate.
• When they slowed to a trickle, it increases its transmission rate

204
Working of choke packet
• When the source host gets the choke packet, it is required to reduce
the traffic sent to the specified destination by X percent.
• The host should ignore choke packets referring to that destination for a
fixed interval.
• After that period has expired, the host listens for more choke packets for
another interval.
– If one arrives, the line is still congested. The host reduces the flow still
more and begins ignoring choke packets again.
– If no choke packets arrive during the listening period, the host may
increase the flow again.

205
Hop-by-Hop Choke Packets
• Over long distances or at high speeds, choke packets are not very
effective because the reaction is slow.
• A more efficient method is to send choke packets hop-by-hop.
• This requires each intermediate router to reduce its transmission even
before the choke packet arrives at the source.
3.Load Shedding
• Load shedding is a fancy way of saying that when routers are being
flooded by packets they cannot handle, they just throw them away.
• When buffers become full, routers simply discard packets.
• Methods for choosing which packets to drop:
• The router can simply pick packets at random to drop.
• The discard policy may also depend on the applications running:
⮚ For file transfer, an old packet is worth more than a new one
⮚ For multimedia, a new packet is more important than an old one.
• To implement an intelligent discard policy, applications must mark their
packets in priority classes to indicate how important they are.
Load Shedding
• Senders might be allowed to send high-priority packets under
conditions of light load,
• but as the load increases these packets would be discarded, thus
encouraging the users to stop sending them.
• Another option is to allow hosts to exceed the limits specified in
the agreement negotiated when the virtual circuit was set up
(e.g., use a higher bandwidth than allowed)
• but subject to the condition that all excess traffic be marked as low
priority.
Jitter Control
• The variation (i.e., standard deviation) in the packet arrival times
is called jitter.
• High jitter, for example, having some packets take 20 msec and
others 30 msec to arrive, will give an uneven quality to the
sound or movie.
Jitter Control
(a) High jitter. (b) Low jitter.
Jitter Control
• The jitter can be controlled by computing the expected transit time for each
hop along the path.
• When a packet arrives at a router, the router checks to see how much the
packet is behind or ahead of its schedule.
• This information is stored in the packet and updated at each hop.
• If the packet is ahead of schedule, it is held just long enough to get it back on
schedule.
• If it is behind schedule, the router tries to send it out quickly.
• In both cases, the amount of jitter is reduced.
• In applications such as video on demand, jitter can be eliminated by buffering
at the receiver and then fetching data for display from the buffer instead of
from the network in real time.
• In real-time applications like Internet telephony and videoconferencing,
the delay inherent in buffering is not acceptable.
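The hold-or-forward rule a router applies can be sketched in a few lines. This is a simplification for illustration, not a router implementation; times are floats in arbitrary units.

```python
def departure_time(arrival_time, expected_time):
    """Jitter control at one hop: a packet ahead of its schedule is
    held until the expected time; one behind schedule is sent out
    immediately."""
    if arrival_time < expected_time:
        return expected_time   # ahead of schedule: hold the packet
    return arrival_time        # behind schedule: forward right away
```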
4. Random Early Discard (RED)
• This is a proactive approach in which the router discards one or
more packets before the buffer becomes completely full.
• To determine when to start discarding, routers maintain a running
average of their queue lengths, avg, which the RED algorithm
updates each time a packet arrives.
• If avg is lower than some lower threshold, congestion is
assumed to be minimal or non-existent and the packet is
queued.
RED, cont.
• If avg is greater than some upper threshold, congestion is
assumed to be serious and the packet is discarded.
• If avg is between the two thresholds, this might indicate the
onset of congestion.
• The probability of congestion is then calculated.
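The two-threshold decision can be sketched as follows. The linear ramp between the thresholds is the classic RED choice, but the exact probability formula and the parameter names here are assumptions for illustration.

```python
import random

def red_decision(avg, min_th, max_th, max_p, rng=random.random):
    """RED drop decision for one arriving packet, given the running
    average queue length avg and the two thresholds."""
    if avg < min_th:
        return "queue"                 # little or no congestion
    if avg > max_th:
        return "drop"                  # serious congestion
    # Between the thresholds: onset of congestion. Drop with a
    # probability growing linearly from 0 at min_th to max_p at max_th.
    p = max_p * (avg - min_th) / (max_th - min_th)
    return "drop" if rng() < p else "queue"
```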
QUALITY OF SERVICE
• A stream of packets from a source to a destination is called a flow. In
a connection-oriented network, all the packets belonging to a flow
follow the same route; in a connectionless network, they may follow
different routes. The needs of each flow can be characterized
by four primary parameters:
1. Reliability
2. Delay
3. Jitter
4. Bandwidth
• Together these determine the QoS (Quality of Service) the flow
requires
Reliability - Reliability is a characteristic that a flow needs. Lack of
reliability means losing a packet or acknowledgment, which entails
retransmission.
Delay - Source-to-destination delay is another flow characteristic.
Jitter -Jitter is the variation in delay for packets belonging to the
same flow. For example, if four packets depart at times 0, 1, 2, 3
and arrive at 20, 21, 22, 23, all have the same delay, 20 units of
time. On the other hand, if the above four packets arrive at 21, 23,
21, and 28, they will have different delays: 21, 22, 19, and 25.
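Each delay in this example is simply arrival time minus departure time, which can be checked directly:

```python
departures = [0, 1, 2, 3]
arrivals_same = [20, 21, 22, 23]
arrivals_vary = [21, 23, 21, 28]

delays_same = [a - d for a, d in zip(arrivals_same, departures)]
delays_vary = [a - d for a, d in zip(arrivals_vary, departures)]
# delays_same is [20, 20, 20, 20]: identical delays, no jitter.
# delays_vary is [21, 22, 19, 25]: varying delays, i.e., jitter.
```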
Bandwidth - Different applications need different bandwidths. In
video conferencing we need to send millions of bits per second to
refresh a color screen.
TECHNIQUES FOR ACHIEVING GOOD QUALITY OF SERVICE
1. Scheduling
•Packets from different flows arrive at a switch or router for processing. A
good scheduling technique treats the different flows in a fair and appropriate
manner. Following are some scheduling technique
a)FIFO Queuing
•In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them.
•If the average arrival rate is higher than the average processing rate, the queue
will fill up and new packets will be discarded.
b)Priority Queuing
•In priority queuing, packets are first assigned to a priority class.
Each priority class has its own queue.
•The packets in the highest-priority queue are processed first.
•Packets in the lowest-priority queue are processed last.
•A priority queue can provide better QoS than the FIFO queue
because higher priority traffic, such as multimedia, can reach the
destination with less delay.
• If there is a continuous flow in a high-priority queue, the packets in the
lower-priority queues will never have a chance to be processed. This is
a condition called starvation.
c)Weighted Fair Queuing
•In this technique, the packets are still assigned to different classes
and admitted to different queues. The queues are weighted based on
the priority of the queues, higher priority means a higher weight.
•The system processes packets in each queue in a round-robin
fashion with the number of packets selected from each queue
based on the corresponding weight.
• For example, if the weights are 3, 2, and 1, three packets are
processed from the first queue, two from the second queue, and one
from the third queue. If the system does not impose priority on the
classes, all weights can be equal
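For fixed-size packets, one round of this weighted round-robin service can be sketched as below. This is a simplification of weighted fair queuing with illustrative names; real WFQ also accounts for variable packet lengths.

```python
from collections import deque

def weighted_round(queues, weights):
    """Serve up to `weight` packets from each queue in turn, as in the
    3-2-1 example above. Assumes fixed-size packets."""
    sent = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:                      # a queue may run empty mid-round
                sent.append(q.popleft())
    return sent

q1, q2, q3 = deque(["a1", "a2", "a3"]), deque(["b1", "b2"]), deque(["c1"])
order = weighted_round([q1, q2, q3], [3, 2, 1])
# order is ["a1", "a2", "a3", "b1", "b2", "c1"]
```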
2. Traffic Shaping
•Traffic shaping is a mechanism to control the amount and the rate of the traffic
sent to the network. Two techniques can shape traffic: leaky bucket and token
bucket.
a)Leaky Bucket
•If a bucket has a small hole at the bottom, the water leaks from the bucket at a
constant rate as long as there is water in the bucket.
•The rate at which the water leaks does not depend on the rate at which the
water is input to the bucket unless the bucket is empty.
•The input rate can vary, but the output rate remains constant.
• In the figure, the host sends a burst of data at a rate of 12 Mbps for 2 s, for
a total of 24 Mbits of data. The host is silent for 5 s and then sends data at
a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data.
• In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths
the traffic by sending out data at a rate of 3 Mbps during the same 10 s.
•A FIFO queue holds the packets. If the traffic consists of fixed-size packets, the
process removes a fixed number of packets from the queue at each tick of the
clock.
•If the traffic consists of variable-length packets, the fixed output rate must be
based on the number of bytes or bits.
•A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging
the data rate. It may drop the packets if the bucket is full.
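The 30-Mbit example above can be reproduced with a per-second simulation. The bucket capacity of 30 Mbits is an assumption chosen so that nothing overflows in this particular example.

```python
def leaky_bucket(arrivals, out_rate, capacity):
    """Tick-by-tick leaky bucket: arrivals[i] is the data (Mbits)
    arriving in second i; the bucket drains at a constant out_rate
    (Mbps) and overflow beyond `capacity` is dropped."""
    level, sent, dropped = 0.0, [], 0.0
    for a in arrivals:
        level += a
        if level > capacity:           # bucket full: drop the excess
            dropped += level - capacity
            level = capacity
        out = min(level, out_rate)     # constant-rate drain
        sent.append(out)
        level -= out
    return sent, dropped

# 12 Mbps for 2 s, silent for 5 s, then 2 Mbps for 3 s: 30 Mbits in 10 s.
sent, dropped = leaky_bucket([12, 12, 0, 0, 0, 0, 0, 2, 2, 2],
                             out_rate=3, capacity=30)
# sent is a steady 3 Mbps for all 10 s; nothing is dropped.
```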
b)Token Bucket
•The leaky bucket is very restrictive. It does not credit an idle host. For
example, if a host is not sending for a while, its bucket becomes empty.
•Now if the host has bursty data, the leaky bucket allows only an average
rate. The time when the host was idle is not taken into account.
•On the other hand, the token bucket algorithm allows idle hosts to
accumulate credit for the future in the form of tokens. For each tick of the
clock, the system sends n tokens to the bucket.
•The system removes one token for every cell (or byte) of data sent.
•The token bucket can easily be implemented with a counter.
• The counter is initialized to zero.
•Each time a token is added, the counter is incremented by 1.
•Each time a unit of data is sent, the counter is decremented by 1.
When the counter is zero, the host cannot send data.
•The token bucket allows bursty traffic at a regulated maximum rate.
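The counter description above maps directly onto a small class. The capacity cap (so that credit cannot grow without bound) is a common addition and an assumption here; the names are illustrative.

```python
class TokenBucket:
    """Counter-based token bucket: n tokens are added per clock tick,
    one token is removed per unit of data sent, and nothing can be
    sent when the counter is zero."""

    def __init__(self, tokens_per_tick, capacity):
        self.n = tokens_per_tick
        self.capacity = capacity   # cap on accumulated credit (assumption)
        self.counter = 0           # counter initialized to zero

    def tick(self):
        self.counter = min(self.counter + self.n, self.capacity)

    def send(self, units):
        """Try to send `units` of data; return how much was allowed."""
        allowed = min(units, self.counter)
        self.counter -= allowed
        return allowed

tb = TokenBucket(tokens_per_tick=2, capacity=10)
for _ in range(3):
    tb.tick()                  # idle host accumulates 6 tokens of credit
burst = tb.send(4)             # bursty data allowed up to the credit
```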
Combining Token Bucket and Leaky Bucket
•The two techniques can be combined to credit an idle host and at the
same time regulate the traffic.
• The leaky bucket is applied after the token bucket; the rate of the
leaky bucket needs to be higher than the rate at which tokens are
dropped into the bucket.
3. Resource Reservation
•A flow of data needs resources such as a buffer, bandwidth, CPU
time, and so on. The quality of service is improved if these
resources are reserved beforehand.
• One QoS model, called Integrated Services, depends heavily on
resource reservation to improve the quality of service.
4. Admission Control
•Admission control refers to the mechanism used by a router, or a
switch, to accept or reject a flow based on predefined parameters
called flow specifications.
•Before a router accepts a flow for processing, it checks the flow
specifications to see if its capacity (in terms of bandwidth, buffer size,
CPU speed, etc.) and its previous commitments to other flows can
handle the new flow.
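A minimal admission check against bandwidth alone might look like this. Real flow specifications also cover buffer size, CPU time, and so on; the field names here are assumptions for illustration.

```python
def admit(new_flow, capacity, committed):
    """Accept the new flow only if its bandwidth demand fits within
    the capacity left after previous commitments to other flows."""
    remaining = capacity - sum(f["bandwidth"] for f in committed)
    return new_flow["bandwidth"] <= remaining

# Router with 20 Mbps capacity, 10 Mbps already committed:
ok = admit({"bandwidth": 5}, capacity=20, committed=[{"bandwidth": 10}])
too_big = admit({"bandwidth": 15}, capacity=20, committed=[{"bandwidth": 10}])
```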