
UNIT - III

3.1: NETWORK LAYER:


3.1.1: DESIGN ISSUES

1. Store-and-forward packet switching


2. Services provided to transport layer
3. Implementation of connectionless service
4. Implementation of connection-oriented service
5. Comparison of virtual-circuit and datagram networks

1. Store-and-forward packet switching

A host with a packet to send transmits it to the nearest router, either
on its own LAN or over a point-to-point link to the ISP. The packet is
stored there until it has fully arrived and its checksum has been
verified. Then it is forwarded to the next router along the path, and so
on, until it reaches the destination host, where it is delivered. This
mechanism is store-and-forward packet switching.
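As a sketch, the store-and-forward behaviour can be modelled in a few lines of Python. The router names and the toy checksum below are illustrative only, not from the text:

```python
# Minimal sketch of store-and-forward packet switching.
# Each router buffers the whole packet, verifies its checksum, then forwards it.

def checksum(payload: bytes) -> int:
    """Toy checksum: sum of payload bytes modulo 256."""
    return sum(payload) % 256

def store_and_forward(packet: dict, path: list) -> str:
    """Pass `packet` router by router along `path`; drop it on a bad checksum."""
    for router in path:
        # 1. Store: the packet must arrive completely before processing.
        stored = dict(packet)
        # 2. Verify: recompute the checksum over the payload.
        if checksum(stored["payload"]) != stored["checksum"]:
            return f"dropped at {router}"
        # 3. Forward: hand the packet to the next router on the path.
    return "delivered"

pkt = {"payload": b"hello", "checksum": checksum(b"hello")}
print(store_and_forward(pkt, ["R1", "R2", "R3"]))  # → delivered
```

A corrupted packet would be dropped at the first router that recomputes the checksum, which is exactly why the packet must be stored in full before being forwarded.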

2 Services provided to transport layer


The network layer provides services to the transport layer at the
network layer/transport layer interface. The services need to be
carefully designed with the following goals in mind:
1. Services independent of router technology.
2. Transport layer shielded from number, type, topology of routers.
3. Network addresses made available to the transport layer should use
a uniform numbering plan, even across LANs and WANs.

3 Implementation of connectionless service


If connectionless service is offered, packets are injected into the
network individually and routed independently of each other. No
advance setup is needed. In this context, the packets are frequently
called datagrams (in analogy with telegrams) and the network is
called a datagram network.

Let us assume for this example that the message is four times longer
than the maximum packet size, so the network layer has to break it
into four packets, 1, 2, 3, and 4, and send each of them in turn to
router A.
Every router has an internal table telling it where to send packets for
each of the possible destinations. Each table entry is a pair(destination
and the outgoing line). Only directly connected lines can be used.
A’s initial routing table is shown in the figure under the label
‘‘initially.’’ At A, packets 1, 2, and 3 are stored briefly, having arrived
on the incoming link. Then each packet is forwarded according to A’s
table, onto the outgoing link to C within a new frame. Packet 1 is then
forwarded to E and then to F.
However, something different happens to packet 4. When it gets to A
it is sent to router B, even though it is also destined for F. For some
reason (traffic jam along ACE path), A decided to send packet 4 via a
different route than that of the first three packets. Router A updated its
routing table, as shown under the label ‘‘later.’’
The algorithm that manages the tables and makes the routing
decisions is called the routing algorithm.

4 Implementation of connection-oriented service

If connection-oriented service is used, a path from the source router
all the way to the destination router must be established before any
data packets can be sent. This connection is called a VC (virtual
circuit), and the network is called a virtual-circuit network.
When a connection is established, a route from the source machine to
the destination machine is chosen as part of the connection setup and
stored in tables inside the routers. That route is used for all traffic
flowing over the connection, exactly the same way that the telephone
system works. When the connection is released, the virtual circuit is
also terminated. With connection-oriented service, each packet carries
an identifier telling which virtual circuit it belongs to.
As an example, consider the situation shown in Figure. Here, host H1
has established connection 1 with host H2. This connection is
remembered as the first entry in each of the routing tables. The first
line of A’s table says that if a packet bearing connection identifier 1
comes in from H1, it is to be sent to router C and given connection
identifier 1. Similarly, the first entry at C routes the packet to E, also
with connection identifier 1.
Now let us consider what happens if H3 also wants to establish a
connection to H2. It chooses connection identifier 1 (because it is
initiating the connection and this is its only connection) and tells the
network to establish the virtual circuit.
This leads to the second row in the tables. Note that we have a
conflict here because although A can easily distinguish connection 1
packets from H1 from connection 1 packets from H3, C cannot do
this. For this reason, A assigns a different connection identifier to the
outgoing traffic for the second connection. Avoiding conflicts of this
kind is why routers need the ability to replace connection identifiers
in outgoing packets.
In some contexts, this process is called label switching. An example
of a connection-oriented network service is MPLS (Multi Protocol
Label Switching).

5 Comparison of virtual-circuit and datagram networks

3.2: ROUTING ALGORITHMS:


The main function of NL (Network Layer) is routing packets from the
source machine to the destination machine.
There are two processes inside the router:
a) One of them handles each packet as it arrives, looking up the
outgoing line to use for it in the routing table. This process is
forwarding.
b) The other process is responsible for filling in and updating the
routing tables. That is where the routing algorithm comes into play.
This process is routing.
Regardless of whether routes are chosen independently for each
packet or only when new connections are established, certain
properties are desirable in a routing algorithm: correctness,
simplicity, robustness, stability, fairness, and optimality.

Routing algorithms can be grouped into two major classes:


1) nonadaptive (Static Routing)
2) adaptive. (Dynamic Routing)

Nonadaptive algorithms do not base their routing decisions on
measurements or estimates of the current traffic and topology. Instead,
the choice of the route to use to get from I to J is computed in
advance, offline, and downloaded to the routers when the network is
booted. This procedure is sometimes called static routing.

Adaptive algorithms, in contrast, change their routing decisions to
reflect changes in the topology, and usually the traffic as well.
Adaptive algorithms differ in:
1) where they get their information
(e.g., locally, from adjacent routers, or from all routers),
2) when they change the routes (e.g., every ΔT sec, when the load
changes, or when the topology changes), and
3) what metric is used for optimization (e.g., distance, number of
hops, or estimated transit time).
This procedure is called dynamic routing.

Different Routing Algorithms


• Optimality principle
• Shortest path algorithm
• Flooding
• Distance vector routing
• Link state routing
• Hierarchical Routing

The Optimality Principle


One can make a general statement about optimal routes without
regard to network topology or traffic. This statement is known as the
optimality principle.
It states that if router J is on the optimal path from router I to router K,
then the optimal path from J to K also falls along the same route.
As a direct consequence of the optimality principle, we can see that
the set of optimal routes from all sources to a given destination form a
tree rooted at the destination. Such a tree is called a sink tree. The
goal of all routing algorithms is to discover and use the sink trees for
all routers.

3.2.1: SHORTEST PATH ROUTING


What is Shortest Path Routing?

It refers to the algorithms that help to find the shortest path between a
sender and receiver for routing the data packets through the network
in terms of shortest distance, minimum cost, and minimum time.
● It builds a graph of the subnet, with routers as the nodes and the
communication lines connecting them as the edges.
● Hop count is one of the parameters used to measure the
distance.
● Hop count: the number that indicates how many routers are
traversed. If the hop count is 6, the path crosses 6 routers/nodes and
the edges connecting them.
● Another metric is geographic distance, such as kilometers.
● The label on each arc can be computed as a function of bandwidth,
average traffic, distance, communication cost, measured delay,
mean queue length, etc.

Dijkstra’s Algorithm
Dijkstra’s algorithm is a greedy algorithm that is used to find the
minimum distance between a node and all other nodes in a given
graph. Here we can consider a node as a router and the graph as a
network. It uses the weight of an edge, i.e., the distance between the
nodes, to find a minimum-distance route.

Algorithm:
1: Mark the source node’s current distance as 0 and all others as
infinity.
2: Set the node with the smallest current distance among the
non-visited nodes as the current node.
3: For each neighbor, N, of the current node:
● Calculate the potential new distance by adding the current distance
of the current node with the weight of the edge connecting the
current node to N.
● If the potential new distance is smaller than the current distance of
node N, update N’s current distance with the new distance.
4: Mark the current node as visited.
5: If any unvisited node remains, go to step 2 to pick the unvisited
node with the smallest current distance, and continue this process.

Example:
Consider the graph G (see figure). We relax the nodes one by one,
starting from node 0.

Step 1: The nearest neighbours of 0 are nodes 2 and 1, so we relax
them first.

Steps 3 and 5: Similarly, we relax the remaining nodes, making sure
no cycle is formed and keeping track of the visited nodes, until every
node has been visited.

3.2.2: FLOODING

Flooding is a non-adaptive routing technique following this simple
method: when a data packet arrives at a router, it is sent out on all the
outgoing links except the one on which it arrived.
For example, let us consider the network in the figure, having six
routers that are connected through transmission lines.

Using the flooding technique −

● An incoming packet to A, will be sent to B, C and D.


● B will send the packet to C and E.
● C will send the packet to B, D and F.
● D will send the packet to C and F.
● E will send the packet to F.
● F will send the packet to C and E.
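A rough simulation of this behaviour, assuming the six-router topology implied by the hop-by-hop list above, shows how the number of packet copies grows hop by hop:

```python
def flood(graph, start, max_hops):
    """Simulate flooding for `max_hops` hops; return the total number of
    packet copies generated. `graph` maps node -> set of neighbours."""
    frontier = [(start, None)]            # (node, link the copy arrived on)
    copies = 0
    for _ in range(max_hops):
        nxt = []
        for node, came_from in frontier:
            for nb in graph[node]:
                if nb != came_from:       # every outgoing link except arrival
                    nxt.append((nb, node))
                    copies += 1
        frontier = nxt
    return copies

# Links assumed from the example's figure (A-B, A-C, A-D, B-C, B-E,
# C-D, C-F, D-F, E-F):
net = {"A": {"B", "C", "D"}, "B": {"A", "C", "E"}, "C": {"A", "B", "D", "F"},
       "D": {"A", "C", "F"}, "E": {"B", "F"}, "F": {"C", "D", "E"}}
print(flood(net, "A", 2))   # → 10
```

Three copies leave A on the first hop, and seven more are generated on the second, which is the duplicate explosion the limitations below describe.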

Types of Flooding

Flooding may be of three types −

● Uncontrolled flooding − Here, each router unconditionally


transmits the incoming data packets to all its neighbours.
● Controlled flooding − They use some methods to control the
transmission of packets to the neighbouring nodes. The two
popular algorithms for controlled flooding are Sequence
Number Controlled Flooding (SNCF) and Reverse Path
Forwarding (RPF).
● Selective flooding − Here, the routers transmit the incoming
packets only along those paths that are heading in approximately
the right direction, instead of along every available path.
Advantages of Flooding
● It is very simple to set up and implement, since a router needs to
know only its neighbours.
● It is extremely robust. Even if a large number of routers
malfunction, the packets can find a way to reach the destination.
● All nodes that are directly or indirectly connected are visited,
so no node is left out. This is a main criterion for broadcast
messages.
● Flooding always finds the shortest path, because every possible
path is tried in parallel and the first copy to arrive must have
followed it.
Limitations of Flooding
● Flooding generates vast numbers of duplicate data packets,
unless some measures are adopted to damp packet
generation.
● It is wasteful if a single destination needs the packet, since it
delivers the data packet to all nodes irrespective of the
destination.
● The network may be clogged with unwanted and duplicate data
packets. This may hamper delivery of other data packets.

3.2.3: HIERARCHICAL ROUTING


As networks grow in size, the routers’ routing tables grow
proportionally. Not only is router memory consumed by
ever-increasing tables, but more CPU time is needed to scan them and
more bandwidth is needed to send status reports about them.

At a certain point, the network may grow to the point where it is no


longer feasible for every router to have an entry for every other router,
so the routing will have to be done hierarchically, as it is in the
telephone network.
When hierarchical routing is used, the routers are divided into what
we will call regions. Each router knows all the details about how to
route packets to destinations within its own region but knows nothing
about the internal structure of other regions.
For huge networks, a two-level hierarchy may be insufficient; it may
be necessary to group the regions into clusters, the clusters into zones,
the zones into groups, and so on, until we run out of names for
aggregations.

When a single network becomes very large, an interesting question is


‘‘how many levels should the hierarchy have?’’
For example, consider a network with 720 routers. If there is no
hierarchy, each router needs 720 routing table entries.
If the network is partitioned into 24 regions of 30 routers each, each
router needs 30 local entries plus 23 remote entries for a total of 53
entries.
If a three-level hierarchy is chosen, with 8 clusters each containing 9
regions of 10 routers, each router needs 10 entries for local routers, 8
entries for routing to other regions within its own cluster, and 7 entries
for distant clusters, for a total of 25 entries.
Kamoun and Kleinrock (1979) discovered that the optimal number of
levels for an N-router network is ln N, requiring a total of e ln N
entries per router.
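The entry counts in this example can be checked with a small calculation (the function names below are just for illustration):

```python
import math

def flat_entries(n):
    """No hierarchy: one routing table entry per router."""
    return n

def two_level(regions, per_region):
    """Local routers plus one entry per other region."""
    return per_region + (regions - 1)

def three_level(clusters, regions, per_region):
    """Local routers, other regions in the cluster, other clusters."""
    return per_region + (regions - 1) + (clusters - 1)

print(flat_entries(720))        # → 720
print(two_level(24, 30))        # → 53  (30 local + 23 remote)
print(three_level(8, 9, 10))    # → 25  (10 local + 8 regions + 7 clusters)

# Kamoun and Kleinrock's bound: about ln N levels, e·ln N entries.
print(round(math.e * math.log(720), 1))   # roughly 18 entries per router
```

So moving from a flat table to a three-level hierarchy shrinks each table from 720 entries to 25, close to the theoretical optimum of e ln 720 ≈ 18.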
3.2.4: BROADCAST
Broadcast routing refers to the process of delivering messages or
packets from a single source to multiple or all destinations in a
network. It is commonly used in applications where information
needs to be disseminated to a large number of recipients, such as
distributing weather reports, stock market updates, or live radio
programs.
There are several methods for broadcasting, each with its own
advantages and disadvantages:

1. Distinct Packet for Each Destination:


● In this method, the source sends a separate packet for each
destination it wants to reach.
● It is inefficient and slow, as it consumes a significant amount of
bandwidth and requires the source to know all destinations
beforehand.

2. Multi-destination Routing:
● Each broadcast packet contains either a list of destinations or a bit
map indicating the desired destinations.
● Routers determine the output lines needed for the destinations and
generate a new copy of the packet for each output line.
● Bandwidth usage is more efficient compared to distinct packets,
but the source still needs to know all destinations, and routers have to
determine where to send the packets.

3. Flooding:
● Flooding involves sending a broadcast packet out on all links in the
network, except the one it arrived on.
● Although it uses links efficiently and is simple to implement, it can
lead to a large number of duplicates and is generally not suitable for
point-to-point communication.
● Flooding is a popular method for broadcasting due to its simplicity.

4. Reverse Path Forwarding (RPF):


● In this elegant and simple method, a router receiving a broadcast
packet checks whether the packet arrived on the preferred path for
sending packets back to the source.
● If it did, the router forwards copies of the packet to all links except
the one it arrived on; otherwise, the packet is discarded as a likely
duplicate.
An example of reverse path forwarding is shown in the above figure.
Part (a) shows a network, part (b) shows a sink tree for router I of that
network, and part (c) shows how the reverse path algorithm works.
On the first hop, I sends packets to F, H, J, and N, as indicated by the
second row of the tree. Each of these packets arrives on the preferred
path to I (assuming that the preferred path falls along the sink tree)
and is so indicated by a circle around the letter. On the second hop,
eight packets are generated, two by each of the routers that received a
packet on the first hop. As it turns out, all eight of these arrive at
previously unvisited routers, and five of these arrive along the
preferred line. Of the six packets generated on the third hop, only
three arrive on the preferred path (at C, E, and K); the others are
duplicates. After five hops and 24 packets, the broadcasting
terminates, compared with four hops and 14 packets had the sink tree
been followed exactly.
● RPF is efficient and easy to implement; it requires only that
routers know how to reach all destinations, without using sequence
numbers or carrying a list of destinations in the packet.
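The forwarding decision at a single router can be sketched as follows; the router names and tables are hypothetical stand-ins for the figure:

```python
def rpf_forward(router, arrival_link, source, preferred_next_hop, links):
    """Reverse path forwarding decision at one router.
    Forward only if the packet arrived on the link this router itself would
    use to send packets back to `source`; otherwise discard it as a likely
    duplicate."""
    if arrival_link == preferred_next_hop[router][source]:
        return [l for l in links[router] if l != arrival_link]
    return []   # not on the reverse path: drop

# Hypothetical fragment: router F has links to I, G and H, and F's
# shortest path back to source I is the direct link to I.
links = {"F": ["I", "G", "H"]}
preferred = {"F": {"I": "I"}}
print(rpf_forward("F", "I", "I", preferred, links))   # → ['G', 'H']
print(rpf_forward("F", "G", "I", preferred, links))   # → []
```

A copy arriving on the preferred link is forwarded everywhere else; a copy arriving on any other link is dropped, which is how RPF prunes duplicates without per-packet state.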
5. Spanning Tree-Based Broadcasting:
● This method involves using a spanning tree (a subset of the
network with no loops) as a guide for broadcasting.
● Each router copies the incoming broadcast packet onto all the
spanning tree lines except the one it arrived on.
● It generates the minimum number of packets necessary to
broadcast the message effectively but requires routers to have
knowledge of a spanning tree, which may not always be available.

The choice of broadcast routing method depends on the specific


network topology, the available information at routers, and the
trade-offs between efficiency and complexity. Different algorithms
may be suitable for different scenarios, and network designers need to
consider these factors when implementing broadcast routing in a
particular network.
3.2.5: MULTICAST
Multicast routing allows sending messages or packets from a single
sender to multiple receivers, forming a well-defined group that is
large in size compared to individual point-to-point communication.
Multicasting is used in applications such as multiplayer games, video
streaming to multiple viewers, and group communication.
A multicast packet starts from the source S1 and goes to all
destinations that belong to group G1. In multicasting, when a router
receives a packet, it may forward it through several of its interfaces.
Group Identification:
● In multicast routing, each group is identified by a multicast
address, and routers are aware of the groups to which they belong.
● Group membership is managed through mechanisms outside the
routing algorithm itself. These mechanisms create and
destroy groups and determine which routers are members of a
particular group.

Dense Group Multicast:


● For dense groups, where receivers are scattered over most of the
network, broadcast is initially used to distribute the packets
efficiently to all parts of the network.
● However, broadcasting may lead to packets reaching routers that
are not members of the group, which is wasteful.

A solution, known as pruning, is applied to the broadcast spanning


tree by removing links that do not lead to group members, resulting in
an efficient multicast spanning tree.
Sparse Group Multicast:
● For sparse groups, where much of the network is not part of the
group, different strategies are used.
● Link state routing protocols, such as MOSPF (Multicast OSPF),
allow each router to construct its own pruned spanning tree for each
sender by removing links that do not connect group members to the
sender.
● Distance vector routing protocols, such as DVMRP (Distance
Vector Multicast Routing Protocol), use the reverse path forwarding
algorithm and prune the spanning tree recursively. Routers respond
with PRUNE messages, leading to an efficient tree.

Core-Based Trees:
● Core-based trees are an alternative design for multicast routing.
● All routers agree on a core (or rendezvous point) and construct the
tree by sending a packet from each group member to the core. The
tree is formed by the union of the paths traced by these packets.
● The core-based tree allows sending packets to the core, which then
forwards them down the tree to all group members.
● It provides a shared tree for all sources, reducing storage costs,
messages sent, and computation, making it efficient for sparse groups.

Optimization and Considerations:


● The choice of multicast routing strategy depends on the group
characteristics and the network topology.
● For dense groups, pruning the broadcast tree efficiently removes
unnecessary links.
● For sparse groups, core-based trees offer savings in storage,
messages, and computation.
● Optimization depends on the locations of senders and the core, and
single-source groups can use the sender as the core for better
efficiency.
Multicast routing protocols, such as PIM (Protocol Independent
Multicast), are widely used in the Internet to support efficient and
scalable multicast communication. These protocols are designed to
handle both dense and sparse multicast groups effectively.
3.2.6: DISTANCE VECTOR ROUTING
Distance vector routing is a routing protocol that uses distance as a
metric to determine the best path between two nodes. It is based on
the Bellman-Ford algorithm.

Distance vector routing is used in simple network topologies where


the number of hops between two nodes is the primary metric used to
determine the best path. In more complex network topologies, other
factors such as link bandwidth and latency can be taken into account
when determining the best path.

● Each router prepares its routing table from its local knowledge.
Initially, each router knows:
● all the routers present in the network
● the distance to its neighboring routers
● Each router exchanges its distance vector with its neighboring
routers.
● Each router prepares a new routing table using the distance
vectors it has obtained from its neighbors.
● This step is repeated (n-2) times if there are n routers in the
network.
● After this, routing tables converge / become stable.

Example
Consider-
● There is a network consisting of 4 routers.
● The weights are mentioned on the edges.
● Weights could be distances or costs or delays.
It works in the following steps-
Step-01:
Each router prepares its routing table using its local knowledge.
Routing table prepared by each router is shown below-
At Router A-

Destination Distance Next Hop

A 0 A

B 2 B

C ∞ –

D 1 D

At Router B-
Destination Distance Next Hop

A 2 A

B 0 B

C 3 C

D 7 D

At Router C-

Destination Distance Next Hop

A ∞ –

B 3 B

C 0 C

D 11 D

At Router D-

Destination Distance Next Hop

A 1 A
B 7 B

C 11 C

D 0 D

Step-02:
● Each router exchanges its distance vector obtained in Step-01
with its neighbors.
● After exchanging the distance vectors, each router prepares a
new routing table.
This is shown below-

At Router A-

● Router A receives distance vectors from its neighbors B and


D.
● Router A prepares a new routing table as-
● Cost of reaching destination B from router A = min { 2+0 ,
1+7 } = 2 via B.
● Cost of reaching destination C from router A = min { 2+3 ,
1+11 } = 5 via B.
● Cost of reaching destination D from router A = min { 2+7 ,
1+0 } = 1 via D.

Thus, the new routing table at router A is-

Destination Distance Next Hop

A 0 A
B 2 B

C 5 B

D 1 D

At Router B-
● Router B receives distance vectors from its neighbors A, C and
D.
● Router B prepares a new routing table as-

● Cost of reaching destination A from router B = min { 2+0 , 3+∞


, 7+1 } = 2 via A.
● Cost of reaching destination C from router B = min { 2+∞ ,
3+0 , 7+11 } = 3 via C.
● Cost of reaching destination D from router B = min { 2+1 ,
3+11 , 7+0 } = 3 via A.

Thus, the new routing table at router B is-

Destination Distance Next Hop

A 2 A

B 0 B
C 3 C

D 3 A

At Router C-
● Router C receives distance vectors from its neighbors B and D.
● Router C prepares a new routing table as-

Cost of reaching destination A from router C = min { 3+2 , 11+1 } = 5


via B.
● Cost of reaching destination B from router C = min { 3+0 ,
11+7 } = 3 via B.
● Cost of reaching destination D from router C = min { 3+7 ,
11+0 } = 10 via B.
Thus, the new routing table at router C is-

Destination Distance Next Hop

A 5 B

B 3 B

C 0 C
D 10 B

At Router D-
Router D receives distance vectors from its neighbors A, B and C.
● Router D prepares a new routing table as-

● Cost of reaching destination A from router D = min { 1+0 ,


7+2 , 11+∞ } = 1 via A.
● Cost of reaching destination B from router D = min { 1+2 ,
7+0 , 11+3 } = 3 via A.
● Cost of reaching destination C from router D = min { 1+∞ ,
7+3 , 11+0 } = 10 via B.
Thus, the new routing table at router D is-

Destination Distance Next Hop

A 1 A

B 3 A

C 10 B

D 0 D

Step-03:
● Each router exchanges its distance vector obtained in Step-02
with its neighboring routers.
● After exchanging the distance vectors, each router prepares a
new routing table.
This is shown below-
At Router A-
● Router A receives distance vectors from its neighbors B and
D.
● Router A prepares a new routing table as-

● Cost of reaching destination B from router A = min { 2+0 , 1+3


} = 2 via B.
● Cost of reaching destination C from router A = min { 2+3 ,
1+10 } = 5 via B.
● Cost of reaching destination D from router A = min { 2+3 ,
1+0 } = 1 via D.

Thus, the new routing table at router A is-

Destination Distance Next Hop

A 0 A

B 2 B
C 5 B

D 1 D

At Router B-
● Router B receives distance vectors from its neighbors A, C and
D.
● Router B prepares a new routing table as-


Cost of reaching destination A from router B = min { 2+0 ,
3+5 , 3+1 } = 2 via A.
● Cost of reaching destination C from router B = min { 2+5 ,
3+0 , 3+10 } = 3 via C.
● Cost of reaching destination D from router B = min { 2+1 ,
3+10 , 3+0 } = 3 via A.

Thus, the new routing table at router B is-

Destination Distance Next Hop

A 2 A

B 0 B
C 3 C

D 3 A

At Router C-

● Router C receives distance vectors from its neighbors B and D.


● Router C prepares a new routing table as-

● Cost of reaching destination A from router C = min { 3+2 ,


10+1 } = 5 via B.
● Cost of reaching destination B from router C = min { 3+0 ,
10+3 } = 3 via B.
● Cost of reaching destination D from router C = min { 3+3 ,
10+0 } = 6 via B.

Thus, the new routing table at router C is-

Destination Distance Next Hop

A 5 B
B 3 B

C 0 C

D 6 B

At Router D-
● Router D receives distance vectors from its neighbors A, B
and C.
● Router D prepares a new routing table as-

● Cost of reaching destination A from router D = min { 1+0 ,


3+2 , 10+5 } = 1 via A.
● Cost of reaching destination B from router D = min { 1+2 ,
3+0 , 10+3 } = 3 via A.
● Cost of reaching destination C from router D = min { 1+5 ,
3+3 , 10+0 } = 6 via A.

Thus, the new routing table at router D is-

Destination Distance Next Hop


A 1 A

B 3 A

C 6 A

D 0 D

These will be the final routing tables at each router.
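The whole exchange can be replayed with a short simulation of the example network, using the link costs implied by the initial tables (A-B=2, A-D=1, B-C=3, B-D=7, C-D=11):

```python
INF = float("inf")

# Link costs from the worked example above.
links = {"A": {"B": 2, "D": 1}, "B": {"A": 2, "C": 3, "D": 7},
         "C": {"B": 3, "D": 11}, "D": {"A": 1, "B": 7, "C": 11}}
nodes = sorted(links)

# Step-01: each router knows only its directly connected neighbours.
dist = {u: {v: (0 if u == v else links[u].get(v, INF)) for v in nodes}
        for u in nodes}

# Steps 02 and 03: (n-2) exchanges of distance vectors for n routers.
for _ in range(len(nodes) - 2):
    new = {u: dict(dist[u]) for u in nodes}
    for u in nodes:
        for v in nodes:
            if u != v:
                # Bellman-Ford relaxation over u's neighbours w:
                # cost(u, v) = min over w of cost(u, w) + w's reported cost to v
                new[u][v] = min(links[u][w] + dist[w][v] for w in links[u])
    dist = new

print(dist["A"])   # → {'A': 0, 'B': 2, 'C': 5, 'D': 1}
print(dist["C"])   # → {'A': 5, 'B': 3, 'C': 0, 'D': 6}
```

The converged distances match the final tables above: A reaches C at cost 5 via B, and C reaches D at cost 6 via B.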

3.2.7: CONGESTION CONTROL ALGORITHMS


What is Congestion Control? Describe the Congestion Control
Algorithm commonly used
Congestion is an important issue that can arise in a packet-switched
network. Congestion is a situation in communication networks in
which too many packets are present in a part of the subnet and
performance degrades. Congestion may occur when the
load on the network (i.e. the number of packets sent to the network)
is greater than the capacity of the network (i.e. the number of packets
the network can handle). Network congestion occurs in case of traffic
overloading.

In other words, when too much traffic is offered, congestion sets in
and performance degrades sharply.
Causes of Congestion:
The various causes of congestion in a subnet are:
• The input traffic rate exceeds the capacity of the output lines. If,
suddenly, streams of packets start arriving on three or four input lines
and all need the same output line, a queue will build
up. If there is insufficient memory to hold all the packets, packets
will be lost. Increasing the memory to unlimited size does not solve
the problem, because by the time packets reach the front of the
queue, they have already timed out (as they waited in the queue).
When the timer goes off, the source transmits duplicate packets that
are also added to the queue. Thus the same packets are added again
and again, increasing the load all the way to the destination.

• The routers are too slow to perform bookkeeping tasks (queuing


buffers, updating tables, etc.).
• The routers’ buffer is too limited.
• Congestion in a subnet can occur if the processors are slow. A slow
CPU at a router performs routine tasks such as queuing
buffers and updating tables slowly. As a result, queues build
up even though there is excess line capacity.
• Congestion is also caused by slow links. This problem would be
solved by using high-speed links, but that is not always the case:
sometimes an increase in link bandwidth can further deteriorate the
congestion problem, as higher-speed links may make the network
more unbalanced. Congestion can make itself worse. If a router does
not have free buffers, it starts ignoring/discarding newly arriving
packets. When these packets are discarded, the sender may retransmit
them after its timer goes off, again and again, until the source gets an
acknowledgement for them. These repeated transmissions force
congestion to build at the sending end as well.
How to correct the Congestion Problem:

Congestion Control refers to techniques and mechanisms that can
either prevent congestion before it happens or remove congestion
after it has happened. Congestion control mechanisms are divided
into two categories: one category prevents congestion from
happening, and the other removes it after it has taken place.

These two categories are:


1. Open loop
2. Closed loop

Open Loop Congestion Control


• In this method, policies are used to prevent the congestion before it
happens.
• Congestion control is handled either by the source or by the
destination.
• The various methods used for open loop congestion control are:
Retransmission Policy
• The sender retransmits a packet, if it feels that the packet it has sent
is lost or corrupted.
• However, retransmission in general may increase congestion in
the network, so a good retransmission policy is needed to
prevent it.
• The retransmission policy and the retransmission timers need to be
designed to optimize efficiency and at the same time prevent the
congestion.
Window Policy
• To implement window policy, selective reject window method is
used for congestion control.
• The Selective Reject method is preferred over the Go-Back-N
window because, in the Go-Back-N method, when the timer for a
packet times out, several packets are resent even though some may
have arrived safely at the receiver. This duplication may make
congestion worse.
• Selective reject method sends only the specific lost or damaged
packets.
Acknowledgement Policy
• The acknowledgement policy imposed by the receiver may also
affect congestion.
• If the receiver does not acknowledge every packet it receives, it may
slow down the sender and help prevent congestion.
• Acknowledgments also add to the traffic load on the network. Thus,
by sending fewer acknowledgements we can reduce load on the
network.
• To implement it, several approaches can be used:
1. A receiver may send an acknowledgement only if it has a packet to
be sent.
2. A receiver may send an acknowledgement when a timer expires.
3. A receiver may also decide to acknowledge only N packets at a
time.
Discarding Policy
• A router may discard less sensitive packets when congestion is
likely to happen.
• Such a discarding policy may prevent congestion and at the same
time may not harm the integrity of the transmission.
Admission Policy
• An admission policy, which is a quality-of-service mechanism, can
also prevent congestion in virtual circuit networks.
• Switches in a flow first check the resource requirement of a flow
before admitting it to the network.
• A router can deny establishing a virtual circuit connection if there is
congestion in the network or if there is a possibility of future
congestion.
Closed Loop Congestion Control

• Closed loop congestion control mechanisms try to remove the
congestion after it happens.
• The various methods used for closed loop congestion control are:
Backpressure
• Backpressure is a node-to-node congestion control technique that
starts with a congested node and propagates in the opposite direction
of the data flow.
• The backpressure technique can be applied only to virtual circuit
networks, in which each node knows the upstream node from which a
data flow is coming.
• In this method of congestion control, the congested node stops
receiving data from the immediate upstream node or nodes.
• This may cause the upstream node or nodes to become congested,
and they, in turn, reject data from their upstream node or nodes.
• As shown in the figure, node 3 is congested, so it stops receiving
packets and informs its upstream node 2 to slow down. Node 2 in turn
may become congested and informs node 1 to slow down. Node 1 may
then become congested and informs the source node to slow down. In
this way the congestion is alleviated: the pressure on node 3 is moved
backward to the source to remove the congestion.
Choke Packet

• In this method of congestion control, a congested router or node
sends a special type of packet, called a choke packet, to the source to
inform it about the congestion.
• Here, the congested node does not inform its upstream node about
the congestion, as in the backpressure method.
• In the choke packet method, the congested node sends a warning
directly to the source station; the intermediate nodes through which
the packet has traveled are not warned.
Implicit Signaling
• In implicit signaling, there is no communication between the
congested node or nodes and the source.
• The source guesses that there is congestion somewhere in the
network when it does not receive any acknowledgment. Therefore, a
delay in receiving an acknowledgment is interpreted as congestion in
the network.
• On sensing this congestion, the source slows down.
• This type of congestion control policy is used by TCP.
Explicit Signaling
• In this method, the congested nodes explicitly send a signal to the
source or destination to inform about the congestion.
• Explicit signaling is different from the choke packet method. In the
choke packet method, a separate packet is used for this purpose,
whereas in explicit signaling the signal is included in the packets that
carry data.
• Explicit signaling can occur in either the forward direction or the
backward direction.
• In backward signaling, a bit is set in a packet moving in the
direction opposite to the congestion. This bit warns the source about
the congestion and informs the source to slow down.
• In forward signaling, a bit is set in a packet moving in the direction
of congestion. This bit warns the destination about the congestion.
The receiver in this case uses policies such as slowing down the
acknowledgements to remove the congestion.
Congestion control algorithms
Leaky Bucket Algorithm

• It is a traffic shaping mechanism that controls the amount and the
rate of the traffic sent to the network.
• A leaky bucket algorithm shapes bursty traffic into fixed rate traffic
by averaging the data rate.
• Imagine a bucket with a small hole at the bottom.
• The rate at which the water is poured into the bucket is not fixed
and can vary but it leaks from the bucket at a constant rate. Thus (as
long as water is present in bucket), the rate at which the water leaks
does not depend on the rate at which the water is input to the bucket.

• Also, when the bucket is full, any additional water that enters into
the bucket spills over the sides and is lost.
• The same concept can be applied to packets in the network.
Consider that data is coming from the source at variable speeds.
Suppose that a source sends data at 12 Mbps for 4 seconds. Then
there is no data for 3 seconds. The source again transmits data at a
rate of 10 Mbps for 2 seconds. Thus, in a time span of 9 seconds, 68
Mb of data has been transmitted.
If a leaky bucket algorithm is used, the same 68 Mb drains out at a
constant 8 Mbps, taking 8.5 seconds. Thus a constant flow is
maintained.
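The smoothing described above can be sketched in Python. This is a minimal illustration, not a production shaper; the class and parameter names (`LeakyBucket`, `capacity`, `leak_rate`) are invented for the example, and sizes are in abstract byte units per clock tick.

```python
from collections import deque

class LeakyBucket:
    """Bursty arrivals go into a finite queue (the bucket); output leaves
    at a fixed rate per clock tick; arrivals that would overflow are dropped."""

    def __init__(self, capacity, leak_rate):
        self.capacity = capacity    # maximum bytes the bucket can hold
        self.leak_rate = leak_rate  # bytes released per clock tick
        self.queue = deque()        # buffered packet sizes, in bytes
        self.level = 0              # bytes currently in the bucket

    def arrive(self, size):
        """A packet arrives; returns False if the bucket overflows (loss)."""
        if self.level + size <= self.capacity:
            self.queue.append(size)
            self.level += size
            return True
        return False

    def tick(self):
        """Release whole packets up to leak_rate bytes; returns bytes sent."""
        budget, sent = self.leak_rate, 0
        while self.queue and self.queue[0] <= budget:
            pkt = self.queue.popleft()
            budget -= pkt
            self.level -= pkt
            sent += pkt
        return sent

bucket = LeakyBucket(capacity=10, leak_rate=5)
bucket.arrive(4); bucket.arrive(4)   # two packets buffered
bucket.arrive(4)                     # third one overflows: dropped
print(bucket.tick(), bucket.tick())  # drains at the fixed rate: 4 4
```

However fast the arrivals come in, `tick()` never emits more than `leak_rate` bytes, which is exactly the constant-rate output described above.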

Token bucket Algorithm

• The leaky bucket algorithm allows only an average (constant) rate
of data flow. Its major problem is that it cannot deal with bursty data.
• A leaky bucket algorithm does not consider the idle time of the
host. For example, if the host was idle for 10 seconds and now it is
willing to send data at a very high speed for another 10 seconds, the
total transmission is still spread out at the same average data rate.
The host gains no advantage from sitting idle for 10 seconds.
• To overcome this problem, a token bucket algorithm is used. A
token bucket algorithm allows bursty data transfers.
• A token bucket algorithm is a modification of leaky bucket in
which leaky bucket contains tokens.
• In this algorithm, tokens are generated at every clock tick. For a
packet to be transmitted, the system must remove one or more tokens
from the bucket.
• Thus, a token bucket algorithm allows idle hosts to accumulate
credit for the future in form of tokens.
• For example, if a system generates 100 tokens per clock tick and
the host is idle for 100 ticks, the bucket will contain 10,000 tokens.
Now, if the host wants to send bursty data, it can consume all 10,000
tokens at once to send 10,000 cells or bytes.
Thus a host can send bursty data as long as bucket is not empty.
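The text's numbers (100 tokens per tick, 100 idle ticks) can be mirrored in a small sketch. The class and method names are invented for illustration, and one token is assumed to pay for one cell or byte.

```python
class TokenBucket:
    """Tokens accrue at a fixed rate up to a cap, so an idle host
    earns credit that later pays for a burst."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens generated per clock tick
        self.capacity = capacity  # bucket size: cap on saved-up credit
        self.tokens = 0

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, size):
        """Spend 'size' tokens if available; returns whether the send happened."""
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100, capacity=10_000)
for _ in range(100):        # host sits idle for 100 ticks...
    tb.tick()
print(tb.try_send(10_000))  # ...then sends a 10,000-byte burst at once: True
```

Contrast with the leaky bucket: here the idle period builds up credit, so a burst can leave immediately, as long as the bucket is not empty.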

3.3: QUALITY OF SERVICE


There are applications that demand stronger performance guarantees
from the network than “the best that could be done under the
circumstances.”

An easy solution to provide good quality of service is to build a
network with enough capacity for whatever traffic will be thrown at it.
The name for this solution is overprovisioning. The trouble with this
solution is that it is expensive. Quality of service mechanisms let a
network with less capacity meet application requirements just as well
at a lower cost. With quality of service mechanisms, the network can
honor the performance guarantees that it makes even when traffic
spikes, at the cost of turning down some requests.
Four issues must be addressed to ensure quality of service:
1. What applications need from the network.
2. How to regulate the traffic that enters the network.
3. How to reserve resources at routers to guarantee performance.
4. Whether the network can safely accept more traffic.
1. Application Requirements:
 Different applications have different performance requirements
from the network. For example, multimedia applications often require
minimum throughput and maximum latency to function properly.
 Understanding the specific needs of various applications is crucial
for providing appropriate Quality of Service.
2. Traffic Shaping:
 Traffic shaping is a QoS mechanism used to regulate the traffic
that enters the network. It controls the flow of data to ensure that it
adheres to certain predefined parameters.
 By controlling the rate at which data is transmitted, traffic shaping
helps in managing congestion and improving overall network
performance.
3. Packet Scheduling:
 Packet scheduling is an important aspect of QoS that involves
deciding the order in which packets are transmitted from the router's
buffer onto the outgoing link.
 Different packet scheduling algorithms can be employed to
prioritize certain types of traffic over others, ensuring that
high-priority traffic gets transmitted first.
4. Admission Control:

Admission control is used to manage the allocation of resources in the
network to guarantee performance for specific applications.
 Before admitting new traffic into the network, the admission
control mechanism checks if sufficient resources are available to meet
the QoS requirements of the new traffic.
5. Integrated Services (IntServ):
 Integrated Services is a QoS model that aims to provide a
guaranteed level of service for individual flows in the network.
 IntServ uses signaling protocols like RSVP (Resource Reservation
Protocol) to reserve resources along the path of the flow, ensuring that
the required QoS guarantees are met.
6. Differentiated Services (DiffServ):
 Differentiated Services is another QoS model that classifies and
treats traffic into different classes or service levels.
 Unlike IntServ, DiffServ does not require resource reservation for
individual flows; instead, it applies per-hop behaviors to packets
based on their class.
Quality of Service (QoS) in computer networks aims to meet the
specific performance requirements of different applications. This
involves regulating incoming traffic, reserving resources to guarantee
performance, and ensuring that the network can handle the traffic
demands without degradation in service quality. Both Integrated
Services and Differentiated Services are QoS models that address
these issues, but they differ in their approaches to achieve the desired
QoS guarantees. While Integrated Services uses per-flow resource
reservation, Differentiated Services classifies and treats traffic based
on predefined service levels. By implementing QoS mechanisms,
networks can offer better performance guarantees without the need for
excessive overprovisioning, thus optimizing cost-efficiency.
3.4: INTERNETWORKING
Interworking refers to the process of connecting and enabling
communication between different types of networks or systems. In the
context of internetworking, it involves the seamless exchange of data
between multiple heterogeneous networks, allowing users on one
network to communicate with users on other networks. Interworking
is essential in today's interconnected world, where numerous networks
with different technologies and protocols coexist.
Key points about internetworking:

1. Heterogeneity (how networks differ): Networks can differ in
many ways, including protocols, addressing schemes, packet sizes,
quality of service, reliability, security mechanisms, and more.
Interworking deals with addressing and resolving these differences to
enable communication between networks.

2. Challenges (how networks can be connected): Connecting
different networks can be challenging due to differences in
addressing, packet sizes, ordering, quality of service, and security,
among other factors. These differences must be accommodated to
ensure data transmission across the interconnected networks.
Example: The source accepts data from the transport layer and
generates a packet with the common network layer header, which is
IP in this example. The network header contains the
ultimate destination address, which is used to determine that the
packet should be sent via the first router. So the packet is encapsulated
in an 802.11 frame whose destination is the first router and
transmitted. At the router, the packet is removed from the frame’s data
field and the 802.11 frame header is discarded. The router now
examines the IP address in the packet and looks up this address in its
routing table. Based on this address, it decides to send the packet to
the second router next. For this part of the path, an MPLS virtual
circuit must be established to the second router and the packet must
be encapsulated with MPLS headers that travel this circuit. At the far
end, the MPLS header is discarded and the network address is again
consulted to find the next network layer hop. It is the destination
itself. Since the packet is too long to be sent over Ethernet, it is split
into two portions. Each of these portions is put into the data field of
an Ethernet frame and sent to the Ethernet address of the destination.
At the destination, the Ethernet header is stripped from each of the
frames, and the contents are reassembled. The packet has finally
reached its destination.
3. Multiprotocol Routers: To enable interworking between different
networks, multiprotocol routers are used. These routers are capable of
handling multiple network protocols and can translate packets
between different types of networks.
4. Tunnelling: Tunnelling is a technique used to connect isolated
hosts or networks using other networks as an overlay. It involves
encapsulating packets within packets, effectively creating a "tunnel"
through which data can pass from one network to another.

Tunnelling is useful when the source and destination hosts are on the
same type of network, but there is a different network (the tunnel) in
between. For example, an organisation with an IPv6 network in Paris,
an IPv6 network in London, and connectivity between the offices via
the IPv4 Internet.

To send a packet from a host in Paris to a host in London, the Paris
host constructs an IPv6 packet addressed to the London host. This
packet is then sent to a multiprotocol router that connects the Paris
IPv6 network to the IPv4 Internet.
The router that receives the encapsulated packet adds an IPv4 header
to the packet, addressing it to the IPv4 side of the multiprotocol router
that connects to the London IPv6 network. The IPv6 packet
effectively becomes the payload of the IPv4 packet.
The path through the IPv4 Internet acts as a tunnel, extending from
one multiprotocol router to the other. The IPv6 packet travels inside
this tunnel, unaffected by the underlying IPv4 network. An analogy to
tunneling is a person driving a car from Paris to London. Within
France, the car moves on its own power, but when it reaches the
English Channel, it is loaded onto a high-speed train and transported
through the Channel Tunnel to England. Once in England, the car is
released and continues to move under its own power. Similarly, the
packet is encapsulated and travels through the tunnel, reaching the
destination network.
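The encapsulate-carry-decapsulate sequence can be sketched with packets modelled as dictionaries. The field names and helper functions below are illustrative, not a real protocol implementation; IP protocol number 41 really does denote IPv6-in-IPv4 encapsulation.

```python
def ipv6_packet(src, dst, payload):
    return {"version": 6, "src": src, "dst": dst, "payload": payload}

def encapsulate(ipv6_pkt, tunnel_src, tunnel_dst):
    """At the tunnel entrance: the whole IPv6 packet becomes the payload
    of an IPv4 packet addressed to the far end of the tunnel."""
    return {"version": 4, "src": tunnel_src, "dst": tunnel_dst,
            "protocol": 41, "payload": ipv6_pkt}

def decapsulate(ipv4_pkt):
    """At the tunnel exit: strip the IPv4 header, recover the IPv6 packet."""
    return ipv4_pkt["payload"]

# Paris host -> (IPv4 tunnel between multiprotocol routers) -> London host
inner = ipv6_packet("2001:db8:1::1", "2001:db8:2::1", b"hello")
outer = encapsulate(inner, "192.0.2.1", "198.51.100.1")
assert decapsulate(outer) == inner  # the packet emerges unchanged
```

Like the car on the Channel Tunnel train, the inner packet is untouched while it rides inside the outer one.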

Internetwork Routing: Internetwork routing involves determining
the paths that data packets take through an interconnected
network of networks, such as the Internet. It poses several challenges
due to the diversity of networks, varying routing algorithms, different
operator preferences, and the need to ensure efficient and scalable
routing. Internetwork Routing is typically handled by a two-level
routing algorithm, where each network (or Autonomous System - AS)
uses an intradomain or interior gateway protocol for internal routing,
and an interdomain or exterior gateway protocol for routing between
networks.
Here are the key points to note about Internetwork Routing:
 Intradomain Routing (Interior Gateway Protocol - IGP):
Within each network (AS), an intradomain routing protocol is used to
determine paths for data within the network. Examples of intradomain
routing protocols include OSPF (Open Shortest Path First) and IS-IS
(Intermediate System to Intermediate System). Each network can use
its preferred IGP based on its requirements and infrastructure.
 Interdomain Routing (Exterior Gateway Protocol - EGP):
Across the networks that make up the Internet, an interdomain routing
protocol is used to determine paths between networks. The goal is to
find the best path for data to traverse through multiple networks. The
interdomain routing protocol must be the same across all networks to
ensure consistent routing. The Internet uses BGP (Border Gateway
Protocol) for interdomain routing.
 Autonomous System (AS): Each network that makes up the
Internet is operated independently and is known as an Autonomous
System (AS). An AS can be considered as an ISP network, and it may
consist of multiple networks managed or acquired by the same entity.
 Routing Policies: Routing across networks involves
considerations beyond technical aspects. Business arrangements
between ISPs, charging or receiving money for traffic carriage, and
compliance with international laws are factors that influence routing
decisions. These considerations are encapsulated in routing policies
that govern the way autonomous networks select their routes.
 Two-Level Routing: The two-level routing approach allows each
network to maintain autonomy over its internal routing decisions
while adhering to a consistent protocol (BGP) for routing across the
Internet. It helps with scaling, allows for different routing algorithms
within networks, and protects sensitive information from exposure
outside the networks.
 BGP (Border Gateway Protocol): BGP is the primary
interdomain routing protocol used in the Internet. It is responsible for
exchanging routing information between autonomous networks,
determining the best paths for data to traverse through multiple
networks, and enforcing routing policies.

5. Fragmentation: Networks impose maximum packet sizes, and
when a large packet needs to travel through a network with a smaller
maximum packet size, fragmentation can occur. Fragmentation
involves breaking the large packet into smaller fragments, which can
then be reassembled at the destination.
Packet fragmentation is a process used in computer networks to break
large data packets into smaller fragments to fit within the maximum
packet size allowed by the underlying network links.

Reason for fragmentation:


Causes of Maximum Packet Size: Each network or link in a network
imposes a maximum size on its packets due to hardware limitations,
operating system buffer sizes, protocol specifications, compliance
with standards, reducing error-induced retransmissions, or preventing
one packet from occupying the channel for too long.

Fragmentation Strategies:
Transparent Fragmentation: In this strategy, routers break up
oversized packets into fragments, and each fragment is addressed to
the same exit router where they are recombined. Subsequent networks
are unaware that fragmentation occurred. However, this approach
requires the exit router to know when all fragments have arrived and
may constrain routing options.

Nontransparent Fragmentation: In this strategy, routers do not
recombine fragments; each fragment is treated as an independent
packet. Reassembly is performed only at the destination host.
This approach requires less work for routers but adds overhead due to
fragment headers.
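Nontransparent fragmentation can be sketched as follows. As in IPv4, offsets are counted in 8-byte units, so each fragment's data size (except possibly the last) is a multiple of 8; the dictionary field names are illustrative, not wire-format headers.

```python
def fragment(payload: bytes, ident: int, mtu: int):
    """Split a payload into independent fragments that fit the MTU."""
    chunk = (mtu // 8) * 8  # largest multiple of 8 bytes that fits
    frags = []
    for off in range(0, len(payload), chunk):
        frags.append({
            "id": ident,                       # ties fragments to one datagram
            "offset": off // 8,                # position, in 8-byte units
            "mf": off + chunk < len(payload),  # More Fragments flag
            "data": payload[off:off + chunk],
        })
    return frags

def reassemble(frags):
    """Destination-host reassembly: order by offset and concatenate."""
    return b"".join(f["data"] for f in sorted(frags, key=lambda f: f["offset"]))

payload = bytes(range(100))
frags = fragment(payload, ident=1, mtu=32)  # four fragments of <= 32 bytes
assert reassemble(frags) == payload         # only the last has mf == False
```

Because each fragment carries its own identification, offset, and MF flag, the fragments can take different routes and still be reassembled at the destination.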
6. Path MTU Discovery: Path MTU discovery is a technique used to
determine the smallest maximum transmission unit (MTU) along the
path between the source and destination. This allows the source to
send packets of the appropriate size, reducing the need for
fragmentation.

The modern Internet primarily uses Path MTU discovery to avoid
fragmentation. The source sends packets with the "Do Not Fragment"
(DF) bit set. If a router along the path finds that the packet is too
large for its MTU, it sends an error message back to the source. The
source then resends the data in smaller packets. This process
continues until the correct packet size is determined for the entire
path.
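The probing loop can be simulated in a few lines. The link MTUs and sizes below are invented, and the function name is illustrative; real discovery relies on ICMP "Fragmentation Needed" errors rather than an in-process list.

```python
def path_mtu_discovery(link_mtus, initial_size):
    """Send with DF set; whenever a link's MTU is too small, the 'router'
    reports that MTU back and the source retries at the smaller size."""
    size = initial_size
    while True:
        for mtu in link_mtus:
            if size > mtu:   # too big for this hop: an error comes back
                size = mtu   # the error message carries the hop's MTU
                break
        else:
            return size      # traversed every link: path MTU found

print(path_mtu_discovery([1500, 1400, 1500, 900], initial_size=1500))  # 900
```

Each round trip lowers the size to the MTU of the first offending link, which is why discovery may take several probes before settling on the path minimum (900 here).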
Advantages and Disadvantages: Path MTU discovery avoids
fragmentation in the network, making the process transparent to
intermediate routers. However, it may introduce startup delays while
probing for the correct MTU, and higher layers need to adapt their
data transmission accordingly. Non transparent fragmentation requires
less work for routers but may lead to additional overhead due to
fragment headers.

3.5: THE NETWORK LAYER IN THE INTERNET.

The design principles that drove the success of the network layer in
the Internet can be summarized as follows:
1. Make sure it works: Finalize the design or standard only after
successful communication between multiple prototypes. Avoid
writing extensive standards that may be deeply flawed and do not
work.
2. Keep it simple: Use the simplest solution when in doubt. Avoid
adding unnecessary features; fight complexity to keep the design
straightforward.
3. Make clear choices: Choose one way of doing things when there
are multiple options. Avoid providing multiple modes or parameters,
as it can lead to trouble.
4. Exploit modularity: Use protocol stacks with independent layers.
This allows changing one module or layer without affecting others.
5. Expect heterogeneity: Design the network to handle different
hardware, transmission facilities, and applications. Keep the design
simple, general, and flexible to accommodate varying requirements.
6. Avoid static options and parameters: Prefer dynamic negotiation
of parameters between the sender and receiver rather than defining
fixed choices.
7. Look for a good design; it need not be perfect: Choose a good
design even if it cannot handle some rare special cases. The burden of
working around specific requirements can be put on the users with
unique needs.
8. Be strict when sending and tolerant when receiving: Strictly
adhere to standards when sending packets but be flexible and able to
handle non-conforming incoming packets.
9. Think about scalability: Design the network to handle millions of
hosts and billions of users effectively. Avoid centralized databases and
spread the load evenly across available resources.
10. Consider performance and cost: Ensure that the network has
good performance and reasonable costs to encourage its usage.

The Internet's network layer is built around the Internet Protocol (IP).
The network layer provides a best-effort way to transport IP packets
from the source to the destination without concern for whether the
machines are on the same network or whether multiple networks are
in between.

Communication in the Internet involves breaking up data streams into
IP packets, forwarding them through the network using IP routers
along the best path, and reassembling the packets at the destination
before handing the data to the receiving process.

The Internet is a collection of interconnected networks or
Autonomous Systems (ASes), with major backbones formed by
high-bandwidth lines and fast routers. IP serves as the glue that holds
the entire Internet together, providing a unified way to route packets
across networks.

The network layer's primary job is to transport packets from source to
destination without guarantees, and the IP routing protocols decide
the paths to use among the numerous possible routes in the Internet.
The Internet's design principles have been crucial in its success, and
adherence to these principles continues to play a pivotal role in the
stability and scalability of the network layer in the modern Internet.
The IP Version 4 Protocol:
IPv4 (Internet Protocol version 4) is used for identifying and
addressing devices on a network, particularly on the Internet. It is one
of the foundational protocols of the Internet and is responsible for
routing data packets from source to destination across interconnected
networks.

 An IPv4 datagram consists of a header part and a payload (body)
part.
 The header has a fixed 20-byte part and a variable-length optional
part.
 The header format includes fields for version, header length,
differentiated services, total length, identification, flags (DF and MF),
fragment offset, time to live (TTL), protocol, header checksum,
source address, destination address, options, and padding.

Version Field:
 The version field indicates the version of the IP protocol being
used, and it is set to 4 for IPv4.

IHL Field:
The IHL (Internet Header Length) field specifies the length of the
header in 32-bit words. The minimum value is 5, corresponding to a
20-byte header (no options), and the maximum value is 15, allowing a
60-byte header (with 40 bytes for options).
Differentiated Services (Service type) Field:
 The Differentiated Services field (formerly Type of Service)
distinguishes different classes of service, indicating priority and
treatment requirements for the packet. It is used for differentiated
services and explicit congestion notification.
Total Length Field:
 The Total Length field indicates the entire size of the datagram in
bytes, including both header and data. The maximum value is 65,535
bytes.

Identification, DF, and MF Fields:
 The Identification field helps the destination host determine which
fragments belong to the same datagram.
 The DF (Don't Fragment) bit indicates that the packet should not
be fragmented.
 The MF (More Fragments) bit is set in all fragments except the last
one.

Fragment Offset Field:
 The Fragment Offset field indicates the position of the fragment in
the original datagram, in multiples of 8 bytes.

Time to Live (TTL) Field:
 The TTL field serves as a hop counter, limiting the lifetime of the
packet. It is decremented at each hop, and when it reaches zero, the
packet is discarded, and an error message may be sent back to the
source.

Protocol Field:
 The Protocol field specifies the transport protocol to which the
packet should be handed at the destination, such as TCP or UDP.

Header Checksum Field:
 The Header Checksum is a checksum calculated for the header to
detect errors during transmission.
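The IPv4 header checksum is the standard Internet checksum (RFC 1071): the 16-bit one's complement of the one's-complement sum of the header's 16-bit words, computed with the checksum field set to zero. A sketch (the function name is illustrative; the sample header bytes are a commonly cited worked example):

```python
def internet_checksum(header: bytes) -> int:
    """Internet checksum (RFC 1071) over a byte string."""
    if len(header) % 2:          # pad odd-length input with a zero byte
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]   # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)    # fold the carry back in
    return ~total & 0xFFFF

# A 20-byte IPv4 header with its checksum field (bytes 10-11) zeroed:
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
print(hex(internet_checksum(hdr)))  # 0xb861
```

A receiver recomputes the sum over the header including the transmitted checksum; an undamaged header yields 0, which is how errors are detected at each hop.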

Source and Destination Address Fields:
 The Source Address and Destination Address fields contain the IP
addresses of the source and destination network interfaces,
respectively.
Options Field:
 The Options field allows subsequent versions of the IP protocol to
include additional information or experimental data.
IPv4 is the dominant version of the IP protocol used in the Internet
today. It provides a best-effort way to transport packets across the
Internet, and its design principles have contributed to the Internet's
success and scalability. However, IPv4 is limited in terms of available
addresses, which led to the development of IPv6, the next version of
the IP protocol. IPv6 provides a much larger address space and other
improvements, but its adoption has been gradual due to the large
existing infrastructure based on IPv4.
IP Addresses
Each host and router on the Internet has an IP address representing a
network interface. IP addresses do not refer directly to hosts but to
network interfaces. A host on two networks needs two IP addresses,
while most hosts have one IP address.Routers have multiple
interfaces, so they have multiple IP addresses.
1. Prefixes (Hierarchical Structure);

IP addresses are hierarchical, unlike Ethernet addresses. IPv4
addresses are 32 bits, divided into a network portion and a host
portion. The network portion has the same value for all hosts on a
network (e.g., an Ethernet LAN), forming a contiguous block called a
"prefix." IP addresses are written in dotted decimal notation (e.g.,
128.208.2.151). Prefixes are written as the lowest IP address in the
block followed by the size (e.g., 128.208.0.0/24).
2. Subnet Masks:

A subnet mask is a 32-bit number created by setting the host bits to
all 0s and the network bits to all 1s. In this way, the subnet mask
separates the IP address into the network and host addresses. The
host address of all 1s (e.g., .255 in a /24) is reserved for broadcast,
and the host address of all 0s identifies the network itself.
An example of the subnet mask for IP address 192.168.100.1/24 is
shown in the figure below.
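The same split for 192.168.100.1/24 can be checked with Python's standard ipaddress module:

```python
import ipaddress

iface = ipaddress.ip_interface("192.168.100.1/24")

print(iface.netmask)                    # 255.255.255.0 (24 ones, 8 zeros)
print(iface.network.network_address)    # 192.168.100.0   (host bits all 0)
print(iface.network.broadcast_address)  # 192.168.100.255 (host bits all 1)
```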

3. Advantages of Hierarchical Addresses:
 Routers can forward packets based on the network portion alone,
making routing tables smaller.
 Scalability is improved, as routers only need to keep routes for
around 300,000 prefixes despite the large number of hosts on the
Internet.

4. Disadvantages of Hierarchical Addresses:
∙ IP addresses are tied to specific networks, making it challenging for
mobile hosts (solved by mobile IP).
 Address wastage can occur if large address blocks are assigned to
networks, leading to unused addresses.
5. IPv6 as a Solution:
The growth of the Internet has depleted the IPv4 address space. IPv6
with its larger address space provides a solution to the address
shortage. Until IPv6 is widely deployed, efficient allocation of IPv4
addresses remains crucial.
IP addresses are essential for identifying network interfaces, and the
hierarchical structure enables efficient routing. However, it also
presents challenges with host mobility and address management,
which are being addressed by IPv6's larger address space.

Subnets
ICANN (Internet Corporation for Assigned Names and Numbers)
manages the allocation of network numbers to avoid conflicts.
ICANN delegates parts of the address space to regional authorities,
who assign IP addresses to ISPs and other companies.
As companies grow and require more IP addresses, the initial block
allocation may become inefficient. Routers need all hosts in a network
to have the same network number for proper routing by prefix. When
a single network becomes too large, dividing it into smaller subnets is
necessary to address the address shortage issue.

Subnetting Process:
Subnetting allows a block of IP addresses to be split into several
smaller parts for internal use as multiple networks (subnets). Subnets
enable efficient use of IP address space without requiring additional
blocks from external authorities. Each subnet must be aligned so that
any bits can be used in the lower host portion.

Subnet Example:
 An example network with a /16 prefix is split into three subnets for
different departments.
 Computer Science Dept.: /17 (half of the original block)
 Electrical Engineering Dept.: /18 (quarter of the original block)
 Art Dept.: /19 (eighth of the original block)
 One eighth of the original block remains unallocated.

Subnet Routing:
When a packet arrives at a router, the router needs to determine which
subnet the destination IP address belongs to. The router does this by
ANDing the destination address with the subnet mask for each subnet
and checking if it matches the corresponding prefix. The longest
matching prefix determines the correct subnet for the packet.

Example: Consider a packet destined for IP address 128.208.2.151.
Convert the IP address to binary form:
128 208 2 151
10000000 11010000 00000010 10010111
Now, let's check which subnet the IP address belongs to:
1. For Computer Science (CS) subnet with prefix 128.208.128.0/17:

Subnet Mask: 255.255.128.0
Network Address: 128.208.128.0 (binary: 10000000 11010000
10000000 00000000)
Check: AND the given IP address with the subnet mask
10000000 11010000 00000010 10010111
11111111 11111111 10000000 00000000 (subnet mask)
---------------------------------------------------
10000000 11010000 00000000 00000000
The result does not match the prefix address (128.208.128.0), so the
IP address does not belong to the CS subnet.

2. For Electrical Engineering (EE) subnet with prefix 128.208.0.0/18:

Subnet Mask: 255.255.192.0
Network Address: 128.208.0.0 (binary: 10000000 11010000
00000000 00000000)
Check: AND the given IP address with the subnet mask
10000000 11010000 00000010 10010111
11111111 11111111 11000000 00000000 (subnet mask)
-------------------------------------
10000000 11010000 00000000 00000000
The result matches the prefix address (128.208.0.0), so the IP address
belongs to the EE subnet.
3. For Art subnet with prefix 128.208.96.0/19:

Subnet Mask: 255.255.224.0
Network Address: 128.208.96.0 (binary: 10000000 11010000
01100000 00000000)
Check: AND the given IP address with the subnet mask
10000000 11010000 00000010 10010111
11111111 11111111 11100000 00000000 (subnet mask)
-------------------------------------
10000000 11010000 00000000 00000000
The result does not match the prefix address (128.208.96.0), so the IP
address does not belong to the Art subnet.

Based on the computation, the destination IP address 128.208.2.151
belongs to the Electrical Engineering (EE) subnet with prefix
128.208.0.0/18. Therefore, the packet will be forwarded to the
interface that leads to the Electrical Engineering network.
Subnet divisions can be changed later if needed, by updating the
subnet masks at routers inside the network. Outside the network,
subnetting is not visible, so no changes to external databases or
contacting ICANN are necessary for subnet allocation.

Subnetting is a crucial mechanism to efficiently utilize IP address
space and manage network growth. It allows organizations to divide a
large network into smaller subnets without requiring additional
external address allocations. Routers can efficiently route packets to
the appropriate subnets based on the destination IP address and the
corresponding subnet masks.
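The ANDing check in the worked example above can be sketched in Python. This is a minimal illustration; it uses the standard `ipaddress` module only to convert dotted-decimal notation to integers:

```python
import ipaddress

def matches(dst: str, prefix: str, masklen: int) -> bool:
    """AND the destination with the subnet mask and compare the result
    to the prefix, exactly as in the worked example above."""
    mask = (0xFFFFFFFF << (32 - masklen)) & 0xFFFFFFFF
    d = int(ipaddress.ip_address(dst))
    p = int(ipaddress.ip_address(prefix))
    return (d & mask) == p

dst = "128.208.2.151"
print(matches(dst, "128.208.128.0", 17))  # CS  -> False
print(matches(dst, "128.208.0.0", 18))    # EE  -> True
print(matches(dst, "128.208.96.0", 19))   # Art -> False
```

The destination matches only the EE prefix, confirming the hand computation above.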

CIDR (Classless InterDomain Routing):

Routers at the edge of a network need entries for each of their subnets
in their routing tables. Core routers in the default-free zone of the
Internet need to know routes to every network, leading to large
routing tables. Large routing tables can cause performance issues and
communication complexities.

CIDR (Classless InterDomain Routing) is a solution to reduce routing
table sizes by aggregating multiple small prefixes into a single larger
prefix called a supernet. This process is called route aggregation.
Routers at different locations can have information about the same IP
address as belonging to prefixes of different sizes.

Aggregation is automatic and depends on where prefixes are located
in the Internet.
Example of Aggregation:
A block of 8192 IP addresses is available starting at 194.24.0.0. The
goal is to assign IP addresses to three universities: Cambridge,
Oxford, and the University of Edinburgh, with specific address
requirements for each.

Cambridge University:
Needs 2048 addresses.
Assigned IP addresses: 194.24.0.0 to 194.24.7.255
Subnet mask: 255.255.248.0 (which is equivalent to /21 prefix)

The subnet mask 255.255.248.0 (or /21) means that the first 21 bits of
the IP address are the network portion, and the remaining 11 bits are
the host portion. This allows for 2^11 = 2048 addresses, which is
sufficient for Cambridge University.
Oxford University:
Needs 4096 addresses.

Since a block of 4096 addresses must lie on a 4096-address boundary,
Oxford cannot be given addresses starting at 194.24.8.0. Instead, it is
assigned IP addresses: 194.24.16.0 to 194.24.31.255
Subnet mask: 255.255.240.0 (or /20 prefix)

The subnet mask 255.255.240.0 (or /20) allows for 4096 addresses,
with the first 20 bits being the network portion and the remaining 12
bits being the host portion. Oxford University gets a block of 4096
addresses that starts at 194.24.16.0.
University of Edinburgh:
Needs 1024 addresses.

Assigned IP addresses: 194.24.8.0 to 194.24.11.255
Subnet mask: 255.255.252.0 (or /22 prefix)

The subnet mask 255.255.252.0 (or /22) provides 1024 addresses,
with the first 22 bits being the network portion and the remaining 10
bits being the host portion. The University of Edinburgh is given a
block of 1024 addresses starting at 194.24.8.0.
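The three allocations above can be checked with Python's `ipaddress` module. The closing assertion illustrates route aggregation: an outside router could advertise all three blocks as the single /19 supernet covering the original 8192 addresses:

```python
import ipaddress

# The three university allocations from the example above
blocks = {
    "Cambridge": ipaddress.ip_network("194.24.0.0/21"),   # 2048 addresses
    "Edinburgh": ipaddress.ip_network("194.24.8.0/22"),   # 1024 addresses
    "Oxford":    ipaddress.ip_network("194.24.16.0/20"),  # 4096 addresses
}

for name, net in blocks.items():
    # net[0] and net[-1] are the first and last addresses of the block
    print(name, net, net.num_addresses, "addresses:", net[0], "-", net[-1])

# All three fit inside the original 8192-address block, which an outside
# router could advertise as the single aggregated prefix 194.24.0.0/19.
supernet = ipaddress.ip_network("194.24.0.0/19")
assert all(net.subnet_of(supernet) for net in blocks.values())
```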
Longest Matching Prefix Routing:
 When a packet arrives, the routing table is scanned to find the
longest matching prefix for the destination IP address.
 Overlapping prefixes are allowed, and packets are sent in the
direction of the most specific route with the fewest IP addresses.
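Longest-matching-prefix lookup with overlapping prefixes can be sketched as below. The table contents are illustrative assumptions, reusing the 194.24.0.0/19 block from the aggregation example:

```python
import ipaddress

# Overlapping prefixes (illustrative): the more specific /22 overrides
# the aggregated /19 for destinations inside it.
table = {
    ipaddress.ip_network("194.24.0.0/19"): "if0",   # aggregate route
    ipaddress.ip_network("194.24.12.0/22"): "if1",  # more specific route
}

def lookup(dst: str) -> str:
    """Return the outgoing interface for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    candidates = [net for net in table if addr in net]
    best = max(candidates, key=lambda net: net.prefixlen)  # longest wins
    return table[best]

print(lookup("194.24.12.1"))   # if1 (matches both; the /22 is longer)
print(lookup("194.24.1.1"))    # if0 (matches only the /19)
```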

Advantages of CIDR:
 Reduces the size of routing tables, making routing more efficient.
 Allows flexible address allocation and aggregation based on
network requirements.
 CIDR is widely used in the Internet and reduces routing table sizes
to manageable levels.
Hardware Support:
 Complex algorithms have been devised to speed up the address
matching process in routers.
 Commercial routers use custom VLSI chips with these algorithms
embedded in hardware to handle large routing tables efficiently.

CIDR is a crucial component of modern Internet routing, enabling
efficient address allocation and route aggregation and reducing the
size of routing tables. It plays a significant role in managing the
ever-growing number of networks connected to the Internet.

Classful and Special Addressing:

Before 1993, IP addresses were divided into five classes (A, B, C, D,
and E) with fixed address block sizes.

The fixed-size classes wasted IP addresses, as organizations often
received more addresses than they needed, especially with class B
networks.
To address the limitations of classful addressing, subnets were
introduced, allowing organizations to divide their address blocks into
smaller sub-blocks as needed.
CIDR (Classless Inter-Domain Routing) replaced classful addressing
and allowed more flexible allocation of IP addresses using
variable-length prefixes (subnet masks).
Special IP Addresses: Specific IP addresses, such as 0.0.0.0,
127.xx.yy.zz (loopback testing), and 255.255.255.255 (broadcast),
have special meanings in networking.
CIDR has been crucial in scaling the Internet and efficiently utilizing
the IPv4 address space.
IP Version 6:
IPv6 (Internet Protocol version 6) was designed to address the
growing shortage of IPv4 addresses, providing a larger address space
with 128-bit addresses.
IPv6 aims to support billions of hosts, reduce routing table size,
simplify header processing, enhance security, and improve quality of
service.
IPv6 Header:
The IPv6 header is simplified compared to IPv4, with 7 fields
(compared to 13 in IPv4).
The fields include: Version, Differentiated Services (Traffic Class),
Flow Label, Payload Length, Next Header, Hop Limit, Source Address,
and Destination Address.

IPv6 addresses are 128 bits long, expressed in hexadecimal format
with groups separated by colons.
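As an illustration of the notation, leading zeros within a group can be dropped and one run of all-zero groups replaced by `::`. Python's `ipaddress` module applies this compression automatically (the address itself is an illustrative documentation-range value):

```python
import ipaddress

# Full form and compressed form of the same (illustrative) IPv6 address
addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:ff00:0042:8329")
print(addr)           # compressed form with :: for the zero run
print(addr.exploded)  # full form with all leading zeros restored
```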
Extension Headers:
IPv6 introduces extension headers for additional functionality. Six
types of extension headers are defined: Hop-by-hop options,
Destination options, Routing, Fragmentation, Authentication, and
Encrypted security payload.
Extension headers can be used to provide extra information in an
efficient manner.

Version (4-bits): Indicates the version of the Internet Protocol; for
IPv6 it contains the bit sequence 0110.
Traffic Class (8-bits): The Traffic Class field indicates class or priority of
IPv6 packet which is similar to Service Field in IPv4 packet. It helps
routers to handle the traffic based on the priority of the packet. If
congestion occurs on the router then packets with the least priority will
be discarded.
As of now, only 4-bits are being used (and the remaining bits are under
research), in which 0 to 7 are assigned to Congestion controlled traffic
and 8 to 15 are assigned to Uncontrolled traffic.
Uncontrolled data traffic is mainly used for audio/video data, so
higher priority is given to uncontrolled data traffic.
The source node is allowed to set the priorities but on the way, routers
can change it. Therefore, the destination should not expect the same
priority which was set by the source node.
Flow Label (20-bits): Flow Label field is used by a source to label the
packets belonging to the same flow in order to request special handling
by intermediate IPv6 routers, such as non-default quality of service or
real-time service. In order to distinguish the flow, an intermediate
router can use the source address, a destination address, and flow label
of the packets. Between a source and destination, multiple flows may
exist because many processes might be running at the same time.
Routers or hosts that do not support the flow label functionality, and
packets requiring only default router handling, set the flow label
field to 0. While setting up the flow, the source is also supposed to
specify the lifetime of the flow.
Payload Length (16-bits): It is a 16-bit (unsigned integer) field,
indicates the total size of the payload which tells routers about the
amount of information a particular packet contains in its payload. The
payload Length field includes extension headers(if any) and an
upper-layer packet. In case the length of the payload is greater than
65,535 bytes (payload up to 65,535 bytes can be indicated with
16-bits), then the payload length field will be set to 0 and the jumbo
payload option is used in the Hop-by-Hop options extension header.
Next Header (8-bits): Next Header indicates the type of extension
header(if present) immediately following the IPv6 header. Whereas In
some cases it indicates the protocols contained within upper-layer
packets, such as TCP, UDP.
Hop Limit (8-bits): Hop Limit field is the same as TTL in IPv4 packets. It
indicates the maximum number of intermediate nodes IPv6 packet is
allowed to travel. Its value gets decremented by one, by each node that
forwards the packet and the packet is discarded if the value decrements
to 0. This is used to discard the packets that are stuck in an infinite loop
because of some routing error.

Source Address (128-bits): The 128-bit IPv6 address of the original
source of the packet.
Destination Address (128-bits): The destination Address field indicates
the IPv6 address of the final destination(in most cases). All the
intermediate nodes can use this information in order to correctly route
the packet.
Extension Headers: In order to rectify the limitations of the IPv4 Option
Field, Extension Headers are introduced in IP version 6. The extension
header mechanism is a very important part of the IPv6 architecture. The
next Header field of IPv6 fixed header points to the first Extension
Header and this first extension header points to the second extension
header and so on.

An IPv6 packet may contain zero, one, or more extension headers, but
they should be present in their recommended order.
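The fixed-header layout described above can be sketched as a parser. The field widths follow the text (4-bit version, 8-bit traffic class, 20-bit flow label, then 16/8/8 bits and two 128-bit addresses); the sample header values are illustrative:

```python
import struct

def parse_ipv6_header(data: bytes) -> dict:
    """Parse the 40-byte IPv6 fixed header described above (a sketch)."""
    if len(data) < 40:
        raise ValueError("IPv6 fixed header is 40 bytes")
    # First 8 bytes: one 32-bit word (version/traffic class/flow label),
    # then payload length (16 bits), next header (8), hop limit (8)
    vtc_flow, payload_len, next_header, hop_limit = struct.unpack(
        "!IHBB", data[:8])
    return {
        "version": vtc_flow >> 28,
        "traffic_class": (vtc_flow >> 20) & 0xFF,
        "flow_label": vtc_flow & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,   # e.g. 6 = TCP, 17 = UDP
        "hop_limit": hop_limit,
        "src": data[8:24],            # 128-bit source address
        "dst": data[24:40],           # 128-bit destination address
    }

# Build a sample header: version 6, flow label 5, 20-byte UDP payload
hdr = struct.pack("!IHBB", (6 << 28) | 5, 20, 17, 64) + bytes(16) + bytes(16)
fields = parse_ipv6_header(hdr)
print(fields["version"], fields["next_header"], fields["hop_limit"])  # 6 17 64
```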

Controversies: IPv6 design decisions sparked debates on issues like
address length, hop limit field size, maximum packet size, and
security implementation. Decisions were made by balancing factors
such as performance, security, and compatibility with existing
networks. IPv6 was designed to support future growth and needs
while considering ongoing debates and potential constraints.
Deployment and Transition:
IPv6 deployment has been a challenge due to its differences from
IPv4 and existing network infrastructure. Transition strategies include
dual-stack hosts, automatic tunneling, and methods for automatic
configuration of IPv6 tunnels.
IPv6 was developed to address the limitations of IPv4, such as
address exhaustion and security concerns, while introducing
enhancements in various aspects of network communication.
Internet Control Protocols
In addition to IP, which is used for data transfer, the Internet has
several companion control protocols that are used in the network
layer. They include ICMP, ARP, and DHCP.
Internet Control Message Protocol (ICMP):
ICMP (Internet Control Message Protocol) is used for reporting errors
and other informational messages concerning IP packet processing. It
has various message types
ICMP Message Types: ICMP messages are encapsulated in IP
packets for various purposes. These message types are listed below
with their purpose.
There is a network debugging tool that uses Time Exceeded messages
to identify routers along the path to a destination IP address. This tool
is called Traceroute.
Similar ICMP message types exist for IPv6, serving the same
purposes as in IPv4.

ARP—The Address Resolution Protocol

IP addresses are used for routing and identifying hosts on the Internet.
They are 32-bit addresses typically represented in dotted-decimal
notation (e.g., 192.168.1.1).
Ethernet addresses (also known as MAC addresses) are 48-bit
addresses assigned to Network Interface Cards (NICs) of Ethernet
devices. They are used for communication within a local network
segment.
The purpose of ARP (Address Resolution Protocol) is to map IP
addresses to Ethernet addresses in order to facilitate communication
between devices on a local network. It enables a sender to determine
the Ethernet address of a destination device when only its IP address
is known.

Address Resolution Process:


 When a host wants to send a packet to another host on the same
network, but it only knows the IP address of the destination, it uses
ARP to find the corresponding Ethernet address.
 The sending host broadcasts an ARP request on the local network,
asking which device has the IP address it's looking for.
 The host with the matching IP address responds with its Ethernet
address, allowing the sender to create an Ethernet frame for the packet
and send it to the correct device.

The following figure shows an example network and the working of
ARP.

ARP Caching:
After an ARP request and response have occurred, many hosts will
cache the result (IP-to-Ethernet mapping) for a period of time to avoid
repeating the ARP process for the same destination. This caching
helps to optimise network performance and reduce unnecessary
broadcast traffic.
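The caching behaviour described above can be sketched as a small table of IP-to-MAC mappings with expiry times. This is a toy illustration; the addresses and the 60-second timeout are assumptions, not values mandated by ARP:

```python
import time

class ArpCache:
    """A minimal sketch of an ARP cache: IP -> MAC mappings that
    expire after a timeout, as described above."""
    def __init__(self, ttl: float = 60.0):
        self.ttl = ttl
        self.entries = {}            # ip -> (mac, expiry time)

    def add(self, ip: str, mac: str) -> None:
        self.entries[ip] = (mac, time.time() + self.ttl)

    def lookup(self, ip: str):
        entry = self.entries.get(ip)
        if entry is None:
            return None              # cache miss: broadcast an ARP request
        mac, expiry = entry
        if time.time() > expiry:
            del self.entries[ip]     # stale entry: must re-resolve
            return None
        return mac

cache = ArpCache(ttl=60.0)
cache.add("192.168.1.5", "00:1b:44:11:3a:b7")
print(cache.lookup("192.168.1.5"))  # cached MAC is returned
print(cache.lookup("192.168.1.9"))  # None -> would trigger an ARP broadcast
```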

DHCP (Dynamic Host Configuration Protocol)

DHCP is a protocol used to automatically configure network-related
settings for devices when they connect to a network. It avoids the
need for manual configuration, making network setup easier and less
error-prone.
When a device joins a network, it doesn't have an IP address or other
network settings. DHCP allows the device to request necessary
information from a central server on the network.
Key Components:
Every network should have a DHCP server responsible for
configuration. It Assigns IP addresses and other settings to devices.
Devices seeking network configuration are called DHCP clients. For
example, your computer, phone, or any device that connects to Wi-Fi.
How DHCP Works
1. When a device connects to the network, it sends a DHCP Discover
packet as a broadcast. It's like the device saying, "Hey, I'm here and
need network settings!"
2. The DHCP server receives the Discover packet and responds with a
DHCP Offer packet. This packet contains an available IP address and
other configuration details.
3. The client receives multiple Offers (if available) and chooses one.
It then sends a DHCP Request to the server, confirming its choice.
4. The server gets the Request and sends back a DHCP Acknowledge
packet. This confirms the configuration and provides the device with
the chosen settings.

IP addresses are allocated for a specific period called a "lease." Before
the lease expires, the client must request renewal to continue using the
same settings. If renewal fails or the lease ends, the device may lose
its IP address.
DHCP can provide more than just IP addresses. It can configure
settings like subnet masks, default gateway addresses, DNS server
addresses, and more.
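The four-step Discover/Offer/Request/Acknowledge exchange above can be sketched as a toy server (no real networking; the class, pool, and MAC address are illustrative assumptions):

```python
# A toy sketch of the DHCP exchange described above.
class DhcpServer:
    def __init__(self, pool):
        self.pool = list(pool)       # available IP addresses
        self.leases = {}             # client MAC -> leased IP

    def offer(self, mac: str):
        """Step 2: respond to a Discover with an available address."""
        return self.pool[0] if self.pool else None

    def acknowledge(self, mac: str, ip: str) -> bool:
        """Step 4: confirm the address the client Requested."""
        if ip in self.pool:
            self.pool.remove(ip)
            self.leases[mac] = ip
            return True
        return False

server = DhcpServer(pool=["192.168.1.100", "192.168.1.101"])
mac = "aa:bb:cc:dd:ee:ff"
ip = server.offer(mac)               # steps 1-2: Discover / Offer
ok = server.acknowledge(mac, ip)     # steps 3-4: Request / Acknowledge
print(ip, ok)                        # 192.168.1.100 True
```

A real server would also track lease expiry times and reclaim unrenewed addresses, per the lease mechanism described above.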

Label Switching and MPLS

MultiProtocol Label Switching (MPLS) is a technology used to
efficiently route and forward network traffic, commonly employed by
ISPs to manage Internet traffic within their networks.
It adds a label to packets, allowing routers to make forwarding
decisions based on the label instead of the traditional destination IP
address.
MPLS combines aspects of circuit switching and datagram-based
routing.
Label is a short identifier added to the front of each packet. Label
Switched Router (LSR) is a router that makes forwarding decisions
based on the label, and Label Edge Router (LER) is a Router at the
edge of an MPLS network that assigns labels to packets.

MPLS Header:
A 4-byte header is added to the packet, containing:
● Label: Index into the forwarding table.
● QoS (Quality of Service): Indicates priority or service level.
● S (Stacking) Bit: Used for stacking multiple labels.
● TTL (Time to Live): Decremented at each hop to prevent
looping.
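The 4-byte header layout above (20-bit label, 3-bit QoS, 1-bit S flag, 8-bit TTL) can be sketched with bit operations; the field values used are illustrative:

```python
import struct

def pack_mpls(label: int, qos: int, s: int, ttl: int) -> bytes:
    """Pack the 4-byte MPLS header described above: 20-bit label,
    3-bit QoS, 1-bit stacking flag, 8-bit TTL."""
    word = (label << 12) | (qos << 9) | (s << 8) | ttl
    return struct.pack("!I", word)

def unpack_mpls(data: bytes) -> dict:
    """Reverse of pack_mpls: extract the four fields."""
    (word,) = struct.unpack("!I", data[:4])
    return {"label": word >> 12, "qos": (word >> 9) & 0x7,
            "s": (word >> 8) & 0x1, "ttl": word & 0xFF}

hdr = pack_mpls(label=1000, qos=0, s=1, ttl=64)
print(unpack_mpls(hdr))  # {'label': 1000, 'qos': 0, 's': 1, 'ttl': 64}
```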

Label Assignment and Path Establishment:


MPLS operates without the need for direct user involvement in the
initial setup. Instead, the establishment of label forwarding
information is facilitated through control protocols, encompassing
routing protocols and connection setup mechanisms. Within the
MPLS network, routers assume the responsibility of identifying the
routes for which they function as final destinations. Subsequently,
these routers allocate labels to represent these routes. Through a
process of label exchange, routers communicate these labels with one
another, effectively sharing information about the routes and their
associated labels. As this label exchange takes place, routers populate
their forwarding tables with the relevant entries, creating a
comprehensive map that dictates how incoming data packets should
be forwarded.
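The forwarding behaviour this produces in an LSR can be sketched as a table lookup with label swapping (the table contents and interface names are illustrative assumptions):

```python
# A toy label-swapping sketch for a Label Switched Router (LSR).
forwarding_table = {
    # incoming label: (outgoing label, outgoing interface)
    17: (42, "eth1"),
    42: (99, "eth2"),
}

def forward(label: int, ttl: int):
    """Look up the incoming label, swap it for the outgoing label,
    and decrement the TTL (as the header's TTL field requires)."""
    out_label, iface = forwarding_table[label]
    return out_label, iface, ttl - 1

print(forward(17, 64))  # (42, 'eth1', 63)
```

The lookup is a single exact-match on a short label rather than a longest-prefix match on a full IP address, which is what makes MPLS forwarding fast.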

MPLS can use multiple labels stacked on top of each other. Outer
label guides the path, inner labels are revealed at the end of the path
for further forwarding.
MPLS is sometimes referred to as a "layer 2.5" protocol. It operates
between the network layer (IP) and the link layer (PPP, Ethernet, etc.).
It doesn't interfere with IP addresses, allowing flexibility in routing.

Benefits of MPLS:
1. Fast Forwarding: Labels enable quick and efficient forwarding
based on lookup tables.
2. Quality of Service (QoS): MPLS can support differentiated service
levels.
3. Traffic Engineering: Optimizing network resources and traffic
distribution.
4. Multiprotocol Support: MPLS can carry various types of traffic,
not just IP.

OSPF—An Interior Gateway Routing Protocol

● OSPF stands for Open Shortest Path First. It is an Interior
Gateway Protocol mainly used for exchanging routing
information between dynamic routers.
● It is generally used within the autonomous system of the Internet
and used in large TCP/IP networks.
● In the corporate networks, OSPF replaced the old Routing
Information Protocol.
● OSPF is an intradomain protocol, meaning it is used within a
network or an area.
● OSPF protocol is the protocol that works based on the link state
routing algorithm in which each router has the information
about each domain and uses this information to determine the
shortest path.

OSPF Areas

Refer to the below image, which shows the OSPF areas.

In OSPF, the autonomous system is divided into areas to avoid the
high traffic caused by flooding. These areas can be a collection of
hosts, routers, and networks. Just as the Internet is divided by
internet service providers into various autonomous systems to make
management easy, OSPF divides the autonomous system further into
areas. Routers within an area are flooded with routing information.

There are also special routers within the divided areas. The routers
present at the border of an area are considered special routers and
are known as Area Border Routers. These routers generally summarise
the information of their area and share it with other areas.

The different areas of an autonomous system are connected to the
backbone. The main purpose of the backbone is to enable
communication between the different areas.

Working of OSPF

The working of the OSPF protocol can be understood in the following
three steps:

Step 1: The first step in the working of the OSPF protocol is to
become OSPF neighbours. Two routers running on the same link
establish the neighbour relationship between them.

Step 2: The next step is to exchange database information between
the routers. Once the routers establish the neighbour relationship,
they exchange their link-state databases (LSDBs) with each other.

Step 3: The third step in the working of the OSPF protocol is to select
the best route. After an exchange of LSDB information, the router
finds the best route for adding to the routing table.

How Does a Router Form a Neighbour Relationship?

Creating neighbour relationships is the first step in the working of the
OSPF protocol. Before forming the relationship, the first thing is to
select the router ID.

Router ID (RID) is a number used for the unique identification of each
router on the network; it is written in IPv4 address format. The router
ID can be set in two ways: it can be configured manually, or the router
can decide its ID on its own.

The logic used by the router to set the router ID is given below.

Manually assigned: First of all the router checks whether the router
ID is assigned manually or not. If the ID is set manually, then that is
considered the router ID. Otherwise, if the ID is not manually set then
the router selects the highest 'up' status loopback interface IP address
as an ID. In a situation, where no loopback interfaces are available
then for the ID it will select the highest 'up' status non-loopback
interface IP address.
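The selection logic above can be sketched as follows (a hypothetical helper, not actual router code; the sample addresses are illustrative):

```python
def ip_key(ip: str):
    """Compare dotted-decimal addresses numerically, octet by octet."""
    return tuple(int(part) for part in ip.split("."))

def choose_router_id(manual_id, loopback_ips, interface_ips):
    """Router-ID selection per the logic described above."""
    if manual_id:                    # 1. a manually configured ID wins
        return manual_id
    if loopback_ips:                 # 2. highest 'up' loopback address
        return max(loopback_ips, key=ip_key)
    if interface_ips:                # 3. highest 'up' non-loopback address
        return max(interface_ips, key=ip_key)
    return None

print(choose_router_id(None, ["10.1.1.1", "10.2.2.2"], ["192.168.0.1"]))
# 10.2.2.2 (highest loopback, even though an interface address is larger)
```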

Refer to the below image for a visualisation of the manual assignment
of the router ID.

OSPF provides communication between two routers connected by a
point-to-point link, or among a number of connected routers. Two
routers can be considered adjacent only when they exchange HELLO
messages with each other, and they enter the two-way state only when
both receive acknowledgment of the HELLO message. The creation of
relationships between routers is possible because OSPF is a link-state
routing protocol. Two routers are considered neighbours only if they
have the same area ID, share a subnet, and agree on authentication
and timer settings. The relationship is created so that the routers can
learn about each other. In a particular network, two routers are
considered neighbours only if at least one of them serves as the
designated router or backup designated router, or if they are
connected by a point-to-point link.

OSPF Message Format

Refer to the below image for the OSPF message format

The OSPF message format contains 8 fields, which are given below:

Version: An 8-bit field that specifies the version of the OSPF
protocol.

Type: An 8-bit field that specifies the OSPF packet type.

Message: A 16-bit field that specifies the total length of the message,
including the header length.

Source IP address: Specifies the address of the source of the packet,
i.e., the address from which the packet is sent to the receiver.

Area identification: Specifies the area in which the routing takes
place.

Checksum: Used for error detection and correction.

Authentication type: This field indicates the authentication method
in use: 0 means no authentication, and 1 means password-based
authentication.

Authentication: A 64-bit field that specifies the actual value of the
authentication data.

OSPF Packets

Hello

The Hello packet is generally used for creating neighbour
relationships and checking the reachability of neighbours, so it is
required at the time of connection establishment between routers.

Database Description

After connection establishment, when communication between the
neighbour router and the system happens for the first time, the router
sends database information about the network topology to the system,
which can then modify and update its own database.

Link State Request

The router sends a link-state request to get information about a
specified route. For example, suppose two routers are connected and
router 1 wants to know the information of router 2. Router 1 sends a
link-state request to router 2, and router 2 sends its link-state
information to router 1 after receiving the request.

Link State Update

The link-state update is used for advertising; it is also used when the
router wants to broadcast its link state.

Link State Acknowledgment

Link-state acknowledgment forces the router to send an
acknowledgment for every link-state update, which increases the
reliability of routing. For example, suppose router 1 sends a link-state
update to router 2 and router 3. When router 2 and router 3 receive the
update, they each return a link-state acknowledgment to router 1, so
that router 1 knows that router 2 and router 3 have received the
link-state update.

OSPF States

Down

When the state of the device is down, it is not able to receive the
HELLO packet. This down state does not refer to the down condition
of the device physically. It simply means that the process of the OSPF
protocol has not been started.

Init

When the device enters the Init state, it means a HELLO packet has
been received from another router.

Exstart

When the connection between the two routers starts, they enter the
Exstart state. In this state, a master and a slave are selected based on
router ID. The main function of the master is to control the sequence
numbers and begin the exchange process.

Exchange

When the device enters the Exchange state, both routers start
transferring the list of LSAs (Link State Advertisements) to each
other. The LSA includes a database description.

Loading

When the device enters the Loading state, there is an exchange of
LSU (Link State Update), LSR (Link State Request), and LSA (Link
State Advertisement) packets.

Full

The device will enter into the full state if the exchange of LSA is
completed successfully.
Router Attributes

Before entering the Exstart state, the OSPF protocol selects one router
to act as the designated router (DR), and another router acts as the
backup designated router (BDR). These are attributes of routers, not
types of routers. On a broadcast network, one router is chosen as the
designated router and another as the backup designated router.

The selection of the designated and backup designated routers is
performed to minimise the number of adjacencies and to avoid
flooding in the network. They act as a central point through which
routing information is exchanged between the routers. On directly
connected point-to-point links, however, there is no need to select a
DR and BDR.

If no DR and BDR were selected, every router would send updates to
all of its adjacent neighbours, which may lead to flooding; the DR
and BDR are selected to resolve this problem. Instead of exchanging
updates among all routers in the network, each non-DR and non-BDR
router transmits its updates to the DR and BDR only. The DR then
distributes the network topology information among the other routers
in the same area. The BDR acts as a substitute for the DR: it also
receives routing information from all other routers, but it is not
allowed to distribute it. Only if the DR fails is the BDR allowed to
distribute the routing information.

Non-DR and non-BDR routers use the multicast address 224.0.0.6 to
send routing information to the DR and BDR, which listen on this
address to receive it.

The DR and BDR can be selected based on the following rules:

● The router having the highest OSPF priority will be selected as
the DR.
● If no router has the highest priority, then the router having the
highest router ID will be selected as the DR. The router having
the second-highest priority will be selected as the BDR.
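The two rules above can be sketched as an election function (the router names, priorities, and IDs are illustrative assumptions):

```python
def ip_key(rid: str):
    """Compare dotted-decimal router IDs numerically, octet by octet."""
    return tuple(int(p) for p in rid.split("."))

def elect_dr_bdr(routers):
    """DR/BDR election per the rules above: highest OSPF priority wins;
    ties are broken by the highest router ID (a sketch)."""
    ranked = sorted(routers,
                    key=lambda r: (r["priority"], ip_key(r["router_id"])),
                    reverse=True)
    dr = ranked[0]["name"]
    bdr = ranked[1]["name"] if len(ranked) > 1 else None
    return dr, bdr

routers = [
    {"name": "R1", "priority": 1, "router_id": "4.4.4.4"},
    {"name": "R2", "priority": 1, "router_id": "3.3.3.3"},
    {"name": "R3", "priority": 1, "router_id": "1.1.1.1"},
]
print(elect_dr_bdr(routers))  # ('R1', 'R2')
```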

Let's take this example to understand the complete concept of DR.

Refer to the below image for an example of the concept of the DR.

In the given figure, R1 will be selected as the DR, as R1 has the
highest router ID compared to the others, and R2 works as the BDR,
as R2 has the second-highest priority among all. If the link between
the system and R4 fails, R4 reports its link failure only to R1 and R2,
the DR and BDR. After that, the DR informs all the non-DRs and
non-BDRs of this update. In this situation, only R3 (apart from R4)
serves as a non-DR and non-BDR.
