
UNIT – IV - MAC SUB LAYER AND NETWORK LAYER – SECA1604

SECA1604 - COMPUTER NETWORKS


UNIT 4 - MAC SUB LAYER AND NETWORK LAYER

MAC sub layer for Standard Ethernet, Fast Ethernet, Wireless LAN and broadband
wireless. Design issues of network layer - Routing algorithm - shortest path routing -
Distance vector routing - Broadcast routing –Inter domain routing, Congestion control
algorithm - Congestion control in virtual circuit and datagram switches - The network layer
in the internet - The IP protocol-IP Addresses - IPv6, ARP,DHCP,ICMP, Classless
Addressing, Network Address Translation.

4.1 MAC Sublayer for Standard Ethernet

Ethernet operates in the data link layer and the physical layer. It is a family of
networking technologies that are defined in the IEEE 802.2 and 802.3 standards. Ethernet
supports data bandwidths of

➢ 10 Mb/s
➢ 100 Mb/s
➢ 1000 Mb/s (1 Gb/s)
➢ 10,000 Mb/s (10 Gb/s)
➢ 40,000 Mb/s (40 Gb/s)
➢ 100,000 Mb/s (100 Gb/s)

Ethernet standards define both the Layer 2 protocols and the Layer 1 technologies.
For the Layer 2 protocols, as with all 802 IEEE standards, Ethernet relies on the two
separate sublayers of the data link layer to operate, the Logical Link Control (LLC) and
the MAC sublayers.

LLC sublayer

The Ethernet LLC sublayer handles the communication between the upper layers
and the lower layers. This is typically between the networking software and the device
hardware. The LLC sublayer takes the network protocol data, which is typically an IPv4
packet, and adds control information to help deliver the packet to the destination node.
The LLC is used to communicate with the upper layers of the application, and transition
the packet to the lower layers for delivery.
LLC is implemented in software, and its implementation is independent of the
hardware. In a computer, the LLC can be considered the driver software for the NIC. The
NIC driver is a program that interacts directly with the hardware on the NIC to pass the
data between the MAC sublayer and the physical media.

MAC sublayer

MAC constitutes the lower sublayer of the data link layer. MAC is implemented in
hardware, typically in the computer NIC. The specifics are defined in the IEEE 802.3
standards. Figure 4.1 lists common IEEE Ethernet standards.

IEEE 802.3 and Ethernet

• Very popular LAN standard.


• Ethernet and IEEE 802.3 are distinct standards but as they are very similar to one
another these words are used interchangeably.
• A standard for a 1-persistent CSMA/CD LAN.
• It covers the physical layer and MAC sublayer protocol.

Figure 4.1 Common IEEE Ethernet Standards

4.2 Fast Ethernet

Fast Ethernet is a collective term for a number of Ethernet standards that carry
traffic at the nominal rate of 100 Mbit/s (the earlier Ethernet speed was 10 Mbit/s). Of the
Fast Ethernet standards, 100BASE-TX is by far the most common.
Fast Ethernet was introduced in 1995 as the IEEE 802.3u standard and remained
the fastest version of Ethernet for three years before the introduction of Gigabit Ethernet.

4.3 Wireless Local Area Network (WLAN)

A wireless local area network (WLAN) is a wireless computer network that links two
or more devices using wireless communication within a limited area such as a home,
school, computer laboratory, or office building. This gives users the ability to move around
within a local coverage area and yet still be connected to the network. Through a
gateway, a WLAN can also provide a connection to the wider Internet.
Most modern WLANs are based on IEEE 802.11 standards and are marketed under
the Wi-Fi brand name.

4.3.1 Wireless broadband


Wireless broadband is technology that provides high-speed wireless Internet access
or computer networking access over a wide area. Broadband means "having
instantaneous bandwidths greater than 1 MHz and supporting data rates greater than
about 1.5 Mbit/s."

4.4 The network layer design issues:

1) Store-and-forward packet switching.
2) Services provided to the transport layer.
3) Implementation of connectionless service.
4) Implementation of connection-oriented service.
5) Comparison of virtual-circuit and datagram subnets.

1) Store-and-forward packet switching:

Store-and-forward operation:

i) A host transmits a packet to a router across a LAN or over a point-to-point link.
ii) The packet is stored on the router until it has fully arrived and been processed.
iii) The packet is then forwarded to the next router.
2) Services provided to the transport layer:

The network layer services have been designed with the following goals:

i) The services should be independent of the router technology.
ii) The transport layer should be shielded from the number, type, and topology
of the routers present.
iii) The network addresses made available to the transport layer should use a
uniform numbering plan.

3) Implementation of connectionless service:

If connectionless service is offered, packets are injected into the subnet individually
and routed independently of each other. Each packet is transmitted independently.
Connectionless service is used in the network layer by IP and in the transport layer by UDP.
Packets are frequently called datagrams, and connectionless service is used largely for
data communication on the Internet.

4) Implementation of connection-oriented service:

If connection-oriented service is used, a path from the source router to the
destination router must be established before any data packets can be sent.
Connection-oriented service is also called virtual-circuit service. This service is used in
the network layer by ATM. It is also used in the transport layer by TCP.
A connection must be established before any data can be sent; packet order is
preserved, and a logical connection is established between the endpoints.

4.5 Routing Algorithm

A Routing Algorithm is a method for determining the routing of packets in a node.

For each node of a network, the algorithm determines a routing table, which matches
each destination with an output line. The algorithm should lead to consistent routing,
that is to say, routing without loops.
The routing algorithm is that part of the network layer software responsible for
deciding which output line an incoming packet should be transmitted on.

PROPERTIES OF ROUTING ALGORITHM:


Correctness, simplicity, robustness, stability, fairness, and optimality
FAIRNESS AND OPTIMALITY.

Fairness and optimality may sound obvious, but as it turns out, they are often
contradictory goals. Suppose, for example, that there is enough traffic between A and A',
between B and B', and between C and C' to saturate the horizontal links of a network. To
maximize the total flow, the X to X' traffic should be shut off altogether. Unfortunately, X
and X' may not see it that way. Evidently, some compromise between global efficiency
and fairness to individual connections is needed.

CATEGORY OF ALGORITHM

➢ Routing algorithms can be grouped into two major classes: nonadaptive and
adaptive.
➢ Nonadaptive algorithms do not base their routing decisions on measurements or
estimates of the current traffic and topology. Instead, the choice of the route to use to
get from I to J is computed in advance, off-line, and downloaded to the routers when
the network is booted.
➢ This procedure is sometimes called Static routing.
➢ Adaptive algorithms, in contrast, change their routing decisions to reflect changes in
the topology, and usually the traffic as well
➢ This procedure is sometimes called dynamic routing

THE OPTIMALITY PRINCIPLE

➢ If router J is on the optimal path from router I to router K, then the optimal path from J
to K also falls along the same route.
➢ The set of optimal routes from all sources to a given destination form a tree rooted at
the destination. Such a tree is called a sink tree.
➢ As a direct consequence of the optimality principle, we can see that the set of optimal
routes from all sources to a given destination form a tree rooted at the destination.
➢ Such a tree is called a sink tree where the distance metric is the number of hops. Note
that a sink tree is not necessarily unique; other trees with the same path lengths may
exist.
➢ The goal of all routing algorithms is to discover and use the sink trees for all routers.

Fig 4.2 (a) A Sub Net, (b) A Sink tree for Router B

4.6 Shortest path routing

➢ A technique to study routing algorithms: The idea is to build a graph of the subnet, with
each node of the graph representing a router and each arc of the graph representing
a communication line (often called a link).
➢ To choose a route between a given pair of routers, the algorithm just finds the shortest
path between them on the graph.
➢ One way of measuring path length is the number of hops. Another metric is the
geographic distance in kilometers. Many other metrics are also possible. For example,
each arc could be labeled with the mean queuing and transmission delay for some
standard test packet as determined by hourly test runs.
➢ In the general case, the labels on the arcs could be computed as a function of the
distance, bandwidth, average traffic, communication cost, mean queue length,
measured delay, and other factors. By changing the weighting function, the algorithm
would then compute the ''shortest'' path measured according to any one of a number
of criteria or to a combination of criteria.
Figure 4.3 The first five steps used in computing the shortest path from A to D. The
arrows indicate the working node.

➢ To illustrate how the labelling algorithm works, look at the weighted, undirected graph
of Fig. 4.3 (a), where the weights represent, for example, distance.
➢ We want to find the shortest path from A to D. We start out by marking node A as
permanent, indicated by a filled-in circle.
➢ Then we examine, in turn, each of the nodes adjacent to A (the working node),
relabeling each one with the distance to A.
➢ Whenever a node is relabelled, we also label it with the node from which the probe
was made so that we can reconstruct the final path later.
➢ Having examined each of the nodes adjacent to A, we examine all the tentatively
labelled nodes in the whole graph and make the one with the smallest label permanent,
as shown in Fig. 4.3 (b).
➢ This one becomes the new working node.

We now start at B and examine all nodes adjacent to it. If the sum of the label on B
and the distance from B to the node being considered is less than the label on that node,
we have a shorter path, so the node is relabeled
After all the nodes adjacent to the working node have been inspected and the
tentative labels changed if possible, the entire graph is searched for the tentatively-
labelled node with the smallest value. This node is made permanent and becomes the
working node for the next round. Figure 4.3 shows the first five steps of the algorithm.
Another example uses Dijkstra's algorithm to compute the shortest paths from a
given source node to all other nodes in a network. Links are bi-directional, with the same
distance in either direction. Distance can be any measure of cost.

Example with 8 nodes and 11 links:

nodeset = {'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'}
# (node, node, distance)
linklist = [('A', 'B', 2), ('B', 'C', 7), ('C', 'D', 3),
            ('B', 'E', 2), ('E', 'F', 2), ('F', 'C', 3),
            ('A', 'G', 6), ('G', 'E', 1), ('G', 'H', 4),
            ('F', 'H', 2), ('H', 'D', 2)]

Figure 4.4 Dijkstra's algorithm

The strategy is to start at the source node, send probes to each of its adjacent
nodes, pick the node with the shortest path from the source, and make that the new
working node. Send probes from the new working node, pick the next shortest path, and
make that the next working node. Continue selecting the shortest possible path until
every node in the network has been selected.
Figure 4.4 shows the first few steps in our example network. Labels on each node
show its distance from the source, and the previous node on the path from which that
distance was computed.
As new nodes are first probed, they are added to a working set, shown with a
darkened open circle. After each probe cycle, we look at the entire set of working nodes.
The node with the shortest path is moved to a final set, shown with a solid circle.
The light dotted lines are links not used in any shortest path from node A. They
might be used in another tree, however. Each node in a network can compute its own
shortest path tree, given the linklist for the entire network.
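The probe-and-relabel procedure above can be sketched in Python. This is an illustrative implementation, not code from the course material; it uses a priority queue (heapq) to hold the tentatively labelled working set, and runs on the 8-node example network given earlier:

```python
import heapq

def dijkstra(nodeset, linklist, source):
    # Build an adjacency map; links are bi-directional with the
    # same distance in either direction.
    adj = {n: [] for n in nodeset}
    for u, v, w in linklist:
        adj[u].append((v, w))
        adj[v].append((u, w))

    dist = {n: float('inf') for n in nodeset}  # tentative labels
    prev = {n: None for n in nodeset}          # node the probe came from
    dist[source] = 0
    working = [(0, source)]                    # tentatively labelled set
    while working:
        d, u = heapq.heappop(working)  # smallest label becomes permanent
        if d > dist[u]:
            continue                   # stale entry: already permanent
        for v, w in adj[u]:
            if d + w < dist[v]:        # shorter path found: relabel
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(working, (dist[v], v))
    return dist, prev

# The 8-node, 11-link example network from the text.
nodeset = {'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H'}
linklist = [('A', 'B', 2), ('B', 'C', 7), ('C', 'D', 3),
            ('B', 'E', 2), ('E', 'F', 2), ('F', 'C', 3),
            ('A', 'G', 6), ('G', 'E', 1), ('G', 'H', 4),
            ('F', 'H', 2), ('H', 'D', 2)]
dist, prev = dijkstra(nodeset, linklist, 'A')
```

Following prev backwards from D gives the shortest path A-B-E-F-H-D at a total distance of 10.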

FLOODING

• Another static algorithm is flooding, in which every incoming packet is sent out on
every outgoing line except the one it arrived on.
• Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite
number unless some measures are taken to damp the process.
• One such measure is to have a hop counter contained in the header of each
packet, which is decremented at each hop, with the packet being discarded when the
counter reaches zero.
• Ideally, the hop counter should be initialized to the length of the path from source
to destination. If the sender does not know how long the path is, it can initialize the counter
to the worst case, namely, the full diameter of the subnet.
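A minimal sketch of the hop-counter variant of flooding (the function and the dictionary-based packet representation are illustrative assumptions, not from the text):

```python
def flood(packet, in_line, out_lines):
    # Decrement the hop counter carried in the packet header;
    # discard the packet once the counter reaches zero.
    packet = dict(packet, hops=packet['hops'] - 1)
    if packet['hops'] <= 0:
        return []
    # Otherwise send a copy on every outgoing line except
    # the one the packet arrived on.
    return [(line, packet) for line in out_lines if line != in_line]
```
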

4.7 Distance-Vector Routing

Distance vector routing algorithms operate by having each router maintain a table
(i.e, a vector) giving the best known distance to each destination and which line to use to
get there.
These tables are updated by exchanging information with the neighbors.
The distance vector routing algorithm is sometimes called by other names, most
commonly the distributed Bellman-Ford routing algorithm and the Ford-Fulkerson
algorithm, after the researchers who developed it (Bellman, 1957; and Ford and
Fulkerson, 1962).
It was the original ARPANET routing algorithm and was also used in the Internet
under the name RIP.
Figure 4.5 (a) A subnet. (b) Input from A, I, H, K, and the new routing table for J.

Part (a) shows a subnet. The first four columns of part (b) show the delay vectors
received from the neighbours of router J.
A claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to
D, etc. Suppose that J has measured or estimated its delay to its neighbours, A, I, H,
and K as 8, 10, 12, and 6 msec, respectively.

Each node constructs a one-dimensional array containing the "distances"(costs)


to all other nodes and distributes that vector to its immediate neighbors.

1. The starting assumption for distance-vector routing is that each node knows the
cost of the link to each of its directly connected neighbors.
2. A link that is down is assigned an infinite cost.
Example.

Figure 4.6 Distance-Vector Routing


Information Distance to Reach Node
Stored at
Node A B C D E F G

A 0 1 1 � 1 1 �
B 1 0 1 � � � �
C 1 1 0 1 � � �
D � � 1 0 � � 1
E 1 � � � 0 � �
F 1 � � � � 0 1
G � � � 1 � 1 0
Table 1. Initial distances stored at each node (global view)

We can represent each node's knowledge about the distances to all other nodes as a
table like the one given in Table 1.
Note that each node only knows the information in one row of the table.

1. Every node sends a message to its directly connected neighbors containing its
personal list of distance. ( for example, A sends its information to its neighbors
B,C,E, and F. )

2. If any of the recipients of the information from A find that A is advertising a path
shorter than the one they currently know about, they update their list to give the new
path length and note that they should send packets for that destination through A.
(node B learns from A that node E can be reached at a cost of 1; B also knows it
can reach A at a cost of 1, so it adds these to get the cost of reaching E by means
of A. B records that it can reach E at a cost of 2 by going through A.)

3. After every node has exchanged a few updates with its directly connected
neighbors, all nodes will know the least-cost path to all the other nodes.

4. In addition to updating their list of distances when they receive updates, the nodes
need to keep track of which node told them about the path that they used to calculate
the cost, so that they can create their forwarding table. ( for example, B knows that
it was A who said " I can reach E in one hop" and so B puts an entry in its table that
says " To reach E, use the link to A.)
Distance to Reach Node
Information
Stored at Node
A B C D E F G

A 0 1 1 2 1 1 2
B 1 0 1 2 2 2 3
C 1 1 0 1 2 2 2
D 2 2 1 0 3 2 1
E 1 2 2 3 0 2 3
F 1 2 2 2 2 0 1
G 2 3 2 1 3 1 0
Table 2. final distances stored at each node ( global view).

In practice, each node's forwarding table consists of a set of triples of the form:
(Destination, Cost, Next Hop).
For example, Table 3 shows the complete routing table maintained at node B for the
network in figure 4.3.

Destination Cost Next Hop

A 1 A

C 1 C

D 2 C

E 2 A

F 2 A

G 3 A

Table 3. Routing table maintained at node B.
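Steps 1-4 can be sketched in Python for the 7-node example network. This is an illustrative implementation; the synchronous update loop below stands in for the asynchronous exchanges a real network would perform:

```python
INF = float('inf')
nodes = list('ABCDEFG')
# Links of the example network, all with cost 1 (from Table 1).
edges = [('A', 'B'), ('A', 'C'), ('A', 'E'), ('A', 'F'),
         ('B', 'C'), ('C', 'D'), ('D', 'G'), ('F', 'G')]

# Starting assumption: each node knows only the cost of the link
# to each of its directly connected neighbors.
dist = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
nexthop = {u: {} for u in nodes}
neigh = {u: [] for u in nodes}
for u, v in edges:
    neigh[u].append(v)
    neigh[v].append(u)
    dist[u][v] = dist[v][u] = 1
    nexthop[u][v] = v
    nexthop[v][u] = u

# Nodes exchange vectors with their neighbors until nothing changes.
changed = True
while changed:
    changed = False
    for u in nodes:
        for n in neigh[u]:                 # u receives n's vector
            for d in nodes:
                if dist[u][n] + dist[n][d] < dist[u][d]:
                    dist[u][d] = dist[u][n] + dist[n][d]
                    nexthop[u][d] = n      # remember who advertised it
                    changed = True
```

After convergence, row B of dist matches Table 2, and nexthop['B'] reproduces the (Destination, Cost, Next Hop) triples of Table 3.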


4.8 Broadcast routing

Sending a packet to all destinations simultaneously is called broadcasting.


1) The source simply sends a distinct packet to each destination. Not only is the
method wasteful of bandwidth, but it also requires the source to have a complete list
of all destinations.
2) Flooding. - The problem with flooding as a broadcast technique is that it
generates too many packets and consumes too much bandwidth.

Figure 4.7 Reverse path forwarding. (a) A subnet. (b) A sink tree. (c) The tree built by
reverse path forwarding.

Part (a) shows a subnet, part (b) shows a sink tree for router I of that subnet, and part (c)
shows how the reverse path algorithm works.
• When a broadcast packet arrives at a router, the router checks to see if the packet
arrived on the line that is normally used for sending packets to the source of the
broadcast. If so, there is an excellent chance that the broadcast packet itself followed the
best route from the router and is therefore the first copy to arrive at the router.
• This being the case, the router forwards copies of it onto all lines except the one it
arrived on. If, however, the broadcast packet arrived on a line other than the preferred
one for reaching the source, the packet is discarded as a likely duplicate.
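The reverse-path check can be sketched as follows (the function name and the table representation are hypothetical, for illustration only):

```python
def reverse_path_forward(in_line, source, routing_table, out_lines):
    # routing_table[source] is the line this router would itself use
    # to send packets toward the broadcast source.
    if routing_table[source] == in_line:
        # Likely the first copy to arrive: forward on all lines
        # except the one the packet arrived on.
        return [line for line in out_lines if line != in_line]
    # Arrived on a non-preferred line: discard as a likely duplicate.
    return []
```
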
4.9 Congestion control algorithms

When too many packets are present in (a part of) the subnet, performance degrades.
This situation is called congestion.
• Figure 4.8 depicts the symptom. When the number of packets dumped into the subnet
by the hosts is within its carrying capacity, they are all delivered (except for a few that are
afflicted with transmission errors) and the number delivered is proportional to the number
sent.
• However, as traffic increases too far, the routers are no longer able to cope and they
begin losing packets. This tends to make matters worse. At very high traffic, performance
collapses completely and almost no packets are delivered.

Figure 4.8. When too much traffic is offered, congestion sets in and performance
degrades sharply.

• Congestion can be brought on by several factors. If all of a sudden, streams of packets


begin arriving on three or four input lines and all need the same output line, a queue
will build up.
• If there is insufficient memory to hold all of them, packets will be lost.
• Slow processors can also cause congestion. If the routers' CPUs are slow at
performing the bookkeeping tasks required of them (queuing buffers, updating tables,
etc.), queues can build up, even though there is excess line capacity. Similarly, low-
bandwidth lines can also cause congestion.

APPROACHES TO CONGESTION CONTROL

• Many problems in complex systems, such as computer networks, can be viewed from
a control theory point of view. This approach leads to dividing all solutions into two
groups: open loop and closed loop. Open loop solutions attempt to solve the problem
by good design.
• Tools for doing open-loop control include deciding when to accept new traffic, deciding
when to discard packets and which ones, and making scheduling decisions at various
points in the network.
• Closed loop solutions are based on the concept of a feedback loop.
• This approach has three parts when applied to congestion control: 1. Monitor the
system to detect when and where congestion occurs. 2. Pass this information to places
where action can be taken. 3. Adjust system operation to correct the problem.
• A variety of metrics can be used to monitor the subnet for congestion. Chief among
these are the percentage of all packets discarded for lack of buffer space, the average
queue lengths, the number of packets that time out and are retransmitted, the average
packet delay, and the standard deviation of packet delay. In all cases, rising numbers
indicate growing congestion.
• The second step in the feedback loop is to transfer the information about the
congestion from the point where it is detected to the point where something can be
done about it. In all feedback schemes, the hope is that knowledge of congestion will
cause the hosts to take appropriate action to reduce the congestion.
• The presence of congestion means that the load is (temporarily) greater than the
resources (in part of the system) can handle. Two solutions come to mind: increase
the resources or decrease the load.

Figure 4.9 Timescales of Approaches to Congestion Control

4.9.1 Leaky Bucket Algorithm


Let us consider an example
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters
the bucket, the outflow is at constant rate. When the bucket is full with water additional
water entering spills over the sides and is lost.
Figure 4.10 Leaky Bucket Algorithm

Similarly, each network interface contains a leaky bucket, and the following steps are
involved in the leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.
3. Bursty traffic is converted to uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
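The four steps can be sketched as a finite queue that drains at a constant rate (an illustrative class with an assumed tick-based interface, not code from the text):

```python
from collections import deque

class LeakyBucket:
    def __init__(self, capacity, rate):
        self.capacity = capacity   # finite queue size (step 4)
        self.rate = rate           # packets transmitted per tick
        self.queue = deque()

    def arrive(self, packet):
        # Step 1: the packet is thrown into the bucket; if the
        # bucket is full, the packet spills over and is lost.
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False

    def tick(self):
        # Steps 2-3: leak at a constant rate regardless of how
        # bursty the arrivals were.
        sent = []
        for _ in range(min(self.rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent
```
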

4.9.2 Token bucket Algorithm

Need of token bucket Algorithm:- The leaky bucket algorithm enforces output
pattern at the average rate, no matter how bursty the traffic is. So in order to deal
with the bursty traffic we need a flexible algorithm so that the data is not lost. One
such algorithm is token bucket algorithm.

Steps of this algorithm can be described as follows:

1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is
sent.
4. If there is no token in the bucket, the packet cannot be sent.

Let’s understand with an example,


In figure (A) we see a bucket holding three tokens, with five packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token.
In figure (B) We see that three of the five packets have gotten through, but the other
two are stuck waiting for more tokens to be generated.

Figure 4.11 Token Bucket Algorithm
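The token bucket example can be sketched as follows (an illustrative class; the method names and tick-based interface are assumptions, not from the text):

```python
class TokenBucket:
    def __init__(self, capacity):
        self.capacity = capacity   # maximum tokens the bucket holds
        self.tokens = 0

    def add_token(self):
        # Step 1: at regular intervals a token is thrown into the
        # bucket, up to its maximum capacity (step 2).
        self.tokens = min(self.tokens + 1, self.capacity)

    def try_send(self):
        # Steps 3-4: a ready packet captures and destroys one
        # token; with no tokens available it must wait.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False
```

With three tokens in the bucket and five waiting packets, three get through and two are stuck, as in figures (A) and (B).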

4.9.3 Congestion control in virtual Circuit

Different approaches are used to control the congestion in virtual-circuit network.


Some of them are as follows:
Admission control: In this approach, once congestion is signaled, no new
connections are set up until the problem is solved. This type of approach is often used in
normal telephone networks: when the exchange is overloaded, no new calls are
established. New virtual circuits may still be allowed if they are routed around the
congested area.
Another approach is to negotiate an agreement between the host and the network
when the connection is set up. This agreement specifies the volume and shape of the
traffic, the quality of service, the maximum delay, and other parameters. The network
reserves resources (buffer space, bandwidth, and CPU cycles) along the path when the
connection is set up. Congestion is then unlikely to occur on the new connections because
all the necessary resources are guaranteed to be available. The disadvantage of this
approach is that it may lead to wasted bandwidth, because resources stay reserved even
when a connection is idle.

4.9.4 Congestion control in Datagram Subnets

Congestion control in datagram subnets is achieved by sending a warning to the
sender in advance. Each router can easily monitor the utilization of its output lines. If
utilization is greater than a threshold value, the output line may become congested in the
future, so the router marks it as being in a warning state. Each newly arriving packet is
checked to see if its output line is in the warning state. If it is, some action is taken.

The actions are:


1. The warning bit
2. Choke packets
3. Hop-by-hop choke packet

The warning bit


When a new packet is to be transmitted on an output line marked as being in the
warning state, a special bit is set in the packet header to signal this state. At the
destination, this information is sent back with the ACK to the sender, so that the sender
can cut back its traffic. When the warning bit is absent, the sender increases its
transmission rate again.
Note: It takes a whole round trip (source to destination and back to the source) to
tell the source to slow down.

Choke Packet Technique


In this approach, the router sends a choke packet back to the source host. The
original packet is marked so that it will not generate any more choke packets farther
along the path and is then forwarded in the usual way. When the source gets the choke
packet, it is required to reduce the traffic sent to the given destination by some
percentage.

Hop-by-Hop Choke Packets

In this approach, unlike the basic choke packet scheme, the reduction of flow starts
at intermediate nodes rather than at the source node. Suppose a choke packet travels
from a congested router Q back toward the source through routers R and P. When the
choke packet reaches the router nearest to Q (say R), R reduces the flow. Router R now
has to devote more buffers to the flow, since the source is still sending at full blast, but
this gives router Q immediate relief. In the next step, the choke packet reaches P and the
flow genuinely slows down. The net effect of the hop-by-hop scheme is to provide quick
relief at the point of congestion, at the price of using up more buffers upstream.

4.10 The network layer in the internet

The transport layer enables the applications to efficiently and reliably exchange
data. Transport layer entities expect to be able to send segment to any destination without
having to understand anything about the underlying subnetwork technologies. Many
subnetwork technologies exist. Most of them differ in subtle details (frame size,
addressing, ...). The network layer is the glue between these subnetworks and the
transport layer. It hides to the transport layer all the complexity of the underlying
subnetworks and ensures that information can be exchanged between hosts connected
to different types of subnetworks.

Principles :

The main objective of the network layer is to allow end systems, connected to
different networks, to exchange information through intermediate systems called routers.
The unit of information in the network layer is called a packet.

Figure 4.12 Representation of network layer

OSI model vs. TCP/IP model

The TCP/IP model is an alternative model of how the Internet works. It divides the
processes involved into four layers instead of seven. Some would argue that the TCP/IP
model better reflects the way the Internet functions today, but the OSI model is still widely
referenced for understanding the Internet, and both models have their strengths and
weaknesses.

In the TCP/IP model, the four layers are:


o 4. Application layer: This corresponds, approximately, to layer 7 in
the OSI model.
o 3. Transport layer: Corresponds to layer 4 in the OSI model.
o 2. Internet layer: Corresponds to layer 3 in the OSI model.
o 1. Network access layer: Combines the processes of layers 1 and 2
in the OSI model.

Figure 4.13 OSI Model Vs TCP/IP Model

➢ The Internet Protocol (IP) is a network layer protocol.


➢ Hosts and gateways process packets called Internet datagrams (IP datagrams).
➢ IP provides connectionless, best-effort delivery service.
➢ The Transmission Control Protocol (TCP) is a transport layer protocol that provides
reliable stream service between processes on two machines. It is a sliding window
protocol that uses acknowledgments and retransmissions to overcome the
unreliability of IP.
➢ The User Datagram Protocol (UDP) provides connectionless datagram service
between machines.

An Internet Protocol address (IP address) is a numerical label assigned to each


device connected to a computer network that uses the Internet Protocol for
communication. An IP address serves two principal functions: host or network interface
identification and location addressing.
Internet Addressing
Host identifiers are classified as names, addresses, or routes, where:
A name suggests what object we want.
An address specifies where the object is.
A route tells us how to get to the object.
In the Internet, names consist of human-readable strings such as eve, percival,
or gwen.cs.purdue.edu.
Addresses consist of compact, 32-bit identifiers. Internet software translates
names into addresses; the lower protocol layers always use addresses rather than names.
Internet addresses are hierarchical, consisting of two parts:
Network:
The network part of an address identifies which network a host is on. Conceptually,
each LAN has its own unique IP network number.
Local:
The local part of an address identifies a particular host on that network.
Address Classes

The Internet designers were unsure whether the world would evolve into a few networks
with many hosts (e.g., large networks), or many networks each supporting only a few
hosts (e.g., small networks). Thus, Internet addresses handle both large and small
networks. Internet address are four bytes in size, where:

1. Class A addresses start with a ``0'' in the most significant bit, followed by a 7-bit
network address and a 24-bit local part.
2. Class B addresses start with a ``10'' in the two most significant bits, followed by a
14-bit network number and a 16-bit local part.
3. Class C addresses start with a ``110'' in the three most significant bits, followed by
a 21-bit network number and an 8-bit local part.
4. Class D addresses start with a ``1110'' in the four most significant bits, followed by
a 28-bit group number.

Note: The use of fixed-sized addresses makes the routing operation efficient. In the ISO
world, addresses are of varying format and length and just extracting the address from
the packet may not be straightforward.

Internet addresses can also refer to broadcast addresses. The all 1's address is used to
mean ``broadcast on this network''. Of course, if the underlying network technology
doesn't support broadcasting, one can't broadcast Internet datagrams either.

Network addresses are written using dotted decimal notation. Each address consists of
4 bytes, and each byte is written in decimal form. Sample addresses:

• wpi.wpi.edu: 130.215.24.6 (class B)


• owl.wpi.edu: 130.215.8.139 (class B)
• wpi.edu: 130.215 (a network address)
• rialto.mcs.vuw.ac.nz: 130.195.5.15 (class B)
• gwen.cs.purdue.edu: 128.10.2.3 (class B)
• c.nyser.net: 192.33.4.12 (Class C)
• pescadero.stanford.edu: 36.8.0.8 (class A)
• su-net-temp: 36 (network address)
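The class of a dotted-decimal address follows from its first byte alone. A small sketch (the address_class helper is hypothetical) that reproduces the classifications above:

```python
def address_class(dotted):
    # The leading bits of the first byte determine the class.
    first = int(dotted.split('.')[0])
    if first < 128:
        return 'A'    # leading bit 0    (0-127)
    if first < 192:
        return 'B'    # leading bits 10  (128-191)
    if first < 224:
        return 'C'    # leading bits 110 (192-223)
    if first < 240:
        return 'D'    # leading bits 1110 (224-239)
    return 'E'
```
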

Note: Internet addresses refer to network connections rather than hosts. Gateways, for
instance, have two or more network connections and each interface has its own IP
address. Thus, there is not a one-to-one mapping between host names and IP addresses.

4.11 IPv6

IPv6 is the replacement Internet protocol for IPv4. It corrects some of the
deficiencies of IPv4 and simplifies the way that addresses are configured and how they
are handled by Internet hosts. IPv4 has proven to be robust, easily implemented, and
interoperable, and has stood the test of scaling an internetwork to a global utility the size
of the Internet. However, the initial design did not anticipate the following conditions:
• Recent exponential growth of the Internet and the impending exhaustion of the
IPv4 address space
• The ability of Internet backbone routers to maintain large routing tables
• Need for simpler autoconfiguration and renumbering
• Requirement for security at the IP level (IPSec)
• Need for better support for real-time delivery of data, known as quality of service
(QoS)

Need for IPv6


With its 32-bit address format, IPv4 can handle a maximum of about 4.3 billion unique IP
addresses.
While this number may seem very large, it is not enough to sustain and scale the
rapidly rising growth of the Internet. Although improvements to IPv4, including the use of
NAT, have allowed the extended use of the protocol, address exhaustion is inevitable and
could happen as soon as 2012. With its 128-bit address format, IPv6 can support 3.4 x
10^38, or 340,282,366,920,938,463,463,374,607,431,768,211,456, unique IP addresses.
This number of addresses is large enough to configure a unique address on every
node in the Internet and still have plenty of addresses left over. It is also large enough to
eliminate the need for NAT, which has its own inherent problems.
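The figures quoted above are easy to verify, since both address spaces are simply powers of two:

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
print(ipv4_space)   # 4294967296  (~4.3 billion)
print(ipv6_space)   # 340282366920938463463374607431768211456  (~3.4 x 10^38)
```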
A few countries, governmental agencies, and multinational corporations have either
already deployed or mandated deployment of IPv6 in their networks and software
products. Some emerging nations have no choice but to deploy IPv6 because of the
unavailability of new IPv4 addresses.
Advantages of IPv6
Besides providing an almost limitless number of unique IP addresses for global end
to-end reachability and scalability, IPv6 has the following additional advantages:
• Simplified header format for efficient packet handling
• Larger payload for increased throughput and transport efficiency
• Hierarchical network architecture for routing efficiency
• Support for widely deployed routing protocols (OSPF, BGP, etc.)
• Auto configuration and plug-and-play support
• Elimination of the need for network address translation (NAT) and application-level
gateways (ALGs).
• Increased number of multicast addresses.

IPv6 Simplifications
• Fixed format headers – use extension headers instead of options
• Remove header checksum – rely on the link layer and higher layers to check the
integrity of the data
• Remove hop-by-hop segmentation – fragmentation is done only by the sender, thanks
to path MTU discovery

IPv6 Header Format


A side-by-side comparison of the IPv4 header and the IPv6 header in figure shows
that the IPv6 header is more streamlined and efficient than the IPv4 header.

Fixed Header

Figure 4.14 IPV6 Fixed Header

An IPv6 address is 4 times larger than an IPv4 address, but surprisingly, the IPv6
header is only 2 times larger than the IPv4 header (40 bytes versus 20). IPv6 headers
have one Fixed Header and
zero or more Optional (Extension) Headers. All the necessary information that is essential
for a router is kept in the Fixed Header. The Extension Header contains optional
information that helps routers to understand how to handle a packet/flow.

Version (4-bits): It represents the version of Internet Protocol, i.e. 0110.


Traffic Class (8-bits): These 8 bits are divided into two parts. The most significant 6 bits
are used for Type of Service, to let the router know what services should be provided
to this packet. The least significant 2 bits are used for Explicit Congestion Notification
(ECN).
Flow Label (20-bits): This label is used to maintain the sequential flow of the packets
belonging to a communication. The source labels the sequence to help the router identify
that a particular packet belongs to a specific flow of information. This field helps avoid re-
ordering of data packets. It is designed for streaming/real-time media.
Payload Length (16-bits): This field tells the routers how much information a
particular packet contains in its payload. The payload is composed of Extension Headers
and Upper Layer data. With 16 bits, up to 65535 bytes can be indicated; if the payload
must exceed 65535 bytes, this field is set to 0 and the actual length is carried in the
Jumbo Payload option of the Hop-by-Hop Extension Header.
Next Header (8-bits): This field indicates either the type of the first Extension Header
or, if no Extension Header is present, the type of the Upper Layer PDU. The values for
the Upper Layer PDU types are the same as IPv4's protocol numbers.
Hop Limit (8-bits): This field is used to stop a packet from looping in the network
infinitely. It is the same as TTL in IPv4. The value of the Hop Limit field is decremented
by 1 as the packet passes a link (router/hop). When the field reaches 0, the packet is
discarded.
Source Address (128-bits): This field indicates the address of the originator of the packet.
Destination Address (128-bits): This field provides the address of the intended recipient
of the packet.
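The field layout above can be decoded from a raw 40-byte buffer with the standard library alone; the packet bytes in this sketch are fabricated for illustration.

```python
import struct

def parse_ipv6_fixed_header(packet: bytes) -> dict:
    """Split the 40-byte IPv6 fixed header into its fields."""
    # First 8 bytes: version/traffic class/flow label (32 bits),
    # payload length (16), next header (8), hop limit (8)
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version": vtf >> 28,
        "traffic_class": (vtf >> 20) & 0xFF,
        "flow_label": vtf & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,
        "hop_limit": hop_limit,
        "source": packet[8:24],        # 128-bit source address
        "destination": packet[24:40],  # 128-bit destination address
    }

# A fabricated header: version 6, next header 59 (No Next Header), hop limit 64
hdr = struct.pack("!IHBB", 6 << 28, 0, 59, 64) + bytes(16) + bytes(16)
print(parse_ipv6_fixed_header(hdr)["version"])  # 6
```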

Extension Headers
In IPv6, the Fixed Header contains only the information that is necessary,
omitting information that is either not required or rarely used. All such
information is placed between the Fixed Header and the Upper Layer header in the form of
Extension Headers. Each Extension Header is identified by a distinct value.
When Extension Headers are used, IPv6 Fixed Header’s Next Header field points
to the first Extension Header. If there is one more Extension Header, then the first
Extension Header’s ‘Next-Header’ field points to the second one, and so on. The last
Extension Header’s ‘Next-Header’ field points to the Upper Layer Header. Thus, all the
headers points to the next one in a linked list manner.
If the Next Header field contains the value 59, it indicates that there are no headers
after this header, not even Upper Layer Header.
The following Extension Headers must be supported as per RFC 2460:

Figure 4.15 Extension Header

Figure 4.16 Sequence of Extension Header


These headers fall into two groups:
1. headers processed by the first and subsequent destinations;
2. headers processed by the final destination.
Extension Headers are arranged one after another in a linked list manner, as depicted
in the following diagram:

Figure 4.17 Extension Headers Connected Format
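The linked-list traversal of Next Header values can be sketched as below. It assumes the common extension-header layout (a Next Header byte followed by a length byte counted in 8-byte units beyond the first 8 bytes), and the packet bytes are fabricated for illustration.

```python
def walk_extension_headers(first_next_header, payload):
    """Follow the Next Header chain until an upper-layer protocol or 59 (No Next Header)."""
    EXTENSION = {0: "Hop-by-Hop", 43: "Routing", 60: "Destination Options"}
    chain, nh, off = [], first_next_header, 0
    while nh in EXTENSION:
        chain.append(EXTENSION[nh])
        # Each extension header starts with its own Next Header and length bytes
        nh, length = payload[off], (payload[off + 1] + 1) * 8
        off += length
    chain.append("No Next Header" if nh == 59 else f"Upper layer protocol {nh}")
    return chain

# Fabricated chain: Hop-by-Hop -> Destination Options -> TCP (protocol 6)
pkt = bytes([60, 0] + [0] * 6 + [6, 0] + [0] * 6)
print(walk_extension_headers(0, pkt))
# ['Hop-by-Hop', 'Destination Options', 'Upper layer protocol 6']
```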


4.12 Address resolution protocol
While communicating, a host needs the Layer-2 (MAC) address of the destination
machine, which belongs to the same broadcast domain or network. A MAC address
is physically burnt into the Network Interface Card (NIC) of a machine and does not
change during normal operation; if the NIC is replaced because of a fault, the MAC
address changes too. The IP address, on the other hand, is assigned logically and can
change. For Layer-2 communication to take place, a mapping between the two is
therefore required.

ARP dynamically builds a table of IP-to-physical-address bindings for a local network:

• A request is broadcast if the IP address is not in the table
• All hosts learn the IP address of the requesting node (broadcast)
• The target machine responds with its physical address
• Table entries are discarded if not refreshed

ARP Ethernet frame format

The Address Resolution Protocol (ARP) uses a basic message format that contains either
an address resolution request or an address resolution response. The ARP message size
depends on the address sizes of the link layer and the network layer. The message header
describes the network type used at each layer and the address size of each layer. The
header is completed by the operation code, which is 1 for a request and 2 for a response.
The payload of the packet has four addresses,
these are:

o Hardware address of the sender hosts


o Hardware address of the receiver hosts
o Protocol address of the sender hosts
o Protocol address of the receiver hosts

Figure 4.18 ARP Header


HTYPE (Hardware Type) - The size of the hardware type field is 16 bit. This field
defines the network type that the local network needs to transmit the ARP message.
There are some typical values for this field, which are given below:

Hardware Type (HTYPE) Value


Ethernet 1
IEEE 802 Networks 6
ARCNET 7
Frame Relay 15
Asynchronous Transfer Mode (ATM) 16
HDLC 17
Fibre Channel 18
Asynchronous Transfer Mode (ATM) 19
Serial Line 20
Table.4 Hardware Type

PTYPE (Protocol Type) - The protocol type is a 16-bit field used to specify the type of
protocol.

HLEN (Hardware Length) - The size of the hardware length field is 8-bit. This field
specifies the length of the physical address in bytes.

Example: For Ethernet, the hardware address length is 6.

PLEN (Protocol Length) - The size of the protocol length field is 8-bit long. It defines the
length of the IP address in bytes.

OPER (Operation) - It is a 16-bit field that determines the type of ARP packet. There are
two types of ARP packet, i.e., ARP request and ARP reply. In the given table, the first
two values are used for the ARP request and reply. The values for the other related
frame formats, such as RARP and DRARP, are also specified in this table.

ARP Message Type Opcode (Operation Code)


ARP Request 1
ARP Reply 2
RARP Request 3
RARP Reply 4
DRARP Request 5
DRARP Reply 6
DRARP Error 7
InARP Request 8
InARP Reply 9
Table.5 Message Type
SHA (Sender Hardware Address) - This field specifies the physical address of the
sender, and the length of this field is not fixed.

SPA (Sender Protocol Address) - This field is used to determine the logical address of
the sender, and the length of this field is not fixed.

THA (Target Hardware Address) - The target hardware address specifies the physical
address of the target. It is a variable-length field. For the ARP request packet, this field
contains all zeros because the sender does not know the physical address of the receiver.

TPA (Target Protocol Address) - This field determines the logical address of the
target. TPA is a variable-length field.

4.13 Dynamic Host Configuration Protocol (DHCP)


DHCP is an application layer protocol which is used to provide:
1. Subnet Mask (Option 1 – e.g., 255.255.255.0)
2. Router Address (Option 3 – e.g., 192.168.1.1)
3. DNS Address (Option 6 – e.g., 8.8.8.8)
4. Vendor-Specific Information (Option 43 – e.g., ‘unifi’ = 192.168.1.9, where
unifi = controller)
DHCP is based on a client-server model and on the discover, offer, request,
and ACK exchange. The DHCP port number for the server is 67, and for the client it is 68.
It is a client-server protocol which uses UDP services. The IP address is assigned from a
pool of addresses. In DHCP, the client and the server exchange mainly 4 DHCP messages
in order to make a connection, also called the DORA process, but there are 8 DHCP
message types in all. These messages are given below:
1. DHCP discover message –
This is the first message generated in the communication process between server and
client. It is generated by the client host in order to discover whether any DHCP server is
present in the network. The message is broadcast to all devices in the network to find the
DHCP server. This message is 342 or 576 bytes long.
2. DHCP offer message –
The server responds to the host with this message, specifying an unleased IP address
and other TCP/IP configuration information. This message is broadcast by the server; its
size is 342 bytes. If more than one DHCP server is present in the network, the client host
accepts the first DHCP OFFER message it receives. A server ID is also specified in the
packet in order to identify the server.
3. DHCP request message –
When a client receives an offer message, it responds by broadcasting a DHCP request
message. The client then issues a gratuitous ARP to find out whether any other host in the
network is already using the same IP address. If no other host replies, there is no host
with the same configuration in the network, and the message is broadcast to the server
showing acceptance of the IP address. A client ID is also added to this message.
4. DHCP acknowledgement message –
In response to the request message received, the server makes an entry with the specified
client ID and binds the offered IP address with a lease time. Now the client will have the
IP address provided by the server.
5. DHCP negative acknowledgement message –
Whenever a DHCP server receives a request for an IP address that is invalid according to
the scopes it is configured with, it sends a DHCP NAK message to the client, e.g., when
the server has no unused IP address left or the pool is empty.
6. DHCP decline –
If the DHCP client determines that the offered configuration parameters are invalid, it
sends a DHCP decline message to the server. When any host replies to the client's
gratuitous ARP, the client sends a DHCP decline message to the server showing that the
offered IP address is already in use.
7. DHCP release –
A DHCP client sends a DHCP release packet to the server to release its IP address and
cancel any remaining lease time.
8. DHCP inform –
If a client has obtained an IP address manually, it uses DHCP inform to obtain other local
configuration parameters, such as the domain name. In reply to the DHCP inform
message, the DHCP server generates a DHCP ACK message with the local configuration
suitable for the client, without allocating a new IP address. This DHCP ACK message is
unicast to the client.
Note – All the messages can also be unicast by a DHCP relay agent if the server is in a
different network.
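The DORA exchange can be summarized using DHCP's message-type option (option 53), whose values are defined in RFC 2132; the sketch below only models the message sequence, not a real client.

```python
# DHCP message types carried in option 53 (per RFC 2132)
DHCP_MESSAGE_TYPES = {
    1: "DHCPDISCOVER", 2: "DHCPOFFER", 3: "DHCPREQUEST", 4: "DHCPDECLINE",
    5: "DHCPACK", 6: "DHCPNAK", 7: "DHCPRELEASE", 8: "DHCPINFORM",
}

def dora_sequence():
    """The four messages of a successful lease negotiation, in order."""
    return [DHCP_MESSAGE_TYPES[t] for t in (1, 2, 3, 5)]

print(dora_sequence())
# ['DHCPDISCOVER', 'DHCPOFFER', 'DHCPREQUEST', 'DHCPACK']
```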

Advantages – The advantages of using DHCP include:


▪ centralized management of IP addresses
▪ ease of adding new clients to a network
▪ reuse of IP addresses reducing the total number of IP addresses that are
required
▪ simple reconfiguration of the IP address space on the DHCP server without
needing to reconfigure each client
The DHCP protocol gives the network administrator a method to configure the network
from a centralized area. With the help of DHCP, easy handling of new users and reuse of
IP addresses can be achieved.
Disadvantages – The main disadvantage of using DHCP is that IP conflicts can occur.
4.14 INTERNET CONTROL MESSAGE PROTOCOL (ICMP)
IP provides an unreliable, connectionless datagram service, the original aim being
efficient use of network resources. ICMP supplements IP by reporting errors and
answering queries.
Types of messages ICMP messages are divided into two broad categories:

1. Error reporting Messages. 2. Query Messages

Error reporting: ICMP was designed to compensate for the shortcoming of

unreliability in IP. However, ICMP does not correct errors, but only reports them.
Error reporting messages are always sent to the original source. Five types of errors
are handled:
Destination unreachable—In situations where a router cannot route a datagram or
a host cannot deliver a datagram, the datagram is discarded and the router or host
sends a destination unreachable message back to the source.
Source Quench—IP being a connectionless protocol, there is no communication
between the source host, the router, and the destination host. The resulting lack of
flow control is a major hazard in source-to-destination delivery, and the lack of
congestion control causes major problems in the routers. The source quench
message in ICMP adds some flow control and congestion control to IP by notifying
the source that a datagram has been discarded and forcing it to slow down its
transmission.
Time Exceeded—It is generated in two cases
A. A router receives a datagram with a zero value in the TTL field
B. All fragments that make up a message do not arrive at the destination host
within a certain time limit
Parameter Problem—If a router or a destination host discovers an ambiguous or
missing value in any field of the datagram, it discards the datagram and sends a
parameter problem message back to the source.
Redirection—When a host comes up, its routing table has a limited number of
entries. It usually knows the IP address of a single default router. For this reason the
host may send a datagram to the wrong router. The router that receives the
datagram will forward it to the correct router and will send a redirection message
back to the host for routing table updating.
Query Messages:
Query messages are used to diagnose some network problems. There are
four different pairs of messages.
Echo Request/Reply messages—are designed for diagnostic purposes. Their
combination determines whether two systems can communicate with each other.
Time stamp Request/Reply messages—can be used to determine the round-trip
time for an IP datagram to travel between two machines and also to synchronize
their clocks.
Address mask Request/Reply message—are used between the host and the
router to indicate which part of the address defines the network and the sub-network
address and which part corresponds to the host identifier.
Router Solicitation and Advertisement—inform a host that wants to send data to a
host on another network of the addresses of the routers connected to its own
network, along with their status and functioning.

Figure 4.19 Header of ICMP
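All ICMP messages begin with an 8-bit type, an 8-bit code, and a 16-bit ones'-complement checksum. An echo request (type 8, code 0), as used by ping, can be sketched as follows; the identifier, sequence number, and payload are arbitrary.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum of 16-bit words (odd length padded with zero)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP echo request: type 8, code 0."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum 0 while computing
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

msg = echo_request(ident=1, seq=1)
print(icmp_checksum(msg))  # 0 — a correctly checksummed ICMP message verifies to 0
```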

4.15 Classless addressing (CIDR - Classless Inter-Domain Routing)

Classless Inter-Domain Routing (CIDR) is another name for classless addressing.

This addressing type aids in the more efficient allocation of IP addresses. When the user
demands a specific number of IP addresses, this technique assigns a block of IP
addresses based on specified conditions. This block is known as a "CIDR block", and it
contains the necessary number of IP addresses.
When allocating a block, classless addressing is concerned with the following three rules.
• Rule 1 − The CIDR block's IP addresses must all be contiguous.
• Rule 2 − The block size must be a power of 2. The size of the block is equal to the
number of IP addresses in the block.
• Rule 3 − The block's first IP address must be divisible by the block size.
For example, assume the classless address is 192.168.1.35/27.
• The network portion has a bit count of 27, whereas the host portion has a bit
count of 5 (32-27).
• The binary representation of the address is: (11000000 . 10101000 . 00000001 .
00100011).
• (11000000.10101000.00000001.00100000) is the first IP address (assigns 0 to all
host bits), that is, 192.168.1.32
• (11000000.10101000.00000001.00111111) is the last IP address
(assigns 1 to all host bits), that is, 192.168.1.63
• The IP address range is 192.168.1.32 to 192.168.1.63.
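The example above can be checked with Python's standard ipaddress module:

```python
import ipaddress

# strict=False lets us pass a host address (192.168.1.35) inside the /27 block
net = ipaddress.ip_network("192.168.1.35/27", strict=False)
print(net)                    # 192.168.1.32/27
print(net.network_address)    # 192.168.1.32  (first address, all host bits 0)
print(net.broadcast_address)  # 192.168.1.63  (last address, all host bits 1)
print(net.num_addresses)      # 32  (= 2^5, since 5 host bits remain)
```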
Difference Between Classful and Classless Addressing
• Classful addressing is a technique of allocating IP addresses that divides them into
five categories. Classless addressing is a technique of allocating IP addresses that
is intended to replace classful addressing in order to reduce IP address depletion.
• The utility of classful and classless addressing is another distinction. Addressing
without a class is more practical and helpful than addressing with a class.
• The network ID and host ID change based on the classes in classful addressing.
In classless addressing, however, there is no distinction between network ID and
host ID. As a result, another distinction between classful and classless addressing
may be made.
It was introduced in 1993 (RFC 1517), replacing the previous generation of IP
address syntax – classful networks. The introduction of CIDR allowed for:
▪ More efficient use of IPv4 address space
▪ Prefix aggregation, which reduced the size of routing tables

CIDR allows routers to group routes together to reduce the bulk of routing information
carried by the core routers. With CIDR, IP addresses and their subnet masks are
written as four octets, separated by periods, followed by a forward slash and a
number that represents the prefix length, e.g. 10.1.1.0/30, 172.16.1.16/28 and
192.168.1.32/27.

CIDR / VLSM Network addressing topology example


Figure 4.20 CIDR

CIDR uses VLSM (Variable Length Subnet Masks) to allocate IP addresses to

subnetworks according to need rather than class. VLSM allows subnets to be
further divided, or subnetted, into even smaller subnets. With CIDR, the address classes
(Class A, B, and C) became meaningless. The network address is no longer
determined by the value of the first octet but by the assigned prefix length (subnet
mask). A network can now be assigned a prefix that depends on the number of hosts
needed for that network. Propagating CIDR supernets or VLSM subnets requires a
classless routing protocol – one that includes the subnet mask along with the network
address in the routing update.

Summary routes determination


Determining the summary route and subnet mask for a group of networks can
be done in three easy steps:
1. List the networks in binary format.
2. Count the number of left-most matching bits. This gives the prefix length, or
subnet mask, for the summarized route.
3. Copy the matching bits and then add zero bits to the rest of the address to
determine the summarized network address.
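The three steps can be automated with a short sketch (the function name is our own): convert each network address to an integer, shorten the prefix until all addresses share it, and keep just the matching bits.

```python
import ipaddress

def summarize(networks):
    """Find the shortest common prefix covering all the given network addresses."""
    addrs = [int(ipaddress.ip_address(n)) for n in networks]
    prefix = 32
    # Shorten the prefix until all addresses agree on the remaining left-most bits
    while prefix > 0 and len({a >> (32 - prefix) for a in addrs}) > 1:
        prefix -= 1
    base = (addrs[0] >> (32 - prefix)) << (32 - prefix)  # matching bits + zeros
    return f"{ipaddress.ip_address(base)}/{prefix}"

# Four contiguous /24 networks collapse into one /22 summary route
print(summarize(["172.16.0.0", "172.16.1.0", "172.16.2.0", "172.16.3.0"]))
# 172.16.0.0/22
```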
CIDR Advantages

With the introduction of CIDR and VLSM, ISPs could now assign one part of a
classful network to one customer and a different part to another customer. With the
introduction of VLSM and CIDR, network administrators had to use additional
subnetting skills.
4.16 Network Address Translation (NAT)

To access the Internet, one public IP address is needed, but we can use private
IP addresses in our private network. The idea of NAT is to allow multiple devices to access
the Internet through a single public address. To achieve this, translation of private
IP addresses to a public IP address is required. Network Address Translation (NAT) is
a process in which one or more local IP addresses are translated into one or more global
IP addresses, and vice versa, in order to provide Internet access to the local hosts. NAT
also translates port numbers, i.e., it masks the port number of the host with another
port number in the packet that will be routed to the destination. It then makes the
corresponding entries of IP address and port number in the NAT table. NAT generally
operates on a router or firewall.

Network Address Translation (NAT) working –

Generally, the border router is configured for NAT, i.e., the router which has one
interface in the local (inside) network and one interface in the global (outside) network.
When a packet leaves the local (inside) network, NAT converts that local
(private) IP address to a global (public) IP address. When a packet enters the local
network, the global (public) IP address is converted to a local (private) IP address. If
NAT runs out of addresses, i.e., no address is left in the configured pool, then the packets
are dropped and an Internet Control Message Protocol (ICMP) host unreachable
message is sent back to the source.

Why mask port numbers?

Suppose, in a network, two hosts A and B are connected. Now, both of them
make a request to the same destination, from the same source port number, say 1000,
at the same time. If NAT does only the translation of IP addresses, then when their
packets arrive at the NAT, both of their IP addresses will be masked by the public IP
address of the network and sent to the destination. The destination will send replies to
the public IP address of the router. Thus, on receiving a reply, it will be unclear to NAT
which reply belongs to which host (because the source port numbers for both A and B
are the same). Hence, to avoid such a problem, NAT masks the source port number as
well and makes an entry in the NAT table.
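The port-masking behaviour described above can be modelled as a small translation table keyed on the rewritten source port; all the addresses and the port pool below are invented for illustration.

```python
import itertools

class PatTable:
    """Toy PAT: map (private IP, private port) pairs onto one public IP."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = itertools.count(40000)  # pool of translated source ports
        self.outbound, self.inbound = {}, {}

    def translate_out(self, private_ip, private_port):
        """Rewrite an outgoing packet's source address and port."""
        key = (private_ip, private_port)
        if key not in self.outbound:
            port = next(self.next_port)
            self.outbound[key] = port
            self.inbound[port] = key  # remember the mapping for the reply path
        return self.public_ip, self.outbound[key]

    def translate_in(self, public_port):
        """Route a reply back to the right inside host by its translated port."""
        return self.inbound[public_port]

nat = PatTable("203.0.113.5")
print(nat.translate_out("192.168.1.2", 1000))  # ('203.0.113.5', 40000)
print(nat.translate_out("192.168.1.3", 1000))  # ('203.0.113.5', 40001)
print(nat.translate_in(40001))                 # ('192.168.1.3', 1000)
```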

NAT inside and outside addresses –

Inside refers to the addresses which must be translated. Outside refers to the
addresses which are not in the control of the organization, i.e., the addresses of hosts
on external networks.

Figure 4.21 Network Address Translation (NAT)

• Inside local address – An IP address that is assigned to a host on the Inside (local)
network. The address is probably not an IP address assigned by the service provider
i.e., these are private IP addresses. This is the inside host seen from the inside
network.

• Inside global address – IP address that represents one or more inside local IP
addresses to the outside world. This is the inside host as seen from the outside
network.

• Outside local address – The IP address of the outside destination host as it appears
to the inside (local) network, i.e., after translation.

• Outside global address – This is the outside host as seen from the outside network.
It is the IP address of the outside destination host before translation.

Network Address Translation (NAT) Types – There are 3 ways to configure NAT:

1. Static NAT – In this, a single unregistered (private) IP address is mapped to a

legally registered (public) IP address, i.e., a one-to-one mapping between local and
global addresses. This is generally used for Web hosting. Static NAT is not used
within organizations, because many devices need Internet access, and providing it
this way requires one public IP address per device: if there are 3000 devices that
need access to the Internet, the organization has to buy 3000 public addresses,
which would be very costly.

2. Dynamic NAT – In this type of NAT, an unregistered IP address is translated into a

registered (public) IP address from a pool of public IP addresses. If no IP address in
the pool is free, the packet is dropped, as only a fixed number of private IP addresses
can be translated to public addresses at a time.
Suppose there is a pool of 2 public IP addresses; then only 2 private IP addresses
can be translated at a given time. If a 3rd private IP address wants to access the
Internet, its packets are dropped; therefore, many private IP addresses are
mapped to a pool of public IP addresses. Dynamic NAT is used when the number of
users who want to access the Internet at a time is fixed. It is also very costly, as the
organization has to buy many global IP addresses to make the pool.

3. Port Address Translation (PAT) – This is also known as NAT overload. In this,
many local (private) IP addresses can be translated to a single registered IP address.
Port numbers are used to distinguish the traffic i.e., which traffic belongs to which IP
address. This is most frequently used as it is cost-effective as thousands of users
can be connected to the Internet by using only one real global (public) IP address.

Advantages of NAT –

• NAT conserves legally registered IP addresses.


• It provides privacy as the device’s IP address, sending and receiving the traffic, will
be hidden.
• Eliminates address renumbering when a network evolves.

Disadvantage of NAT –

• Translation results in switching path delays.


• Certain applications will not function while NAT is enabled.
• Complicates tunneling protocols such as IPsec.
• Also, the router, being a network layer device, should not tamper with port numbers
(transport layer), but it has to do so because of NAT.
